Testing: can't get testcontainers Kafka ACL tests working with sasl.jaas.config

I am trying to test Kafka locally with some automated unit tests, but I cannot test authorization.

My goal is to test:

(1) that if no ACL exists in this test container, a KafkaProducer is not allowed to write to it (currently, even with no ACL created, the producer can still send to the topic as long as it is configured correctly; I thought setting the Kafka env variable allow.everyone.if.no.acl.found to false would do this, but that does not seem to be the case)

(2) that a KafkaProducer which does not use the correct sasl.jaas.config (i.e. a wrong apiKey and password) is denied access to the test topic, even when an ACL is set for all principals
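Scenario (2) could be asserted by catching the wrapped broker-side error. This is a hypothetical sketch, assuming a broker that actually enforces ACLs and a producer configured like the one further below; fail/assertTrue are JUnit-style helpers, and the topic name mirrors the question's code:

```java
// Hypothetical assertion sketch: a producer that should be denied write
// access. Assumes ACL enforcement is actually active on the broker.
try {
    producer.send(new ProducerRecord<>("this-is-a-topic", "key", "value")).get();
    fail("expected the write to be denied by an ACL");
} catch (java.util.concurrent.ExecutionException e) {
    // The future wraps the broker-side error; an authorization failure
    // surfaces as a TopicAuthorizationException cause.
    assertTrue(e.getCause() instanceof org.apache.kafka.common.errors.TopicAuthorizationException);
}
```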

Below is my code. I can get it to "work", but it does not test the two scenarios above, which I haven't figured out yet. I think I may not actually be creating the ACLs, because when I add a line after creating them (adminClient.describeAcls(AclBindingFilter.ANY).values().get();) I get a "No Authorizer is configured on the broker" error -> looking at similar posts, I believe that means no ACL bindings are actually being created.

import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.containers.Network;
import org.testcontainers.utility.DockerImageName;
import java.util.ArrayList;
import java.util.List;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.KafkaAdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.common.acl.AclBindingFilter;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import java.util.Properties;

        String topicName = "this-is-a-topic";
        String confluentVersion = "5.5.1";
        network = Network.newNetwork();
        String jaasTemplate = "org.apache.kafka.common.security.plain.PlainLoginModule required %s=\"%s\" %s=\"%s\";";
        String jaasConfig = String.format(jaasTemplate, "username", "apiKey", "password", "apiPassword");
        kafka = new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:" + confluentVersion))
                .withNetwork(network)
                .withEnv("KAFKA_AUTO_CREATE_TOPICS_ENABLE", "false")
                .withEnv("KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND", "false")
                .withEnv("KAFKA_SUPER_USERS", "User:OnlySuperUser")
                .withEnv("KAFKA_SASL_MECHANISM", "PLAIN")
                .withEnv("KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM", "http")
                .withEnv("KAFKA_SASL_JAAS_CONFIG", jaasConfig);

        kafka.start();
        schemaRegistryContainer = new SchemaRegistryContainer(confluentVersion).withKafka(kafka);
        schemaRegistryContainer.start();

        Properties properties = new Properties();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka.getBootstrapServers());
        properties.put("input.topic.name", topicName);
        properties.put("input.topic.partitions", "1");
        properties.put("input.topic.replication.factor", "1");
        adminClient = KafkaAdminClient.create(properties);
        AclBinding ACL = new AclBinding(new ResourcePattern(ResourceType.TOPIC, topicName, PatternType.LITERAL),
                new AccessControlEntry( "User:*", "*", AclOperation.WRITE, AclPermissionType.ALLOW));
        var acls = adminClient.createAcls(List.of(ACL)).values();


        List<NewTopic> topics = new ArrayList<>();
        topics.add(
                new NewTopic(topicName,
                        Integer.parseInt(properties.getProperty("input.topic.partitions")),
                        Short.parseShort(properties.getProperty("input.topic.replication.factor")))
        );
        adminClient.createTopics(topics);

        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka.getBootstrapServers());
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        props.put("input.topic.name", topicName);
        props.put("security.protocol", "PLAINTEXT");
        props.put("input.topic.partitions", "1");
        props.put("input.topic.replication.factor", "1");
        props.put("metadata.fetch.timeout.ms", "10000");
        props.put("sasl.jaas.config", jaasConfig);

        producer = new KafkaProducer<>(props);

        String key = "testContainers";
        String value = "AreAwesome";
        ProducerRecord<String, String> record = new ProducerRecord<>(
                        (String) props.get("input.topic.name"), key, value);
        try {
             RecordMetadata o = (RecordMetadata) producer.send(record).get();
             System.out.println(o.toString());
        } catch (Exception e) {
             e.printStackTrace();
        }

Found a solution yet? It seems to have trouble recognizing the SASL mechanism - that's the same issue I ran into - haven't had time to find a fix yet. But if I do find one I'll post it, and please post if you find one too :)

Thanks for the reply! Hi @hhprogram, if you don't mind me drawing on your knowledge/experience: did you have to use withCopyFileToContainer to copy any jaas file into the container, e.g. kafka_server_jaas.conf? Because with just the env settings alone, I could not get it to work (which would make sense). Thanks
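Regarding that last question: one commonly reported approach is indeed to copy a broker-side JAAS file into the container and point the broker JVM at it via KAFKA_OPTS, since the broker's SASL login config may not be expressible through env vars alone. A hypothetical sketch (the resource name, container path, and the JAAS file's contents are assumptions, not tested config):

```java
// Hypothetical sketch: shipping a broker-side JAAS file into the container.
// "kafka_server_jaas.conf" would be a classpath test resource defining the
// KafkaServer login section with the expected username/password entries.
kafka = new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:5.5.1"))
        .withCopyFileToContainer(
                MountableFile.forClasspathResource("kafka_server_jaas.conf"),
                "/etc/kafka/kafka_server_jaas.conf")
        // make the broker JVM load the JAAS file at startup
        .withEnv("KAFKA_OPTS",
                "-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf");
```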