To reproduce this issue, run docker run zookeeper and then docker-compose up against the yaml file below.
I am using the latest zookeeper image, wurstmeister/kafka:0.9.0.0-1 and sheepkiller/kafka-manager:latest. I ran docker-compose up and finally got it working, but now I am getting the following error:
I have searched through GitHub issues and still cannot get the stack working. Everything looks fine until I save the cluster. In the Kafka logs I get:
[warn] o.a.z.ClientCnxn – Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
kafka-manager_1 | java.net.ConnectException: Connection refused
kafka-manager_1 |     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_151]
kafka-manager_1 |     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_151]
kafka-manager_1 |     at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) ~[org.apache.zookeeper.zookeeper-3.4.6.jar:3.4.6-1569965]
kafka-manager_1 |     at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081) ~[org.apache.zookeeper.zookeeper-3.4.6.jar:3.4.6-1569965]
I am also seeing this:
INFO Got user-level KeeperException when processing sessionid:0x160fb22e9f50000 type:create cxid:0x2a zxid:0x3e txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids (org.apache.zookeeper.server.PrepRequestProcessor)
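To check what ZooKeeper actually has registered under /brokers/ids, something like the following should work (a rough sketch: <zookeeper-container> is a placeholder for whatever name docker ps reports, and it assumes the image puts zkCli.sh on the PATH, as the official zookeeper image does):

# List running containers, then list the broker id znodes inside ZooKeeper.
docker ps --format '{{.Names}}\t{{.Image}}'
docker exec -it <zookeeper-container> zkCli.sh -server localhost:2181 ls /brokers/ids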
This can be reproduced by starting zookeeper, taking the yaml configuration below and running docker-compose. I have been stuck on this for a week and I cannot figure out why it is not working.
Yaml file:
zookeeper:
  image: confluent/zookeeper
  ports:
    - "2181:2181"

kafka:
  image: wurstmeister/kafka:0.9.0.0-1
  ports:
    - "9092:9092"
  links:
    - zookeeper:zk
  environment:
    - KAFKA_ADVERTISED_HOST_NAME
    - KAFKA_ADVERTISED_PORT=9092
    - KAFKA_DELETE_TOPIC_ENABLE=true
    - KAFKA_LOG_RETENTION_HOURS=1
    - KAFKA_MESSAGE_MAX_BYTES=10000000
    - KAFKA_REPLICA_FETCH_MAX_BYTES=10000000
    - KAFKA_GROUP_MAX_SESSION_TIMEOUT_MS=60000
    - KAFKA_NUM_PARTITIONS=2
    - KAFKA_DELETE_RETENTION_MS=1000

kafka-manager:
  image: sheepkiller/kafka-manager:latest
  ports:
    - "9000:9000"
  links:
    - zookeeper
    - kafka
  environment:
    ZK_HOSTS: zookeeper:2181
    APPLICATION_SECRET: letmein
    KM_ARGS: -Djava.net.preferIPv4Stack=true
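Note that KAFKA_ADVERTISED_HOST_NAME is listed without a value, so docker-compose passes through whatever is set in the shell that launches it (empty if nothing is exported). A minimal sketch of starting the stack with it set, where 192.168.99.100 is only a placeholder for the Docker host's address:

# Export the advertised host name before bringing the stack up;
# the compose file passes it through to the kafka container.
export KAFKA_ADVERTISED_HOST_NAME=192.168.99.100
docker-compose up -d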
Cluster:
More cluster settings:
Then:
I run docker run -it -d zookeeper, then run docker-compose on that yml file. It starts up, but crashes when I create the cluster.
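Spelled out, the steps look roughly like this (a sketch, assuming the yaml above is saved as docker-compose.yml in the current directory):

# Start a standalone zookeeper container, then bring up the compose stack.
docker run -it -d zookeeper
docker-compose up
# The crash happens after adding a cluster in the Kafka Manager UI on http://localhost:9000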
Docker configuration:
Containers: 53
 Running: 2
 Paused: 0
 Stopped: 51
Images: 13
Server Version: 17.12.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 89623f28b87a6004d4b785663257362d1658a729
runc version: b2567b37d7b75eb4cf325b77297b140ea686ce8f
init version: 949e6fa
Security Options: seccomp
 Profile: default
Kernel Version: 4.9.60-linuxkit-aufs
Operating System: Docker for Windows
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.934GiB
Name: linuxkit-00155dc95329
ID: YJE3:ZJKS:BJCF:TY4W:BU2Y:U7ZO:5P4B:PYMQ:SLVH:KTXD:V2OS:XKCD
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 19
 Goroutines: 36
 System Time: 2018-01-23T09:47:18.7698506Z
 EventsListeners: 1
Registry: 07006
Labels:
Experimental: true
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
> kafka-manager.broker-view-thread-pool-size=< 3 * number_of_brokers >
> kafka-manager.broker-view-max-queue-size=< 3 * total number of partitions across all topics >
> kafka-manager.broker-view-update-seconds=< kafka-manager.broker-view-max-queue-size / (10 * number_of_brokers) >
> kafka-manager.offset-cache-thread-pool-size=< default is number of processors >
> kafka-manager.offset-cache-max-queue-size=< default is 1000 >
> kafka-manager.kafka-admin-client-thread-pool-size=< default is number of processors >
> kafka-manager.kafka-admin-client-max-queue-size=< default is 1000 >
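Since the compose file already passes KM_ARGS, these keys can presumably also be supplied there as -D system-property overrides (Kafka Manager is a Play application, and Typesafe config lets system properties override application.conf). A hedged sketch with placeholder values derived from the formulas above for a single broker with roughly 1000 partitions; I have not verified these numbers:

kafka-manager:
  image: sheepkiller/kafka-manager:latest
  environment:
    ZK_HOSTS: zookeeper:2181
    APPLICATION_SECRET: letmein
    # Placeholder tuning values: 3 * 1 broker, 3 * ~1000 partitions, 3000 / (10 * 1).
    KM_ARGS: >-
      -Djava.net.preferIPv4Stack=true
      -Dkafka-manager.broker-view-thread-pool-size=3
      -Dkafka-manager.broker-view-max-queue-size=3000
      -Dkafka-manager.broker-view-update-seconds=300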