
Building a Small Kafka Cluster in Hyperledger Fabric

Kafka Cluster Deployment

1. Preparation

Name     IP Address        Hostname     Role
zk1      192.168.247.101   zookeeper1   ZooKeeper ensemble
zk2      192.168.247.102   zookeeper2   ZooKeeper ensemble
zk3      192.168.247.103   zookeeper3   ZooKeeper ensemble
kafka1   192.168.247.201   kafka1       Kafka cluster
kafka2   192.168.247.202   kafka2       Kafka cluster
kafka3   192.168.247.203   kafka3       Kafka cluster
kafka4   192.168.247.204   kafka4       Kafka cluster

To keep the whole cluster working correctly, every node needs a working directory, and the working directory must be the same on every node.

# Create the working directory in the home directory of each of the nodes above:
$ mkdir ~/kafka
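
If you want to script this step, the loop below is a minimal sketch; the user name fabric and key-based SSH login are assumptions, not part of the original setup, so adjust them to your environment.

# Create ~/kafka on every host listed in the table above (hypothetical user "fabric"):
$ for host in 192.168.247.101 192.168.247.102 192.168.247.103 \
              192.168.247.201 192.168.247.202 192.168.247.203 192.168.247.204; do
    ssh fabric@${host} 'mkdir -p ~/kafka'
  done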

2. Generate the Certificate Files

2.1 Write the Configuration File

# crypto-config.yaml
OrdererOrgs:
  - Name: Orderer
    Domain: test.com
    Specs:
      - Hostname: orderer0  # 1st orderer node: orderer0.test.com
      - Hostname: orderer1  # 2nd orderer node: orderer1.test.com
      - Hostname: orderer2  # 3rd orderer node: orderer2.test.com
PeerOrgs:
  - Name: OrgGo
    Domain: orggo.test.com
    Template:
      Count: 2      # two peer nodes in the Go organization
    Users:
      Count: 1
  - Name: OrgCpp
    Domain: orgcpp.test.com
    Template:
      Count: 2      # two peer nodes in the Cpp organization
    Users:
      Count: 1

2.2 Generate the Certificates

$ cryptogen generate --config=crypto-config.yaml
$ tree ./ -L 1
./
├── crypto-config        # -> generated certificate directory
└── crypto-config.yaml
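
As a quick sanity check, the generated orderer identities should match the hostnames declared in crypto-config.yaml; the listing below is a sketch of what to expect (the sub-directories are created by cryptogen):

# The three orderer identities defined above, each with its own msp/ and tls/ material:
$ ls crypto-config/ordererOrganizations/test.com/orderers/
orderer0.test.com  orderer1.test.com  orderer2.test.com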

3. Generate the Genesis Block and Channel Files

3.1 Write the Configuration File


# configtx.yaml
---
################################################################################
#
#   Section: Organizations
#
#   - This section defines the different organizational identities which will
#   be referenced later in the configuration.
#
################################################################################
Organizations:
    - &OrdererOrg
        Name: OrdererOrg
        ID: OrdererMSP
        MSPDir: crypto-config/ordererOrganizations/test.com/msp

    - &go_org
        Name: OrgGoMSP
        ID: OrgGoMSP
        MSPDir: crypto-config/peerOrganizations/orggo.test.com/msp
        AnchorPeers:
            - Host: peer0.orggo.test.com
              Port: 7051

    - &cpp_org
        Name: OrgCppMSP
        ID: OrgCppMSP
        MSPDir: crypto-config/peerOrganizations/orgcpp.test.com/msp
        AnchorPeers:
            - Host: peer0.orgcpp.test.com
              Port: 7051

################################################################################
#
#   SECTION: Capabilities
#
################################################################################
Capabilities:
    Global: &ChannelCapabilities
        V1_1: true
    Orderer: &OrdererCapabilities
        V1_1: true
    Application: &ApplicationCapabilities
        V1_2: true

################################################################################
#
#   SECTION: Application
#
################################################################################
Application: &ApplicationDefaults
    Organizations:

################################################################################
#
#   SECTION: Orderer
#
################################################################################
Orderer: &OrdererDefaults
    # Available types are "solo" and "kafka"
    OrdererType: kafka
    Addresses:
        # orderer node addresses
        - orderer0.test.com:7050
        - orderer1.test.com:7050
        - orderer2.test.com:7050

    BatchTimeout: 2s
    BatchSize:
        MaxMessageCount: 10
        AbsoluteMaxBytes: 99 MB
        PreferredMaxBytes: 512 KB
    Kafka:
        Brokers: 
            # Kafka broker addresses
            - 192.168.247.201:9092
            - 192.168.247.202:9092
            - 192.168.247.203:9092
            - 192.168.247.204:9092
    Organizations:

################################################################################
#
#   Profile
#
################################################################################
Profiles:
    OrgsOrdererGenesis:
        Capabilities:
            <<: *ChannelCapabilities
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                - *OrdererOrg
            Capabilities:
                <<: *OrdererCapabilities
        Consortiums:
            SampleConsortium:
                Organizations:
                    - *go_org
                    - *cpp_org
    OrgsChannel:
        Consortium: SampleConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *go_org
                - *cpp_org
            Capabilities:
                <<: *ApplicationCapabilities 

3.2 Generate the Channel and Genesis Block Files

  • Generate the genesis block file
# First create a channel-artifacts directory to hold the generated files, so the paths stay consistent with the compose templates that follow
$ mkdir channel-artifacts
# Generate the genesis block file
$ configtxgen -profile OrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block
  • Generate the channel file
# Generate the channel file
$ configtxgen -profile OrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID testchannel
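
The genesis block, channel file, and certificates must end up in the working directory of every host that will run an orderer container, because the compose files in section 6 mount them by relative path. A minimal copy sketch, with <orderer-host> as a placeholder and the user name fabric as an assumption:

# Copy the generated material into the ~/kafka working directory of each orderer host:
$ scp -r crypto-config channel-artifacts fabric@<orderer-host>:~/kafka/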

4. ZooKeeper Configuration

4.1 Basic Concepts

  • How ZooKeeper operates
    Before configuring anything, let's go over how a ZooKeeper ensemble works:
  • Leader election

    • There are many leader-election algorithms, but the criteria the elected leader must satisfy are the same.
    • The leader must hold the highest transaction ID (zxid), comparable to root authority.
    • A majority of the machines in the ensemble must respond to and follow the elected leader.
  • Data synchronization

  • Ensemble size

    A ZooKeeper ensemble can have 3, 5, or 7 members. The count must be odd to avoid split-brain situations, and it must be greater than 1 to avoid a single point of failure; going beyond 7 servers puts more load on the ZooKeeper service than it can handle.

4.2 ZooKeeper Compose File Template

  • Compose file template

Let's look at a sample compose file to see how ZooKeeper is configured:

version: '2'
services:
  zookeeper1:
    container_name: zookeeper1
    hostname: zookeeper1
    image: hyperledger/fabric-zookeeper
    restart: always
    environment:
      # The ID must be unique within the ensemble and take a value between 1 and 255.
      - ZOO_MY_ID=1
      # server.x=hostname:port1:port2
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    extra_hosts:
      - zookeeper1:192.168.24.201
      - zookeeper2:192.168.24.202
      - zookeeper3:192.168.24.203
      - kafka1:192.168.24.204
      - kafka2:192.168.24.205
      - kafka3:192.168.24.206
      - kafka4:192.168.24.207
  • Notes on the configuration items (a status-check sketch follows this list):
    1. Docker restart policies

      • no – never restart the container automatically when it exits; this is the default.
      • on-failure[:max-retries] – restart only when the container exits with a non-zero status code, e.g. on-failure:10
      • always – always restart the container, regardless of its exit status
      • unless-stopped – always restart the container regardless of its exit status, except that containers already stopped when the daemon starts are not restarted.
    2. Environment variables

      • ZOO_MY_ID

        The ID of this ZooKeeper server within the ensemble; the value must be unique in the cluster, range: 1-255

      • ZOO_SERVERS

        • The list of servers that make up the ZooKeeper ensemble
        • Each server entry carries two port numbers
          • The first: used by followers to connect to the leader
          • The second: used for leader election
    3. Three important ZooKeeper ports:

      • Client access port: 2181
      • Port followers use to connect to the leader: 2888
      • Leader-election port: 3888
    4. extra_hosts

      • Maps host names to the IP addresses they resolve to
      • zookeeper1:192.168.24.201
        • the name zookeeper1 will be resolved to the IP address 192.168.24.201
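
Once the containers from section 4.3 are running, you can watch these roles and ports in action by querying each server over its client port. A minimal sketch, assuming nc is available on the machine you run it from:

# "stat" is a ZooKeeper four-letter command served on the client port (2181);
# the Mode line in the reply shows whether that server is currently the leader or a follower.
$ for host in 192.168.247.101 192.168.247.102 192.168.247.103; do
    echo "--- ${host} ---"
    echo stat | nc ${host} 2181 | grep Mode
  done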

4.3 各個zookeeper節點的配置

zookeeper1 configuration

# zookeeper1.yaml
version: '2'

services:

  zookeeper1:
    container_name: zookeeper1
    hostname: zookeeper1
    image: hyperledger/fabric-zookeeper:latest
    restart: always
    environment:
      # The ID must be unique within the ensemble and take a value between 1 and 255.
      - ZOO_MY_ID=1
      # server.x=[hostname]:nnnnn[:nnnnn]
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    extra_hosts:
      - zookeeper1:192.168.247.101
      - zookeeper2:192.168.247.102
      - zookeeper3:192.168.247.103
      - kafka1:192.168.247.201
      - kafka2:192.168.247.202
      - kafka3:192.168.247.203
      - kafka4:192.168.247.204

zookeeper2 configuration

# zookeeper2.yaml
version: '2'

services:

  zookeeper2:
    container_name: zookeeper2
    hostname: zookeeper2
    image: hyperledger/fabric-zookeeper:latest
    restart: always
    environment:
      # The ID must be unique within the ensemble and take a value between 1 and 255.
      - ZOO_MY_ID=2
      # server.x=[hostname]:nnnnn[:nnnnn]
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    extra_hosts:
      - zookeeper1:192.168.247.101
      - zookeeper2:192.168.247.102
      - zookeeper3:192.168.247.103
      - kafka1:192.168.247.201
      - kafka2:192.168.247.202
      - kafka3:192.168.247.203
      - kafka4:192.168.247.204

zookeeper3 configuration

# zookeeper3.yaml
version: '2'

services:

  zookeeper3:
    container_name: zookeeper3
    hostname: zookeeper3
    image: hyperledger/fabric-zookeeper:latest
    restart: always
    environment:
      # The ID must be unique within the ensemble and take a value between 1 and 255.
      - ZOO_MY_ID=3
      # server.x=[hostname]:nnnnn[:nnnnn]
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    extra_hosts:
      - zookeeper1:192.168.247.101
      - zookeeper2:192.168.247.102
      - zookeeper3:192.168.247.103
      - kafka1:192.168.247.201
      - kafka2:192.168.247.202
      - kafka3:192.168.247.203
      - kafka4:192.168.247.204
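
With the three files in place, start one container per host. The commands below are a sketch; they assume docker-compose is installed and that each YAML file has been copied into that machine's working directory:

# On 192.168.247.101:
$ docker-compose -f zookeeper1.yaml up -d
# On 192.168.247.102:
$ docker-compose -f zookeeper2.yaml up -d
# On 192.168.247.103:
$ docker-compose -f zookeeper3.yaml up -d
# Optionally, follow the logs until the ensemble has elected a leader:
$ docker logs -f zookeeper1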

5. Kafka Configuration

5.1 Basic Concepts

Kafka is a distributed messaging system written in Scala at LinkedIn, originally built as the foundation of LinkedIn's activity stream and operational data pipeline. It offers horizontal scalability and high throughput.

In a Fabric network, data is submitted by the peer nodes to the orderer service. Relative to Kafka, the orderer is the upstream module: it orders the data and cuts blocks that conform to the configured specification and requirements. When the data produced by this upstream module needs to be computed, aggregated, or analyzed, a distributed messaging system such as Kafka can be used to support that part of the workflow.

Some describe Kafka as a consensus mode built on equal trust, in which every party joining a Hyperledger Fabric network is treated as trusted because messages are distributed evenly across all of them. In actual production use, however, rights are confirmed through endorsement; strictly speaking, Kafka is only one mode, or type, of ordering service with which a Fabric network can be launched.

ZooKeeper is a cluster service widely used in distributed systems for distributed state management, coordination, and lock services. Adding or removing Kafka servers triggers corresponding events on the ZooKeeper nodes; Kafka captures these events and performs a new round of load balancing, and clients also capture them to start a new round of processing.

The orderer service is the most important link in the Fabric transaction flow and the point through which all requests pass. It does not respond to a request immediately, first because the conditions for cutting a block may not yet be met, and second because message processing that relies on the downstream cluster has to wait for its results.

5.2 Kafka Compose File Template

  • Kafka compose file template

    version: '2'
    
    services:
      kafka1:
        container_name: kafka1
        hostname: kafka1
        image: hyperledger/fabric-kafka
        restart: always
        environment:
          # broker.id
          - KAFKA_BROKER_ID=1
          - KAFKA_MIN_INSYNC_REPLICAS=2
          - KAFKA_DEFAULT_REPLICATION_FACTOR=3
          - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
          # 99 * 1024 * 1024 B
          - KAFKA_MESSAGE_MAX_BYTES=103809024 
          - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
          - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
          - KAFKA_LOG_RETENTION_MS=-1
          - KAFKA_HEAP_OPTS=-Xmx256M -Xms128M
        ports:
          - 9092:9092
        extra_hosts:
          - zookeeper1:192.168.24.201
          - zookeeper2:192.168.24.202
          - zookeeper3:192.168.24.203
          - kafka1:192.168.24.204
          - kafka2:192.168.24.205
          - kafka3:192.168.24.206
          - kafka4:192.168.24.207
    
    • Configuration notes
    1. Kafka's default port is 9092
    2. Environment variables:
      • KAFKA_BROKER_ID
        • A unique non-negative integer that identifies the broker
      • KAFKA_MIN_INSYNC_REPLICAS
        • Minimum number of in-sync replicas
        • Must be smaller than the value of KAFKA_DEFAULT_REPLICATION_FACTOR
      • KAFKA_DEFAULT_REPLICATION_FACTOR
        • Default replication factor; must be smaller than the number of Kafka brokers in the cluster
      • KAFKA_ZOOKEEPER_CONNECT
        • Points to the set of ZooKeeper nodes
      • KAFKA_MESSAGE_MAX_BYTES
        • Maximum size of a message, in bytes
        • Corresponds to Orderer.BatchSize.AbsoluteMaxBytes in configtx.yaml
        • Since every message carries header information, this value must be a little larger than AbsoluteMaxBytes; adding 1 MB of headroom is enough (see the sizing sketch after this list)
      • KAFKA_REPLICA_FETCH_MAX_BYTES=103809024
        • Maximum number of bytes a replica tries to fetch per channel
        • AbsoluteMaxBytes < KAFKA_MESSAGE_MAX_BYTES <= KAFKA_REPLICA_FETCH_MAX_BYTES
      • KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
        • Unclean leader election
          • enabled: true
          • disabled: false
      • KAFKA_LOG_RETENTION_MS=-1
        • Maximum time Kafka retains log segments
        • Setting it to -1 disables time-based pruning of the log
      • KAFKA_HEAP_OPTS
        • Sets the JVM heap size; Kafka's default is 1 GB
          • -Xmx256M -> maximum heap that may be allocated
          • -Xms128M -> heap allocated at start-up
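
The byte values used in the compose files follow directly from this rule. A quick check of the arithmetic, in plain shell:

# AbsoluteMaxBytes is 99 MB in configtx.yaml:
$ echo $((99 * 1024 * 1024))
103809024
# The per-node files in the next section add 1 MB of headroom on top of that:
$ echo $((100 * 1024 * 1024))
104857600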

5.3 Per-node Kafka Configuration

kafka1 configuration

# kafka1.yaml
version: '2'

services:

  kafka1:
    container_name: kafka1
    hostname: kafka1
    image: hyperledger/fabric-kafka:latest
    restart: always
    environment:
      # broker.id
      - KAFKA_BROKER_ID=1
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      # 100 * 1024 * 1024 B
      - KAFKA_MESSAGE_MAX_BYTES=104857600 
      - KAFKA_REPLICA_FETCH_MAX_BYTES=104857600
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx512M -Xms256M
    ports:
      - 9092:9092
    extra_hosts:
      - zookeeper1:192.168.247.101
      - zookeeper2:192.168.247.102
      - zookeeper3:192.168.247.103
      - kafka1:192.168.247.201
      - kafka2:192.168.247.202
      - kafka3:192.168.247.203
      - kafka4:192.168.247.204

kafka2 configuration

# kafka2.yaml
version: '2'

services:

  kafka2:
    container_name: kafka2
    hostname: kafka2
    image: hyperledger/fabric-kafka:latest
    restart: always
    environment:
      # broker.id
      - KAFKA_BROKER_ID=2
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      # 100 * 1024 * 1024 B
      - KAFKA_MESSAGE_MAX_BYTES=104857600 
      - KAFKA_REPLICA_FETCH_MAX_BYTES=104857600
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx512M -Xms256M
    ports:
      - 9092:9092
    extra_hosts:
      - zookeeper1:192.168.247.101
      - zookeeper2:192.168.247.102
      - zookeeper3:192.168.247.103
      - kafka1:192.168.247.201
      - kafka2:192.168.247.202
      - kafka3:192.168.247.203
      - kafka4:192.168.247.204

kafka3 configuration

# kafka3.yaml
version: '2'

services:

  kafka3:
    container_name: kafka3
    hostname: kafka3
    image: hyperledger/fabric-kafka:latest
    restart: always
    environment:
      # broker.id
      - KAFKA_BROKER_ID=3
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      # 100 * 1024 * 1024 B
      - KAFKA_MESSAGE_MAX_BYTES=104857600 
      - KAFKA_REPLICA_FETCH_MAX_BYTES=104857600
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx512M -Xms256M
    ports:
      - 9092:9092
    extra_hosts:
      - zookeeper1:192.168.247.101
      - zookeeper2:192.168.247.102
      - zookeeper3:192.168.247.103
      - kafka1:192.168.247.201
      - kafka2:192.168.247.202
      - kafka3:192.168.247.203
      - kafka4:192.168.247.204

kafka4 configuration

# kafka4.yaml
version: '2'
services:

  kafka4:
    container_name: kafka4
    hostname: kafka4
    image: hyperledger/fabric-kafka:latest
    restart: always
    environment:
      # broker.id
      - KAFKA_BROKER_ID=4
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      # 100 * 1024 * 1024 B
      - KAFKA_MESSAGE_MAX_BYTES=104857600 
      - KAFKA_REPLICA_FETCH_MAX_BYTES=104857600
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx512M -Xms256M
    ports:
      - 9092:9092
    extra_hosts:
      - zookeeper1:192.168.247.101
      - zookeeper2:192.168.247.102
      - zookeeper3:192.168.247.103
      - kafka1:192.168.247.201
      - kafka2:192.168.247.202
      - kafka3:192.168.247.203
      - kafka4:192.168.247.204
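
Start the brokers only after the ZooKeeper ensemble is up, since each broker registers itself with ZooKeeper at start-up. A minimal sketch, assuming the YAML files are already on their respective hosts:

# On 192.168.247.201:
$ docker-compose -f kafka1.yaml up -d
# On 192.168.247.202:
$ docker-compose -f kafka2.yaml up -d
# On 192.168.247.203:
$ docker-compose -f kafka3.yaml up -d
# On 192.168.247.204:
$ docker-compose -f kafka4.yaml up -d
# Check that a broker started and connected to ZooKeeper (log wording may vary by version):
$ docker logs kafka1 2>&1 | grep -i started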

6. Orderer Node Configuration

6.1 Orderer Compose File Template

  • Orderer node compose file template

    version: '2'
    
    services:
    
      orderer0.example.com:
        container_name: orderer0.example.com
        image: hyperledger/fabric-orderer
        environment:
          - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
          - ORDERER_GENERAL_LOGLEVEL=debug
          - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
          - ORDERER_GENERAL_LISTENPORT=7050
          - ORDERER_GENERAL_GENESISMETHOD=file
          - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
          - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
          - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
          # enabled TLS
          - ORDERER_GENERAL_TLS_ENABLED=false
          - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
          - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
          - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
          
          - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
          - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
          - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
          - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
          - ORDERER_KAFKA_VERBOSE=true
          - ORDERER_KAFKA_BROKERS=[192.168.24.204:9092,192.168.24.205:9092,192.168.24.206:9092,192.168.24.207:9092]
        working_dir: /opt/gopath/src/github.com/hyperledger/fabric
        command: orderer
        volumes:
          - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
          - ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/msp:/var/hyperledger/orderer/msp
          - ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/tls/:/var/hyperledger/orderer/tls
        networks:
          default:
            aliases:
              - aberic
        ports:
          - 7050:7050
        extra_hosts:
          - kafka1:192.168.24.204
          - kafka2:192.168.24.205
          - kafka3:192.168.24.206
          - kafka4:192.168.24.207
    
  • Details

    1. Environment variables
      • ORDERER_KAFKA_RETRY_LONGINTERVAL
        • Interval between retries during the long phase, in seconds
      • ORDERER_KAFKA_RETRY_LONGTOTAL
        • Total time spent retrying during the long phase, in seconds
      • ORDERER_KAFKA_RETRY_SHORTINTERVAL
        • Interval between retries during the short phase, in seconds
      • ORDERER_KAFKA_RETRY_SHORTTOTAL
        • Total time spent retrying during the short phase, in seconds
      • ORDERER_KAFKA_VERBOSE
        • Enable logging of the orderer's interaction with Kafka; enabled: true, disabled: false
      • ORDERER_KAFKA_BROKERS
        • Points to the set of Kafka brokers
    2. How the retry durations combine (see the sketch after this list)
      • The orderer first retries at ORDERER_KAFKA_RETRY_SHORTINTERVAL intervals, for a total of ORDERER_KAFKA_RETRY_SHORTTOTAL
      • If it still has not reconnected, it switches to retrying at ORDERER_KAFKA_RETRY_LONGINTERVAL intervals, for a total of ORDERER_KAFKA_RETRY_LONGTOTAL
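
With the values used in this article, the schedule works out roughly as follows (a worked example, not orderer output):

# Short phase: 30s total at 1s intervals  -> about 30 reconnection attempts
# Long phase: 100s total at 10s intervals -> about 10 further attempts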

6.3 Per-node Orderer Configuration

orderer0 configuration

# orderer0.yaml
version: '2'

services:

  orderer0.test.com:
    container_name: orderer0.test.com
    image: hyperledger/fabric-orderer:latest
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=kafka_default
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=false
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      
      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
      - ORDERER_KAFKA_BROKERS=[192.168.247.201:9092,192.168.247.202:9092,192.168.247.203:9092,192.168.247.204:9092]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/test.com/orderers/orderer0.test.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/test.com/orderers/orderer0.test.com/tls/:/var/hyperledger/orderer/tls
    networks:
      default:
        aliases:
          - kafka
    ports:
      - 7050:7050
    extra_hosts:
      - kafka1:192.168.247.201
      - kafka2:192.168.247.202
      - kafka3:192.168.247.203
      - kafka4:192.168.247.204

orderer1 configuration

# orderer1.yaml
version: '2'

services:

  orderer1.test.com:
    container_name: orderer1.test.com
    image: hyperledger/fabric-orderer:latest
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=kafka_default
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=false
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      
      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
      - ORDERER_KAFKA_BROKERS=[192.168.247.201:9092,192.168.247.202:9092,192.168.247.203:9092,192.168.247.204:9092]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/test.com/orderers/orderer1.test.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/test.com/orderers/orderer1.test.com/tls/:/var/hyperledger/orderer/tls
    networks:
      default:
        aliases:
          - kafka
    ports:
      - 7050:7050
    extra_hosts:
      - kafka1:192.168.247.201
      - kafka2:192.168.247.202
      - kafka3:192.168.247.203
      - kafka4:192.168.247.204

orderer2 configuration

# orderer2.yaml
version: '2'

services:

  orderer2.test.com