
My Perseverance with Hyperledger Fabric 1.1.0 (6): Kafka Cluster Deployment

1. Use 12 servers, as follows:

Name       IP              Hostname                  Organization
zk1        192.168.2.237   zookeeper1
zk2        192.168.2.131   zookeeper2
zk3        192.168.2.188   zookeeper3
kafka1     192.168.2.182   kafka1
kafka2     192.168.2.213   kafka2
kafka3     192.168.2.137   kafka3
kafka4     192.168.2.186   kafka4
orderer0   192.168.2.238   orderer0.example.com
orderer1   192.168.2.210   orderer1.example.com
orderer2   192.168.2.235   orderer2.example.com
peer0      192.168.2.118   peer0.org1.example.com    Org1
peer1      192.168.2.21    peer1.org2.example.com    Org2
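
The compose files below resolve these names through extra_hosts entries. As an alternative sketch (an assumption, not part of the original setup), the same mapping could live in each server's /etc/hosts:

```
# /etc/hosts (sketch; hostnames per the table above)
192.168.2.237  zookeeper1
192.168.2.131  zookeeper2
192.168.2.188  zookeeper3
192.168.2.182  kafka1
192.168.2.213  kafka2
192.168.2.137  kafka3
192.168.2.186  kafka4
192.168.2.238  orderer0.example.com
192.168.2.210  orderer1.example.com
192.168.2.235  orderer2.example.com
192.168.2.118  peer0.org1.example.com
192.168.2.21   peer1.org2.example.com
```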

2. crypto-config.yaml configuration:

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

# ---------------------------------------------------------------------------
# "OrdererOrgs" - Definition of organizations managing orderer nodes
# ---------------------------------------------------------------------------
OrdererOrgs:
  # ---------------------------------------------------------------------------
  # Orderer
  # ---------------------------------------------------------------------------
  - Name: Orderer
    Domain: example.com
    # ---------------------------------------------------------------------------
    # "Specs" - See PeerOrgs below for complete description
    # ---------------------------------------------------------------------------
    Specs:
      - Hostname: orderer0
      - Hostname: orderer1
      - Hostname: orderer2
# ---------------------------------------------------------------------------
# "PeerOrgs" - Definition of organizations managing peer nodes
# ---------------------------------------------------------------------------
PeerOrgs:
  # ---------------------------------------------------------------------------
  # Org1
  # ---------------------------------------------------------------------------
  - Name: Org1
    Domain: org1.example.com
    # ---------------------------------------------------------------------------
    # "Specs"
    # ---------------------------------------------------------------------------
    # Uncomment this section to enable the explicit definition of hosts in your
    # configuration.  Most users will want to use Template, below
    #
    # Specs is an array of Spec entries.  Each Spec entry consists of two fields:
    #   - Hostname:   (Required) The desired hostname, sans the domain.
    #   - CommonName: (Optional) Specifies the template or explicit override for
    #                 the CN.  By default, this is the template:
    #
    #                              "{{.Hostname}}.{{.Domain}}"
    #
    #                 which obtains its values from the Spec.Hostname and
    #                 Org.Domain, respectively.
    # ---------------------------------------------------------------------------
    # Specs:
    #   - Hostname: foo # implicitly "foo.org1.example.com"
    #     CommonName: foo27.org5.example.com # overrides Hostname-based FQDN set above
    #   - Hostname: bar
    #   - Hostname: baz
    # ---------------------------------------------------------------------------
    # "Template"
    # ---------------------------------------------------------------------------
    # Allows for the definition of 1 or more hosts that are created sequentially
    # from a template. By default, this looks like "peer%d" from 0 to Count-1.
    # You may override the number of nodes (Count), the starting index (Start)
    # or the template used to construct the name (Hostname).
    #
    # Note: Template and Specs are not mutually exclusive.  You may define both
    # sections and the aggregate nodes will be created for you.  Take care with
    # name collisions
    # ---------------------------------------------------------------------------
    Template:
      Count: 2
      # Start: 5
      # Hostname: {{.Prefix}}{{.Index}} # default
    # ---------------------------------------------------------------------------
    # "Users"
    # ---------------------------------------------------------------------------
    # Count: The number of user accounts _in addition_ to Admin
    # ---------------------------------------------------------------------------
    Users:
      Count: 1
  # ---------------------------------------------------------------------------
  # Org2: See "Org1" for full specification
  # ---------------------------------------------------------------------------
  - Name: Org2
    Domain: org2.example.com
    Template:
      Count: 2
    Users:
      Count: 1
    Specs:
      - Hostname: foo
        CommonName: foo27.org2.example.com
      - Hostname: bar
      - Hostname: baz

  - Name: Org3
    Domain: org3.example.com
    Template:
      Count: 2
    Users:
      Count: 1

  - Name: Org4
    Domain: org4.example.com
    Template:
      Count: 2
    Users:
      Count: 1

  - Name: Org5
    Domain: org5.example.com
    Template:
      Count: 2
    Users:
      Count: 1

3. Upload the crypto-config.yaml file to the /opt/gopath/src/github.com/hyperledger/fabric/aberic directory on server 192.168.2.238 and run the following command to generate the certificates and keys the nodes need:

./bin/cryptogen generate --config=./crypto-config.yaml
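
If generation succeeds, cryptogen creates a crypto-config directory next to the binary; the top of the tree should look roughly like this (abridged):

```
crypto-config/
├── ordererOrganizations/
│   └── example.com/
│       ├── ca/  msp/  tlsca/  users/
│       └── orderers/
│           ├── orderer0.example.com/
│           ├── orderer1.example.com/
│           └── orderer2.example.com/
└── peerOrganizations/
    ├── org1.example.com/
    ├── org2.example.com/
    ├── org3.example.com/
    ├── org4.example.com/
    └── org5.example.com/
```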

4. Write the configtx.yaml configuration file:

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

---
################################################################################
#
#   Profile
#
#   - Different configuration profiles may be encoded here to be specified
#   as parameters to the configtxgen tool
#
################################################################################
Profiles:

    TwoOrgsOrdererGenesis:
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                - *OrdererOrg
        Consortiums:
            SampleConsortium:
                Organizations:                
                    - *Org1
                    - *Org2
                    - *Org3
                    - *Org4
                    - *Org5
    TwoOrgsChannel:
        Consortium: SampleConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *Org1
                - *Org2
                - *Org3
                - *Org4
                - *Org5

################################################################################
#
#   Section: Organizations
#
#   - This section defines the different organizational identities which will
#   be referenced later in the configuration.
#
################################################################################
Organizations:

    # SampleOrg defines an MSP using the sampleconfig.  It should never be used
    # in production but may be used as a template for other definitions
    - &OrdererOrg
        # DefaultOrg defines the organization which is used in the sampleconfig
        # of the fabric.git development environment
        Name: OrdererOrg

        # ID to load the MSP definition as
        ID: OrdererMSP

        # MSPDir is the filesystem path which contains the MSP configuration
        MSPDir: crypto-config/ordererOrganizations/example.com/msp

    - &Org1
        # DefaultOrg defines the organization which is used in the sampleconfig
        # of the fabric.git development environment
        Name: Org1MSP

        # ID to load the MSP definition as
        ID: Org1MSP

        MSPDir: crypto-config/peerOrganizations/org1.example.com/msp

        AnchorPeers:
            # AnchorPeers defines the location of peers which can be used
            # for cross org gossip communication.  Note, this value is only
            # encoded in the genesis block in the Application section context
            - Host: peer0.org1.example.com
              Port: 7051

    - &Org2
        # DefaultOrg defines the organization which is used in the sampleconfig
        # of the fabric.git development environment
        Name: Org2MSP

        # ID to load the MSP definition as
        ID: Org2MSP

        MSPDir: crypto-config/peerOrganizations/org2.example.com/msp

        AnchorPeers:
            # AnchorPeers defines the location of peers which can be used
            # for cross org gossip communication.  Note, this value is only
            # encoded in the genesis block in the Application section context
            - Host: peer0.org2.example.com
              Port: 7051

    - &Org3
        Name: Org3MSP
        ID: Org3MSP
      
        MSPDir: crypto-config/peerOrganizations/org3.example.com/msp

        AnchorPeers:
            - Host: peer0.org3.example.com
              Port: 7051

    - &Org4
        Name: Org4MSP
        ID: Org4MSP
      
        MSPDir: crypto-config/peerOrganizations/org4.example.com/msp
  
        AnchorPeers:
            - Host: peer0.org4.example.com
              Port: 7051

    - &Org5
        Name: Org5MSP
        ID: Org5MSP
      
        MSPDir: crypto-config/peerOrganizations/org5.example.com/msp

        AnchorPeers:
            - Host: peer0.org5.example.com
              Port: 7051

################################################################################
#
#   SECTION: Orderer
#
#   - This section defines the values to encode into a config transaction or
#   genesis block for orderer related parameters
#
################################################################################
Orderer: &OrdererDefaults

    # Orderer Type: The orderer implementation to start
    # Available types are "solo" and "kafka"
    OrdererType: kafka

    Addresses:
        - orderer0.example.com:7050
        - orderer1.example.com:7050
        - orderer2.example.com:7050

    # Batch Timeout: The amount of time to wait before creating a batch
    BatchTimeout: 2s

    # Batch Size: Controls the number of messages batched into a block
    BatchSize:

        # Max Message Count: The maximum number of messages to permit in a batch
        MaxMessageCount: 10

        # Absolute Max Bytes: The absolute maximum number of bytes allowed for
        # the serialized messages in a batch.
        AbsoluteMaxBytes: 98 MB

        # Preferred Max Bytes: The preferred maximum number of bytes allowed for
        # the serialized messages in a batch. A message larger than the preferred
        # max bytes will result in a batch larger than preferred max bytes.
        PreferredMaxBytes: 512 KB

    Kafka:
        # Brokers: A list of Kafka brokers to which the orderer connects
        # NOTE: Use IP:port notation
        Brokers:
            - 192.168.2.182:9092
            - 192.168.2.213:9092
            - 192.168.2.137:9092
            - 192.168.2.186:9092

    # Organizations is the list of orgs which are defined as participants on
    # the orderer side of the network
    Organizations:

################################################################################
#
#   SECTION: Application
#
#   - This section defines the values to encode into a config transaction or
#   genesis block for application related parameters
#
################################################################################
Application: &ApplicationDefaults

    # Organizations is the list of orgs which are defined as participants on
    # the application side of the network
    Organizations:

Capabilities:
    Global: &ChannelCapabilities
        V1_1: true

    Orderer: &OrdererCapabilities
        V1_1: true

    Application: &ApplicationCapabilities
        V1_1: true
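
A rule worth internalizing here: the Kafka brokers' message.max.bytes and replica.fetch.max.bytes (set to 103809024 in the compose files below) must exceed the orderer's AbsoluteMaxBytes, or large batches will be rejected. A quick Python sanity check of the numbers used in this deployment:

```python
MiB = 1024 * 1024

absolute_max_bytes = 98 * MiB     # configtx.yaml: AbsoluteMaxBytes: 98 MB
kafka_message_max = 103809024     # KAFKA_MESSAGE_MAX_BYTES in the Kafka compose files

# 103809024 bytes is exactly 99 MiB, i.e. one MiB of headroom above AbsoluteMaxBytes
assert kafka_message_max == 99 * MiB
assert kafka_message_max > absolute_max_bytes
print("headroom:", (kafka_message_max - absolute_max_bytes) // MiB, "MiB")
```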

5. Upload the configtx.yaml file to the /opt/gopath/src/github.com/hyperledger/fabric/aberic directory on server 192.168.2.238, create a channel-artifacts folder, and run the following command to generate the genesis block:

./bin/configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block

 

6. The genesis block genesis.block is consumed by the orderer service at startup. The configuration transaction for the channel that the peers will create later is generated here as well; run the following command:

./bin/configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/mychannel.tx -channelID mychannel

7. ZooKeeper configuration:

7.1 Write the docker-zookeeper1.yaml file:

version: '2'

services:

  zookeeper1:
    container_name: zookeeper1
    hostname: zookeeper1
    image: hyperledger/fabric-zookeeper
    restart: always
    environment:
      - ZOO_MY_ID=1
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    extra_hosts:
      - "zookeeper1:192.168.2.237"
      - "zookeeper2:192.168.2.131"
      - "zookeeper3:192.168.2.188"
      - "kafka1:192.168.2.182"
      - "kafka2:192.168.2.213"
      - "kafka3:192.168.2.137"
      - "kafka4:192.168.2.186"

7.2 Write the docker-zookeeper2.yaml file:

version: '2'

services:

  zookeeper2:
    container_name: zookeeper2
    hostname: zookeeper2
    image: hyperledger/fabric-zookeeper
    restart: always
    environment:
      - ZOO_MY_ID=2
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    extra_hosts:
      - "zookeeper1:192.168.2.237"
      - "zookeeper2:192.168.2.131"
      - "zookeeper3:192.168.2.188"
      - "kafka1:192.168.2.182"
      - "kafka2:192.168.2.213"
      - "kafka3:192.168.2.137"
      - "kafka4:192.168.2.186"

7.3 Write the docker-zookeeper3.yaml file:

version: '2'

services:

  zookeeper3:
    container_name: zookeeper3
    hostname: zookeeper3
    image: hyperledger/fabric-zookeeper
    restart: always
    environment:
      - ZOO_MY_ID=3
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    extra_hosts:
      - "zookeeper1:192.168.2.237"
      - "zookeeper2:192.168.2.131"
      - "zookeeper3:192.168.2.188"
      - "kafka1:192.168.2.182"
      - "kafka2:192.168.2.213"
      - "kafka3:192.168.2.137"
      - "kafka4:192.168.2.186"
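
Three ZooKeeper nodes are used because the ensemble stays available only while a strict majority of servers is up. A minimal sketch of the arithmetic:

```python
def tolerated_failures(ensemble_size: int) -> int:
    """A ZooKeeper ensemble survives as long as a majority of servers are up."""
    return (ensemble_size - 1) // 2

# 3 servers tolerate 1 failure; 5 would tolerate 2; 4 still only tolerates 1,
# which is why odd-sized ensembles are the norm
assert tolerated_failures(3) == 1
assert tolerated_failures(4) == 1
assert tolerated_failures(5) == 2
```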

8. Kafka configuration

8.1 Write the docker-kafka1.yaml file:

version: '2'

services:

  kafka1:
    container_name: kafka1
    hostname: kafka1
    image: hyperledger/fabric-kafka
    restart: always
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      - KAFKA_MESSAGE_MAX_BYTES=103809024
      - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
    ports:
      - "9092:9092"
    extra_hosts:
      - "zookeeper1:192.168.2.237"
      - "zookeeper2:192.168.2.131"
      - "zookeeper3:192.168.2.188"
      - "kafka1:192.168.2.182"
      - "kafka2:192.168.2.213"
      - "kafka3:192.168.2.137"
      - "kafka4:192.168.2.186"

8.2 Write the docker-kafka2.yaml file:

version: '2'

services:

  kafka2:
    container_name: kafka2
    hostname: kafka2
    image: hyperledger/fabric-kafka
    restart: always
    environment:
      - KAFKA_BROKER_ID=2
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      - KAFKA_MESSAGE_MAX_BYTES=103809024
      - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
    ports:
      - "9092:9092"
    extra_hosts:
      - "zookeeper1:192.168.2.237"
      - "zookeeper2:192.168.2.131"
      - "zookeeper3:192.168.2.188"
      - "kafka1:192.168.2.182"
      - "kafka2:192.168.2.213"
      - "kafka3:192.168.2.137"
      - "kafka4:192.168.2.186"

8.3 Write the docker-kafka3.yaml file:

version: '2'

services:

  kafka3:
    container_name: kafka3
    hostname: kafka3
    image: hyperledger/fabric-kafka
    restart: always
    environment:
      - KAFKA_BROKER_ID=3
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      - KAFKA_MESSAGE_MAX_BYTES=103809024
      - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
    ports:
      - "9092:9092"
    extra_hosts:
      - "zookeeper1:192.168.2.237"
      - "zookeeper2:192.168.2.131"
      - "zookeeper3:192.168.2.188"
      - "kafka1:192.168.2.182"
      - "kafka2:192.168.2.213"
      - "kafka3:192.168.2.137"
      - "kafka4:192.168.2.186"

8.4 Write the docker-kafka4.yaml file:

version: '2'

services:

  kafka4:
    container_name: kafka4
    hostname: kafka4
    image: hyperledger/fabric-kafka
    restart: always
    environment:
      - KAFKA_BROKER_ID=4
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      - KAFKA_MESSAGE_MAX_BYTES=103809024
      - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
    ports:
      - "9092:9092"
    extra_hosts:
      - "zookeeper1:192.168.2.237"
      - "zookeeper2:192.168.2.131"
      - "zookeeper3:192.168.2.188"
      - "kafka1:192.168.2.182"
      - "kafka2:192.168.2.213"
      - "kafka3:192.168.2.137"
      - "kafka4:192.168.2.186"
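
With KAFKA_DEFAULT_REPLICATION_FACTOR=3 and KAFKA_MIN_INSYNC_REPLICAS=2, the channel's Kafka topic keeps accepting writes with one broker down, but not two. A sketch of that fault-tolerance arithmetic:

```python
def writes_accepted(replication_factor: int, min_insync_replicas: int,
                    failed_brokers: int) -> bool:
    # Writes succeed only while at least min.insync.replicas replicas survive
    return replication_factor - failed_brokers >= min_insync_replicas

# RF=3, min ISR=2 (the values used in the compose files above)
assert writes_accepted(3, 2, 0)
assert writes_accepted(3, 2, 1)      # one broker down: still writable
assert not writes_accepted(3, 2, 2)  # two brokers down: writes blocked
```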

9. Orderer configuration

9.1 Write docker-orderer0.yaml:

version: '2'

services:

  orderer0.example.com:
    container_name: orderer0.example.com
    image: hyperledger/fabric-orderer
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
      # - ORDERER_GENERAL_LOGLEVEL=error
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      #- ORDERER_GENERAL_GENESISPROFILE=AntiMothOrdererGenesis
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      #- ORDERER_GENERAL_LEDGERTYPE=ram
      #- ORDERER_GENERAL_LEDGERTYPE=file
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=false
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]

      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=TRUE
      - ORDERER_KAFKA_BROKERS=[192.168.2.182:9092,192.168.2.213:9092,192.168.2.137:9092,192.168.2.186:9092]      
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
    - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
    - ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/msp:/var/hyperledger/orderer/msp
    - ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/tls/:/var/hyperledger/orderer/tls
    networks:
      default:
        aliases:
          - example
    ports:
      - 7050:7050
    extra_hosts:
      - "kafka1:192.168.2.182"
      - "kafka2:192.168.2.213"
      - "kafka3:192.168.2.137"
      - "kafka4:192.168.2.186"

9.2 Write docker-orderer1.yaml:

version: '2'

services:

  orderer1.example.com:
    container_name: orderer1.example.com
    image: hyperledger/fabric-orderer
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
      # - ORDERER_GENERAL_LOGLEVEL=error
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      #- ORDERER_GENERAL_GENESISPROFILE=AntiMothOrdererGenesis
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      #- ORDERER_GENERAL_LEDGERTYPE=ram
      #- ORDERER_GENERAL_LEDGERTYPE=file
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=false
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]

      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=TRUE
      - ORDERER_KAFKA_BROKERS=[192.168.2.182:9092,192.168.2.213:9092,192.168.2.137:9092,192.168.2.186:9092]      
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
    - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
    - ./crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/msp:/var/hyperledger/orderer/msp
    - ./crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/tls/:/var/hyperledger/orderer/tls
    networks:
      default:
        aliases:
          - example
    ports:
      - 7050:7050
    extra_hosts:
      - "kafka1:192.168.2.182"
      - "kafka2:192.168.2.213"
      - "kafka3:192.168.2.137"
      - "kafka4:192.168.2.186"

9.3 Write docker-orderer2.yaml:

version: '2'

services:

  orderer2.example.com:
    container_name: orderer2.example.com
    image: hyperledger/fabric-orderer
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
      # - ORDERER_GENERAL_LOGLEVEL=error
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      #- ORDERER_GENERAL_GENESISPROFILE=AntiMothOrdererGenesis
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      #- ORDERER_GENERAL_LEDGERTYPE=ram
      #- ORDERER_GENERAL_LEDGERTYPE=file
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=false
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]

      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=TRUE
      - ORDERER_KAFKA_BROKERS=[192.168.2.182:9092,192.168.2.213:9092,192.168.2.137:9092,192.168.2.186:9092]      
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
    - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
    - ./crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/msp:/var/hyperledger/orderer/msp
    - ./crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/tls/:/var/hyperledger/orderer/tls
    networks:
      default:
        aliases:
          - example
    ports:
      - 7050:7050
    extra_hosts:
      - "kafka1:192.168.2.182"
      - "kafka2:192.168.2.213"
      - "kafka3:192.168.2.137"
      - "kafka4:192.168.2.186"

10. Start the cluster. The startup order is: ZooKeeper cluster, then Kafka cluster, then the orderer cluster.

10.1 Start the ZooKeeper cluster:

Upload docker-zookeeper1.yaml to the /opt/gopath/src/github.com/hyperledger/fabric/aberic directory on 192.168.2.237.

Run the following command to bring it up:

docker-compose -f docker-zookeeper1.yaml up

At this point it reports connection-refused errors, because zookeeper2 and zookeeper3 are not running yet:

Start zookeeper2 (192.168.2.131) and zookeeper3 (192.168.2.188) the same way, then look back at zookeeper1's log:

It shows that zookeeper1 is now communicating with zookeeper2 and zookeeper3 and has set itself to FOLLOWING.

zookeeper3 behaves like zookeeper1 and ends up FOLLOWING:

zookeeper2's log, however, reads:

Through the election, zookeeper2 became the new leader.

10.2 On server 192.168.2.131 (where zookeeper2 runs), enter the ZooKeeper container:

docker exec -it zookeeper2 bash

Then change into the bin directory and run the following command:

zkServer.sh status
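
For the leader node, the status output should look roughly like this (the exact config path depends on the fabric-zookeeper image):

```
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: leader
```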

It shows the path of the ZooKeeper configuration and Mode: leader, matching the logs observed at startup: ZooKeeper is up and healthy.

11. Start the Kafka cluster

Upload docker-kafka1.yaml, docker-kafka2.yaml, docker-kafka3.yaml, and docker-kafka4.yaml to the /opt/gopath/src/github.com/hyperledger/fabric/aberic directory on servers 192.168.2.182, 192.168.2.213, 192.168.2.137, and 192.168.2.186 respectively.

On server 192.168.2.182, start docker-kafka1.yaml:

docker-compose -f docker-kafka1.yaml up

The log shows that Kafka has initialized and started; next, bring up the Kafka instances on 192.168.2.213, 192.168.2.137, and 192.168.2.186 in turn.

Whenever a single Kafka node starts, the ZooKeeper leader's log records the registration; for example, starting kafka1 produces log output like the following on the zookeeper2 server:

12. Start the orderer nodes

12.1 Upload docker-orderer0.yaml, docker-orderer1.yaml, and docker-orderer2.yaml to the /opt/gopath/src/github.com/hyperledger/fabric/aberic directory on servers 192.168.2.238, 192.168.2.210, and 192.168.2.235 respectively.

12.2 Upload the genesis.block file generated earlier on server 192.168.2.238 to each server's /opt/gopath/src/github.com/hyperledger/fabric/aberic/channel-artifacts directory; if the directory does not exist, create it by hand.

12.3 Upload the crypto-config.yaml configuration file to each server's /opt/gopath/src/github.com/hyperledger/fabric/aberic directory.

12.4 Upload the ordererOrganizations folder under crypto-config to each server's /opt/gopath/src/github.com/hyperledger/fabric/aberic/crypto-config directory; create the directory by hand if it does not exist.

Note: the /opt/gopath/src/github.com/hyperledger/fabric/aberic/crypto-config/ordererOrganizations/example.com/orderers directory contains three folders; keep only the one matching that server. For example, the orderer1 server only needs orderer1.example.com.

After uploading, run tree to confirm the directory structure.

12.5 Start the docker-orderer yaml file on each of the three orderer servers:

docker-compose -f docker-orderer0.yaml up

docker-compose -f docker-orderer1.yaml up -d

docker-compose -f docker-orderer2.yaml up -d

13. Cluster environment test

13.1 Write the docker-peer0org1.yaml file:

version: '2'

services:

  couchdb:
    container_name: couchdb
    image: hyperledger/fabric-couchdb
    # Comment/Uncomment the port mapping if you want to hide/expose the CouchDB service,
    # for example map it to utilize Fauxton User Interface in dev environments.
    ports:
      - "5984:5984"

  ca:
    container_name: ca
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca
      - FABRIC_CA_SERVER_TLS_ENABLED=false
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/5e9cad71528b55c1e42b4c1e44bb656aae91b11a419ea146b26be21359cfa159_sk
    ports:
      - "7054:7054"
    command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/5e9cad71528b55c1e42b4c1e44bb656aae91b11a419ea146b26be21359cfa159_sk -b admin:adminpw -d'
    volumes:
      - ./crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config

  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    image: hyperledger/fabric-peer
    environment:
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984

      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_PEER_NETWORKID=aberic
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_CHAINCODELISTENADDRESS=peer0.org1.example.com:7052
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP

      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # the following setting starts chaincode containers on the same
      # bridge network as the peers
      # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
      # - CORE_LOGGING_LEVEL=ERROR
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true
      - CORE_PEER_GOSSIP_USELEADERELECTION=true
      - CORE_PEER_GOSSIP_ORGLEADER=false
      - CORE_PEER_PROFILE_ENABLED=false
      - CORE_PEER_TLS_ENABLED=false
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    volumes:
        - /var/run/:/host/var/run/
        - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
        - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: peer node start
    ports:
      - 7051:7051
      - 7052:7052
      - 7053:7053
    depends_on:
      - couchdb
    networks:
      default:
        aliases:
          - example
    extra_hosts:
      - "orderer0.example.com:192.168.2.238"
      - "orderer1.example.com:192.168.2.210"
      - "orderer2.example.com:192.168.2.235"

  cli:
    container_name: cli
    image: hyperledger/fabric-tools
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # - CORE_LOGGING_LEVEL=ERROR
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_TLS_ENABLED=false
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    volumes:
        - /var/run/:/host/var/run/
        - ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/aberic/chaincode/go
        - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
        - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - peer0.org1.example.com
    extra_hosts:
      - "orderer0.example.com:192.168.2.238"
      - "orderer1.example.com:192.168.2.210"
      - "orderer2.example.com:192.168.2.235"
      - "peer0.org1.example.com:192.168.2.118"

13.2 Upload the finished docker-peer0org1.yaml file to the /opt/gopath/src/github.com/hyperledger/fabric/aberic directory on server 192.168.2.118.

13.3 Upload the mychannel.tx channel file generated earlier to the /opt/gopath/src/github.com/hyperledger/fabric/aberic/channel-artifacts directory on server 192.168.2.118.

13.4 Upload the peerOrganizations material generated earlier to the /opt/gopath/src/github.com/hyperledger/fabric/aberic/crypto-config folder on server 192.168.2.118; only the Org1-related material is needed.

13.5 In the aberic directory on server 192.168.2.118, start the peer node services:

 docker-compose -f docker-peer0org1.yaml up -d

13.6 Run docker ps -a to check that the containers started successfully:

13.7 Enter the CLI client and create the channel:

docker exec -it cli bash

peer channel create -o orderer0.example.com:7050 -c mychannel -t 50 -f ./channel-artifacts/mychannel.tx

13.8 Join the channel:

peer channel join -b mychannel.block

13.9 Install the chaincode:

peer chaincode install -n mycc -p github.com/hyperledger/fabric/aberic/chaincode/go/chaincode_example02 -v 1.0

13.10 Instantiate it:

peer chaincode instantiate -o orderer0.example.com:7050 -C mychannel -n mycc -c '{"Args":["init","A","10","B","10"]}' -P "OR ('Org1MSP.member','Org2MSP.member')" -v 1.0

Note:

    List installed chaincode: peer chaincode list --installed

    List instantiated chaincode: peer chaincode list --instantiated -C mychannel

13.11 Query the asset value of A:

peer chaincode query -C mychannel -n mycc -c '{"Args":["query","A"]}'

If the value comes back, the deployment succeeded.
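
As background for what the query returns: chaincode_example02 keeps two account balances and moves amounts between them on invoke. A minimal Python sketch of that ledger logic (an illustration, not the actual Go chaincode):

```python
state = {}

def init(a, aval, b, bval):
    # instantiate Args ["init", "A", "10", "B", "10"] seed the two balances
    state[a], state[b] = int(aval), int(bval)

def invoke(src, dst, amount):
    # move `amount` from src to dst, as example02's invoke does
    state[src] -= int(amount)
    state[dst] += int(amount)

def query(key):
    return str(state[key])

init("A", "10", "B", "10")           # matches the instantiate Args above
assert query("A") == "10"            # what the query in step 13.11 should return
invoke("A", "B", "5")
assert query("A") == "5" and query("B") == "15"
```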

14. foo27.org2 node test

14.1 Write docker-foo27org2.yaml:

version: '2'

services:

  couchdb:
    container_name: couchdb
    image: hyperledger/fabric-couchdb
    # Comment/Uncomment the port mapping if you want to hide/expose the CouchDB service,
    # for example map it to utilize Fauxton User Interface in dev environments.
    ports:
      - "5984:5984"

  ca:
    container_name: ca
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca
      - FABRIC_CA_SERVER_TLS_ENABLED=false
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org2.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/73281a53ab19100240ebc4633e8b489514c0fef921a3a2bc5ba348c595fc6765_sk
    ports:
      - "7054:7054"
    command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.org2.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/73281a53ab19100240ebc4633e8b489514c0fef921a3a2bc5ba348c595fc6765_sk -b admin:adminpw -d'
    volumes:
      - ./crypto-config/peerOrganizations/org2.example.com/ca/:/etc/hyperledger/fabric-ca-server-config

  foo27.org2.example.com:
    container_name: foo27.org2.example.com
    image: hyperledger/fabric-peer
    environment:
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=192.168.2.221:5984

      - CORE_PEER_ID=foo27.org2.example.com
      - CORE_PEER_NETWORKID=aberic
      - CORE_PEER_ADDRESS=foo27.org2.example.com:7051
      - CORE_PEER_CHAINCODEADDRESS=foo27.org2.example.com:7052
      - CORE_PEER_CHAINCODELISTENADDRESS=foo27.org2.example.com:7052
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=foo27.org2.example.com:7051
      - CORE_PEER_LOCALMSPID=Org2MSP

      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # the following setting starts chaincode containers on the same
      # bridge network as the peers
      # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
      - CORE_VM_DOCKER_TLS_ENABLED=false
      # - CORE_LOGGING_LEVEL=ERROR
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true
      - CORE_PEER_GOSSIP_USELEADERELECTION=true
      - CORE_PEER_GOSSIP_ORGLEADER=false
      - CORE_PEER_PROFILE_ENABLED=false
      - CORE_PEER_TLS_ENABLED=false
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    volumes:
        - /var/run/:/host/var/run/
        - ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/aberic/chaincode/go
        - ./crypto-config/peerOrganizations/org2.example.com/peers/foo27.org2.example.com/msp:/etc/hyperledger/fabric/msp
        - ./crypto-config/peerOrganizations/org2.example.com/peers/foo27.org2.example.com/tls:/etc/hyperledger/fabric/tls
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: peer node start
    ports:
      - 7051:7051
      - 7052:7052
      - 7053:7053
    depends_on:
      - couchdb
    networks:
      default:
        aliases:
          - aberic
    extra_hosts:
      - "orderer0.example.com:192.168.2.238"
      - "orderer1.example.com:192.168.2.210"
      - "orderer2.example.com:192.168.2.235"

  cli:
    container_name: cli
    image: hyperledger/fabric-tools
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # - CORE_LOGGING_LEVEL=ERROR
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=foo27.org2.example.com:7051
      - CORE_PEER_LOCALMSPID=Org2MSP
      - CORE_PEER_TLS_ENABLED=false
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/foo27.org2.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/foo27.org2.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/foo27.org2.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    volumes:
        - /var/run/:/host/var/run/
        - ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/aberic/chaincode/go
        - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
        - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - foo27.org2.example.com
    extra_hosts:
      - "orderer0.example.com:192.168.2.238"
      - "orderer1.example.com:192.168.2.210"
      - "orderer2.example.com:192.168.2.235"
      - "foo27.org2.example.com:192.168.2.221"

14.2 Upload the docker-foo27org2.yaml file to the aberic directory on the 192.168.2.221 server and start it:

docker-compose -f docker-foo27org2.yaml up -d

14.3 Copy mychannel.block from the /opt/gopath/src/github.com/hyperledger/fabric/aberic/channel-artifacts directory on the 192.168.2.118 server to the same directory on 192.168.2.221, then run the following command to copy it into the cli container:

docker cp /opt/gopath/src/github.com/hyperledger/fabric/aberic/channel-artifacts/mychannel.block 4f3d4a373e0c:/opt/gopath/src/github.com/hyperledger/fabric/peer/

Here 4f3d4a373e0c is the id of the cli container.
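Since the compose file sets container_name: cli, looking up the container id can be skipped and the name used directly:

```shell
# docker cp accepts the container name as well as its id
docker cp /opt/gopath/src/github.com/hyperledger/fabric/aberic/channel-artifacts/mychannel.block \
  cli:/opt/gopath/src/github.com/hyperledger/fabric/peer/
```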

14.4 Enter the client: docker exec -it cli bash

14.5 Run ls to confirm that mychannel.block is present.

14.6 Join the channel: peer channel join -b mychannel.block

14.7 Install the chaincode:

peer chaincode install -n mycc -p github.com/hyperledger/fabric/aberic/chaincode/go/chaincode_example02 -v 1.0

14.8 Query the value of account A:

peer chaincode query -C mychannel -n mycc -c '{"Args":["query","A"]}'