
Consul 4: Installing Consul on Linux and Building a Cluster

          The previous articles in this series covered installing Consul on Windows and its basic use as a service registry and configuration center. Building on that, this article covers downloading and installing Consul on Linux, then building a Consul cluster with Docker, which solves the problem of Consul configuration data not being persisted.


1. Download the Linux Version of Consul

Download the Linux zip package of Consul from the official download site.

2. Extract and Install

     Copy the downloaded Linux Consul package to the Linux machine and extract it with unzip:

If the unzip command is not available, install it with yum install unzip.

1. After extraction, copy the extracted consul binary to the /usr/local/consul directory.

2. Configure the environment variables:

vi /etc/profile

The configuration is as follows:

export JAVA_HOME=/usr/local/jdk1.8.0_172
export MAVEN_HOME=/usr/local/apache-maven-3.5.4
export CONSUL_HOME=/usr/local/consul

export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$MAVEN_HOME/bin:$CONSUL_HOME:$PATH

CONSUL_HOME above is the Consul installation path; the configuration above is for reference only.

After editing, save and quit, then make the configuration take effect with:

source /etc/profile

   With this configuration in place, the consul command can be used from anywhere.

3. Verify the Installation

1. Check the installed Consul version:

[root@iZbp1dmlbagds9s70r8luxZ local]# consul -v
Consul v1.2.1
Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)
[root@iZbp1dmlbagds9s70r8luxZ local]#

2. Start Consul in development mode:

[root@iZbp1dmlbagds9s70r8luxZ local]# consul agent -dev
==> Starting Consul agent...
==> Consul agent running!
           Version: 'v1.2.1'
           Node ID: '344af5b1-8914-41d6-f7b2-3143d025f493'
         Node name: 'iZbp1dmlbagds9s70r8luxZ'
        Datacenter: 'dc1' (Segment: '<all>')
            Server: true (Bootstrap: false)
       Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, DNS: 8600)
      Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false

==> Log data will now stream in as it occurs:

    2018/07/28 09:57:02 [DEBUG] agent: Using random ID "344af5b1-8914-41d6-f7b2-3143d025f493" as node ID
    2018/07/28 09:57:02 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:344af5b1-8914-41d6-f7b2-3143d025f493 Address:127.0.0.1:8300}]
    2018/07/28 09:57:02 [INFO] serf: EventMemberJoin: iZbp1dmlbagds9s70r8luxZ.dc1 127.0.0.1
    2018/07/28 09:57:02 [INFO] serf: EventMemberJoin: iZbp1dmlbagds9s70r8luxZ 127.0.0.1
    2018/07/28 09:57:02 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
    2018/07/28 09:57:02 [INFO] raft: Node at 127.0.0.1:8300 [Follower] entering Follower state (Leader: "")
    2018/07/28 09:57:02 [INFO] consul: Adding LAN server iZbp1dmlbagds9s70r8luxZ (Addr: tcp/127.0.0.1:8300) (DC: dc1)
    2018/07/28 09:57:02 [INFO] consul: Handled member-join event for server "iZbp1dmlbagds9s70r8luxZ.dc1" in area "wan"
    2018/07/28 09:57:02 [DEBUG] agent/proxy: managed Connect proxy manager started
    2018/07/28 09:57:02 [WARN] agent/proxy: running as root, will not start managed proxies
    2018/07/28 09:57:02 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
    2018/07/28 09:57:02 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
    2018/07/28 09:57:02 [INFO] agent: started state syncer
    2018/07/28 09:57:02 [WARN] raft: Heartbeat timeout from "" reached, starting election
    2018/07/28 09:57:02 [INFO] raft: Node at 127.0.0.1:8300 [Candidate] entering Candidate state in term 2
    2018/07/28 09:57:02 [DEBUG] raft: Votes needed: 1
    2018/07/28 09:57:02 [DEBUG] raft: Vote granted from 344af5b1-8914-41d6-f7b2-3143d025f493 in term 2. Tally: 1
    2018/07/28 09:57:02 [INFO] raft: Election won. Tally: 1
    2018/07/28 09:57:02 [INFO] raft: Node at 127.0.0.1:8300 [Leader] entering Leader state
    2018/07/28 09:57:02 [INFO] consul: cluster leadership acquired
    2018/07/28 09:57:02 [INFO] consul: New leader elected: iZbp1dmlbagds9s70r8luxZ
    2018/07/28 09:57:02 [INFO] connect: initialized CA with provider "consul"
    2018/07/28 09:57:02 [DEBUG] consul: Skipping self join check for "iZbp1dmlbagds9s70r8luxZ" since the cluster is too small
    2018/07/28 09:57:02 [INFO] consul: member 'iZbp1dmlbagds9s70r8luxZ' joined, marking health alive
    2018/07/28 09:57:02 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
    2018/07/28 09:57:02 [INFO] agent: Synced node info
    2018/07/28 09:57:04 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
    2018/07/28 09:57:04 [DEBUG] agent: Node info in sync
    2018/07/28 09:57:04 [DEBUG] agent: Node info in sync

    As the output shows, the HTTP port is 8500, the DNS port is 8600, and the bound local IP is 127.0.0.1. If Consul needs to be accessible from outside, ports 8500 and 8600 must be opened, for example:

# Check whether the port is already open (CentOS 7 firewalld)
firewall-cmd --permanent --query-port=8500/tcp
# Open the port to external access
firewall-cmd --permanent --add-port=8500/tcp

# Reload the firewall to apply the change
firewall-cmd --reload

4. Consul Options and Concepts


Common consul agent options:

-data-dir:
Purpose: specifies the directory where the agent stores its state. Required for every agent, and especially important for servers, which must persist the cluster state.

-config-dir:
Purpose: specifies the directory containing service definitions and health-check definitions. The files are JSON; by convention the directory is named consul.d. See the official documentation for details.

-config-file:
Purpose: specifies a single configuration file to load.

-dev:
Purpose: development mode. The agent runs as a server, but nothing is written to disk, so no data is persisted; never use it in production.

-bootstrap-expect:
Purpose: the minimum number of server nodes required before a leader election starts. Set to 1, a single node can elect itself; set to 3, the election waits until three servers are running Consul and have joined, and only after the election does the cluster become operational. 3 to 5 server nodes are generally recommended.

-node:
Purpose: the node's name in the cluster, which must be unique (defaults to the machine's hostname); the machine's IP is often used directly.

-bind:
Purpose: the address the node binds to, usually 0.0.0.0 or the cloud server's internal address; an Alibaba Cloud public address cannot be bound. This is the address Consul listens on, and it must be reachable by every other node in the cluster. A bind address is not strictly required, but it is best to provide one.

-server:
Purpose: runs the node as a server; 3 to 5 servers per datacenter (DC) are recommended.

-client:
Purpose: sets the bind address for the client interfaces: HTTP, DNS, and RPC. Defaults to 127.0.0.1, which only allows loopback access.

-datacenter:
Purpose: the datacenter the node joins. (Older versions used -dc, which is no longer valid.)
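
As an illustration of -config-dir, a minimal service definition might look like the following. The service name "web", port 8080, and health-check URL are illustrative assumptions, not taken from the text:

```shell
# Create a config directory and drop a hypothetical service definition into it.
mkdir -p /tmp/consul.d
cat > /tmp/consul.d/web.json <<'EOF'
{
  "service": {
    "name": "web",
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s"
    }
  }
}
EOF
# The agent would then load it with:
#   consul agent -dev -config-dir=/tmp/consul.d
```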

Consul concepts:

Agent: a long-running daemon in a Consul cluster, started with consul agent. It can run in either client or server mode, exposes DNS and HTTP interfaces, and is responsible for running checks and keeping services in sync.
Client: stateless; forwards interface requests to the server cluster on the LAN at very low cost.
Server: stores configuration data, forms a highly available cluster, talks to local clients over the LAN and to other datacenters over the WAN. 3 or 5 servers per datacenter are recommended.
Datacenter: multiple datacenters working together keep stored data safe and quickly accessible.
Consensus: the consensus protocol used is Raft.
RPC: remote procedure call.
Gossip: a gossip protocol built on Serf, responsible for membership, failure detection, event broadcast, etc. Node-to-node messages travel over UDP, with separate LAN and WAN gossip pools.

1. Option example

The agent started earlier with consul agent -dev cannot be accessed from outside a cloud server. To make it externally accessible, add options as follows:

consul agent -dev -http-port 8500 -client 0.0.0.0

Option explanation:

-client 0.0.0.0: binds the client interfaces to 0.0.0.0 rather than the default 127.0.0.1, so they can be accessed over the public network.

-http-port 8500: changes the HTTP port that Consul listens on.

2. View the cluster members:

[root@iZbp1dmlbagds9s70r8luxZ local]# consul members
Node                     Address         Status  Type    Build  Protocol  DC   Segment
iZbp1dmlbagds9s70r8luxZ  127.0.0.1:8301  alive   server  1.2.1  2         dc1  <all>
[root@iZbp1dmlbagds9s70r8luxZ local]#

Node: the node name
Address: the node address
Status: alive means the node is healthy
Type: server means the node is running in server mode
DC: dc1 means the node belongs to datacenter dc1
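
Output in this format is easy to script against. A small sketch that pulls the Status column out of a captured consul members data line (the sample line is copied from the output above):

```shell
# Extract the Status column (3rd whitespace-separated field) from a
# `consul members` data line, e.g. for a simple health-monitoring script.
line='iZbp1dmlbagds9s70r8luxZ  127.0.0.1:8301  alive   server  1.2.1  2  dc1  <all>'
status=$(echo "$line" | awk '{print $3}')
echo "$status"   # prints: alive
```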

3. Start Consul in server mode

          So far Consul has been started in -dev development mode. When Consul in that mode is used as a configuration center, the configuration data cannot be saved: nothing is persisted. For persistence, start Consul in server mode:

[root@iZbp1dmlbagds9s70r8luxZ local]# consul agent -server -ui -bootstrap-expect=1 -data-dir=/usr/local/consul/data -node=agent-one -advertise=47.98.112.71 -bind=0.0.0.0 -client=0.0.0.0
BootstrapExpect is set to 1; this is the same as Bootstrap mode.
bootstrap = true: do not enable unless necessary
==> Starting Consul agent...
==> Consul agent running!
           Version: 'v1.2.1'
           Node ID: '04b82369-8b5b-19f3-ab0d-6a82266a2110'
         Node name: 'agent-one'
        Datacenter: 'dc1' (Segment: '<all>')
            Server: true (Bootstrap: true)
       Client Addr: [0.0.0.0] (HTTP: 8500, HTTPS: -1, DNS: 8600)
      Cluster Addr: 47.98.112.71 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false

==> Log data will now stream in as it occurs:

    2018/07/28 10:54:02 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:04b82369-8b5b-19f3-ab0d-6a82266a2110 Address:47.98.112.71:8300}]
    2018/07/28 10:54:02 [WARN] memberlist: Binding to public address without encryption!
    2018/07/28 10:54:02 [INFO] serf: EventMemberJoin: agent-one.dc1 47.98.112.71
    2018/07/28 10:54:02 [WARN] memberlist: Binding to public address without encryption!
    2018/07/28 10:54:02 [INFO] serf: EventMemberJoin: agent-one 47.98.112.71
    2018/07/28 10:54:02 [INFO] agent: Started DNS server 0.0.0.0:8600 (udp)
    2018/07/28 10:54:02 [INFO] raft: Node at 47.98.112.71:8300 [Follower] entering Follower state (Leader: "")
    2018/07/28 10:54:02 [INFO] consul: Adding LAN server agent-one (Addr: tcp/47.98.112.71:8300) (DC: dc1)
    2018/07/28 10:54:02 [INFO] consul: Handled member-join event for server "agent-one.dc1" in area "wan"
    2018/07/28 10:54:02 [WARN] agent/proxy: running as root, will not start managed proxies
    2018/07/28 10:54:02 [INFO] agent: Started DNS server 0.0.0.0:8600 (tcp)
    2018/07/28 10:54:02 [INFO] agent: Started HTTP server on [::]:8500 (tcp)
    2018/07/28 10:54:02 [INFO] agent: started state syncer
    2018/07/28 10:54:08 [WARN] raft: Heartbeat timeout from "" reached, starting election
    2018/07/28 10:54:08 [INFO] raft: Node at 47.98.112.71:8300 [Candidate] entering Candidate state in term 2
    2018/07/28 10:54:08 [INFO] raft: Election won. Tally: 1
    2018/07/28 10:54:08 [INFO] raft: Node at 47.98.112.71:8300 [Leader] entering Leader state
    2018/07/28 10:54:08 [INFO] consul: cluster leadership acquired
    2018/07/28 10:54:08 [INFO] consul: New leader elected: agent-one
    2018/07/28 10:54:08 [INFO] consul: member 'agent-one' joined, marking health alive
    2018/07/28 10:54:08 [INFO] agent: Synced node info
    2018/07/28 10:54:11 [WARN] consul: error getting server health from "agent-one": context deadline exceeded
    2018/07/28 10:54:12 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:14 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    2018/07/28 10:54:16 [WARN] consul: error getting server health from "agent-one": last request still outstanding
    ... (the same warnings repeat every few seconds until the agent is stopped: the agent cannot reach its advertised address 47.98.112.71:8300)
^C    2018/07/28 10:55:10 [INFO] agent: Caught signal:  interrupt
    2018/07/28 10:55:10 [INFO] agent: Graceful shutdown disabled. Exiting
    2018/07/28 10:55:10 [INFO] agent: Requesting shutdown
    2018/07/28 10:55:10 [INFO] consul: shutting down server
    2018/07/28 10:55:10 [WARN] serf: Shutdown without a Leave
    2018/07/28 10:55:10 [WARN] serf: Shutdown without a Leave
    2018/07/28 10:55:10 [INFO] manager: shutting down
    2018/07/28 10:55:10 [INFO] agent: consul server down
    2018/07/28 10:55:10 [INFO] agent: shutdown complete
    2018/07/28 10:55:10 [INFO] agent: Stopping DNS server 0.0.0.0:8600 (tcp)
    2018/07/28 10:55:10 [INFO] agent: Stopping DNS server 0.0.0.0:8600 (udp)
    2018/07/28 10:55:10 [INFO] agent: Stopping HTTP server [::]:8500 (tcp)
    2018/07/28 10:55:11 [WARN] agent: Timeout stopping HTTP server [::]:8500 (tcp)
    2018/07/28 10:55:11 [INFO] agent: Waiting for endpoints to shut down
    2018/07/28 10:55:11 [INFO] agent: Endpoints down
    2018/07/28 10:55:11 [INFO] agent: Exit code: 1
[root@iZbp1dmlbagds9s70r8luxZ local]#
[root@iZbp1dmlbagds9s70r8luxZ local]# consul agent -server -ui -bootstrap-expect=1 -data-dir=/usr/local/consul/data -node=agent-one -advertise=47.98.112.71 -bind=0.0.0.0 -client=0.0.0.0
BootstrapExpect is set to 1; this is the same as Bootstrap mode.
bootstrap = true: do not enable unless necessary
==> Starting Consul agent...
==> Consul agent running!
           Version: 'v1.2.1'
           Node ID: '04b82369-8b5b-19f3-ab0d-6a82266a2110'
         Node name: 'agent-one'
        Datacenter: 'dc1' (Segment: '<all>')
            Server: true (Bootstrap: true)
       Client Addr: [0.0.0.0] (HTTP: 8500, HTTPS: -1, DNS: 8600)
      Cluster Addr: 47.98.112.71 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false

==> Log data will now stream in as it occurs:

    2018/07/28 10:55:15 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:04b82369-8b5b-19f3-ab0d-6a82266a2110 Address:47.98.112.71:8300}]
    2018/07/28 10:55:15 [WARN] memberlist: Binding to public address without encryption!
    2018/07/28 10:55:15 [INFO] serf: EventMemberJoin: agent-one.dc1 47.98.112.71
    2018/07/28 10:55:15 [WARN] memberlist: Binding to public address without encryption!
    2018/07/28 10:55:15 [INFO] serf: EventMemberJoin: agent-one 47.98.112.71
    2018/07/28 10:55:15 [INFO] agent: Started DNS server 0.0.0.0:8600 (udp)
    2018/07/28 10:55:15 [INFO] raft: Node at 47.98.112.71:8300 [Follower] entering Follower state (Leader: "")
    2018/07/28 10:55:15 [WARN] serf: Failed to re-join any previously known node
    2018/07/28 10:55:15 [WARN] serf: Failed to re-join any previously known node
    2018/07/28 10:55:15 [INFO] consul: Adding LAN server agent-one (Addr: tcp/47.98.112.71:8300) (DC: dc1)
    2018/07/28 10:55:15 [INFO] consul: Handled member-join event for server "agent-one.dc1" in area "wan"
    2018/07/28 10:55:15 [WARN] agent/proxy: running as root, will not start managed proxies
    2018/07/28 10:55:15 [INFO] agent: Started DNS server 0.0.0.0:8600 (tcp)
    2018/07/28 10:55:15 [INFO] agent: Started HTTP server on [::]:8500 (tcp)
    2018/07/28 10:55:15 [INFO] agent: started state syncer
    2018/07/28 10:55:21 [WARN] raft: Heartbeat timeout from "" reached, starting election
    2018/07/28 10:55:21 [INFO] raft: Node at 47.98.112.71:8300 [Candidate] entering Candidate state in term 3
    2018/07/28 10:55:21 [INFO] raft: Election won. Tally: 1
    2018/07/28 10:55:21 [INFO] raft: Node at 47.98.112.71:8300 [Leader] entering Leader state
    2018/07/28 10:55:21 [INFO] consul: cluster leadership acquired
    2018/07/28 10:55:21 [INFO] consul: New leader elected: agent-one
    2018/07/28 10:55:21 [INFO] agent: Synced node info

Startup command:

consul agent -server -ui -bootstrap-expect=1 -data-dir=/usr/local/consul/data -node=agent-one -advertise=47.98.112.71 -bind=0.0.0.0 -client=0.0.0.0

Option explanation:

-server: run in server mode
-ui: enable the built-in web UI
-bootstrap-expect=1: a single server is enough to elect the cluster leader
-data-dir: the directory where Consul persists its state
-node: the node name
-advertise: the address advertised to other nodes (here the server's public IP)
-client: the bind address for the client interfaces, i.e. which IPs may access this node

The above is the output when starting in server mode. Note that port 8300 must be open as well: Consul uses it for node-to-node communication, and the repeated server-health warnings in the first run were caused by 8300 being unreachable.
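
To keep a server-mode agent running across reboots, it can be wrapped in a service manager. A minimal systemd unit sketch; the unit name and binary location are assumptions, and the flags mirror the startup command above:

```shell
# Write a hypothetical systemd unit for the server-mode agent.
# Assumes the consul binary lives in /usr/local/consul, as configured earlier.
cat > /tmp/consul.service <<'EOF'
[Unit]
Description=Consul server agent
After=network-online.target

[Service]
ExecStart=/usr/local/consul/consul agent -server -ui -bootstrap-expect=1 \
  -data-dir=/usr/local/consul/data -node=agent-one \
  -advertise=47.98.112.71 -bind=0.0.0.0 -client=0.0.0.0
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
# To install:
#   cp /tmp/consul.service /etc/systemd/system/ && systemctl enable --now consul
```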

Check the member list again:

[root@iZbp1dmlbagds9s70r8luxZ data]# consul members
Node       Address            Status  Type    Build  Protocol  DC   Segment
agent-one  47.98.112.71:8301  alive   server  1.2.1  2         dc1  <all>
[root@iZbp1dmlbagds9s70r8luxZ data]#

4. Join a cluster

    The command for another node to join the cluster: consul join xx.xx.xx.xx (the address of a node already in the cluster)

5. Building a Consul Cluster

    Use Docker containers to build the Consul cluster, described by a Docker Compose file.

Cluster layout:
1. Three server nodes (consul-server1 ~ 3) and two client nodes (consul-node1 ~ 2).
2. The local consul/data1 ~ 3/ directories are mapped into the Docker containers, so data is not lost when the cluster restarts.
3. The Consul web HTTP ports are published on 8501, 8502, and 8503.

Create docker-compose.yml:

version: '2.0'
services:
  consul-server1:
    image: consul:latest
    hostname: "consul-server1"
    ports:
      - "8501:8500"
    volumes:
      - ./consul/data1:/consul/data
    command: "agent -server -bootstrap-expect 3 -ui -disable-host-node-id -client 0.0.0.0"
  consul-server2:
    image: consul:latest
    hostname: "consul-server2"
    ports:
      - "8502:8500"
    volumes:
      - ./consul/data2:/consul/data
    command: "agent -server -ui -join consul-server1 -disable-host-node-id -client 0.0.0.0"
    depends_on: 
      - consul-server1
  consul-server3:
    image: consul:latest
    hostname: "consul-server3"
    ports:
      - "8503:8500"
    volumes:
      - ./consul/data3:/consul/data
    command: "agent -server -ui -join consul-server1 -disable-host-node-id -client 0.0.0.0"
    depends_on:
      - consul-server1
  consul-node1:
    image: consul:latest
    hostname: "consul-node1"
    command: "agent -join consul-server1 -disable-host-node-id"
    depends_on:
      - consul-server1
  consul-node2:
    image: consul:latest
    hostname: "consul-node2"
    command: "agent -join consul-server1 -disable-host-node-id"
    depends_on:
      - consul-server1
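
Before the first start, it can help to pre-create the host directories referenced by the volumes entries above, so the bind mounts exist with known ownership (a sketch; the relative paths match the compose file):

```shell
# Pre-create the host data directories that the compose file bind-mounts.
# Run from the directory containing docker-compose.yml.
mkdir -p ./consul/data1 ./consul/data2 ./consul/data3
ls ./consul
```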

When the cluster starts, consul-server1 acts as the initial leader, and server2 ~ 3 and node1 ~ 2 join the cluster. If server1 fails and goes offline, server2 ~ 3 hold an election to choose a new leader.

Cluster operations:
Create and start the cluster: docker-compose up -d
Stop the whole cluster: docker-compose stop
Start the cluster again: docker-compose start
Remove the whole cluster: docker-compose rm (note: stop it first)

Access the web UI at:
http://localhost:8501
http://localhost:8502
http://localhost:8503

With the cluster built, how do we load-balance across it? Use nginx:

Define the upstream server list:

upstream consul {
    server 127.0.0.1:8501;
    server 127.0.0.1:8502;
    server 127.0.0.1:8503;
}

Server configuration:

server {
    listen       80;
    server_name  consul.test.com;  # your service domain name; replace as needed

    location / {
        proxy_pass  http://consul;  # forward requests to the consul upstream
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

    Summary:

This concludes the basics of Consul. It is only basic usage, though: going further calls for secondary development on top of the Consul client, such as customizing service registration and discovery, or automatically backing up and restoring configuration data, since configuration stored in Consul can be lost. Those extensions deserve a deeper look when time permits.


   A side note: the controller I wrote earlier to serve as a service health check was actually unnecessary, since Spring Boot already provides this out of the box.
