Building an ELK Cluster with Docker Swarm
阿新 • Published: 2018-12-16
1. Introduction to Swarm
Swarm is Docker's official cluster management tool. Its main job is to abstract a number of Docker hosts into a single whole and to manage the Docker resources on those hosts through one unified entry point. Swarm is similar to Kubernetes, but it is lighter-weight and offers fewer features.
In short, building a cluster with Swarm is very convenient. First, a look at the `docker swarm` subcommands:
```
root@nodeA:~# docker swarm -h
Flag shorthand -h has been deprecated, please use --help

Usage:  docker swarm COMMAND

Manage Swarm

Commands:
  ca          Display and rotate the root CA    # show/rotate the swarm's root certificate (manager only)
  init        Initialize a swarm                # initialize a cluster
  join        Join a swarm as a node and/or manager
  join-token  Manage join tokens                # print the worker/manager join tokens
  leave       Leave the swarm
  unlock      Unlock swarm
  unlock-key  Manage the unlock key
  update      Update the swarm
```
2. Getting Started
Prepare two hosts, A and B, on the same network segment, and make sure Docker is installed on both.
First, initialize a cluster by running `docker swarm init` on host A. The output includes the `docker swarm join --token ...` command that other nodes use to join:
```
root@nodeA:~# docker swarm init
Swarm initialized: current node (92irgcp4xemxjzra97cnzonqk) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-4bjd7a3lzrep33vl351isahd0tch4l2b61rky1mf8ee2f4 192.168.1.126:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
```
Then run that `docker swarm join --token ...` command on host B:
```
root@nodeB:~# docker swarm join --token SWMTKN-1-4bjd7a3lzrep33vl351isahd0tch4l2b61rky1mf8ee2f4izlu-0tdz2ef58x6 192.168.1.126:2377
This node joined a swarm as a worker.
```
Host B has now joined the cluster created by host A.
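If the join command gets lost, it can be printed again at any time on a manager node using the `join-token` subcommand shown in the help output above (this requires an initialized swarm):

```shell
# On a manager node: reprint the full join command for workers...
docker swarm join-token worker
# ...or for additional managers:
docker swarm join-token manager
# -q prints only the token itself, which is handy for scripting:
docker swarm join-token -q worker
```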
List the cluster's nodes with `docker node ls`:
```
root@nodeA:~# docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
ej6p0305fr1tt8ccz2vyn872e     iZwz9      Ready    Active                          18.09.0
92irgcp4xemxjzra97cnzonqk *   iZwz       Ready    Active         Leader           18.09.0
```
The cluster now has two nodes, with iZwz acting as the manager (Leader).
Listing the networks shows two new entries: the local `docker_gwbridge` bridge and the swarm-scoped `ingress` overlay:
```
root@nodeA:~# docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
1919a9525cf4   bridge            bridge    local
e6f18d4a9240   docker_gwbridge   bridge    local
94d29affd332   host              host      local
fsd9xxtmtpcc   ingress           overlay   swarm
539fd79f795c   none              null      local
```
3. Writing the docker-compose file
The compose file below does the following:
- starts two Elasticsearch nodes
- starts two Logstash services (each with two replicas)
- starts one Kibana
- creates two data volumes
- creates one overlay network
```yaml
version: '3.6'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
    deploy:
      placement:
        constraints:
          - node.role == manager
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet
    deploy:
      placement:
        constraints:
          - node.role == worker
  logstash:
    image: docker.elastic.co/logstash/logstash:6.2.4
    environment:
      - "LS_JAVA_OPTS=-Xms256m -Xmx256m"
    networks:
      - esnet
    deploy:
      replicas: 2
  logstash2:
    image: docker.elastic.co/logstash/logstash:6.2.4
    environment:
      - "LS_JAVA_OPTS=-Xms256m -Xmx256m"
    networks:
      - esnet
    deploy:
      replicas: 2
  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.4
    ports:
      - "5601:5601"
    networks:
      - esnet
    deploy:
      placement:
        constraints:
          - node.role == manager
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
networks:
  esnet:
    driver: "overlay"
```
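One caveat the compose file does not cover: the official Elasticsearch image usually requires the kernel's `vm.max_map_count` setting to be raised on every node that will run an ES container, or Elasticsearch may refuse to start. A sketch of the usual host-level preparation (run on each node; values are the commonly documented defaults):

```shell
# Raise the mmap count limit required by Elasticsearch (default 65530 is too low)
sysctl -w vm.max_map_count=262144
# Make the setting persist across reboots
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
```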
4. Starting the ELK services on the cluster
Deploy the stack with `docker stack deploy -c docker-compose.yaml elk_cluster`:
```
root@nodeA:~/docker_i/elk_cluster# docker stack deploy -c docker-compose.yaml elk_cluster
Creating network elk_cluster_esnet
Creating service elk_cluster_elasticsearch2
Creating service elk_cluster_logstash
Creating service elk_cluster_logstash2
Creating service elk_cluster_kibana
Creating service elk_cluster_elasticsearch
```
View the containers started on each node:

```
docker ps -a
```

On the manager node, list the deployed services:

```
docker service ls
```

Follow the logs of a service (note that `docker service logs` takes a service name, not a stack name):

```
docker service logs -f elk_cluster_kibana
```
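Once the services are up, a quick way to confirm that the two Elasticsearch nodes actually formed one cluster is to query the health endpoint on the published port (192.168.1.126 is the manager address from the `swarm init` output; adjust for your hosts):

```shell
# The health endpoint should report "number_of_nodes" : 2 once both ES services are running
curl http://192.168.1.126:9200/_cluster/health?pretty
# Kibana answers on its published port 5601
curl -I http://192.168.1.126:5601
```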
Remove the whole stack:

```
docker stack rm elk_cluster
```
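Replica counts can also be changed after deployment without editing the compose file. For example, to run three copies of the first Logstash service (the service name below follows the `<stack>_<service>` naming shown in the deploy output):

```shell
# Scale a single service of the running stack
docker service scale elk_cluster_logstash=3
# Equivalent form using docker service update
docker service update --replicas 3 elk_cluster_logstash
```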