Setting up ELK 6.4.1 with Docker, including a three-node Elasticsearch 6.4.1 cluster
Posted: 2018-12-14
Please credit the original source when reposting: https://blog.csdn.net/Amor_Leo/article/details/83144739 — thanks.
Setting up the Elasticsearch cluster (three nodes)
docker pull elasticsearch:6.4.1
Adjust kernel settings
- Edit sysctl.conf:
vi /etc/sysctl.conf
- Add the following line:
vm.max_map_count=655360
- Apply the change:
sysctl -p
Prepare the configuration files
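To confirm the new value is active, it can be read back directly from procfs (a quick verification step, not part of the original article; after `sysctl -p` on the host above this should print 655360):

```shell
# read the live max_map_count value the kernel is using
cat /proc/sys/vm/max_map_count
```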
- es-1.yml
#cluster name
cluster.name: ESCluster
#node name
node.name: node-128-1
#bind address; IPv4 or IPv6, defaults to 0.0.0.0,
#meaning bind to any IP on this machine
network.bind_host: 0.0.0.0
#address other nodes use to reach this node; auto-detected if unset,
#but must be a real, reachable IP
network.publish_host: 192.168.0.128
#HTTP port for external clients, default 9200
http.port: 9200
#TCP port for inter-node transport, default 9300
transport.tcp.port: 9300
#allow cross-origin REST requests
http.cors.enabled: true
#which origins REST requests may come from
http.cors.allow-origin: "*"
#node roles
node.master: true
node.data: true
#list of master-eligible nodes to contact for discovery
#discovery.zen.ping.unicast.hosts: ["0.0.0.0:9300","0.0.0.0:9301","0.0.0.0:9302"]
discovery.zen.ping.unicast.hosts: ["192.168.0.128:9300","192.168.0.128:9301","192.168.0.128:9302"]
#minimum number of master-eligible nodes that must be running in the cluster (default 1)
#(total number of master-eligible nodes / 2 + 1)
discovery.zen.minimum_master_nodes: 2
- es-2.yml
#cluster name
cluster.name: ESCluster
#node name
node.name: node-128-2
#bind address; IPv4 or IPv6, defaults to 0.0.0.0,
#meaning bind to any IP on this machine
network.bind_host: 0.0.0.0
#address other nodes use to reach this node; auto-detected if unset,
#but must be a real, reachable IP
network.publish_host: 192.168.0.128
#HTTP port for external clients, default 9200
http.port: 9201
#TCP port for inter-node transport, default 9300
transport.tcp.port: 9301
#allow cross-origin REST requests
http.cors.enabled: true
#which origins REST requests may come from
http.cors.allow-origin: "*"
#node roles
node.master: true
node.data: true
#list of master-eligible nodes to contact for discovery
#discovery.zen.ping.unicast.hosts: ["0.0.0.0:9300","0.0.0.0:9301","0.0.0.0:9302"]
discovery.zen.ping.unicast.hosts: ["192.168.0.128:9300","192.168.0.128:9301","192.168.0.128:9302"]
#minimum number of master-eligible nodes that must be running in the cluster (default 1)
#(total number of master-eligible nodes / 2 + 1)
discovery.zen.minimum_master_nodes: 2
- es-3.yml
#cluster name
cluster.name: ESCluster
#node name
node.name: node-128-3
#bind address; IPv4 or IPv6, defaults to 0.0.0.0,
#meaning bind to any IP on this machine
network.bind_host: 0.0.0.0
#address other nodes use to reach this node; auto-detected if unset,
#but must be a real, reachable IP
network.publish_host: 192.168.0.128
#HTTP port for external clients, default 9200
http.port: 9202
#TCP port for inter-node transport, default 9300
transport.tcp.port: 9302
#allow cross-origin REST requests
http.cors.enabled: true
#which origins REST requests may come from
http.cors.allow-origin: "*"
#node roles
node.master: true
node.data: true
#list of master-eligible nodes to contact for discovery
#discovery.zen.ping.unicast.hosts: ["0.0.0.0:9300","0.0.0.0:9301","0.0.0.0:9302"]
discovery.zen.ping.unicast.hosts: ["192.168.0.128:9300","192.168.0.128:9301","192.168.0.128:9302"]
#minimum number of master-eligible nodes that must be running in the cluster (default 1)
#(total number of master-eligible nodes / 2 + 1)
discovery.zen.minimum_master_nodes: 2
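The quorum comment above can be checked with a line of shell arithmetic; for the article's three master-eligible nodes, integer division gives floor(3 / 2) + 1 = 2, which matches the configured `discovery.zen.minimum_master_nodes` (a sketch, not part of the original configs):

```shell
# quorum = (master-eligible nodes / 2) + 1, using integer division
NODES=3
QUORUM=$(( NODES / 2 + 1 ))
echo "$QUORUM"   # prints 2 for a three-node cluster
```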
Configuring the ik Chinese analysis plugin
- Download the matching ik plugin zip from its GitHub releases page
- Copy the ik zip to /usr/conf/elasticsearch/ on your Linux host (I used Xftp)
unzip -d /usr/conf/elasticsearch/elasticsearch1/ik elasticsearch-analysis-ik-6.4.1.zip
unzip -d /usr/conf/elasticsearch/elasticsearch2/ik elasticsearch-analysis-ik-6.4.1.zip
unzip -d /usr/conf/elasticsearch/elasticsearch3/ik elasticsearch-analysis-ik-6.4.1.zip
- Finally, delete elasticsearch-analysis-ik-6.4.1.zip:
rm -rf /usr/conf/elasticsearch/elasticsearch-analysis-ik-6.4.1.zip
Grant permissions: chmod 777 /usr/conf/elasticsearch/elasticsearch1/ik, and do the same for the other two directories
Create and run the containers
docker run -d --name ES1 -p 9200:9200 -p 9300:9300 -v /usr/conf/es-1.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /usr/conf/data1:/usr/share/elasticsearch/data -v /usr/conf/elasticsearch/elasticsearch1:/usr/share/elasticsearch/plugins --privileged=true elasticsearch:6.4.1
docker run -d --name ES2 -p 9201:9201 -p 9301:9301 -v /usr/conf/es-2.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /usr/conf/data2:/usr/share/elasticsearch/data -v /usr/conf/elasticsearch/elasticsearch2:/usr/share/elasticsearch/plugins --privileged=true elasticsearch:6.4.1
docker run -d --name ES3 -p 9202:9202 -p 9302:9302 -v /usr/conf/es-3.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /usr/conf/data3:/usr/share/elasticsearch/data -v /usr/conf/elasticsearch/elasticsearch3:/usr/share/elasticsearch/plugins --privileged=true elasticsearch:6.4.1
- The /usr/conf paths after each -v are directories you create yourself (mkdir <dir>) and grant permissions to: chmod 777 /usr/conf/
- data1, data2 and data3 are empty directories: chmod 777 /usr/conf/data1 /usr/conf/data2 /usr/conf/data3
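The mkdir/chmod steps above can be scripted as one loop. In this sketch BASE defaults to a relative demo directory so it runs without root; set BASE=/usr/conf for the real host setup:

```shell
# create and open up the data directories the containers will mount
BASE=${BASE:-./es-demo-conf}    # use BASE=/usr/conf on the real host
for i in 1 2 3; do
  mkdir -p "$BASE/data$i"
  chmod 777 "$BASE/data$i"
done
ls "$BASE"
```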
- Check whether the ik plugin loaded successfully:
docker logs ES1
If the log contains a line like the following, it succeeded:
[node-128-1] loaded plugin [analysis-ik]
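Once all three containers are up, the cluster state can also be checked over HTTP. The curl call below assumes the article's example IP; the parsing step is shown against a canned response of the expected shape so the sketch can run standalone:

```shell
# on the real host: curl -s 'http://192.168.0.128:9200/_cluster/health?pretty'
# a healthy three-node cluster reports status green and three nodes;
# here we grep a canned response of that shape:
HEALTH='{"cluster_name":"ESCluster","status":"green","number_of_nodes":3}'
echo "$HEALTH" | grep -o '"status":"green"'
```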
Installing the head plugin
docker pull mobz/elasticsearch-head:5
docker run --name es-head -p 9100:9100 -d docker.io/mobz/elasticsearch-head:5
- Test that Elasticsearch started: open http://192.168.0.128:9202 (use your own VM's IP); the page should show the following:
{
  "name" : "node-128-3",
  "cluster_name" : "ESCluster",
  "cluster_uuid" : "fHM6eysXQsqYEjyddfzamg",
  "version" : {
    "number" : "6.4.1",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "e36acdb",
    "build_date" : "2018-09-13T22:18:07.696808Z",
    "build_snapshot" : false,
    "lucene_version" : "7.4.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
- head web UI: http://192.168.0.128:9100/ (use your own VM's IP)
Installing Kibana
- Pull the image
docker pull kibana:6.4.1
- Create and run the container
docker run -d --name kibana1 -e "ELASTICSEARCH_URL=http://192.168.0.128:9200" -p 5601:5601 kibana:6.4.1
where 192.168.0.128 is the IP of one of the ES cluster nodes
Installing Logstash
- Pull the image
docker pull docker.elastic.co/logstash/logstash:6.4.1
- Configuration files
- /usr/conf/logstash/conf.d/logstash.conf
input {
  file {
    path => "/tmp/access_log"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.0.128:9200"]  ## IP of one ES cluster node
    user => "root"                   ## username
    password => "root"               ## password
  }
}
- /usr/conf/logstash/logstash.yml
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
xpack.monitoring.elasticsearch.url: http://192.168.0.128:9200
xpack.monitoring.elasticsearch.username: root
xpack.monitoring.elasticsearch.password: root
- Create and run the container
docker run -v /usr/conf/logstash/conf.d:/usr/share/logstash/pipeline -v /usr/conf/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml -p 5000:5000 -p 5044:5044 -p 9600:9600 --name logstash --privileged=true -d docker.elastic.co/logstash/logstash:6.4.1