
Deploying and configuring a production elasticsearch 5.0.1 cluster

Cluster planning:
Hardware:
Seven Aliyun servers with 8 cores, 64 GB RAM, and a 2 TB SSD each, plus one 8-core / 16 GB server.

One of the machines serves as the kibana + kafka query node;
the other six all act as both data and master-eligible nodes.
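With six master-eligible nodes, a split-brain guard requires a majority quorum; the usual formula is (master-eligible nodes / 2) + 1. A quick sanity check (illustrative sketch, not from the original post):

```python
def minimum_master_nodes(master_eligible: int) -> int:
    """Majority quorum: the smallest integer strictly greater than half."""
    return master_eligible // 2 + 1

print(minimum_master_nodes(6))  # → 4
```

So for this cluster, at least four master-eligible nodes must agree before a master is elected.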

OS: CentOS 7.2 x86_64
To make later disk expansion easier, set the data disks up as LVM logical volumes; see:
"Two ways to mount a new Aliyun data disk: physical partition vs. LVM logical volume"
http://blog.csdn.net/reblue520/article/details/54174178

1. Install JDK 1.8 and elasticsearch 5.0.1

rpm -ivh jdk-8u111-linux-x64.rpm
tar -zxvf elasticsearch-5.0.1.tar.gz

2. Create the yunva user that will run elasticsearch (es refuses to start as root)

useradd yunva -d /home/yunva
echo 'pass'|passwd --stdin yunva

chown -R yunva.yunva /data

Change the default ssh port:
sed -i 's/#Port 22/Port 2222/' /etc/ssh/sshd_config
sed -i 's/^GSSAPIAuthentication yes$/GSSAPIAuthentication no/' /etc/ssh/sshd_config
service sshd restart

3. System tuning for es

swapoff -a  # disable swap; also remove swap entries from /etc/fstab so this survives a reboot

echo "fs.file-max = 1000000" >> /etc/sysctl.conf
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
echo "vm.swappiness = 1" >> /etc/sysctl.conf

sysctl -p
# the stock limits.conf has no matching "nofile" lines, so a sed replace would
# silently do nothing; append the limits instead
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655350
* hard nofile 655350
EOF


Add JAVA_HOME to the environment:
cat >> /etc/profile <<EOF
export JAVA_HOME=/usr/java/jdk1.8.0_111 
export PATH=\$JAVA_HOME/bin:\$PATH
EOF

source /etc/profile

4. es heap size in jvm.options (set it to about half of physical memory, but keep it below 32 GB — above that the JVM loses compressed object pointers, so a larger heap may not improve performance at all):


/data/elasticsearch-5.0.1/config/jvm.options

# 31g rather than 32g: staying just under 32 GB keeps compressed oops enabled
sed -i 's/-Xms2g/-Xms31g/' /data/elasticsearch-5.0.1/config/jvm.options
sed -i 's/-Xmx2g/-Xmx31g/' /data/elasticsearch-5.0.1/config/jvm.options
echo "-Xss256k" >> /data/elasticsearch-5.0.1/config/jvm.options

# note: Elastic's own 5.x docs recommend staying on the default CMS collector;
# G1 was not officially supported for elasticsearch on JDK 8, so treat this as experimental
sed -i 's/-XX:+UseConcMarkSweepGC/-XX:+UseG1GC/' /data/elasticsearch-5.0.1/config/jvm.options

5. Main cluster configuration file

Edit the elasticsearch parameters:
vim /etc/elasticsearch/elasticsearch.yml                 # rpm install
vim /data/elasticsearch-5.0.1/config/elasticsearch.yml   # tarball install

es master/data node configuration:
# cluster name (must be identical on every node)
cluster.name: yunva-es
# unicast host list used for node discovery
discovery.zen.ping.unicast.hosts: ["node-1","yunva_etl_es2", "yunva_etl_es3","yunva_etl_es4","yunva_etl_es5","yunva_etl_es6","yunva_etl_es7"]
# this node's name
node.name: yunva_etl_es6
node.master: true
node.data: true
path.data: /data/es/data
path.logs: /data/es/logs
action.auto_create_index: false
indices.fielddata.cache.size: 12g
bootstrap.memory_lock: false
# bind the internal (private network) address; inter-node traffic is faster
network.host: 192.168.1.10
http.port: 9200
# extra settings so the head plugin can reach es
http.cors.enabled: true
http.cors.allow-origin: "*"

gateway.recover_after_time: 8m
gateway.expected_nodes: 3

cluster.routing.allocation.node_initial_primaries_recoveries: 8

# the settings below reduce wasted disk IO from shards being redistributed
# when a node is briefly down or restarting
discovery.zen.fd.ping_timeout: 180s
discovery.zen.fd.ping_retries: 8
discovery.zen.fd.ping_interval: 30s
discovery.zen.ping_timeout: 120s
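One setting worth adding here, and not present in the post's configuration, is the split-brain guard for a cluster with six master-eligible nodes (a suggested addition, not part of the original config):

```yaml
# majority quorum of master-eligible nodes: 6 / 2 + 1 = 4
discovery.zen.minimum_master_nodes: 4
```

Without it, a network partition can elect two masters at once; 5.x does not set this automatically.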


es configuration for the kibana node (coordinating only: neither master nor data)
# cat /etc/elasticsearch/elasticsearch.yml
cluster.name: yunva-es
node.name: yunva_etl_es1
node.master: false
node.data: false
node.ingest: false

action.auto_create_index: false
path.data: /data/es/data
path.logs: /data/es/logs
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200

http.cors.enabled: true

http.cors.allow-origin: "*"

# the settings below reduce wasted disk IO from shards being redistributed
# when a node is briefly down or restarting
discovery.zen.fd.ping_timeout: 180s
discovery.zen.fd.ping_retries: 8
discovery.zen.fd.ping_interval: 30s
discovery.zen.ping_timeout: 120s


Note: edit /etc/hosts (vim /etc/hosts) so that each cluster node name resolves to its IP address (skip this if you already have internal DNS configured):

echo "10.28.50.131 node-1" >> /etc/hosts
echo "10.26.241.239 yunva_etl_es3" >> /etc/hosts
echo "10.25.135.215 yunva_etl_es2" >> /etc/hosts
echo "10.26.241.237 yunva_etl_es4" >> /etc/hosts
echo "10.27.78.228 yunva_etl_es5" >> /etc/hosts
echo "10.27.65.121 yunva_etl_es6" >> /etc/hosts
echo "10.27.35.94 yunva_etl_es7" >> /etc/hosts

6. Create the data and log directories

mkdir -p /data/es/data
mkdir /data/es/logs
chown -R yunva.yunva /data

7. Start the es service:

# su - yunva
[yunva]$ cd /data/elasticsearch-5.0.1/bin/
./elasticsearch -d  # -d daemonizes; a bare "&" ties the process to the login shell

8. Check that a single node responds correctly:


$ curl http://ip:9200/
{
  "name" : "yunva_etl_es5",
  "cluster_name" : "yunva-es",
  "cluster_uuid" : "2shAg8u3SjCRNJ4mEUBzBQ",
  "version" : {
    "number" : "5.0.1",
    "build_hash" : "080bb47",
    "build_date" : "2016-11-11T22:08:49.812Z",
    "build_snapshot" : false,
    "lucene_version" : "6.2.1"
  },
  "tagline" : "You Know, for Search"
}


# check cluster status
$ curl http://ip:9200/_cluster/health/?pretty
{
  "cluster_name" : "yunva-es",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 5,
  "number_of_data_nodes" : 4,
  "active_primary_shards" : 66,
  "active_shards" : 132,
  "relocating_shards" : 2,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
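The health output is easy to check from a script, for example to gate a deployment. The sketch below is illustrative (not from the original post): it parses the JSON and accepts only a green cluster with no unassigned shards:

```python
import json

def cluster_is_healthy(health_json: str) -> bool:
    """Return True when the cluster reports green status and no unassigned shards."""
    health = json.loads(health_json)
    return health["status"] == "green" and health["unassigned_shards"] == 0

# trimmed sample matching the response above
sample = '{"cluster_name": "yunva-es", "status": "green", "unassigned_shards": 0}'
print(cluster_is_healthy(sample))  # → True
```

In practice the JSON would come from `curl http://ip:9200/_cluster/health` rather than a literal string.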


Then copy the configured es directory to the remaining servers, changing the following per node (keep network.host on the internal address: it is faster and saves bandwidth during replication and shard relocation):
1. In elasticsearch.yml:
node.name: this node's name
network.host: this node's internal IP address
2. In /etc/hosts: the mapping between internal IPs and node.name values
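The per-node edits can be scripted. The helper below is a hypothetical sketch (the function name and demo paths are illustrative); it rewrites node.name and network.host in a copy of elasticsearch.yml:

```shell
#!/bin/sh
# customize_node YML_PATH NODE_NAME INTERNAL_IP
# rewrites the two per-node lines in an elasticsearch.yml copy
customize_node() {
    yml="$1"; name="$2"; ip="$3"
    sed -i "s/^node.name:.*/node.name: $name/" "$yml"
    sed -i "s/^network.host:.*/network.host: $ip/" "$yml"
}

# demonstration on a throwaway file
cat > /tmp/es_demo.yml <<EOF
cluster.name: yunva-es
node.name: yunva_etl_es6
network.host: 192.168.1.10
EOF
customize_node /tmp/es_demo.yml yunva_etl_es2 10.25.135.215
cat /tmp/es_demo.yml
```

Run it once per server after copying the es directory, passing that node's name and internal IP.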


To add monitoring for the cluster later, see:

"Monitoring elasticsearch cluster status from zabbix with a simple shell command"
http://blog.csdn.net/reblue520/article/details/54412388

kibana.yml

server.port: 9529
server.host: "192.168.1.2"
elasticsearch.url: "http://192.168.1.1:9200"
elasticsearch.pingTimeout: 600000        # milliseconds
elasticsearch.requestTimeout: 18000000   # milliseconds


nginx.conf

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}
http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;
    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;
    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;
    include /etc/nginx/conf.d/*.conf;
    server {
        listen       80 default_server;
        listen       [::]:80 default_server;
        server_name  _;
        root         /usr/share/nginx/html;
        include /etc/nginx/default.d/*.conf;
        location / {
        }
        error_page 404 /404.html;
            location = /40x.html {
        }
        error_page 500 502 503 504 /50x.html;
            location = /50x.html {
        }
    }
}

es.conf
server {  
  listen       80;  
  server_name 1.1.1.1;  
  location / {  
     auth_basic "secret";  
     auth_basic_user_file /data/nginx/db/passwd.db;  
     proxy_pass http://192.168.1.2:9200;
     proxy_set_header Host $host:9200;  
     proxy_set_header X-Real-IP $remote_addr;  
     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;  
     proxy_set_header Via "nginx";  
  }  
  access_log off;
}
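The passwd.db file referenced by auth_basic_user_file can be generated without the httpd-tools package by using openssl's apr1 hash (the same scheme htpasswd uses). The user name esadmin and the /tmp demo path below are examples; the post itself uses /data/nginx/db:

```shell
#!/bin/sh
# create an htpasswd-style file for nginx basic auth
mkdir -p /tmp/nginx_db                    # the post uses /data/nginx/db instead
hash=$(openssl passwd -apr1 'S3cret!')    # apr1: the MD5-based htpasswd scheme
echo "esadmin:$hash" > /tmp/nginx_db/passwd.db
cat /tmp/nginx_db/passwd.db
```

After creating the file, reload nginx; requests to the proxied es port will then prompt for the esadmin credentials.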