
Highly available load balancing based on SaltStack: an introduction to SaltStack modules

One-click deployment of the keepalived and haproxy services with SaltStack:

Role    server-id   Installed
MASTER  Server1     haproxy; keepalived (MASTER)
MINION  Server2     httpd (real server)
MINION  Server3     nginx (real server)
MINION  Server4     haproxy; keepalived (BACKUP)

Configure the yum repositories and add server4 to the SaltStack cluster:

[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=http://172.25.30.250/rhel6.5
enabled=1
gpgcheck=0

[salt]
name=salt-stack
baseurl=http://172.25.30.250/rhel6
enabled=1
gpgcheck=0

[LoadBalancer]
name=LoadBalancer
baseurl=http://172.25.30.250/rhel6.5/LoadBalancer
enabled=1
gpgcheck=0

1. Install keepalived from source

cd /srv/salt/
mkdir keepalived
mkdir pkgs    # states for the build dependencies
mkdir files
cd keepalived/
vim install.sls

include:
  - pkgs.make

kp-installed:
  file.managed:
    - name: /mnt/keepalived-2.0.6.tar.gz
    - source: salt://keepalived/files/keepalived-2.0.6.tar.gz
  cmd.run:
    - name: cd /mnt && tar zxf keepalived-2.0.6.tar.gz && cd /mnt/keepalived-2.0.6 && ./configure --prefix=/usr/local/keepalived --with-init=SYSV &> /dev/null && make &> /dev/null && make install &> /dev/null
    - creates: /usr/local/keepalived

/etc/keepalived:
  file.directory:
    - mode: 755

/etc/sysconfig/keepalived:
  file.symlink:
    - target: /usr/local/keepalived/etc/sysconfig/keepalived

/sbin/keepalived:
  file.symlink:
    - target: /usr/local/keepalived/sbin/keepalived

cd ../pkgs
vim make.sls

make:
  pkg.installed:
    - pkgs:
      - gcc
      - pcre-devel
      - openssl-devel
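The `creates:` argument is what keeps this long build command idempotent: Salt skips the command once the target path exists, so the state can be re-applied without recompiling. A minimal Python sketch of that guard (the `run_once` helper is illustrative, not Salt's actual implementation):

```python
import os
import tempfile

def run_once(cmd_fn, creates):
    """Run cmd_fn only if the `creates` path does not exist yet,
    mirroring the behaviour of cmd.run's `creates:` argument."""
    if os.path.exists(creates):
        return 'skipped'
    cmd_fn()
    return 'ran'

# Demo: the second call is skipped because the marker path now exists.
marker = os.path.join(tempfile.mkdtemp(), 'keepalived')
first = run_once(lambda: os.mkdir(marker), creates=marker)
second = run_once(lambda: os.mkdir(marker), creates=marker)
print(first + ' ' + second)  # ran skipped
```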


Set variables with pillar:

cd /srv/pillar/
mkdir keepalived
cd /srv/pillar/keepalived
vim install.sls

{% if grains['fqdn'] == 'server1' %}
state: MASTER    # if the hostname is server1: state MASTER, priority 100
vrid: 131
priority: 100
{% elif grains['fqdn'] == 'server4' %}    # if the host is server4: state BACKUP, priority 50
state: BACKUP
vrid: 131
priority: 50
{% endif %}
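The Jinja branches above simply map a hostname to a role dictionary. The same logic as a plain Python sketch (names are illustrative):

```python
def keepalived_pillar(fqdn):
    # Mirrors the {% if grains['fqdn'] %} branches in the pillar install.sls
    if fqdn == 'server1':
        return {'state': 'MASTER', 'vrid': 131, 'priority': 100}
    if fqdn == 'server4':
        return {'state': 'BACKUP', 'vrid': 131, 'priority': 50}
    return {}

print(keepalived_pillar('server1'))
```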

cd /srv/pillar
vim top.sls 
base:
  '*':
    - web.install
    - keepalived.install
Set the VIP in the files directory:
cd /srv/salt/keepalived/files/
vim keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
        [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state {{ STATE }}    # the state comes from the variable set in pillar
    interface eth0
    virtual_router_id {{ VRID }}    # virtual router id
    priority {{ PRIORITY }}    # priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
    172.25.30.100    # VIP
    }
}
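VRRP election in a nutshell: instances sharing one `virtual_router_id` advertise their priority, and the highest priority holds the VIP. A toy model of that election (the real protocol also breaks ties by IP address, omitted here):

```python
def vrrp_master(nodes):
    """nodes: list of (name, priority) tuples sharing one virtual_router_id.
    The node with the highest priority holds the VIP."""
    return max(nodes, key=lambda n: n[1])[0]

# server1 (priority 100) beats server4 (priority 50)
print(vrrp_master([('server1', 100), ('server4', 50)]))  # server1
```

If server1 disappears from the election (e.g. keepalived is stopped), server4 is the highest remaining priority and takes over the VIP.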

2. Deploy keepalived:

vim service.sls    # /srv/salt/keepalived/service.sls
include:
  - keepalived.install    # pull in the install state

/etc/keepalived/keepalived.conf:
  file.managed:
    - source: salt://keepalived/files/keepalived.conf
    - template: jinja    # render the file with the jinja template engine
    - context:
        STATE: {{ pillar['state'] }}
        VRID: {{ pillar['vrid'] }}
        PRIORITY: {{ pillar['priority'] }}

kp-service:
  file.managed:
    - name: /etc/init.d/keepalived
    - source: salt://keepalived/files/keepalived
    - mode: 755
  service.running:
    - name: keepalived
    - reload: True
    - watch:
      - file: /etc/keepalived/keepalived.conf
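`template: jinja` plus `context:` means the managed file is rendered with STATE, VRID, and PRIORITY substituted before being written out. A stdlib approximation of that rendering step (Salt really uses Jinja with `{{ }}` placeholders; `string.Template` with `$` placeholders is used here only to keep the sketch dependency-free):

```python
from string import Template

# Stand-in for the keepalived.conf template fragment
conf = Template('state $STATE\nvirtual_router_id $VRID\npriority $PRIORITY')

# Stand-in for the context: block filled from pillar
rendered = conf.substitute(STATE='MASTER', VRID=131, PRIORITY=100)
print(rendered)
```

Because `kp-service` watches the rendered file, any change in the pillar values re-renders the config and reloads keepalived.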

3. top.sls: deploy the haproxy + keepalived highly available load-balancing platform:

In the previous section we deployed haproxy for load balancing; now we install keepalived on top of it to achieve high availability.

vim /srv/salt/top.sls 
base:
  'server1':
    - haproxy.install
    - keepalived.service
  'server4':
    - haproxy.install
    - keepalived.service
  'roles:apache':
    - match: grain
    - httpd.install
  'roles:nginx':
    - match: grain
    - nginx.service

Push the deployment:

salt '*' state.highstate

Test the push:
As it stands, if haproxy itself is stopped the service does not fail over, so we add a monitoring script.

4. Make haproxy highly available:

Add a monitoring script:
cd /opt/
vim check_haproxy.sh    # if haproxy is down, restart it automatically; if the restart fails (non-zero return), stop keepalived so the VIP jumps to the other keepalived node

#!/bin/bash

/etc/init.d/haproxy status &> /dev/null || /etc/init.d/haproxy restart &> /dev/null

if [ $? -ne 0 ];then
    /etc/init.d/keepalived stop &> /dev/null
fi
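The script's decision logic, restated as a Python sketch for clarity (illustrative only; the real check runs the init scripts shown above):

```python
def check_haproxy(haproxy_running, restart_succeeds):
    """Mirror check_haproxy.sh: if haproxy is down, try a restart;
    if the restart also fails, stop keepalived so the VIP moves."""
    if haproxy_running:
        return 'nothing to do'
    if restart_succeeds:
        return 'haproxy restarted'
    return 'keepalived stopped (VIP fails over)'

print(check_haproxy(False, False))
```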

chmod +x check_haproxy.sh
vim /srv/salt/keepalived/files/keepalived.conf    # have the configuration file call the monitoring script

! Configuration File for keepalived

vrrp_script check_haproxy {
        script "/opt/check_haproxy.sh"
        interval 2
        weight 2    # whenever haproxy is found down, the MASTER's effective priority drops until it falls below the BACKUP's, triggering an automatic switchover
}

global_defs {
   notification_email {
        [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}


vrrp_instance VI_1 {
    state {{ STATE }}
    interface eth0
    virtual_router_id {{ VRID }}
    priority {{ PRIORITY }}
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
    172.25.30.100
    }
    track_script {
        check_haproxy
    }
}

salt server4 state.sls keepalived.service    # push keepalived.service to server4
chmod +x check_haproxy.sh 

Test:
server1: stop haproxy with /etc/init.d/haproxy stop
To keep the script from restarting haproxy during the test, temporarily remove the execute permission from the haproxy init script; restore the permission once testing is done.

/etc/init.d/salt-minion restart

Deployment with salt commands:

salt-cp '*' /etc/passwd /tmp/    # copy /etc/passwd to /tmp on the minions
salt '*' cmd.run 'rm -f /tmp/passwd'    # remove /tmp/passwd on the minions via cmd.run
salt server3 state.single pkg.installed httpd    # install httpd on server3 alone via the state module
salt '*' cmd.run 'yum install httpd -y'    # install via a raw command

5. Cache job data in a database

5.1 Install the database recorder on the minion

External Job Cache - Minion-Side Returner

On the salt-minion:
yum install -y MySQL-python.x86_64
vim /etc/salt/minion
/etc/init.d/salt-minion restart

yum install -y mysql-server
/etc/init.d/mysqld start
mysql
mysql> grant all on salt.* to [email protected]'172.25.30.%' identified by 'westos';
mysql> drop database salt;
mysql < test.sql
salt 'server2' test.ping --return mysql
server2:
    True
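Under the hood, `--return mysql` hands the job result to a returner: a function named `returner(ret)` that receives the result dict and persists it, which the MySQL returner does into the `salt` database. A no-op sketch of that interface (in-memory store, illustrative only):

```python
STORE = []

def returner(ret):
    # Persist the fields a job-cache returner records:
    # job id, minion id, function name, and the result
    STORE.append({'jid': ret['jid'], 'id': ret['id'],
                  'fun': ret['fun'], 'return': ret['return']})

returner({'jid': '20180818', 'id': 'server2',
          'fun': 'test.ping', 'return': True})
print(STORE[0]['id'])  # server2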

5.2 Install on the master

Master Job Cache - Master-Side Returner

yum install MySQL-python -y
vim /etc/salt/master

/etc/init.d/mysqld start
On server3:
/etc/init.d/salt-minion restart

6. Packaging custom modules:

Sometimes it is convenient to package several functions into one module, so that later we can simply call those functions through the module; this is both handy and easier to organize.

mkdir /srv/salt/_modules
cd /srv/salt/_modules
vim my_disk.py

#!/usr/bin/env python

def df():
    return __salt__['cmd.run']('df -h')

salt '*' saltutil.sync_modules    # sync the custom modules to the minions
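Inside a custom module, `__salt__` is a dict of execution functions that Salt's loader injects after the sync. To see the mechanics locally, one can stub it out (the lambda stand-in below is for illustration only, not the real loader):

```python
def df():
    # Same function as in my_disk.py: delegate to the cmd.run execution function
    return __salt__['cmd.run']('df -h')

# Stub of the loader-injected dict, so the module can run outside Salt:
__salt__ = {'cmd.run': lambda cmd: 'would run: ' + cmd}
print(df())  # would run: df -h
```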


When the modules are synced, the module files are pushed into the minion's cache, similar to how grains are synced.

Run the custom module command (e.g. salt '*' my_disk.df).

7. Syndic:

Deployment: server4 ——> top master (master of masters)
On server4:
On the master, delete server4's minion key (salt-key -d server4); on server4, stop its previous services:

/etc/init.d/salt-minion stop
chkconfig salt-minion off
/etc/init.d/haproxy stop
/etc/init.d/keepalived stop
yum install salt-master -y
vim /etc/salt/master    # enable the syndic port

/etc/init.d/salt-master start
On server1 (master -----> syndic):
yum install -y salt-syndic
vim /etc/salt/master

/etc/init.d/salt-master start
/etc/init.d/salt-master stop
/etc/init.d/salt-syndic  start

On server4 (top master):

When the environment does not allow installing salt, we can deploy via salt-ssh:

8.salt-ssh

On the master:

yum install -y salt-ssh
vim /etc/salt/roster    # add server3; since we log in as root, no extra permissions need to be granted

vim /etc/salt/master    # salt-ssh conflicts with the mysql returner settings, so comment out the mysql section
/etc/init.d/salt-master stop
/etc/init.d/salt-master start
salt-ssh 'server3' test.ping

Stop server3's minion and test:

9. salt-api:

cd /etc/pki/tls/private/
openssl genrsa 1024 > localhost.key    # generate the key into localhost.key
cd ../certs/
make testcert    # create the test certificate

vim /etc/salt/master.d/api.conf    # the api config defines the api port, key file, and certificate file
rest_cherrypy:
  port: 8000
  ssl_crt: /etc/pki/tls/certs/localhost.crt
  ssl_key: /etc/pki/tls/private/localhost.key
vim auth.conf    # edit the authentication config
external_auth:
  pam:
    saltapi:
      - '.*'
      - '@wheel'
      - '@runner'
      - '@jobs'
useradd saltapi    # create the authentication user and set its password
passwd saltapi

Restart the salt-master service.

[root@server1 master.d]# curl -sSk https://localhost:8000/login -H 'Accept: application/x-yaml' -d username=saltapi -d password=westos -d eauth=pam
return:
- eauth: pam
  expire: 1534598878.4669149
  perms:
  - .*
  - '@wheel'
  - '@runner'
  - '@jobs'
  start: 1534555678.4669139
  token: 4f0f24b3cea429878d66af4c1ca2d9cd630aef41
  user: saltapi
[root@server1 master.d]# curl -sSk https://localhost:8000 \
> -H 'Accept: application/x-yaml' \
> -H 'X-Auth-Token: 4f0f24b3cea429878d66af4c1ca2d9cd630aef41' \
> -d client=local \
> -d tgt='*' \
> -d fun=test.ping
return:
- server1: true
  server2: true
  server3: true

vim saltapi.py    # a Python module that wraps the salt-api HTTP calls:

# -*- coding: utf-8 -*-

import urllib2,urllib
import time

try:
    import json
except ImportError:
    import simplejson as json

class SaltAPI(object):
    __token_id = ''
    def __init__(self,url,username,password):
        self.__url = url.rstrip('/')
        self.__user = username
        self.__password = password


    def token_id(self):
        ''' user login and get token id '''
        params = {'eauth': 'pam', 'username': self.__user, 'password': self.__password}
        encode = urllib.urlencode(params)
        obj = urllib.unquote(encode)
        content = self.postRequest(obj,prefix='/login')
        try:
            self.__token_id = content['return'][0]['token']
        except KeyError:
            raise KeyError

    def postRequest(self,obj,prefix='/'):
        url = self.__url + prefix
        headers = {'X-Auth-Token'   : self.__token_id}
        req = urllib2.Request(url, obj, headers)
        opener = urllib2.urlopen(req)
        content = json.loads(opener.read())
        return content

    def list_all_key(self):
        params = {'client': 'wheel', 'fun': 'key.list_all'}
        obj = urllib.urlencode(params)
        self.token_id()
        content = self.postRequest(obj)
        minions = content['return'][0]['data']['return']['minions']
        minions_pre = content['return'][0]['data']['return']['minions_pre']
        return minions,minions_pre

    def delete_key(self,node_name):
        params = {'client': 'wheel', 'fun': 'key.delete', 'match': node_name}
        obj = urllib.urlencode(params)
        self.token_id()
        content = self.postRequest(obj)
        ret = content['return'][0]['data']['success']
        return ret

    def accept_key(self,node_name):
        params = {'client': 'wheel', 'fun': 'key.accept', 'match': node_name}
        obj = urllib.urlencode(params)
        self.token_id()
        content = self.postRequest(obj)
        ret = content['return'][0]['data']['success']
        return ret

    def remote_noarg_execution(self,tgt,fun):
        ''' Execute commands without parameters '''
        params = {'client': 'local', 'tgt': tgt, 'fun': fun}
        obj = urllib.urlencode(params)
        self.token_id()
        content = self.postRequest(obj)
        return content

    def async_deploy(self,tgt,arg):
        ''' Asynchronously send a command to connected minions '''
        params = {'client': 'local_async', 'tgt': tgt, 'fun': 'state.sls', 'arg': arg}
        obj = urllib.urlencode(params)
        self.token_id()
        content = self.postRequest(obj)
        jid = content['return'][0]['jid']
        return jid

    def target_deploy(self,tgt,arg):
        ''' Deployment based on a node group '''
        params = {'client': 'local_async', 'tgt': tgt, 'fun': 'state.sls', 'arg': arg, 'expr_form': 'nodegroup'}
        obj = urllib.urlencode(params)
        self.token_id()
        content = self.postRequest(obj)
        jid = content['return'][0]['jid']
        return jid

    def remote_execution(self,tgt,fun,arg):
        ''' Command execution with parameters '''
        params = {'client': 'local', 'tgt': tgt, 'fun': fun, 'arg': arg}
        obj = urllib.urlencode(params)
        self.token_id()
        content = self.postRequest(obj)
        jid = content['return'][0]['jid']
        return jid

    def deploy(self,tgt,arg):
        ''' Module deployment '''
        params = {'client': 'local', 'tgt': tgt, 'fun': 'state.sls', 'arg': arg}
        obj = urllib.urlencode(params)
        self.token_id()
        content = self.postRequest(obj)
        return content


def main():
    sapi = SaltAPI(url='https://172.25.30.1:8000',username='saltapi',password='westos')
    #sapi.token_id()
    print sapi.list_all_key()
    #sapi.delete_key('test-01')
    #sapi.accept_key('test-01')
    sapi.deploy('server3','nginx.service')
    #print sapi.remote_noarg_execution('test-01','grains.items')

if __name__ == '__main__':
    main()


相關推薦

基於salt-stack實現可用負載均衡 salt-stack模組介紹認識

Salt-satck一鍵部署keepalived;haproxy服務: 角色 server-id 安裝 MASTER Server1 haproxy;keepalived(MASTER) MINION Serve

Keepalived+Nginx實現可用負載均衡集群

連接 靜態 adf -1 rip mail fff hostname dex 一 環境介紹 1.操作系統CentOS Linux release 7.2.1511 (Core) 2.服務keepalived+lvs雙主高可用負載均衡集群及LAMP應用keepalived-1

LVS+Keepalived實現可用負載均衡

lvs+keepalived 高可用 負載均衡 用LVS+Keepalived實現高可用負載均衡,簡單來說就是由LVS提供負載均衡,keepalived通過對rs進行健康檢查、對主備機(director)進行故障自動切換,實現高可用。1. LVS NAT模式配置準備三臺服務器,一臺director, 兩

HAProxy+Varnish+LNMP實現可用負載均衡動靜分離集群部署

else 應用服務器 bash == 開機啟動 多少 heal 啟用 4.0 轉自http://bbs.hfteams.com/forum.php?mod=viewthread&tid=11&extra=page%3D1 HAProxy+Varnish+LN

nginx+keepalived實現可用負載均衡

其中 centos7.3 9.png IT 配置文件 bsp 是我 add nginx 環境: centos7.3虛擬機A 10.0.3.46 centos7.3虛擬機B 10.0.3.110 虛擬機A和B都需要安裝nginx和keepalived(過程省略,其中keepa

keepalived+lvs實現可用負載均衡集群

keepalived+lvs實現高可用LVS實戰篇第1章 環境準備1.1 系統環境1.1.1 系統版本[root@lb01 ~]# cat /etc/redhat-release CentOS release 6.9 (Final) [root@lb01 ~]# uname -r 2.6.32-696.el6

nginx+keepalive實現可用負載均衡

keepalived+nginx高可實驗一:實驗環境 主nginx負載均衡器:192.168.10.63 (通過keepalived配置了VIP:192.168.10.188供外使用)副nginx負載均衡器:192.168.10.200(通過keepalived配置了VIP:192.168.10.188供外

LVS+Keepalived 實現可用負載均衡叢集

LVS+Keepalived  實現高可用負載均衡叢集     隨著網站業務量的增長,網站的伺服器壓力越來越大?需要負載均衡方案!商業的硬體如 F5 ,Array又太貴,你們又是創業型互聯公司如何有效節約成本,節省不必要的浪費?同時還需要實現商業硬體一樣的高效能高可

docker下用keepalived+Haproxy實現可用負載均衡叢集

先記錄下遇到的坑,避免之後忘了; 花時間最多去解決的一個題是:在docker下啟動haproxy服務後在這個docker服務內安裝keepalived無法ping通的問題,雖然最後也是莫名其妙就好了,但是加強了不少對docker的理解和還需深入學習的地方。 為什麼要用

【Linux】LVS+Keepalived實現可用負載均衡(Web群集)

一、Keepalived概述 keepalived是一個類似於layer3,4,5交換機制的軟體,也就是我們平時說的第3層、第4層和第5層交換。Keepalived的作用是檢測web伺服器的狀態,

LVS-DR+keepalived實現可用負載均衡

介紹 LVS是Linux Virtual Server的簡寫,意即Linux虛擬伺服器,是一個虛擬的伺服器集群系統。本專案在1998年5月由章文嵩博士成立,是中國國內最早出現的自由軟體專案之一。 Keepalived 主要用作RealServer的健康狀態檢查以及Load

centos7.2 MYSQL雙主+半同步+keepalived實現可用負載均衡

這兩天瞭解了一下mysql的叢集方案,發現有很多解決方案,有複雜的也有簡單的,有興趣的參考下面網址:http://www.cnblogs.com/Kellana/p/6738739.html 這裡,我使用中小企業最常用也是較簡單的方案,用keepalived提供一個vip(

利用lvs+keepalived實現可用負載均衡環境的部署

http://wangwq.blog.51cto.com/8711737/1852212 ,執行即可(注意指令碼的VIP,不同的realserver對應不同的VIP) 1 2 3 4 5 6

RHCS套件+Nginx實現可用負載均衡

紅帽叢集套件(RedHat Cluter Suite, RHCS)是一套綜合的軟體元件,可以通過在部署時採用不同的配置,以滿足你對高可用性,負載均衡,可擴充套件性,檔案共享和節約成本的需要。 它提供有如下兩種不同型別的叢集:  1、高可用性:應用/服務故障切換-通過建立n

利用saltstack自動化運維工具結合keepalived實現可用負載均衡

在上次實驗“saltstsck自動化運維工具實現負載均衡”的基礎上,也就是在server3端配置實現server4端的httpd和server5端的nginx負載均衡,繼續進行操作實現高可用: (本文所有主機ip均為172.25.17網段,主機名和ip相對應。

Haproxy + Pacemaker 實現可用負載均衡(二)

Pacemaker server1 和 server2 均安裝pacemaker 和 corosync server1 和 server2 作相同配置 [root@server1 ~]# yum install -y pacemaker coros

keepalived+httpd+tomcat實現可用負載均衡

一、環境 centos 6.5 keepalived keepalived-1.2.19.tar.gz httpd httpd-2.4.12.tar.gz tomcat apache-tomcat-7.0.63.tar.gz 二、部署 安裝 http

LVS+Keepalived 實現可用負載均衡

## 前言 在業務量達到一定量的時候,往往單機的服務是會出現瓶頸的。此時最常見的方式就是通過負載均衡來進行橫向擴充套件。其中我們最常用的軟體就是 Nginx。通過其反向代理的能力能夠輕鬆實現負載均衡,當有服務出現異常,也能夠自動剔除。但是負載均衡服務自身也可能出現故障,因此需要引入其他的軟體來實現負載均衡服

基於騰訊雲CLB實現K8S v1.10.1集群可用+負載均衡

開源 可能 管理平臺 可用 st3 tab OS 1.10 style 概述: 最近對K8S非常感興趣,同時對容器的管理等方面非常出色,是一款非常開源,強大的容器管理方案,最後經過1個月的本地實驗,最終決定在騰訊雲平臺搭建屬於我們的K8S集群管理平臺~ 采購之後已經在本

基於HAProxy+Keepalived可用負載均衡web服務的搭建

1.2 epo cnblogs oba backup 保持 ica mysql redis 一 原理簡介 1.HAProxyHAProxy提供高可用性、負載均衡以及基於TCP和HTTP應用的代理,支持虛擬主機,它是免費、快速並且可靠的一種解決方案。HAProxy特別適用於那