
Deploying load balancing and high availability for nginx and apache with SaltStack (batch deployment)

I. Introduction to SaltStack

SaltStack is a centralized management platform for server infrastructure, providing configuration management, remote execution, monitoring and more. It is implemented in Python and built on the lightweight ZeroMQ message queue together with third-party Python modules (PyZMQ, PyCrypto, PyJinja2, python-msgpack, PyYAML, etc.).

With SaltStack deployed, we can run commands in batch across thousands of servers, centralize configuration management per business line, distribute files, collect server data, and handle operating-system and package management. SaltStack is a powerful tool for ops engineers to raise efficiency and standardize configuration and operations.

1 Basic principles

SaltStack uses a C/S model: the server side is the salt master and the client side is the minion; minions and the master communicate through the ZeroMQ message queue.

When a minion comes online it first contacts the master and sends over its public key. At that point salt-key -L on the master will show the minion's key; once the minion key is accepted, the master and minion trust each other.

The master can then send any command for the minion to execute. Salt ships many execution modules, such as the cmd module, installed along with the minion; they usually live in your Python library path, and locate salt | grep /usr/ shows everything salt installed.

These modules are Python files containing many functions, such as cmd.run. When we run salt '*' cmd.run 'uptime', the master dispatches the task to the matched minions; each minion executes the module function and returns the result. The master listens on ports 4505 and 4506: 4505 is the ZMQ PUB system, used to publish messages, and 4506 is the REP system, used to receive them.

The detailed steps are as follows (a minimal command example follows the list):

  1. The Salt master and minions exchange messages over ZeroMQ, using its publish-subscribe pattern; supported transports include tcp and ipc.
  2. The salt command publishes e.g. cmd.run ls from salt.client.LocalClient.cmd_cli to the master, obtains a job ID, and uses that jid to fetch the command's result.
  3. After receiving the command, the master sends it to the targeted minions.
  4. A minion picks the command up from the message bus and hands it to minion._handle_aes.
  5. minion._handle_aes starts a local thread that calls cmdmod to run ls; when the thread finishes, it calls minion._return_pub to send the result back to the master over the message bus.
  6. The master receives the returned result and calls master._handle_aes to write it to a file.
  7. salt.client.LocalClient.cmd_cli polls for the job result and prints it to the terminal.
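
For illustration, a minimal round trip run on the master; jobs.list_jobs shows the job IDs mentioned in step 2 (a sketch, assuming the minions have already been accepted):

salt '*' test.ping                # confirm all minions answer
salt '*' cmd.run 'uptime'         # dispatch cmd.run to every matched minion
salt-run jobs.list_jobs           # list recent jobs and their jids on the master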

II. Experiment steps

1) Experiment environment

master:
server2: 172.25.1.2

minions:
server2: 172.25.1.2 (haproxy, keepalived)
server3: 172.25.1.3 (httpd)
server4: 172.25.1.4 (nginx)
server5: 172.25.1.5 (haproxy, keepalived)

Configure the yum repository used for installation on all four nodes:

[salt]
name=salt
baseurl=http://172.25.1.250/rhel6
gpgcheck=0

2) Install the software

server2 (master):

yum install -y salt-master

Edit /etc/salt/master as needed; by default file_roots points at /srv/salt and pillar_roots at /srv/pillar, so create the two directories:

mkdir /srv/salt

mkdir /srv/pillar

Note: Pillar is a component added in salt 0.9.8. Like grains it is dictionary-shaped, storing data as key/value pairs. In Salt's design Pillar uses an independently encrypted session, so Pillar can be used to carry sensitive data such as ssh keys and certificates.
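
Once pillar data has been defined (as done later in this article), it can be inspected per minion from the master, for example:

salt '*' pillar.items             # dump every pillar key/value each minion sees
salt 'server3' pillar.item port   # query a single key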

/etc/init.d/salt-master start

server2, server3, server4, server5 (minions):

yum install -y salt-minion

Edit /etc/salt/minion:

master: 172.25.1.2    (point the minion at its master)

Then start it: /etc/init.d/salt-minion start

3 ) Master and minion authentication

(1) On first start, the minion automatically generates minion.pem (private key) and minion.pub (public key) under /etc/salt/pki/minion/ (the path is set in /etc/salt/minion), then sends minion.pub to the master.

(2) After receiving the minion's public key, the master accepts it with the salt-key command; the accepted key is then stored under /etc/salt/pki/master/minions/, named after the minion id, and the master can send commands to that minion.
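
The key exchange can be driven from the master roughly like this (a sketch of the usual flow):

salt-key -L            # list Accepted/Unaccepted/Rejected Keys
salt-key -a server3    # accept a single minion key (salt-key -A accepts all pending)
salt '*' test.ping     # verify that the accepted minions respond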

4 ) Master and minion connection

After the SaltStack master starts, it listens on ports 4505 and 4506 by default. 4505 (publish_port) is saltstack's message publishing system; 4506 (ret_port) is the port clients use to talk back to the server. If you inspect port 4505 with lsof, you will see every minion holding a connection to it in the ESTABLISHED state.
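
This is easy to confirm on the master, e.g.:

lsof -i :4505              # every minion holds an ESTABLISHED connection here
netstat -tnlp | grep 4506  # the return port the minions report back to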

5 ) apache

1 Create the files repository and the installation state files

[root@server2 apache]# pwd
/srv/salt/apache
[root@server2 apache]# ls
files  lib.sls  web.sls
[root@server2 apache]# cd files/
[root@server2 files]# ls
httpd.conf  index.php
[root@server2 files]# 

2 vim web.sls

apache-install:
  pkg.installed:
    - pkgs:
      - httpd
      - php
  file.managed:
    - name: /var/www/html/index.php
    - source: salt://apache/files/index.php
    - mode: 644
    - user: root
    - group: root

apache-service:
  file.managed:
    - name: /etc/httpd/conf/httpd.conf
    - source: salt://apache/files/httpd.conf
    - template: jinja
    - context:
      port: {{ pillar['port'] }}
      bind: {{ pillar['bind'] }}
  service.running:
    - name: httpd
    - enable: True
    - reload: True
    - watch:
      - file: apache-service
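
The pushed httpd.conf is rendered as a Jinja template with the context above. The full file is not reproduced here, but the line consuming the two variables presumably looks like this (an assumption, since only the template context is shown):

Listen {{ bind }}:{{ port }}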

3 vim lib.sls

{% set port = 80 %}

4 Pillar data for the web servers

[root@server2 web]# pwd
/srv/pillar/web
[root@server2 web]# ls
install.sls
[root@server2 web]# vim install.sls 

{% if grains['fqdn'] == 'server3' %}
webserver: httpd
bind: 172.25.1.3
port: 80
{% endif %}


On server3, edit the minion config with vim /etc/salt/minion and set the grain used for targeting later (roles: apache), then restart the minion:

/etc/init.d/salt-minion restart

5 It is best to push each service to the minions and verify it as soon as it is finished, to make sure it is correct, as shown below.

Write the configuration files and default site content under files yourself; in this article the minion's own config file was copied to the master and modified there, and in fact very little was changed.
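
For example, the apache state alone can be pushed to server3 from the master:

salt 'server3' state.sls apache.web    # apply a single sls for verification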

 vim index.php
<?php
phpinfo();
?>

6 Verification

6 )nginx

Same as apache; only the code is shown here.

[root@server2 nginx]# pwd
/srv/salt/nginx
[root@server2 nginx]# ls
files  install.sls  nginx-pre.sls  service.sls
[root@server2 nginx]# cd files/
[root@server2 files]# ls
nginx  nginx-1.14.0.tar.gz  nginx.conf
[root@server2 files]# 


vim nginx-pre.sls
pkg-init:
  pkg.installed:
    - pkgs:
      - gcc
      - zlib-devel
      - openssl-devel
      - pcre-devel

vim install.sls
include:
  - nginx.nginx-pre

nginx-source-install:
  file.managed:
    - name: /mnt/nginx-1.14.0.tar.gz
    - source: salt://nginx/files/nginx-1.14.0.tar.gz
  cmd.run:
    - name: cd /mnt && tar zxf nginx-1.14.0.tar.gz && cd nginx-1.14.0 && sed -i.bak 's/#define NGINX_VER "nginx\/" NGINX_VERSION/#define NGINX_VER "nginx"/g' src/core/nginx.h && sed -i.bak 's/CFLAGS="$CFLAGS -g"/#CFLAGS="$CFLAGS -g"/g' auto/cc/gcc && ./configure --prefix=/usr/local/nginx --with-http_ssl_module --with-http_stub_status_module && make && make install && ln -s /usr/local/nginx/sbin/nginx /usr/sbin/nginx
    - creates: /usr/local/nginx

The creates: argument keeps this state idempotent: the cmd.run is skipped on later runs once /usr/local/nginx already exists.


vim service.sls 
include:
  - nginx.install

/usr/local/nginx/conf/nginx.conf:
  file.managed:
    - source: salt://nginx/files/nginx.conf

nginx-service:
  file.managed:
    - name: /etc/init.d/nginx
    - source: salt://nginx/files/nginx
    - mode: 755
  service.running:
    - name: nginx
    - enable: True
    - reload: True
    - watch:
      - file: /usr/local/nginx/conf/nginx.conf


cd files
vim nginx
#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig:   - 85 15
# description:  Nginx is an HTTP(S) server, HTTP(S) reverse \
#               proxy and IMAP/POP3 proxy server
# processname: nginx
# config:      /usr/local/nginx/conf/nginx.conf
# pidfile:     /usr/local/nginx/logs/nginx.pid

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

nginx="/usr/local/nginx/sbin/nginx"
prog=$(basename $nginx)

lockfile="/var/lock/subsys/nginx"
pidfile="/usr/local/nginx/logs/${prog}.pid"

NGINX_CONF_FILE="/usr/local/nginx/conf/nginx.conf"


start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc -p $pidfile $prog
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    configtest_q || return 6
    stop
    start
}
reload() {
    configtest_q || return 6
    echo -n $"Reloading $prog: "
    killproc -p $pidfile $prog -HUP
    echo
}

configtest() {
    $nginx -t -c $NGINX_CONF_FILE
}

configtest_q() {
    $nginx -t -q -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

# Upgrade the binary with no downtime.
upgrade() {
    local oldbin_pidfile="${pidfile}.oldbin"

    configtest_q || return 6
    echo -n $"Upgrading $prog: "
    killproc -p $pidfile $prog -USR2
    retval=$?
    sleep 1
    if [[ -f ${oldbin_pidfile} && -f ${pidfile} ]];  then
        killproc -p $oldbin_pidfile $prog -QUIT
        success $"$prog online upgrade"
        echo 
        return 0
    else
        failure $"$prog online upgrade"
        echo
        return 1
    fi
}

# Tell nginx to reopen logs
reopen_logs() {
    configtest_q || return 6
    echo -n $"Reopening $prog logs: "
    killproc -p $pidfile $prog -USR1
    retval=$?
    echo
    return $retval
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest|reopen_logs)
        $1
        ;;
    force-reload|upgrade)
        rh_status_q || exit 7
        upgrade
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    status|status_q)
        rh_$1
        ;;
    condrestart|try-restart)
        rh_status_q || exit 7
        restart
        ;;
    *)
        echo $"Usage: $0 {start|stop|reload|configtest|status|force-reload|upgrade|restart|reopen_logs}"
        exit 2
esac

vim nginx.conf
#user  nobody;
worker_processes  auto;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       80;
        server_name  localhost;
        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   html;
            index  index.html index.htm;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }


    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}


    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;

    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;

    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

}




[root@server2 web]# pwd
/srv/pillar/web
[root@server2 web]# vim install.sls 

{% if grains['fqdn'] == 'server3' %}
webserver: httpd
bind: 172.25.1.3
port: 80
{% elif grains['fqdn'] == 'server4'%}
webserver: nginx
bind: 172.25.1.4
port: 80
{% endif %}

On the minion side (server4):
[root@server4 salt]# cd /etc/salt/
[root@server4 salt]# vim grains    
roles: nginx
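
After restarting the minion, the grain can be confirmed from the master:

salt 'server4' grains.item roles    # should return roles: nginx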

Note: test every service after it is deployed.

Test results:

7 )haproxy

[root@server2 haproxy]# pwd
/srv/salt/haproxy
[root@server2 haproxy]# cd files/
[root@server2 files]# ls
haproxy-1.6.11.tar.gz  haproxy.cfg  haproxy.init
[root@server2 files]# pwd
/srv/salt/haproxy/files


vim install.sls
include:
  - nginx.nginx-pre
  - users.haproxy

haproxy-install:
  file.managed:
    - name: /mnt/haproxy-1.6.11.tar.gz
    - source: salt://haproxy/files/haproxy-1.6.11.tar.gz
  cmd.run:
    - name: cd /mnt && tar zxf haproxy-1.6.11.tar.gz && cd haproxy-1.6.11 && make TARGET=linux2628 USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 PREFIX=/usr/local/haproxy && make TARGET=linux2628 USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 PREFIX=/usr/local/haproxy install
    - creates: /usr/local/haproxy

/etc/haproxy:
  file.directory:
    - mode: 755


/usr/sbin/haproxy:
  file.symlink:
    - target: /usr/local/haproxy/sbin/haproxy


vim service.sls
include:
  - haproxy.install

/etc/haproxy/haproxy.cfg:
  file.managed:
    - source: salt://haproxy/files/haproxy.cfg

haproxy-service:
  file.managed:
    - name: /etc/init.d/haproxy
    - source: salt://haproxy/files/haproxy.init
    - mode: 755
  service.running:
    - name: haproxy
    - enable: True
    - reload: True
    - watch:
      - file: /etc/haproxy/haproxy.cfg

 vim haproxy.cfg 
defaults
        mode            http
        log             global
        option          httplog
        option          dontlognull
        monitor-uri     /monitoruri
        maxconn         8000
        timeout client  30s
        retries         2
        option redispatch
        timeout connect 5s
        timeout server  30s
        timeout queue   30s
        stats uri       /admin/stats

# The public 'www' address in the DMZ
frontend public
        bind            *:80 name clear
        default_backend dynamic


# the application servers go here
backend dynamic
        balance         roundrobin
        fullconn        4000 # the servers will be used at full load above this number of connections
        server          dynsrv1 172.25.1.3:80 check inter 1000
        server          dynsrv2 172.25.1.4:80 check inter 1000
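
With the configuration above, the monitor and stats URIs give a quick health check once haproxy is running (addresses per the environment above):

curl -s http://172.25.1.2/monitoruri     # simple 200 "service ready" page
curl -s http://172.25.1.2/admin/stats    # haproxy statistics report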



vim haproxy.init 
Copy this init script yourself from /mnt/haproxy-1.6.11/examples in the unpacked source on the minion (the examples directory ships with the tarball).
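
Since server2 is both the master and a minion, the unpacked tree is available locally once the install state has run; one way to do it (paths assumed from the build above):

cp /mnt/haproxy-1.6.11/examples/haproxy.init /srv/salt/haproxy/files/haproxy.init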

Create an sls file for the haproxy user:
[root@server2 salt]# cd users/
[root@server2 users]# ls
haproxy.sls
vim haproxy.sls
haproxy-group:
  group.present:
    - name: haproxy
    - gid: 200

haproxy:
  user.present:
    - uid: 200
    - gid: 200
    - home: /usr/local/haproxy
    - createhome: False
    - shell: /sbin/nologin

haproxy needs to be installed on both server2 and server5.

[root@server2 web]# pwd
/srv/pillar/web

[root@server2 web]# vim install.sls

{% if grains['fqdn'] == 'server3' %}
webserver: httpd
bind: 172.25.1.3
port: 80
{% elif grains['fqdn'] == 'server4'%}
webserver: nginx
bind: 172.25.1.4
port: 80
{% elif grains['fqdn'] == 'server2'%}
webserver: haproxy
{% elif grains['fqdn'] == 'server5'%}
webserver: haproxy
{% endif %}

Load-balancing test:
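
Assuming the two backends serve pages that identify themselves, repeated requests to the haproxy frontend should alternate between server3 and server4:

for i in 1 2 3 4; do curl -s http://172.25.1.2/; done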

8 ) keepalived

[root@server2 keepalived]# pwd
/srv/salt/keepalived
[root@server2 keepalived]# ls
files  install.sls  service.sls
[root@server2 keepalived]# cd files/
[root@server2 files]# ls
keepalived  keepalived-2.0.6.tar.gz  keepalived.conf
[root@server2 files]# pwd
/srv/salt/keepalived/files


vim install.sls
include:
  - nginx.nginx-pre

kp-install:
  file.managed:
    - name: /mnt/keepalived-2.0.6.tar.gz
    - source: salt://keepalived/files/keepalived-2.0.6.tar.gz

  cmd.run:
    - name: cd /mnt && tar zxf keepalived-2.0.6.tar.gz && cd keepalived-2.0.6 &&  ./configure --prefix=/usr/local/keepalived --with-init=SYSV && make && make install
    - creates: /usr/local/keepalived

/etc/keepalived:
  file.directory:
    - mode: 755

/sbin/keepalived:
  file.symlink:
    - target: /usr/local/keepalived/sbin/keepalived


/etc/sysconfig/keepalived:
  file.symlink:
    - target: /usr/local/keepalived/etc/sysconfig/keepalived

/etc/init.d/keepalived:
  file.managed:
    - source: salt://keepalived/files/keepalived
    - mode: 755



vim service.sls
include:
  - keepalived.install

kp-service:
  file.managed:
    - name: /etc/keepalived/keepalived.conf
    - source: salt://keepalived/files/keepalived.conf
    - template: jinja
    - context:
      STATE: {{ pillar['state'] }}
      VRID: {{ pillar['vrid'] }}
      PRIORITY: {{ pillar['priority'] }}
  service.running:
    - name: keepalived
    - reload: True
    - watch:
      - file: kp-service


vim keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
      root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
#   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state {{ STATE }}
    interface eth0
    virtual_router_id {{ VRID }}
    priority {{ PRIORITY }}
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
      172.25.1.100
    }
}
                                                                                       
 vim keepalived
Copy the init script yourself from the keepalived build tree (see below).
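
As with haproxy, the SYSV init script is generated in the build tree on the minion; one hedged way to fetch it (the exact path can vary between keepalived versions):

cp /mnt/keepalived-2.0.6/keepalived/etc/init.d/keepalived /srv/salt/keepalived/files/keepalived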


[root@server2 keepalived]# pwd
/srv/pillar/keepalived
[root@server2 keepalived]# ls
install.sls
[root@server2 keepalived]# vim install.sls 
{% if grains['fqdn'] == 'server2' %}
state: MASTER
vrid: 39
priority: 100
{% elif grains['fqdn'] == 'server5'%}
state: BACKUP
vrid: 39
priority: 50
{% endif %}


[root@server2 pillar]# pwd
/srv/pillar
[root@server2 pillar]# ls
keepalived  top.sls  web
[root@server2 pillar]# vim top.sls 
base:
  '*':
    - web.install
    - keepalived.install

[root@server2 salt]# pwd
/srv/salt
[root@server2 salt]# vim top.sls 
base:
  'server2':
    - haproxy.service
    - keepalived.service
  'server5':
    - haproxy.service
    - keepalived.service
  'roles:apache':
    - match: grain
    - apache.web
  'roles:nginx':
    - match: grain
    - nginx.service

 salt '*' state.highstate    (push the full highstate to every minion)

Failover test:

If server2 is now taken down, you will find that access continues unchanged, but the VIP has floated over to server5.
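
One way to check this (a sketch: stop keepalived on server2 to simulate the failure, then look for the VIP on server5):

/etc/init.d/keepalived stop               # on server2: simulate the failure
ip addr show eth0 | grep 172.25.1.100     # on server5: the VIP should now be here
curl -s http://172.25.1.100/monitoruri    # the service still answers through the VIP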
