
cool-2018-03-08: FastDFS cluster setup + keepalived + nginx high availability

This walkthrough uses the virtual machines I built for a highly available, load-balanced FastDFS cluster as the example. Download address:

1. Environment Preparation

    CentOS 6.6, VMware

   8 nodes: node22, node23, and node25~node30. node22 and node23 run keepalived + nginx for high-availability load balancing (see the architecture diagram for the overall idea). node25 and node26 act as trackers; node27~node30 act as storage nodes (node27 and node28 form one group, node29 and node30 form another; after a file is uploaded to one node in a group, it is synchronized by FastDFS to the other nodes in the same group).

    Architecture diagram:

        Principle: keepalived provides a virtual IP (VIP). Requests to the VIP are handled by one of the two load-balanced nginx nodes, which in turn load-balances across the nginx instances running on the two trackers; the tracker-side nginx finally proxies the request to one of the storage groups. A rough sketch of the topology is shown below.
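
        A minimal text sketch of the topology described above (node IPs follow the 192.168.25.x numbering used later in this article):

                              VIP 192.168.25.50 (keepalived)
                               /                        \
                node22 nginx (MASTER)          node23 nginx (BACKUP)
                               \                        /
                            load balance across the trackers
                               /                        \
              node25 tracker + nginx:8000     node26 tracker + nginx:8000
                               \                        /
                 +-------------+------------+----------+-------+
                 |             |            |                  |
           node27:8888    node28:8888   node29:8888      node30:8888
           [----- group1 -----]         [-------- group2 --------]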


    FastDFS notes:


        Nodes within the same group (volume) are mutual backups: every node in a group stores the same data, so a group's usable capacity is bounded by the smallest node in it. For example, a group containing a 100 GB node and an 80 GB node has a capacity of 80 GB. Because all nodes in a group hold identical data, adding nodes to a group raises its read throughput; the group effectively load-balances reads internally, improving access performance. Different groups store different data, so groups are the unit of horizontal scaling: to expand the cluster, either add new groups or enlarge the capacity of the nodes in existing groups.

    Packages used:



fastdfs_client.zip, fastdfs_client_java._v1.25.tar.gz, fastdfs_client_v1.24.jar, FastDFS_v5.05.tar.gz, fastdfs-nginx-module_v1.16.tar.gz, libfastcommon-1.0.7.tar.gz

2. Installation

    Step 1: Install libfastcommon on all nodes

        Copy the installation packages to each of the 6 FastDFS nodes.

        Install the dependencies and toolchain needed for compilation; use Xshell's "to all sessions" so every node gets them:

         yum install make cmake gcc gcc-c++

        For now, keep all the packages in the root home directory. Next, install libfastcommon on all nodes:

        tar -zxvf libfastcommon-1.0.7.tar.gz

        cd libfastcommon-1.0.7

        ./make.sh


        ./make.sh install


        By default libfastcommon is installed to
        /usr/lib64/libfastcommon.so

        /usr/lib64/libfdfsclient.so

(On a 32-bit system, also copy the library into /usr/lib: cp /usr/lib64/libfastcommon.so /usr/lib/. On 64-bit systems this is not needed.)
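
        A quick way to verify the library landed where expected (a sketch; the libfdfsclient.so listed above only appears after FastDFS itself is installed in the next step):

        ls -l /usr/lib64/libfastcommon.so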

    Step 2: Install FastDFS on all nodes

            cd

            tar -zxvf FastDFS_v5.05.tar.gz

            cd FastDFS

            Before compiling, make sure libfastcommon was installed successfully.

            ./make.sh


        ./make.sh install


        Note: the service scripts are installed to /etc/init.d/:

        /etc/init.d/fdfs_storaged

        /etc/init.d/fdfs_tracker


        Note: the command-line tools are installed to /usr/bin/:

        fdfs_appender_test
        fdfs_appender_test1
        fdfs_append_file
        fdfs_crc32
        fdfs_delete_file
        fdfs_download_file
        fdfs_file_info
        fdfs_monitor
        fdfs_storaged
        fdfs_test
        fdfs_test1
        fdfs_trackerd
        fdfs_upload_appender
        fdfs_upload_file
        stop.sh

        restart.sh


        The service scripts reference the command-line tools in /usr/bin/, but by default they point at /usr/local/bin.

        So first fix the path on the two tracker nodes, node25 and node26:

        vi /etc/init.d/fdfs_trackerd

        %s+/usr/local/bin+/usr/bin


        Save and quit.

        Make the same change on the remaining four storage nodes:

        vi /etc/init.d/fdfs_storaged

        %s+/usr/local/bin+/usr/bin
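
        If you prefer a non-interactive edit, the same substitution can be done with sed (a sketch; run the first line on the tracker nodes and the second on the storage nodes):

        sed -i 's+/usr/local/bin+/usr/bin+g' /etc/init.d/fdfs_trackerd

        sed -i 's+/usr/local/bin+/usr/bin+g' /etc/init.d/fdfs_storaged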


         The steps above are required whether a node will run a tracker or a storage server; the difference between tracker and storage lies in the configuration that follows installation. Installation on all nodes is now complete; next comes configuration.

3. Configuration

    Step 1: Copy the configuration files on all nodes

        The sample configuration files are located at:

        /etc/fdfs/client.conf.sample
        /etc/fdfs/storage.conf.sample
        /etc/fdfs/tracker.conf.sample


        Return to the root home directory:

        cd

        cd FastDFS/conf/

        Run the following on all nodes (to all sessions):

        cp * /etc/fdfs/

        cd /etc/fdfs


        cd /home

        mkdir fastdfs

        cd fastdfs

        Create all the directories that will be needed, on all nodes in one go (again via "to all sessions"; all the operations here are done this way):

        mkdir client

        mkdir storage

        mkdir tracker
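
        The three mkdir commands above can also be done in one line (a sketch):

        mkdir -p /home/fastdfs/{client,storage,tracker}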


    Step 2: Configure the trackers on the two tracker nodes, 25 and 26 (only tracker.conf is needed for now). "to all sessions" is not needed here.

        vi /etc/fdfs/tracker.conf

        Only base_path needs to be changed:

        base_path=/home/fastdfs/tracker



        Open the tracker port (default 22122) in the firewall on both tracker nodes, 25 and 26:

        vi /etc/sysconfig/iptables

        -A INPUT -m state --state NEW -m tcp -p tcp --dport 22122 -j ACCEPT

        Restart the firewall on nodes 25 and 26:

        service iptables restart

        Before the tracker is started, the /home/fastdfs/tracker directory is empty.

        Run the start command on nodes 25 and 26:

        /usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf

        To stop:

        /usr/bin/stop.sh /usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf

        The first start creates the data and logs directories under /home/fastdfs/tracker. There are two ways to check whether the tracker started successfully:

        (1) Check whether port 22122 is being listened on: netstat -unltp|grep fdfs


        (2) Check the tracker startup log for errors:

        tail -100f /home/fastdfs/tracker/logs/trackerd.log

        Set the FastDFS tracker to start on boot:

        vi /etc/rc.d/rc.local

        ## FastDFS Tracker
        /usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf


        The tracker configuration is now complete.

    Step 3: Configure storage (the four nodes 27, 28, 29, 30)

        cd /etc/fdfs/

        vi /etc/fdfs/storage.conf

Change the following values; everything else keeps its defaults. The configuration on nodes 27/28 and nodes 29/30 differs only in group_name:
disabled=false  #enable this configuration file
group_name=group1  #group name (the first group, nodes 27/28, is group1; the second group, nodes 29/30, is group2)
port=23000  #storage port; all storage nodes in the same group must use the same port
base_path=/home/fastdfs/storage  #storage log directory

store_path0=/home/fastdfs/storage  #storage path

store_path_count=1  #number of storage paths; must match the number of store_path entries
tracker_server=192.168.25.125:22122  #tracker server IP address and port
tracker_server=192.168.25.126:22122  #add one tracker_server line per tracker

http.server_port=8888  #HTTP port



        (Keep the defaults for the other parameters; see the official documentation for details: http://bbs.chinaunix.net/thread-1941456-1-1.html)
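
        To double-check that only the intended keys were changed (a sketch):

        grep -E '^(disabled|group_name|port|base_path|store_path|tracker_server|http.server_port)' /etc/fdfs/storage.conf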

        Open the firewall port on the four storage nodes.

        Open the storage port (default 23000) in the firewall:
        vi /etc/sysconfig/iptables
        Add the following line:

        -A INPUT -m state --state NEW -m tcp -p tcp --dport 23000 -j ACCEPT


        Restart the firewall:

        service iptables restart

        Start the four storage nodes. Before startup, the /home/fastdfs/storage directory is empty:

        /usr/bin/fdfs_storaged /etc/fdfs/storage.conf

        After startup, the data and logs directories are created under the storage directory; data contains many storage subdirectories.

        Check whether startup succeeded:

        tail -f /home/fastdfs/storage/logs/storaged.log



        Check whether port 23000 is being listened on:

        netstat -unltp|grep fdfs

        Failover test: take down the tracker on node 26 and watch the storage nodes' state:

        /usr/bin/stop.sh /usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf

        The storage logs show that the tracker leader was 126 before the stop and 125 after it.

        After the tracker on node 26 is restarted, all storage nodes detect it again.

        Once all Storage nodes are up, the cluster information can be viewed from any Storage node with:

        /usr/bin/fdfs_monitor /etc/fdfs/storage.conf

/usr/bin/fdfs_monitor /etc/fdfs/storage.conf
[2018-03-08 01:36:51] DEBUG - base_path=/home/fastdfs/storage, connect_timeout=30, network_timeout=60, tracker_server_count=2, anti_steal_token=0, anti_steal_secret_key length=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s, use_storage_id=0, storage server id count: 0
server_count=2, server_index=1
tracker server is 192.168.25.126:22122
group count: 2

Group 1:
group name = group1
disk total space = 12669 MB
disk free space = 10158 MB
trunk free space = 0 MB
storage server count = 2
active server count = 2
storage server port = 23000
storage HTTP port = 8888
store path count = 1
subdir count per path = 256
current write server index = 0
current trunk file id = 0

    Storage 1:
        id = 192.168.25.127
        ip_addr = 192.168.25.127  ACTIVE
        http domain =
        version = 5.05
        join time = 2018-03-08 01:05:20
        up time = 2018-03-08 01:05:20
        total storage = 12669 MB
        free storage = 10158 MB
        upload priority = 10
        store_path_count = 1
        subdir_count_per_path = 256
        storage_port = 23000
        storage_http_port = 8888
        current_write_path = 0
        source storage id = 192.168.25.128
        if_trunk_server = 0
        connection.alloc_count = 256
        connection.current_count = 1
        connection.max_count = 1
        total_upload_count = 0
        success_upload_count = 0
        total_append_count = 0
        success_append_count = 0
        total_modify_count = 0
        success_modify_count = 0
        total_truncate_count = 0
        success_truncate_count = 0
        total_set_meta_count = 0
        success_set_meta_count = 0
        total_delete_count = 0
        success_delete_count = 0
        total_download_count = 0
        success_download_count = 0
        total_get_meta_count = 0
        success_get_meta_count = 0
        total_create_link_count = 0
        success_create_link_count = 0
        total_delete_link_count = 0
        success_delete_link_count = 0
        total_upload_bytes = 0
        success_upload_bytes = 0
        total_append_bytes = 0
        success_append_bytes = 0
        total_modify_bytes = 0
        success_modify_bytes = 0
        stotal_download_bytes = 0
        success_download_bytes = 0
        total_sync_in_bytes = 0
        success_sync_in_bytes = 0
        total_sync_out_bytes = 0
        success_sync_out_bytes = 0
        total_file_open_count = 0
        success_file_open_count = 0
        total_file_read_count = 0
        success_file_read_count = 0
        total_file_write_count = 0
        success_file_write_count = 0
        last_heart_beat_time = 2018-03-08 01:36:31
        last_source_update = 1970-01-01 08:00:00
        last_sync_update = 1970-01-01 08:00:00
        last_synced_timestamp = 1970-01-01 08:00:00
    Storage 2:
        id = 192.168.25.128
        ip_addr = 192.168.25.128  ACTIVE
        http domain =
        version = 5.05
        join time = 2018-03-08 01:05:16
        up time = 2018-03-08 01:05:16
        total storage = 12669 MB
        free storage = 10158 MB
        upload priority = 10
        store_path_count = 1
        subdir_count_per_path = 256
        storage_port = 23000
        storage_http_port = 8888
        current_write_path = 0
        source storage id =
        if_trunk_server = 0
        connection.alloc_count = 256
        connection.current_count = 1
        connection.max_count = 1
        total_upload_count = 0
        success_upload_count = 0
        total_append_count = 0
        success_append_count = 0
        total_modify_count = 0
        success_modify_count = 0
        total_truncate_count = 0
        success_truncate_count = 0
        total_set_meta_count = 0
        success_set_meta_count = 0
        total_delete_count = 0
        success_delete_count = 0
        total_download_count = 0
        success_download_count = 0
        total_get_meta_count = 0
        success_get_meta_count = 0
        total_create_link_count = 0
        success_create_link_count = 0
        total_delete_link_count = 0
        success_delete_link_count = 0
        total_upload_bytes = 0
        success_upload_bytes = 0
        total_append_bytes = 0
        success_append_bytes = 0
        total_modify_bytes = 0
        success_modify_bytes = 0
        stotal_download_bytes = 0
        success_download_bytes = 0
        total_sync_in_bytes = 0
        success_sync_in_bytes = 0
        total_sync_out_bytes = 0
        success_sync_out_bytes = 0
        total_file_open_count = 0
        success_file_open_count = 0
        total_file_read_count = 0
        success_file_read_count = 0
        total_file_write_count = 0
        success_file_write_count = 0
        last_heart_beat_time = 2018-03-08 01:36:31
        last_source_update = 1970-01-01 08:00:00
        last_sync_update = 1970-01-01 08:00:00
        last_synced_timestamp = 1970-01-01 08:00:00

Group 2:
group name = group2
disk total space = 12669 MB
disk free space = 10158 MB
trunk free space = 0 MB
storage server count = 2
active server count = 2
storage server port = 23000
storage HTTP port = 8888
store path count = 1
subdir count per path = 256
current write server index = 0
current trunk file id = 0

    Storage 1:
        id = 192.168.25.129
        ip_addr = 192.168.25.129  ACTIVE
        http domain =
        version = 5.05
        join time = 2018-03-08 01:05:13
        up time = 2018-03-08 01:05:13
        total storage = 12669 MB
        free storage = 10158 MB
        upload priority = 10
        store_path_count = 1
        subdir_count_per_path = 256
        storage_port = 23000
        storage_http_port = 8888
        current_write_path = 0
        source storage id = 192.168.25.130
        if_trunk_server = 0
        connection.alloc_count = 256
        connection.current_count = 1
        connection.max_count = 1
        total_upload_count = 0
        success_upload_count = 0
        total_append_count = 0
        success_append_count = 0
        total_modify_count = 0
        success_modify_count = 0
        total_truncate_count = 0
        success_truncate_count = 0
        total_set_meta_count = 0
        success_set_meta_count = 0
        total_delete_count = 0
        success_delete_count = 0
        total_download_count = 0
        success_download_count = 0
        total_get_meta_count = 0
        success_get_meta_count = 0
        total_create_link_count = 0
        success_create_link_count = 0
        total_delete_link_count = 0
        success_delete_link_count = 0
        total_upload_bytes = 0
        success_upload_bytes = 0
        total_append_bytes = 0
        success_append_bytes = 0
        total_modify_bytes = 0
        success_modify_bytes = 0
        stotal_download_bytes = 0
        success_download_bytes = 0
        total_sync_in_bytes = 0
        success_sync_in_bytes = 0
        total_sync_out_bytes = 0
        success_sync_out_bytes = 0
        total_file_open_count = 0
        success_file_open_count = 0
        total_file_read_count = 0
        success_file_read_count = 0
        total_file_write_count = 0
        success_file_write_count = 0
        last_heart_beat_time = 2018-03-08 01:36:34
        last_source_update = 1970-01-01 08:00:00
        last_sync_update = 1970-01-01 08:00:00
        last_synced_timestamp = 1970-01-01 08:00:00
    Storage 2:
        id = 192.168.25.130
        ip_addr = 192.168.25.130  ACTIVE
        http domain =
        version = 5.05
        join time = 2018-03-08 01:05:02
        up time = 2018-03-08 01:05:02
        total storage = 12669 MB
        free storage = 10158 MB
        upload priority = 10
        store_path_count = 1
        subdir_count_per_path = 256
        storage_port = 23000
        storage_http_port = 8888
        current_write_path = 0
        source storage id =
        if_trunk_server = 0
        connection.alloc_count = 256
        connection.current_count = 1
        connection.max_count = 1
        total_upload_count = 0
        success_upload_count = 0
        total_append_count = 0
        success_append_count = 0
        total_modify_count = 0
        success_modify_count = 0
        total_truncate_count = 0
        success_truncate_count = 0
        total_set_meta_count = 0
        success_set_meta_count = 0
        total_delete_count = 0
        success_delete_count = 0
        total_download_count = 0
        success_download_count = 0
        total_get_meta_count = 0
        success_get_meta_count = 0
        total_create_link_count = 0
        success_create_link_count = 0
        total_delete_link_count = 0
        success_delete_link_count = 0
        total_upload_bytes = 0
        success_upload_bytes = 0
        total_append_bytes = 0
        success_append_bytes = 0
        total_modify_bytes = 0
        success_modify_bytes = 0
        stotal_download_bytes = 0
        success_download_bytes = 0
        total_sync_in_bytes = 0
        success_sync_in_bytes = 0
        total_sync_out_bytes = 0
        success_sync_out_bytes = 0
        total_file_open_count = 0
        success_file_open_count = 0
        total_file_read_count = 0
        success_file_read_count = 0
        total_file_write_count = 0
        success_file_write_count = 0
        last_heart_beat_time = 2018-03-08 01:36:42
        last_source_update = 1970-01-01 08:00:00
        last_sync_update = 1970-01-01 08:00:00
        last_synced_timestamp = 1970-01-01 08:00:00

        As long as every storage node shows status ACTIVE, the cluster is healthy.

        To stop a storage node:

        /usr/bin/stop.sh /usr/bin/fdfs_storaged /etc/fdfs/storage.conf

        Configure storage to start on boot:

        vi /etc/rc.d/rc.local

        /usr/bin/fdfs_storaged /etc/fdfs/storage.conf

4. Testing

    Step 1: Modify the client configuration file client.conf on a Tracker server (node 25 is used for testing here; node 26 can be configured the same way)

        vi /etc/fdfs/client.conf

        base_path=/home/fastdfs/tracker
        tracker_server=192.168.25.125:22122
        tracker_server=192.168.25.126:22122


        File upload test:

        /usr/bin/fdfs_test /etc/fdfs/client.conf upload anti-steal.jpg


        Upload one more file; synchronization between nodes in a group is quite fast:

        /usr/bin/fdfs_upload_file /etc/fdfs/client.conf /root/FastDFS_v5.05.tar.gz


        Run the two tests above once more and the files are uploaded to group2.
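
        fdfs_upload_file prints the file ID of the uploaded file, so it can be captured and inspected directly (a sketch, assuming the upload succeeds):

        file_id=$(/usr/bin/fdfs_upload_file /etc/fdfs/client.conf /root/FastDFS_v5.05.tar.gz)

        echo $file_id    # e.g. group1/M00/00/00/xxxxxx.tar.gz

        /usr/bin/fdfs_file_info /etc/fdfs/client.conf $file_id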

5. Install the fastdfs-nginx-module into nginx on the four storage nodes

    Step 1: Compile the fastdfs module into nginx and configure it

        cd    (back to the root home directory)

        tar -zxvf fastdfs-nginx-module_v1.16.tar.gz

        cd fastdfs-nginx-module

        cd src/

        pwd    (copy the path it prints)

        /root/fastdfs-nginx-module/src

        vi config

        Remove every "local" from the paths in this file (so /usr/local/... becomes /usr/...) and save. This path change matters; without it the nginx build will fail.
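
        The same edit as a one-liner, if you prefer (a sketch; inspect the file afterwards to confirm):

        sed -i 's+/usr/local/+/usr/+g' /root/fastdfs-nginx-module/src/config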



        In this example nginx was already installed on every node, so it only needs to be recompiled and reinstalled with the module.

        (If the build dependencies are missing, install them first: yum install gcc gcc-c++ make automake autoconf libtool pcre pcre-devel zlib zlib-devel openssl openssl-devel)

    Reinstall and reconfigure nginx on all storage nodes:

        cd /root/nginx-1.8.0

        ./configure --prefix=/usr/nginx-1.8 \
        --add-module=/root/fastdfs-nginx-module/src

        (or: ./configure --prefix=/usr/local/nginx --add-module=/usr/local/src/fastdfs-nginx-module/src)

        make

        make install
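
        To confirm the module was actually compiled in (a sketch; nginx -V prints the configure arguments to stderr):

        /usr/nginx-1.8/sbin/nginx -V 2>&1 | grep fastdfs-nginx-module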


        cp /root/fastdfs-nginx-module/src/mod_fastdfs.conf /etc/fdfs

        vi /etc/fdfs/mod_fastdfs.conf

        The mod_fastdfs.conf on the first group of storage nodes differs from the second group only in group_name (on the second group it is):
group_name=group2

connect_timeout=10
base_path=/tmp
tracker_server=192.168.25.125:22122
tracker_server=192.168.25.126:22122
storage_server_port=23000
group_name=group1 #group1 on nodes 27/28; group2 on nodes 29/30
url_have_group_name = true
store_path0=/home/fastdfs/storage
group_count = 2
[group1]
group_name=group1
storage_server_port=23000
store_path_count=1
store_path0=/home/fastdfs/storage
[group2]
group_name=group2
storage_server_port=23000
store_path_count=1
store_path0=/home/fastdfs/storage

The complete configuration file on one of the groups:

# connect timeout in seconds
# default value is 30s
connect_timeout=10

# network recv and send timeout in seconds
# default value is 30s
network_timeout=30

# the base path to store log files
base_path=/tmp

# if load FastDFS parameters from tracker server
# since V1.12
# default value is false
load_fdfs_parameters_from_tracker=true

# storage sync file max delay seconds
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V1.12
# default value is 86400 seconds (one day)
storage_sync_file_max_delay = 86400

# if use storage ID instead of IP address
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# default value is false
# since V1.13
use_storage_id = false

# specify storage ids filename, can use relative or absolute path
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V1.13
storage_ids_filename = storage_ids.conf

# FastDFS tracker_server can ocur more than once, and tracker_server format is
#  "host:port", host can be hostname or ip address
# valid only when load_fdfs_parameters_from_tracker is true
tracker_server=192.168.25.125:22122
tracker_server=192.168.25.126:22122


# the port of the local storage server
# the default value is 23000
storage_server_port=23000

# the group name of the local storage server
group_name=group1

# if the url / uri including the group name
# set to false when uri like /M00/00/00/xxx
# set to true when uri like ${group_name}/M00/00/00/xxx, such as group1/M00/xxx
# default value is false
url_have_group_name = true

# path(disk or mount point) count, default value is 1
# must same as storage.conf
store_path_count=1

# store_path#, based 0, if store_path0 not exists, it's value is base_path
# the paths must be exist
# must same as storage.conf
store_path0=/home/fastdfs/storage
#store_path1=/home/yuqing/fastdfs1

# standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info

# set the log filename, such as /usr/local/apache2/logs/mod_fastdfs.log
# empty for output to stderr (apache and nginx error_log file)
log_filename=

# response mode when the file not exist in the local file system
## proxy: get the content from other storage server, then send to client
## redirect: redirect to the original storage server (HTTP Header is Location)
response_mode=proxy

# the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
# multi aliases split by comma. empty value means auto set by OS type
# this paramter used to get all ip address of the local host
# default values is empty
if_alias_prefix=

# use "#include" directive to include HTTP config file
# NOTE: #include is an include directive, do NOT remove the # before include
#include http.conf


# if support flv
# default value is false
# since v1.15
flv_support = true

# flv file extension name
# default value is flv
# since v1.15
flv_extension = flv


# set the group count
# set to none zero to support multi-group
# set to 0  for single group only
# groups settings section as [group1], [group2], ..., [groupN]
# default value is 0
# since v1.14
group_count = 2

# group settings for group #1
# since v1.14
# when support multi-group, uncomment following section
[group1]
group_name=group1
storage_server_port=23000
store_path_count=1
store_path0=/home/fastdfs/storage

#store_path1=/home/yuqing/fastdfs1

# group settings for group #2
# since v1.14
# when support multi-group, uncomment following section as neccessary
[group2]
group_name=group2
storage_server_port=23000
store_path_count=1
store_path0=/home/fastdfs/storage

    Step 2: Configure nginx on the four storage nodes

cd /usr/nginx-1.8/conf
vi nginx.conf

server {
    listen 8888;
    server_name localhost;
    location ~/group([0-9])/M00 {
        ngx_fastdfs_module;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}

        Notes:
        A. The port 8888 must match http.server_port=8888 in /etc/fdfs/storage.conf; http.server_port defaults to 8888, so if you want to use port 80 instead, change it in both places.
        B. When a Storage node serves multiple groups, the access path carries the group name, e.g. /group1/M00/00/00/xxx, and the corresponding nginx configuration is:
            location ~/group([0-9])/M00 {
                ngx_fastdfs_module;
            }

        C. If downloads keep returning 404, change user nobody on the first line of nginx.conf to user root and restart nginx.

        On the four storage nodes, open nginx's port 8888 in the firewall:

        vi /etc/sysconfig/iptables

        -A INPUT -m state --state NEW -m tcp -p tcp --dport 8888 -j ACCEPT

        service iptables restart

        Start nginx:

        cd /usr/nginx-1.8

        ./sbin/nginx

        Set nginx to start on boot:

        vi /etc/rc.local

        /usr/nginx-1.8/sbin/nginx

        Files uploaded during the earlier tests can now be fetched in a browser:

        http://192.168.25.129:8888/group2/M00/00/00/wKgZgVqgKLKAVhNNAABdrZgsqUU428_big.jpg


        http://192.168.25.129:8888/group2/M00/00/00/wKgZglqgKLWACQnpAAVFOL7FJU4.tar.gz

        http://192.168.25.128:8888/group1/M00/00/00/wKgZgFqgJ3CACfMrAAVFOL7FJU4.tar.gz

    Step 3: Install nginx with the cache-purge module on the tracker nodes (192.168.25.125, 192.168.25.126) and configure it

        The ngx_cache_purge module clears the cache for a specified URL.

        tar -zxvf ngx_cache_purge-2.3.tar.gz

        cd /root/nginx-1.8.0

        ./configure --prefix=/usr/nginx-1.8 \
        --add-module=/root/ngx_cache_purge-2.3

        make && make install

        Configure nginx with load balancing and caching:

        vi /usr/nginx-1.8/conf/nginx.conf

        The configuration is as follows:

user  root;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

events {
    worker_connections  1024;
    use epoll;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;
    #cache settings
    server_names_hash_bucket_size 128;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;
    client_max_body_size 300m;
    
    proxy_redirect off;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffer_size 16k;
    proxy_buffers 4 64k;
    proxy_busy_buffers_size 128k;
    proxy_temp_file_write_size 128k;

    #cache path, directory layout, shared-memory zone size, maximum disk space, and inactivity expiry
    proxy_cache_path /home/fastdfs/cache/nginx/proxy_cache levels=1:2
    keys_zone=http-cache:200m max_size=1g inactive=30d;
    proxy_temp_path /home/fastdfs/cache/nginx/proxy_cache/tmp;
 
    #group1 servers
    upstream fdfs_group1 {
         server 192.168.25.127:8888 weight=1 max_fails=2 fail_timeout=30s;
         server 192.168.25.128:8888 weight=1 max_fails=2 fail_timeout=30s;
    }

    #group2 servers
    upstream fdfs_group2 {
         server 192.168.25.129:8888 weight=1 max_fails=2 fail_timeout=30s;
         server 192.168.25.130:8888 weight=1 max_fails=2 fail_timeout=30s;
    }

    server {
        listen       8000;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;


        #per-group load-balancing settings
        location /group1/M00 {
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            proxy_cache http-cache;
            proxy_cache_valid  200 304 12h;
            proxy_cache_key $uri$is_args$args;
            proxy_pass http://fdfs_group1;
            expires 30d;
        }
         
        location /group2/M00 {
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            proxy_cache http-cache;
            proxy_cache_valid  200 304 12h;
            proxy_cache_key $uri$is_args$args;
            proxy_pass http://fdfs_group2;
            expires 30d;
        }
    
        #access permissions for purging the cache
        location ~/purge(/.*) {
            allow 127.0.0.1;
            allow 192.168.1.0/24;
            deny all;
            proxy_cache_purge http-cache $1$is_args$args;
        }    

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

        Create the cache directories required by the nginx configuration above:
        mkdir -p /home/fastdfs/cache/nginx/proxy_cache

        mkdir -p /home/fastdfs/cache/nginx/proxy_cache/tmp

        On the tracker nodes, open the corresponding port in the system firewall:

        vi /etc/sysconfig/iptables

        -A INPUT -m state --state NEW -m tcp -p tcp --dport 8000 -j ACCEPT

        service iptables restart

        Start nginx:

        /usr/nginx-1.8/sbin/nginx

        Set nginx to start on boot:

        vi /etc/rc.local

        /usr/nginx-1.8/sbin/nginx

        File access test

        Earlier, files were fetched directly from the nginx on the storage nodes.

        Now they can also be fetched through the nginx on a tracker:

        http://192.168.25.125:8000/group2/M00/00/00/wKgZgVqgKLKAVhNNAABdrZgsqUU428_big.jpg

        http://192.168.25.125:8000/group1/M00/00/00/wKgZgFqgJ3CACfMrAAVFOL7FJU4.tar.gz

        http://192.168.25.125:8000/group2/M00/00/00/wKgZglqgKLWACQnpAAVFOL7FJU4.tar.gz
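
        The same check from the command line (a sketch; an HTTP 200 response means the tracker-side proxy and cache are working):

        curl -I http://192.168.25.125:8000/group1/M00/00/00/wKgZgFqgJ3CACfMrAAVFOL7FJU4.tar.gz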

6. Configure keepalived + nginx high-availability load balancing (node22 and node23)

    Note: each tracker's nginx already load-balances its backend storage groups on its own, but for the whole FastDFS cluster to expose a single, unified file-access address, the nginx instances on the two trackers must themselves be put behind an HA cluster.

    yum install gcc gcc-c++ make automake autoconf libtool pcre pcre-devel zlib-devel openssl openssl-devel

    A keepalived + nginx high-availability load-balancing pair provides the load balancing across the nginx instances on the two Tracker nodes.

    Two more nodes were added for this (node22 as MASTER, node23 as BACKUP); they were cloned from an existing VM and already have nginx and the JDK installed.

    Step 1: Configure nginx; only the parts shown below differ from the stock configuration (note the listen port 88):

user  root;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;
#pid        logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       88;
        server_name  localhost;
        #charset koi8-r;
        #access_log  logs/host.access.log  main;
        location / {
            root   html;
            index  index.html index.htm;
        }
        #error_page  404              /404.html;
        # redirect server error pages to the static page /50x.html
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

    }
}

        vi /usr/nginx-1.8/html/index.html

        On 192.168.25.122, add a 2 to the page title:
        <h1>Welcome to nginx! 2</h1>
        On 192.168.25.123, add a 3 to the page title:
        <h1>Welcome to nginx! 3</h1>

        Open port 88 in the system firewall:
        vi /etc/sysconfig/iptables
        ## Nginx
        -A INPUT -m state --state NEW -m tcp -p tcp --dport 88 -j ACCEPT

        service iptables restart

        Test whether the nginx configuration is valid:

        /usr/nginx-1.8/sbin/nginx -t

        The following two lines indicate success:

        nginx: the configuration file /usr/nginx-1.8/conf/nginx.conf syntax is ok
        nginx: configuration file /usr/nginx-1.8/conf/nginx.conf test is successful



        Start nginx:
        /usr/nginx-1.8/sbin/nginx
        Reload nginx:

        /usr/nginx-1.8/sbin/nginx -s reload

        Set nginx to start on boot:

        vi /etc/rc.local

        /usr/nginx-1.8/sbin/nginx

        Visit the two nginx instances separately:

        192.168.25.122:88


        192.168.25.123:88


    Step 2: Install keepalived

        Extract and install:

        tar -zxvf keepalived-1.4.1.tar.gz

        cd keepalived-1.4.1

        ./configure --prefix=/usr/keepalived

        make && make install

        Install keepalived as a Linux system service:

        Because keepalived was not installed under its default prefix (/usr/local), a few files must be copied to the default locations after installation.

        keepalived reads /etc/keepalived/keepalived.conf by default, so:

        mkdir /etc/keepalived

        cp /usr/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/

        Copy the init script to /etc/init.d:

        cd /root/keepalived-1.4.1

        cp ./keepalived/etc/init.d/keepalived /etc/init.d/

        chmod 755 /etc/init.d/keepalived

        Copy the sysconfig file to /etc/sysconfig:

        cp /usr/keepalived/etc/sysconfig/keepalived /etc/sysconfig/

        Copy the keepalived binary to /usr/sbin:

        cp /usr/keepalived/sbin/keepalived /usr/sbin/

        Set the keepalived service to start on boot:

        chkconfig keepalived on

        Modify the keepalived configuration files:

        (1) MASTER node configuration file (192.168.25.122):

        vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
    ## keepalived's built-in mail alerts require the sendmail service;
    ## standalone monitoring or a third-party SMTP service is recommended instead.
    router_id edu-proxy-01   ## string identifying this node, usually the hostname
}

## keepalived runs the script periodically and adjusts the vrrp_instance priority
## from the result: exit 0 with weight > 0 raises the priority by weight; a
## non-zero exit with weight < 0 lowers it by |weight|; in all other cases the
## configured priority is kept.
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"   ## path of the nginx health-check script
    interval 2    ## check interval in seconds
    weight -20    ## subtract 20 from the priority when the check fails
}

## virtual router definition; VI_1 is a freely chosen instance name
vrrp_instance VI_1 {
    state MASTER          ## MASTER on this node, BACKUP on the standby
    interface eth0        ## interface to bind the VIP to, the same interface as the host IP (mine is eth1)
    virtual_router_id 51  ## virtual router ID, must be identical on both nodes; nodes with
                          ## the same VRID form one group, and it determines the multicast MAC
    mcast_src_ip 192.168.25.122   ## this host's IP address
    priority 100          ## node priority, 0-254; MASTER must be higher than BACKUP
    nopreempt             ## keeps the recovered higher-priority node from re-claiming the VIP
    advert_int 1          ## VRRP advertisement interval, must be identical on both nodes (default 1s)
    ## authentication, must be identical on both nodes
    authentication {
        auth_type PASS
        auth_pass 1111    ## use a real password in production
    }
    ## attach the track_script block to this instance
    track_script {
        chk_nginx         ## run the nginx health check
    }
    ## virtual IP pool, must be identical on both nodes
    virtual_ipaddress {
        192.168.25.50     ## the VIP; more than one may be listed
    }
}
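
        After keepalived is started on both nodes, the VIP should be bound on the MASTER's interface; a quick way to verify (a sketch):

        ip addr show eth0 | grep 192.168.25.50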

        (2) BACKUP node configuration file (192.168.25.123):
        vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id edu-proxy-02
}
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    mcast_src_ip 192.168.25.123
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.25.50
    }
}

        (3) Write the nginx health-check script /etc/keepalived/nginx_check.sh (already referenced in keepalived.conf above).
        What the script must do: if nginx is not running, try to start it; if it still cannot be started, kill the local keepalived process so that the virtual IP fails over to the BACKUP machine. The content is as follows:

        vi /etc/keepalived/nginx_check.sh

#!/bin/bash
# If nginx is down, try to restart it; if it is still down two seconds later,
# kill keepalived so the VIP fails over to the BACKUP node.
A=`ps -C nginx --no-header | wc -l`
if [ $A -eq 0 ];then
    /usr/nginx-1.8/sbin/nginx    # nginx binary path as installed in this walkthrough
    sleep 2
    if [ `ps -C nginx --no-header | wc -l` -eq 0 ];then
        killall keepalived
    fi
fi
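
        The script must be executable before keepalived can run it; the customary next step is sketched below (assuming the init script installed earlier is used to start keepalived):

        chmod +x /etc/keepalived/nginx_check.sh

        service keepalived start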