
Installing RabbitMQ 3.6.11 and Erlang 20 with yum: configuration and tuning



RabbitMQ overview

AMQP (Advanced Message Queuing Protocol) is an open standard application-layer protocol designed for message-oriented middleware. Message middleware is mainly used to decouple components: the sender of a message does not need to know that the consumer exists, and vice versa.

The main characteristics of AMQP are message orientation, queuing, routing (both point-to-point and publish/subscribe), reliability, and security.

RabbitMQ is an open-source implementation of AMQP. The server is written in Erlang, and clients exist for many languages and protocols, such as Python, Ruby, .NET, Java, JMS, C, PHP, ActionScript, XMPP and STOMP, with AJAX support as well. It is used to store and forward messages in distributed systems and does well in terms of ease of use, scalability and high availability.


Official configuration docs

http://www.rabbitmq.com/production-checklist.html

http://www.rabbitmq.com/configure.html

http://www.rabbitmq.com/memory.html

http://www.rabbitmq.com/production-checklist.html#resource-limits-ram


Reference articles

http://www.cnblogs.com/crazylqy/p/6567253.html

http://wangqingpei557.blog.51cto.com/1009349/1881540


Basic concepts

ConnectionFactory, Connection, Channel

ConnectionFactory, Connection and Channel are the most basic objects in the API that RabbitMQ exposes. A Connection is the socket connection to RabbitMQ and encapsulates the socket-protocol logic; a ConnectionFactory is the factory that produces Connections.

The Channel is the most important interface we work with: most business operations are performed on a Channel, including declaring Queues, declaring Exchanges, binding Queues to Exchanges, publishing messages, and so on.
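A minimal sketch of these objects with the Python pika client (1.x API); the broker address and queue name are placeholders, not taken from this article:

import pika

# Connection: the socket-level link to the broker; ConnectionParameters plus
# BlockingConnection play the role of the ConnectionFactory here.
params = pika.ConnectionParameters(host='127.0.0.1')   # assumed broker address
connection = pika.BlockingConnection(params)

# Channel: the interface most operations go through
channel = connection.channel()
channel.queue_declare(queue='demo-queue')              # define a Queue
channel.basic_publish(exchange='',                     # default exchange
                      routing_key='demo-queue',
                      body=b'hello')                   # publish a message
connection.close()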


Queue

A Queue is an internal RabbitMQ object used to store messages.

Messages in RabbitMQ can only be stored in Queues. Producers produce messages and ultimately deliver them into Queues; consumers fetch messages from Queues and consume them.

Multiple consumers can subscribe to the same Queue. In that case the messages in the Queue are shared out (round-robin) among the consumers, rather than every consumer receiving and processing every message.


Message acknowledgment

In practice a consumer may receive a message from a Queue and then crash (or hit some other failure) before it finishes processing it, which could lead to message loss. To avoid this, we can require the consumer to send an acknowledgment back to RabbitMQ after it has finished consuming the message; only when RabbitMQ receives this message acknowledgment does it remove the message from the Queue. If RabbitMQ receives no acknowledgment and detects that the consumer's connection to RabbitMQ has dropped, it will deliver the message to another consumer (if there are several). There is no timeout concept here: however long a consumer takes to process a message, the message will not be handed to another consumer unless the consumer's RabbitMQ connection is broken.
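A sketch of a consumer with manual acknowledgments in pika, assuming the same placeholder broker and queue as above:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='127.0.0.1'))
channel = connection.channel()
channel.queue_declare(queue='demo-queue')

def handle(ch, method, properties, body):
    # ... process the message ...
    ch.basic_ack(delivery_tag=method.delivery_tag)   # the acknowledgment; forgetting this
                                                     # leaves the message unacked on the queue

# auto_ack=False: RabbitMQ keeps the message until it receives the ack,
# and redelivers it to another consumer if this connection drops first
channel.basic_consume(queue='demo-queue', on_message_callback=handle, auto_ack=False)
channel.start_consuming()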

This creates another potential problem: if a developer forgets to send the acknowledgment after finishing the business logic, the result is a serious bug. More and more messages pile up in the Queue, and after the consumer restarts it consumes those messages again and re-runs the business logic…

Note also that publishing a message involves no ack of its own.


Message durability

If we want messages to survive a restart of the RabbitMQ service, we can mark both the Queue and the Message as durable. This protects our messages in the vast majority of cases, but it still does not cover a small window of loss (for example, the RabbitMQ server has received a message from the producer but loses power before it has persisted it). If we need to handle that low-probability case as well, we have to use transactions.
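A sketch of durability in pika, with placeholder names; the transaction calls at the end illustrate the option mentioned above (publisher confirms are the lighter-weight alternative, not shown here):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='127.0.0.1'))
channel = connection.channel()

# durable queue: survives a broker restart
channel.queue_declare(queue='demo-durable-queue', durable=True)

# delivery_mode=2 marks the message itself as persistent
channel.basic_publish(exchange='',
                      routing_key='demo-durable-queue',
                      body=b'important',
                      properties=pika.BasicProperties(delivery_mode=2))

# AMQP transaction around a publish, for the rare "received but not yet persisted" window
channel.tx_select()
channel.basic_publish(exchange='', routing_key='demo-durable-queue', body=b'in-tx',
                      properties=pika.BasicProperties(delivery_mode=2))
channel.tx_commit()
connection.close()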


Prefetch count

As mentioned above, when several consumers subscribe to the same Queue, its messages are shared out among them. If messages take different amounts of time to process, some consumers can stay busy while others finish quickly and then sit idle. We can set prefetchCount to limit how many messages the Queue sends to each consumer at a time; for example, with prefetchCount=1 the Queue sends each consumer one message at a time, and only sends the next one after the consumer has finished processing the previous message.
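In pika this is basic_qos; a sketch with the same placeholder queue:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='127.0.0.1'))
channel = connection.channel()
channel.queue_declare(queue='demo-queue')

# prefetch_count=1: the broker sends this consumer at most one unacked message
# at a time, so a slow consumer does not hoard work while others sit idle
channel.basic_qos(prefetch_count=1)

def handle(ch, method, properties, body):
    # ... long-running work ...
    ch.basic_ack(delivery_tag=method.delivery_tag)   # the next message only arrives after this ack

channel.basic_consume(queue='demo-queue', on_message_callback=handle, auto_ack=False)
channel.start_consuming()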


Exchange

In the previous section it looked as though the producer delivers messages straight into a Queue; in RabbitMQ this never actually happens. What really happens is that the producer sends the message to an Exchange, and the Exchange routes it to one or more Queues (or discards it).

How does an Exchange decide which Queues to route a message to? That is covered in the Binding section.

RabbitMQ has four Exchange types (direct, fanout, topic, headers), each with a different routing strategy; they are covered in the Exchange Types section.


routing key

When a producer sends a message to an Exchange it usually specifies a routing key, which states the routing rule for that message; the routing key only takes effect in combination with the Exchange Type and the binding key.

With the Exchange Type and binding keys fixed (in normal use these are usually configured up front), the producer decides where a message goes simply by choosing the routing key it sends to the Exchange.

RabbitMQ limits the length of a routing key to 255 bytes.


Binding

In RabbitMQ a Binding associates an Exchange with a Queue; that is how RabbitMQ knows to route messages to the right Queue.
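Putting Exchange, routing key and Binding together in one pika sketch; the direct exchange and all names here are illustrative placeholders:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='127.0.0.1'))
channel = connection.channel()

# a direct exchange routes a message to every queue whose binding key
# equals the message's routing key
channel.exchange_declare(exchange='demo-direct', exchange_type='direct')
channel.queue_declare(queue='error-log')
channel.queue_bind(queue='error-log', exchange='demo-direct', routing_key='error')   # the Binding

# routing key 'error' matches the binding above, so the message reaches error-log;
# a routing key with no matching binding would be dropped by this exchange
channel.basic_publish(exchange='demo-direct', routing_key='error', body=b'disk full')
connection.close()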



1. Install Erlang


Download the repository rpm: wget http://packages.erlang-solutions.com/erlang-solutions-1.0-1.noarch.rpm


Install the repository rpm

rpm -Uvh erlang-solutions-1.0-1.noarch.rpm


Install Erlang

yum -y install erlang



2. Install RabbitMQ

wget https://github.com/rabbitmq/rabbitmq-server/releases/download/rabbitmq_v3_6_11/rabbitmq-server-3.6.11-1.el6.noarch.rpm


rpm --import https://www.rabbitmq.com/rabbitmq-release-signing-key.asc

yum install rabbitmq-server-3.6.11-1.el6.noarch.rpm

Installed:

rabbitmq-server.noarch 0:3.6.11-1.el6


Dependency Installed:

socat.x86_64 0:1.7.2.3-1.el6



3. Start the service

Create the data and log directories (referenced later in rabbitmq-env.conf) and restart the service:

mkdir -p /data/rabbitmq/data

mkdir -p /data/rabbitmq/log

chown rabbitmq:rabbitmq /data/rabbitmq/ -R

/etc/init.d/rabbitmq-server restart



4. Add an account


rabbitmqctl add_user rabbitmq76 RMQXXXXXXXXX

rabbitmqctl set_user_tags rabbitmq76 administrator


rabbitmqctl set_permissions -p / rabbitmq76 ".*" ".*" ".*"
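A quick, hypothetical connectivity check with the pika client, using the first node's IP from this setup and the account just created (the password shown is the masked placeholder from above):

import pika

credentials = pika.PlainCredentials('rabbitmq76', 'RMQXXXXXXXXX')
params = pika.ConnectionParameters(host='192.168.188.76', port=5672,
                                   virtual_host='/', credentials=credentials)
connection = pika.BlockingConnection(params)
print(connection.is_open)    # True if the user and its permissions on / are correct
connection.close()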



5. Build the cluster

Make sure every node can resolve the others' hostnames:

[root@rabbitmq76 etc]# cat /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.188.76 rabbitmq76

192.168.188.77 rabbitmq77

192.168.188.78 rabbitmq78

192.168.188.79 rabbitmq79

192.168.188.80 rabbitmq80


Copy the Erlang cookie from the first node to all the other nodes (for a yum install it lives in /var/lib/rabbitmq/.erlang.cookie):

scp -P10001 .erlang.cookie 192.168.188.77:$PWD

scp -P10001 .erlang.cookie 192.168.188.78:$PWD

scp -P10001 .erlang.cookie 192.168.188.79:$PWD

scp -P10001 .erlang.cookie 192.168.188.80:$PWD



Then on each of the other nodes (rabbitmq77 shown here; repeat on rabbitmq78, rabbitmq79 and rabbitmq80):

[root@rabbitmq77 ~]# rabbitmqctl stop_app
Stopping rabbit application on node rabbit@rabbitmq77
[root@rabbitmq77 ~]# rabbitmqctl join_cluster rabbit@rabbitmq76
Clustering node rabbit@rabbitmq77 with rabbit@rabbitmq76
[root@rabbitmq77 ~]# rabbitmqctl start_app
Starting node rabbit@rabbitmq77




Enable the management web UI plugin:

rabbitmq-plugins enable rabbitmq_management


[root@rabbitmq76 etc]# rabbitmq-plugins list | wc -l
32
[root@rabbitmq76 etc]# rabbitmqctl cluster_status
Cluster status of node rabbit@rabbitmq76
[{nodes,[{disc,[rabbit@rabbitmq76,rabbit@rabbitmq77,rabbit@rabbitmq78,
                rabbit@rabbitmq79,rabbit@rabbitmq80]}]},
 {running_nodes,[rabbit@rabbitmq80,rabbit@rabbitmq79,rabbit@rabbitmq78,
                 rabbit@rabbitmq77,rabbit@rabbitmq76]},
 {cluster_name,<<"rabbit@rabbitmq76">>},
 {partitions,[]},
 {alarms,[{rabbit@rabbitmq80,[]},
          {rabbit@rabbitmq79,[]},
          {rabbit@rabbitmq78,[]},
          {rabbit@rabbitmq77,[]},
          {rabbit@rabbitmq76,[]}]}]


6. Configure the mirrored-queue policy

See also:

http://blog.csdn.net/zheng911209/article/details/49949303


Run the following command on any one of the nodes:

rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'


The rest can be done in the management web UI (a scripted equivalent is sketched after the list below):


1) First create a vhost, e.g. yicheapp.

2) Then create a user, e.g. yicheapp: leave tags empty (None), select the yicheapp vhost, and set a password.

3) Configure the policy:

Choose the vhost the policy applies to.

name: a meaningful name, e.g. ha-all (mirror on all nodes).

pattern: a regular expression matched against exchange and queue names. To match every exchange or queue, use '^' (any name); to match only names that start with ha-, use '^ha-'.

apply-to: the scope of the policy, i.e. exchanges, queues, or both.

priority: when several policies match the same exchange or queue, the one with the highest priority takes effect.

definition: the hints under the form (the ? links) show the allowed keys and values. Two entries are commonly added:

ha-mode: all (create mirrors of the exchange/queue on every node in the cluster)

ha-sync-mode: automatic (synchronise new mirrors automatically)

4) Create an exchange, e.g. yicheapp-exchange.

5) Create a queue, e.g. yicheapp-queues.

6) Bind the exchange to the queue, e.g. bind yicheapp-exchange to yicheapp-queues with a routing key.
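The same steps can also be scripted against the management plugin's HTTP API (enabled above). A rough sketch using Python's requests library; the host, admin credentials, exchange type, user password and routing key are assumptions or placeholders, only the yicheapp names and ha-all policy come from the steps above:

import requests

api = 'http://192.168.188.76:15672/api'          # management plugin's default port
auth = ('rabbitmq76', 'RMQXXXXXXXXX')            # administrator account created earlier

# 1) vhost
requests.put(f'{api}/vhosts/yicheapp', auth=auth)

# 2) user with full permissions on the vhost
requests.put(f'{api}/users/yicheapp', auth=auth,
             json={'password': 'CHANGE_ME', 'tags': ''})
requests.put(f'{api}/permissions/yicheapp/yicheapp', auth=auth,
             json={'configure': '.*', 'write': '.*', 'read': '.*'})

# 3) mirrored-queue policy scoped to the vhost
requests.put(f'{api}/policies/yicheapp/ha-all', auth=auth,
             json={'pattern': '^', 'apply-to': 'all',
                   'definition': {'ha-mode': 'all', 'ha-sync-mode': 'automatic'}})

# 4)-6) exchange, queue and binding
requests.put(f'{api}/exchanges/yicheapp/yicheapp-exchange', auth=auth,
             json={'type': 'direct', 'durable': True})
requests.put(f'{api}/queues/yicheapp/yicheapp-queues', auth=auth,
             json={'durable': True})
requests.post(f'{api}/bindings/yicheapp/e/yicheapp-exchange/q/yicheapp-queues', auth=auth,
              json={'routing_key': 'yicheapp.key'})   # placeholder routing key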



7. Tuning and related configuration




Production configuration after tuning

[root@rabbitmq76 rabbitmq]# cat rabbitmq.config
[
 {rabbit,
  [
   {loopback_users, []},
   {vm_memory_high_watermark, 0.40},              %% memory alarm at 40% of RAM; publishers are blocked above this
   {vm_memory_high_watermark_paging_ratio, 0.8},  %% with 32GB RAM, paging to disk starts at 32*0.4*0.8 = 10.24GB
   {disk_free_limit, "10GB"},                     %% block publishers when free disk space falls below 10GB
   {hipe_compile, true},                          %% enable HiPE compilation for better Erlang performance
   {collect_statistics_interval, 10000},          %% statistics refresh interval, raised from the default 5s to 10s
   {cluster_partition_handling, autoheal}         %% partition handling; autoheal suits less stable networks
  ]
 }
].



[root@rabbitmq76 rabbitmq]# cat rabbitmq-env.conf
RABBITMQ_NODENAME=rabbit@rabbitmq76                 # node name, must be unique across the cluster
RABBITMQ_MNESIA_BASE=/data/rabbitmq/data            # where persisted messages and mnesia data are stored
RABBITMQ_LOG_BASE=/data/rabbitmq/log                # log directory
RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+A 128"        # Erlang async thread pool (+A); default 65, raised to 128



At the OS level, also leave some swap space and raise the open-file limit: the RabbitMQ server process needs at least 50,000 file descriptors. With a yum install the server runs as the rabbitmq user; with a source install it runs as root.

[root@rabbitmq76 rabbitmq]# cat /etc/security/limits.conf

* soft nofile 65536

* hard nofile 131072

* soft nproc 10240

* hard nproc 20480



Notes

Memory control:

vm_memory_high_watermark is the memory threshold; the default is 0.4, i.e. 40% of physical memory. This 40% is not a hard memory limit but a throttle on publishing: when memory use reaches it, the memory alarm fires and publishers are blocked, while Erlang garbage collection can temporarily push usage higher, in the worst case to roughly twice the watermark (about 80% of RAM). Setting the value to 0 blocks all publishing:


rabbitmqctl set_vm_memory_high_watermark 0



Paging threshold: vm_memory_high_watermark_paging_ratio defaults to 0.5, meaning that when memory use reaches 50% of vm_memory_high_watermark, queue contents start being paged out to disk.

For example, on a 16GB machine with the defaults, RabbitMQ starts paging message data to disk at 16*0.4*0.5 = 3.2GB.
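A quick way to check these thresholds for a given machine (plain arithmetic using the values discussed above; the RAM size is just an example):

# memory alarm and paging thresholds, in GB
total_ram_gb = 16
vm_memory_high_watermark = 0.4          # default
paging_ratio = 0.5                      # vm_memory_high_watermark_paging_ratio default

alarm_at = total_ram_gb * vm_memory_high_watermark   # 6.4 GB: memory alarm, publishers blocked
paging_at = alarm_at * paging_ratio                  # 3.2 GB: queues start paging to disk

print(f'memory alarm at {alarm_at} GB, paging starts at {paging_at} GB')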



Disk control:


When RabbitMQ's free disk space drops below 50MB (the default), producers are blocked. In a cluster, a disc node whose free space falls below 50MB will cause producers connected to the other nodes to be blocked as well. The threshold is configured with disk_free_limit.



Memory


By default, RabbitMQ will not accept any new messages when it detects that it's using more than 40% of the available memory (as reported by the OS): {vm_memory_high_watermark, 0.4}. This is a safe default and care should be taken when modifying this value, even when the host is a dedicated RabbitMQ node.

The OS and file system use system memory to speed up operations for all system processes. Failing to leave enough free system memory for this purpose will have an adverse effect on system performance due to OS swapping, and can even result in RabbitMQ process termination.


A few recommendations when adjusting the default vm_memory_high_watermark:


Nodes hosting RabbitMQ should have at least 128MB of memory available at all times.

The recommended vm_memory_high_watermark range is 0.40 to 0.66

Values above 0.7 are not recommended. The OS and file system must be left with at least 30% of the memory, otherwise performance may degrade severely due to paging.

As with every tuning, benchmarking and measuring are required to find the best setting for your environment and workload.




Disk Space


The current 50MB disk_free_limit default works very well for development and tutorials. Production deployments require a much greater safety margin. Insufficient disk space will lead to node failures and may result in data loss as all disk writes will fail.

Why is the default 50MB then? Development environments sometimes use really small partitions to host /var/lib, for example, which means nodes go into resource alarm state right after booting. The very low default ensures that RabbitMQ works out of the box for everyone. As for production deployments, we recommend the following:


{disk_free_limit, {mem_relative, 1.0}} is the minimum recommended value and it translates to the total amount of memory available. For example, on a host dedicated to RabbitMQ with 4GB of system memory, if available disk space drops below 4GB, all publishers will be blocked and no new messages will be accepted. Queues will need to be drained, normally by consumers, before publishing will be allowed to resume.

{disk_free_limit, {mem_relative, 1.5}} is a safer production value. On a RabbitMQ node with 4GB of memory, if available disk space drops below 6GB, all new messages will be blocked until the disk alarm clears. If RabbitMQ needs to flush to disk 4GB worth of data, as can sometimes be the case during shutdown, there will be sufficient disk space available for RabbitMQ to start again. In this specific example, RabbitMQ will start and immediately block all publishers since 2GB is well under the required 6GB.

{disk_free_limit, {mem_relative, 2.0}} is the most conservative production value; we cannot think of any reason to use anything higher. If you want full confidence in RabbitMQ having all the disk space that it needs, at all times, this is the value to use.



Open File Handles Limit


Operating systems limit the maximum number of concurrently open file handles, which includes network sockets. Make sure that you have limits set high enough to allow for the expected number of concurrent connections and queues.

Make sure your environment allows for at least 50K open file descriptors for the effective RabbitMQ user, including in development environments.

As a rule of thumb, multiply the 95th percentile number of concurrent connections by 2 and add the total number of queues to calculate the recommended open file handle limit.


Values as high as 500K are not inadequate and won't consume a lot of hardware resources, and therefore are recommended for production setups. See the Networking guide for more information.


This article comes from the "jerrymin" blog; please keep this attribution: http://jerrymin.blog.51cto.com/3002256/1958923
