
A Roundup of New Commands in Redis 4/5


Lazyfree: three asynchronous commands

UNLINK: delete a key asynchronously
FLUSHDB ASYNC: asynchronously empty the current DB
FLUSHALL ASYNC: asynchronously empty all DBs
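A quick redis-cli session showing the three commands (illustrative output; UNLINK replies with the number of keys it unlinked, while both flush variants reply OK immediately and do the actual freeing in a background thread):

127.0.0.1:6379> set foo bar
OK
127.0.0.1:6379> unlink foo
(integer) 1
127.0.0.1:6379> flushdb async
OK
127.0.0.1:6379> flushall async
OK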

Lazyfree: four asynchronous configuration options

lazyfree-lazy-expire: delete expired keys asynchronously
lazyfree-lazy-eviction: evict keys asynchronously
lazyfree-lazy-server-del: use asynchronous deletion for implicit deletes, e.g. RENAME a b must first delete b if b already exists
slave-lazy-flush: during a full resync, the slave empties all of its DBs asynchronously
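All four options default to no. A minimal redis.conf sketch that turns them on (they can also be changed at runtime with CONFIG SET):

lazyfree-lazy-eviction yes
lazyfree-lazy-expire yes
lazyfree-lazy-server-del yes
slave-lazy-flush yes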

Hot keys
Use the OBJECT FREQ command to get a key's access frequency; note that the maxmemory eviction policy must first be set to allkeys-lfu or volatile-lfu:

127.0.0.1:6379> config get maxmemory-policy
1) "maxmemory-policy"
2) "noeviction"
127.0.0.1:6379> object freq counter:000000006889
(error) ERR An LFU maxmemory policy is not selected, access frequency not tracked. Please note that when switching between policies at runtime LRU and LFU data will take some time to adjust.

127.0.0.1:6379> config set maxmemory-policy allkeys-lfu
OK
127.0.0.1:6379> object freq counter:000000006889
(integer) 3
By iterating over all keys with SCAN, querying each one with OBJECT FREQ, and sorting the results, you can find the hot keys (a scripted version of this sweep is sketched after the output below). To make this easier, the bundled client redis-cli also offers hot-key discovery: just run redis-cli with the --hotkeys option, for example:

$ ./redis-cli --hotkeys

Scanning the entire keyspace to find hot keys as well as
average sizes per key type. You can use -i 0.1 to sleep 0.1 sec
per 100 SCAN commands (not usually needed).

[00.00%] Hot key 'counter:000000000002' found so far with counter 87
[00.00%] Hot key 'key:000000000001' found so far with counter 254
[00.00%] Hot key 'mylist' found so far with counter 107
[00.00%] Hot key 'key:000000000000' found so far with counter 254
[45.45%] Hot key 'counter:000000000001' found so far with counter 87
[45.45%] Hot key 'key:000000000002' found so far with counter 254
[45.45%] Hot key 'myset' found so far with counter 64
[45.45%] Hot key 'counter:000000000000' found so far with counter 93

-------- summary -------

Sampled 22 keys in the keyspace!
hot key found with counter: 254    keyname: key:000000000001
hot key found with counter: 254    keyname: key:000000000000
hot key found with counter: 254    keyname: key:000000000002
hot key found with counter: 107    keyname: mylist
hot key found with counter: 93     keyname: counter:000000000000
hot key found with counter: 87     keyname: counter:000000000002
hot key found with counter: 87     keyname: counter:000000000001
hot key found with counter: 64     keyname: myset
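For reference, the manual SCAN + OBJECT FREQ sweep mentioned above can be scripted roughly like this. This is only a sketch: it assumes redis-cli is on the PATH, an LFU maxmemory-policy is active, and key names contain no whitespace; a full sweep adds load, so run it with care on production instances:

$ redis-cli --scan | while read key; do
    echo "$(redis-cli object freq "$key") $key"
  done | sort -rn | head -n 20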
MEMORY: memory analysis commands
127.0.0.1:6379> memory help
1) "MEMORY DOCTOR                        - Outputs memory problems report"
2) "MEMORY USAGE <key> [SAMPLES <count>] - Estimate memory usage of key"
3) "MEMORY STATS                         - Show memory usage details"
4) "MEMORY PURGE                         - Ask the allocator to release memory"
5) "MEMORY MALLOC-STATS                  - Show allocator internal stats"
MEMORY STATS

127.0.0.1:6379> memory stats
1) "peak.allocated"
2) (integer) 423995952
3) "total.allocated"
4) (integer) 11130320
5) "startup.allocated"
6) (integer) 9942928
7) "replication.backlog"
8) (integer) 1048576
9) "clients.slaves"
10) (integer) 16858
11) "clients.normal"
12) (integer) 49630
13) "aof.buffer"
14) (integer) 3253
15) "db.0"
16) 1) "overhead.hashtable.main"
    2) (integer) 5808
    3) "overhead.hashtable.expires"
    4) (integer) 104
17) "overhead.total"
18) (integer) 11063904
19) "keys.count"
20) (integer) 94
21) "keys.bytes-per-key"
22) (integer) 12631
23) "dataset.bytes"
24) (integer) 66416
25) "dataset.percentage"
26) "5.5934348106384277"
27) "peak.percentage"
28) "2.6251003742218018"
29) "fragmentation"
30) "1.1039986610412598"
There are 15 items in total, with all memory figures reported in bytes. Let's go through them one by one:

  1. peak.allocated

The maximum amount of memory Redis has used since it started.

  2. total.allocated

The total amount of memory currently in use.

  3. startup.allocated

Memory used by Redis during startup and initialization. Many readers wonder why their Redis instance already occupies tens of megabytes right after startup, before doing anything at all.
The reason is that Redis does not only store key-value data; it also consumes memory for things such as shared objects, replication, persistence, and per-DB metadata. The items below describe these in detail.

  4. replication.backlog

Memory used by the replication backlog, 1MB by default (repl-backlog-size). The backlog only comes into play when a slave reconnects after a broken link; replication itself does not depend on it.
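The backlog can be tuned in redis.conf; for reference, the stock defaults are:

repl-backlog-size 1mb
repl-backlog-ttl 3600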

  5. clients.slaves

Read/write buffers of all slaves in replication, i.e. the memory used by each slave's output-buffer (output buffer) and querybuf (input buffer). A brief note on how replication works:

Within a single event loop, Redis appends every change made to the dataset to each slave's output-buffer, and sends the accumulated data to the slaves once the event loop ends.
Some replication lag between master and slave is therefore unavoidable. If the connection breaks, a full resync would normally be required on reconnect to guarantee consistency, which is clearly inefficient. The backlog was designed for exactly this case: the master caches a window of the recent replication stream in the backlog, and if the slave's offset still falls inside that window when it reconnects, only the data after the offset needs to be sent, avoiding the cost of a full resync.

  6. clients.normal

Read/write buffers of all clients other than slaves.
When a client does not read replies promptly, its output-buffer can pile up and consume excessive memory. This can be capped via the client-output-buffer-limit configuration option: once the threshold is exceeded, Redis proactively closes the connection to free the memory. The same applies to slaves.
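For reference, the stock redis.conf defaults (format: class, hard limit, soft limit, soft seconds) are:

client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60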

  7. aof.buffer

The sum of the buffer used for AOF persistence and the buffer produced during an AOF rewrite; if appendonly is disabled, this item stays at 0.

Redis does not persist each write immediately: all writes within one event loop are buffered first and flushed to disk when the loop ends.
During an aofrewrite, incremental writes are additionally buffered; that buffer exists only while the rewrite is running (the aofrewrite mechanism is covered in the earlier article 《redis4.0之利用管道優化aofrewrite》).
As you can see, the size of this item grows in proportion to write traffic.

  8. db.0

Memory used by the metadata of each Redis DB. Only db0 is in use here, so only db0's figures are printed; when other DBs are used, corresponding entries appear as well.
A DB's metadata consists of three parts:
a) a Redis DB is essentially a hash table, so first there is the memory of the hash table itself (Redis uses chained hashing; the table stores the head pointers of the buckets' linked lists);
b) every key-value pair is tracked by a dictEntry, so the metadata includes the memory of all dictEntry structures in the DB;
c) Redis uses a redisObject to describe the value's data type (string, list, hash, set, zset), so the memory taken by redisObject is also counted as metadata.
overhead.hashtable.main:
the metadata of the main dictionary, i.e. the sum of the three parts above:
hashtable + dictEntry + redisObject
overhead.hashtable.expires:
Key expiration times are not stored alongside the values; they live in a separate hash table. Since the expires table maps keys to expiry times, no redisObject is needed to describe the value, so one term drops out:
hashtable + dictEntry

  9. overhead.total

The sum of items 3 through 8: startup.allocated + replication.backlog + clients.slaves + clients.normal + aof.buffer + db.x

  10. dataset.bytes

Memory used by the data itself, i.e. total.allocated - overhead.total: current memory usage minus the overhead.

  11. dataset.percentage

The share of memory used by the data. Note that total.allocated is not used directly as the denominator; the memory allocated at startup is excluded first:
100 * dataset.bytes / (total.allocated - startup.allocated)

  12. keys.count

The total number of keys currently stored in Redis.

  13. keys.bytes-per-key

The average memory per key. Intuitively this would simply be dataset.bytes divided by keys.count, but Redis instead also spreads the overhead memory across the keys:
(total.allocated - startup.allocated) / keys.count

  14. peak.percentage

Current memory usage as a percentage of the historical peak.

  15. fragmentation

The memory fragmentation ratio.
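As a quick sanity check, plugging the sample MEMORY STATS output above into the formulas:

dataset.bytes      = 11130320 - 11063904                = 66416
dataset.percentage = 100 * 66416 / (11130320 - 9942928) ≈ 5.59
keys.bytes-per-key = (11130320 - 9942928) / 94          ≈ 12631
peak.percentage    = 100 * 11130320 / 423995952         ≈ 2.63

all of which match the values reported by the command.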

MEMORY USAGE

Estimates the memory used by a single key:

MEMORY USAGE <key> [SAMPLES <count>]

Let's take a hash as an example to see how it works:

First, much like overhead.hashtable.main in the previous section, the hash's metadata memory is computed: the size of its hash table plus the memory of all of its dictEntry structures.
Unlike overhead.hashtable.main, both the field and the value of each dictEntry are plain strings here, so there is no extra redisObject overhead. To estimate the actual data memory, Redis does not walk every element; it samples instead: it randomly picks SAMPLES field-value pairs, computes their average memory footprint, and multiplies by the total number of pairs. An exact figure would require iterating over every element, which would block Redis when the key is large, so set SAMPLES sensibly.
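An illustrative session (the key name and byte counts below are made up; SAMPLES 0 asks Redis to examine every element, which is exact but can be slow on large keys):

127.0.0.1:6379> memory usage myhash
(integer) 2556
127.0.0.1:6379> memory usage myhash samples 100
(integer) 2612
127.0.0.1:6379> memory usage myhash samples 0
(integer) 2619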

MEMORY DOCTOR

First, the cases where nothing is wrong.

The instance is running fine:
Hi Sam, I can't find any memory issue in your instance. I can only account for what occurs on this base.

The instance holds very little data, so no advice can be given yet:
Hi Sam, this instance is empty or is using very little memory, my issues detector can't be used in these conditions. Please, leave for your mission on Earth and fill it with some data. The new Sam and I will be back to our programming as soon as I finished rebooting.

The following results deserve attention.

Peak memory usage was more than 1.5× the current usage; the fragmentation ratio may look high as a result, so take note:
Peak memory: In the past this instance used more than 150% the memory that is currently using. The allocator is normally not able to release memory after a peak, so you can expect to see a big fragmentation ratio, however this is actually harmless and is only due to the memory peak, and if the Redis instance Resident Set Size (RSS) is currently bigger than expected, the memory will be used as soon as you fill the Redis instance with more data. If the memory peak was only occasional and you want to try to reclaim memory, please try the MEMORY PURGE command, otherwise the only other option is to shutdown and restart the instance.

The fragmentation ratio exceeds 1.4; take note:
High fragmentation: This instance has a memory fragmentation greater than 1.4 (this means that the Resident Set Size of the Redis process is much larger than the sum of the logical allocations Redis performed). This problem is usually due either to a large peak memory (check if there is a peak memory entry above in the report) or may result from a workload that causes the allocator to fragment memory a lot. If the problem is a large peak memory, then there is no issue. Otherwise, make sure you are using the Jemalloc allocator and not the default libc malloc.

The average output buffer per slave exceeds 10MB; possible causes are a high write rate on the master, insufficient network bandwidth between master and slave, or slow slaves:
Big slave buffers: The slave output buffers in this instance are greater than 10MB for each slave (on average). This likely means that there is some slave instance that is struggling receiving data, either because it is too slow or because of networking issues. As a result, data piles on the master output buffers. Please try to identify what slave is not receiving data correctly and why. You can use the INFO output in order to check the slaves delays and the CLIENT LIST command to check the output buffers of each slave.

The average output buffer of ordinary clients exceeds 200KB; possible causes are improper use of pipelining, or Pub/Sub clients that do not consume messages promptly:
Big client buffers: The clients output buffers in this instance are greater than 200K per client (on average). This may result from different causes, like Pub/Sub clients subscribed to channels bot not receiving data fast enough, so that data piles on the Redis instance output buffer, or clients sending commands with large replies or very large sequences of commands in the same pipeline. Please use the CLIENT LIST command in order to investigate the issue if it causes problems in your instance, or to understand better why certain clients are using a big amount of memory.

MEMORY MALLOC-STATS

Prints the memory allocator's internal statistics; only useful when jemalloc is in use.

MEMORY PURGE

Asks the allocator to release memory back to the operating system; likewise only effective with jemalloc.
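When jemalloc is in use, the command simply replies OK once the purge request has been issued, for example:

127.0.0.1:6379> memory purge
OK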
