Parsing MySQL slow logs with Logstash
阿新 • Published: 2018-10-10
At work we need to display MySQL slow statements in ELK so that the DBAs can review and compare them daily for optimization.
The slow-log formats of MySQL 5.5, 5.6, and 5.7 all differ, so collect them according to your own needs.
MySQL 5.5 log sample:
# Time: 180911 10:50:31
# User@Host: osdb[osdb] @ [172.25.14.78]
# Query_time: 12.597483 Lock_time: 0.000137 Rows_sent: 451 Rows_examined: 2637425
SET timestamp=1536634231;
SELECT id,name,contenet from cs_tables;
MySQL 5.6 log sample:
# Time: 180911 11:36:20
# User@Host: root[root] @ localhost [] Id: 1688
# Query_time: 3.006539 Lock_time: 0.000000 Rows_sent: 1 Rows_examined: 0
SET timestamp=1536550580;
SELECT id,name,contenet from cs_tables;
MySQL 5.7 log sample:
# Time: 2018-09-10T06:26:40.895801Z
# User@Host: root[root] @ [172.16.213.120] Id: 208
# Query_time: 3.032884 Lock_time: 0.000139 Rows_sent: 46389 Rows_examined: 46389
use cmsdb;
SET timestamp=1536560800;
select * from cstable;
Analyzing the slow-query logs of these three MySQL versions leads to the following conclusions:
(1) The format of the Time field differs in each MySQL version's slow-query log.
(2) MySQL 5.6 and 5.7 include an Id field, while MySQL 5.5 does not.
(3) Each slow-query entry spans multiple lines, with varying amounts of whitespace and line breaks.
(4) A use db statement may or may not appear in a slow-query entry.
(5) The last part of each entry is the SQL that was actually executed; it may span multiple lines and may contain more than one statement.
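The four combinations above (Id present or absent, use present or absent) can be covered by a single pattern with optional groups. A minimal Python sketch of that matching logic (an illustration only, not the grok patterns used later):

```python
import re

# Optional "Id:" and "use <db>;" groups cover all four 5.5/5.6/5.7 cases.
SLOW_RE = re.compile(
    r"^#\s+User@Host:\s+(?P<user>\w+)\[[^\]]+\]\s+@\s+"
    r"(?:(?P<clienthost>\S*)\s)?\[(?P<clientip>[\d.]*)\]"
    r"(?:\s+Id:\s+(?P<id>\d+))?\n"
    r"# Query_time: (?P<query_time>[\d.]+)\s+Lock_time: (?P<lock_time>[\d.]+)\s+"
    r"Rows_sent: (?P<rows_sent>\d+)\s+Rows_examined: (?P<rows_examined>\d+)\n"
    r"(?:use\s+(?P<dbname>\w+);\n)?"
    r"SET\s+timestamp=(?P<timestamp_mysql>\d+);\n"
    r"(?P<query>[\s\S]*)"
)

# The MySQL 5.5 sample from above: no Id field, no use statement.
event = (
    "# User@Host: osdb[osdb] @ [172.25.14.78]\n"
    "# Query_time: 12.597483 Lock_time: 0.000137 "
    "Rows_sent: 451 Rows_examined: 2637425\n"
    "SET timestamp=1536634231;\n"
    "SELECT id,name,contenet from cs_tables;"
)

m = SLOW_RE.match(event)
print(m.group("user"), m.group("query_time"))  # → osdb 12.597483
```

Grok does not support optional sub-patterns as cleanly, which is why the Logstash configuration below lists four separate match patterns instead.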
Filebeat first reads the MySQL slow log and writes it to Redis:
filebeat.inputs:
- type: log
  paths:
    - /data/mysqldata/mysql-slow.log
  tags: ["oms-slow-logs"]
  exclude_lines: ['^\# Time']
  fields:
    type: "oms-slow-logs"
  fields_under_root: true
  multiline:
    pattern: '^\# Time|^\# User'
    negate: true
    match: after
processors:
- drop_fields:
    fields: ["source","input","beat","prospector","offset"]
name: 10.10.7.32
output.redis:
  hosts: ["10.78.1.180"]
  key: "oms-slow-logs"
  type: list
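The multiline settings are what turn a multi-line slow-log entry into a single event: with negate: true and match: after, every line that does not match the pattern is appended to the preceding line that does, and exclude_lines then drops the standalone "# Time" events. A rough Python sketch of that grouping behavior (an illustration of the idea, not Filebeat's implementation):

```python
import re

HEADER = re.compile(r"^# Time|^# User")   # the multiline pattern
EXCLUDE = re.compile(r"^# Time")          # the exclude_lines pattern

def to_events(lines):
    """negate: true + match: after — a line matching HEADER starts a
    new event; every non-matching line is appended to the current one."""
    events = []
    for line in lines:
        if HEADER.match(line) or not events:
            events.append([line])
        else:
            events[-1].append(line)
    # exclude_lines is applied to the combined event text
    return ["\n".join(e) for e in events if not EXCLUDE.match(e[0])]

# The MySQL 5.6 sample from above
log = [
    "# Time: 180911 11:36:20",
    "# User@Host: root[root] @ localhost [] Id: 1688",
    "# Query_time: 3.006539 Lock_time: 0.000000 Rows_sent: 1 Rows_examined: 0",
    "SET timestamp=1536550580;",
    "SELECT id,name,contenet from cs_tables;",
]
events = to_events(log)
print(len(events))  # → 1 (the "# Time" event is dropped)
```

Each resulting event begins with "# User@Host:", which is exactly what the grok patterns below anchor on.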
Logstash reads the data from Redis, parses and filters it, and writes it to Elasticsearch:
input {
redis {
host => "10.78.1.180"
port => 6379
data_type => list
key => "oms-slow-logs"
}
}
filter {
grok {
# With Id, with use
match => [ "message", "^#\s+User@Host:\s+%{USER:user}\[[^\]]+\]\s+@\s+(?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\s+Id:\s+%{NUMBER:id}\n# Query_time: %{NUMBER:query_time}\s+Lock_time: %{NUMBER:lock_time}\s+Rows_sent: %{NUMBER:rows_sent}\s+Rows_examined: %{NUMBER:rows_examined}\nuse\s(?<dbname>\w+);\nSET\s+timestamp=%{NUMBER:timestamp_mysql};\n(?<query>[\s\S]*)" ]
# With Id, without use
match => [ "message", "^#\s+User@Host:\s+%{USER:user}\[[^\]]+\]\s+@\s+(?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\s+Id:\s+%{NUMBER:id}\n# Query_time: %{NUMBER:query_time}\s+Lock_time: %{NUMBER:lock_time}\s+Rows_sent: %{NUMBER:rows_sent}\s+Rows_examined: %{NUMBER:rows_examined}\nSET\s+timestamp=%{NUMBER:timestamp_mysql};\n(?<query>[\s\S]*)" ]
# Without Id, with use
match => [ "message", "^#\s+User@Host:\s+%{USER:user}\[[^\]]+\]\s+@\s+(?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\n# Query_time: %{NUMBER:query_time}\s+Lock_time: %{NUMBER:lock_time}\s+Rows_sent: %{NUMBER:rows_sent}\s+Rows_examined: %{NUMBER:rows_examined}\nuse\s(?<dbname>\w+);\nSET\s+timestamp=%{NUMBER:timestamp_mysql};\n(?<query>[\s\S]*)" ]
# Without Id, without use
match => [ "message", "^#\s+User@Host:\s+%{USER:user}\[[^\]]+\]\s+@\s+(?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\n# Query_time: %{NUMBER:query_time}\s+Lock_time: %{NUMBER:lock_time}\s+Rows_sent: %{NUMBER:rows_sent}\s+Rows_examined: %{NUMBER:rows_examined}\nSET\s+timestamp=%{NUMBER:timestamp_mysql};\n(?<query>[\s\S]*)" ]
}
date {
match => ["timestamp_mysql","UNIX"]
target => "@timestamp"
}
mutate {
remove_field => ["@version","message","timestamp_mysql"]
}
}
output {
if [type] == "oms-slow-logs" {
if [tags][0] == "oms-slow-logs" {
elasticsearch {
hosts => ["10.10.5.78:9200","10.10.5.79:9200","10.10.5.80:9200"]
index => "%{type}-%{+YYYY.MM.dd}"
}
}
}
}
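The date filter above converts the captured timestamp_mysql (the UNIX epoch from the SET timestamp=...; line) into the event's @timestamp, so Kibana sorts slow queries by when they actually ran rather than when they were shipped. The conversion it performs is equivalent to this Python snippet (illustration only):

```python
from datetime import datetime, timezone

# Captured from "SET timestamp=1536634231;" in the MySQL 5.5 sample
timestamp_mysql = "1536634231"
at_timestamp = datetime.fromtimestamp(int(timestamp_mysql), tz=timezone.utc)
print(at_timestamp.isoformat())  # → 2018-09-11T02:50:31+00:00
```

The mutate filter then drops timestamp_mysql (along with message and @version) so the index only keeps the parsed fields.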
Kibana display: