
ES: batch queries with mget, batch create/update/delete with bulk

1. Batch queries with mget

Retrieve multiple documents, possibly from different indices, in a single request:

GET /_mget
{
  "docs": [
    {
      "_index": "test_index",
      "_type": "test_type",
      "_id": 8
    },
    {
      "_index": "test_index",
      "_type": "test_type",
      "_id": 10
    }
  ]
}
The response:
{
  "docs": [
    {
      "_index": "test_index",
      "_type": "test_type",
      "_id": "8",
      "_version": 3,
      "found": true,
      "_source": {
        "test_field": "test test1"
      }
    },
    {
      "_index": "test_index",
      "_type": "test_type",
      "_id": "10",
      "_version": 2,
      "found": true,
      "_source": {
        "test_field1": "test test1",
        "test_field2": "update test2"
      }
    }
  ]
}

2. Querying documents of different types under one index

When every document lives in the same index, put the index in the URL and drop `_index` from each entry:

GET /test_index/_mget
{
  "docs": [
    {
      "_type": "test_type",
      "_id": 8
    },
    {
      "_type": "test_type",
      "_id": 10
    }
  ]
}
The response:
{
  "docs": [
    {
      "_index": "test_index",
      "_type": "test_type",
      "_id": "8",
      "_version": 3,
      "found": true,
      "_source": {
        "test_field": "test test1"
      }
    },
    {
      "_index": "test_index",
      "_type": "test_type",
      "_id": "10",
      "_version": 2,
      "found": true,
      "_source": {
        "test_field1": "test test1",
        "test_field2": "update test2"
      }
    }
  ]
}

3. Querying documents under the same index and the same type

When the index and the type are both fixed, only the ids are needed:

GET /test_index/test_type/_mget
{
   "ids": [8, 10]
}
The response:
{
  "docs": [
    {
      "_index": "test_index",
      "_type": "test_type",
      "_id": "8",
      "_version": 3,
      "found": true,
      "_source": {
        "test_field": "test test1"
      }
    },
    {
      "_index": "test_index",
      "_type": "test_type",
      "_id": "10",
      "_version": 2,
      "found": true,
      "_source": {
        "test_field1": "test test1",
        "test_field2": "update test2"
      }
    }
  ]
}
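The three request forms above can also be generated programmatically. A minimal Python sketch (the helper names `mget_body` and `mget_docs_body` are mine for illustration, not part of any ES client API):

```python
import json

def mget_body(ids):
    """Body for GET /{index}/{type}/_mget when index and type are in the URL."""
    return {"ids": list(ids)}

def mget_docs_body(refs):
    """Body for the generic GET /_mget from (index, type, id) tuples."""
    return {"docs": [{"_index": i, "_type": t, "_id": d} for i, t, d in refs]}

body = mget_docs_body([("test_index", "test_type", 8),
                       ("test_index", "test_type", 10)])
print(json.dumps(body))
```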

4. Batch create/update/delete with bulk
(1) bulk syntax
Which types of operations can be performed?
(1) delete: deletes a document; only one JSON line is needed
(2) create: the equivalent of PUT /index/type/id/_create, a forced create that fails if the document already exists
(3) index: an ordinary PUT; it either creates a document or fully replaces an existing one
(4) update: performs a partial update

A first attempt, written as pretty-printed JSON:

POST /_bulk
{
  "delete": {
    "_index": "test_index",
    "_type": "test_type",
    "_id": 8
  }
}
{
  "create": {
    "_index": "test_index",
    "_type": "test_type",
    "_id": 9
  }
}
{
  "test_field": "test9"
}
{
  "create": {
    "_index": "test_index",
    "_type": "test_type",
    "_id": 6
  }
}
{
  "test_field": "test6"
}
{
  "index": {
    "_index": "test_index",
    "_type": "test_type",
    "_id": 4
  }
}
{
  "test_field": "replaced test4"
}
{
  "update": {
    "_index": "test_index",
    "_type": "test_type",
    "_id": 1
  }
}
{
  "doc": {
    "test_field2": "bulk test1"
  }
}
The request is rejected:
{
  "error": {
    "root_cause": [
      {
        "type": "json_e_o_f_exception",
        "reason": "Unexpected end-of-input: expected close marker for Object (start marker at [Source: [email protected]; line: 1, column: 1])\n at [Source: [email protected]; line: 1, column: 3]"
      }
    ],
    "type": "json_e_o_f_exception",
    "reason": "Unexpected end-of-input: expected close marker for Object (start marker at [Source: [email protected]; line: 1, column: 1])\n at [Source: [email protected]; line: 1, column: 3]"
  },
  "status": 500
}

The bulk API enforces a strict syntax on its JSON payload: each JSON document must occupy exactly one line, with no line breaks inside it, and consecutive JSON documents must be separated by a newline.

The same request in the required newline-delimited form:

POST /_bulk
{"delete":{"_index":"test_index","_type":"test_type","_id":8}}
{"create":{"_index":"test_index","_type":"test_type","_id":9}}
{"test_field":"test9"}
{"create":{"_index":"test_index","_type":"test_type","_id":6}}
{"test_field":"test6"}
{"index":{"_index":"test_index","_type":"test_type","_id":4}}
{"test_field":"replaced test4"}
{"update":{"_index":"test_index","_type":"test_type","_id":1}}
{"doc":{"test_field2":"bulk test1"}}
The response:
{
  "took": 316,
  "errors": true,
  "items": [
    {
      "delete": {
        "_index": "test_index",
        "_type": "test_type",
        "_id": "8",
        "_version": 2,
        "result": "not_found",
        "_shards": {
          "total": 2,
          "successful": 1,
          "failed": 0
        },
        "_seq_no": 10,
        "_primary_term": 1,
        "status": 404
      }
    },
    {
      "create": {
        "_index": "test_index",
        "_type": "test_type",
        "_id": "9",
        "status": 409,
        "error": {
          "type": "version_conflict_engine_exception",
          "reason": "[test_type][9]: version conflict, document already exists (current version [1])",
          "index_uuid": "toqtg_FpS-e8bCUkqRr2-Q",
          "shard": "1",
          "index": "test_index"
        }
      }
    },
    {
      "create": {
        "_index": "test_index",
        "_type": "test_type",
        "_id": "6",
        "status": 409,
        "error": {
          "type": "version_conflict_engine_exception",
          "reason": "[test_type][6]: version conflict, document already exists (current version [1])",
          "index_uuid": "toqtg_FpS-e8bCUkqRr2-Q",
          "shard": "2",
          "index": "test_index"
        }
      }
    },
    {
      "index": {
        "_index": "test_index",
        "_type": "test_type",
        "_id": "4",
        "_version": 6,
        "result": "updated",
        "_shards": {
          "total": 2,
          "successful": 1,
          "failed": 0
        },
        "_seq_no": 6,
        "_primary_term": 1,
        "status": 200
      }
    },
    {
      "update": {
        "_index": "test_index",
        "_type": "test_type",
        "_id": "1",
        "_version": 3,
        "result": "updated",
        "_shards": {
          "total": 2,
          "successful": 1,
          "failed": 0
        },
        "_seq_no": 2,
        "_primary_term": 1,
        "status": 200
      }
    }
  ]
}

In a bulk request, the failure of any single operation does not affect the others; the response simply reports the error for each failed item.
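A newline-delimited bulk body like the one above can be assembled programmatically. A minimal Python sketch (the `bulk_body` helper is mine, not a client-library function):

```python
import json

def bulk_body(actions):
    """Serialize (action_dict, optional_source_dict) pairs into an NDJSON
    bulk body: one compact JSON document per line, newline-separated,
    ending with the trailing newline the bulk API requires."""
    lines = []
    for action, source in actions:
        lines.append(json.dumps(action, separators=(",", ":")))
        if source is not None:          # delete actions carry no source line
            lines.append(json.dumps(source, separators=(",", ":")))
    return "\n".join(lines) + "\n"

body = bulk_body([
    ({"delete": {"_index": "test_index", "_type": "test_type", "_id": 8}}, None),
    ({"create": {"_index": "test_index", "_type": "test_type", "_id": 9}},
     {"test_field": "test9"}),
    ({"update": {"_index": "test_index", "_type": "test_type", "_id": 1}},
     {"doc": {"test_field2": "bulk test1"}}),
])
print(body)
```

Using `json.dumps` per document, rather than serializing one big array, guarantees each document stays on a single line.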
(2) Optimal bulk size

The whole bulk request is loaded into memory, so if it is too large, performance actually degrades; you need to experiment to find the optimal bulk size. Generally start with 1,000 to 5,000 documents per request and increase gradually. In terms of payload size, 5-15 MB per request is a good target.
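Finding that sweet spot in practice usually means chunking a large document stream into requests capped by both count and bytes. A hedged sketch (the `chunk_bulk` name and the default thresholds are illustrative, not an official API):

```python
def chunk_bulk(pairs, max_docs=2000, max_bytes=10 * 1024 * 1024):
    """Group NDJSON (action_line, source_line or None) pairs into bulk
    request bodies, flushing whenever the document count or the payload
    size cap would be exceeded."""
    batch, size, count = [], 0, 0
    for pair in pairs:
        pair_bytes = sum(len(l.encode("utf-8")) + 1 for l in pair if l is not None)
        if batch and (count >= max_docs or size + pair_bytes > max_bytes):
            yield "\n".join(l for p in batch for l in p if l is not None) + "\n"
            batch, size, count = [], 0, 0
        batch.append(pair)
        size += pair_bytes
        count += 1
    if batch:  # flush the final partial batch
        yield "\n".join(l for p in batch for l in p if l is not None) + "\n"

# e.g. 5 documents with max_docs=2 produce 3 bulk requests
pairs = [('{"index":{"_id":%d}}' % i, '{"f":%d}' % i) for i in range(5)]
batches = list(chunk_bulk(pairs, max_docs=2))
```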

(3) Why the bulk API's peculiar JSON format matters for performance

1. If a well-formed JSON array format were used instead
It would allow arbitrary line breaks and read far more pleasantly. But once ES received such a standard JSON payload, it would have to process it as follows:
(1) Parse the JSON array into a JSONArray object; at that moment the entire dataset exists twice in memory, once as JSON text and once as the JSONArray object
(2) Parse each element of the array and route each request's document
(3) Build a request array for the requests routed to the same shard
(4) Serialize that request array
(5) Send the serialized request array to the corresponding node

2. More memory consumed, more JVM GC overhead
Recall the earlier guideline on optimal bulk size: a few thousand documents, around 10 MB per request. Here is where it gets scary. Suppose 100 bulk requests arrive at one node, each 10 MB: that is 100 × 10 MB = 1,000 MB = 1 GB. If each request's JSON were also copied into a JSONArray object, memory usage would double to 2 GB, and likely more, since building the JSONArray can allocate further data structures on top: 2 GB+ of memory.

Holding that much memory squeezes what is available to other requests, most importantly search and analysis requests, so their performance can drop sharply. Higher memory pressure also means more frequent JVM garbage collections, with more objects to reclaim each cycle and longer pauses, so ES's JVM spends more time with its worker threads stopped.

3. The actual, peculiar format
(1) No conversion into JSON objects is needed, so no duplicate copy of the data appears in memory; ES simply splits the payload on newline characters
(2) For each pair of lines, it reads the metadata line and routes the document
(3) It forwards the corresponding JSON lines directly to the target node

The biggest advantage: there is no need to parse a JSON array into a JSONArray object and hold a full copy of a large payload, wasting memory, so performance is preserved as much as possible.
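The split-and-route idea can be illustrated in a few lines of Python. This is a simplified model of the processing described above, not ES's actual Java implementation, and the shard routing here is a plain modulo rather than the hash of the routing value that real ES uses:

```python
import json

def route_bulk(ndjson_body, num_shards=5):
    """Split an NDJSON bulk body on newlines, read each metadata line,
    and group the raw lines per shard, without ever materializing the
    whole payload as one parsed array."""
    per_shard = {}
    lines = iter(ndjson_body.splitlines())
    for meta_line in lines:
        # only the small metadata line is parsed, e.g. {"index": {...}}
        action, meta = next(iter(json.loads(meta_line).items()))
        shard = int(meta["_id"]) % num_shards   # simplified routing
        per_shard.setdefault(shard, []).append(meta_line)
        if action != "delete":                  # delete has no source line
            per_shard[shard].append(next(lines))
    return per_shard

body = ('{"delete":{"_index":"i","_type":"t","_id":8}}\n'
        '{"index":{"_index":"i","_type":"t","_id":4}}\n'
        '{"test_field":"replaced test4"}\n')
groups = route_bulk(body)
```

Note that the source lines are forwarded verbatim: they are never deserialized at the coordinating step, which is exactly why the duplicate in-memory copy is avoided.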