
Tips for Using the ES IK Analyzer

A match query analyzes the query string into tokens and then runs a term query on each resulting token.

How the underlying bool query works

Internally the term queries are combined in a bool query, and by default their results are unioned (the equivalent of operator: or), so a document is returned as long as any one of the tokens matches. The analysis strategy is specified when the index is created, and it can only be set on fields of type text.
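As a rough illustration of that rewrite (it happens inside Elasticsearch; the index mail_test3 and its commonText field are defined in the mapping below, and the two tokens shown assume the query string is split by ik_smart into 湖南省 and 湘潭市, as the analysis examples further down confirm), a match query such as

GET mail_test3/_search
{
  "query": {
    "match": {
      "commonText": "湖南省湘潭市"
    }
  }
}

behaves approximately like the bool query

GET mail_test3/_search
{
  "query": {
    "bool": {
      "should": [
        { "term": { "commonText": "湖南省" } },
        { "term": { "commonText": "湘潭市" } }
      ]
    }
  }
}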

Create the index and specify the analysis strategy:

PUT mail_test3
{
  "settings": {
    "index": {
      "refresh_interval": "30s",
      "number_of_shards": "1",
      "number_of_replicas": "0"
    }
  },
  "mappings": {
    "default": {
      "_all": {
        "enabled": false
      },
      "_source": {
        "enabled": true
      },
      "properties": {
        "addressTude": {
          "type": "text",
          "analyzer": "ik_max_word",
          "search_analyzer": "ik_smart",
          "copy_to": [
            "commonText"
          ],
          "fielddata": true
        },
        "captureTime": {
          "type": "long"
        },
        "commonText": {
          "type": "text",
          "analyzer": "ik_max_word",
          "search_analyzer": "ik_smart",
          "fielddata": true
        },
        "commonNum":{
          "type": "text",
          "analyzer": "ik_max_word",
          "search_analyzer": "ik_smart",
          "fielddata": true
        },
        "imsi": {
          "type": "keyword",
          "copy_to": ["commonNum"]
        },
        "mailFrom": {
          "type": "keyword",
          "copy_to": ["commonText"]
        },
        "mailSubject": {
          "type": "text",
          "analyzer": "ik_max_word",
          "search_analyzer": "ik_smart",
          "copy_to": [
            "commonText"
          ]
        },
        "mcc": {
          "type": "integer",
          "copy_to": ["commonNum"]
        },
        "rcptTo": {
          "type": "text",
          "analyzer": "ik_max_word",
          "search_analyzer": "ik_smart",
          "copy_to": ["commonText"]
        },
        "userName": {
          "type": "keyword",
          "copy_to": ["commonText"]
        },
        "uuid": {
          "type": "keyword"
        }
      }
    }
  }
}
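As a quick sanity check, a test document can be indexed into the new index. The field values below are made up for illustration; the document type default matches the mapping above, which assumes a pre-7.x cluster where mapping types and _all are still supported:

PUT mail_test3/default/1
{
  "addressTude": "湖南省湘潭市江山路96號-11-8",
  "captureTime": 1559318400000,
  "imsi": "460001234567890",
  "mailFrom": "zhangsan@example.com",
  "mailSubject": "關於專案進度的郵件",
  "mcc": 460,
  "rcptTo": "lisi@example.com",
  "userName": "張三",
  "uuid": "a3f1-0001"
}

Because of copy_to, the values of addressTude, mailFrom, mailSubject, rcptTo, and userName are also indexed into commonText, and imsi and mcc into commonNum, so a single match query on commonText (or commonNum) searches across all of those fields at once.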

Here analyzer is the analysis strategy used when building the index, while search_analyzer is the strategy used at query time. Besides ik_max_word, the IK analyzer also provides an ik_smart strategy; the difference between the two can be compared:

The ik_smart strategy:

GET mail_test3/_analyze
{
  "analyzer": "ik_smart",
  "text": "湖南省湘潭市江山路96號-11-8"
}

Result:

{
  "tokens": [
    {
      "token": "湖南省",
      "start_offset": 0,
      "end_offset": 3,
      "type": "CN_WORD",
      "position": 0
    },
    {
      "token": "湘潭市",
      "start_offset": 3,
      "end_offset": 6,
      "type": "CN_WORD",
      "position": 1
    },
    {
      "token": "江",
      "start_offset": 6,
      "end_offset": 7,
      "type": "CN_CHAR",
      "position": 2
    },
    {
      "token": "山路",
      "start_offset": 7,
      "end_offset": 9,
      "type": "CN_WORD",
      "position": 3
    },
    {
      "token": "96號",
      "start_offset": 9,
      "end_offset": 12,
      "type": "TYPE_CQUAN",
      "position": 4
    },
    {
      "token": "11-8",
      "start_offset": 13,
      "end_offset": 17,
      "type": "LETTER",
      "position": 5
    }
  ]
}

The ik_max_word strategy:

GET mail_test3/_analyze
{
  "analyzer": "ik_max_word",
  "text": "湖南省湘潭市江山路96號-11-8"
}

Tokenization result:

{
  "tokens": [
    {
      "token": "湖南省",
      "start_offset": 0,
      "end_offset": 3,
      "type": "CN_WORD",
      "position": 0
    },
    {
      "token": "湖南",
      "start_offset": 0,
      "end_offset": 2,
      "type": "CN_WORD",
      "position": 1
    },
    {
      "token": "省",
      "start_offset": 2,
      "end_offset": 3,
      "type": "CN_CHAR",
      "position": 2
    },
    {
      "token": "湘潭市",
      "start_offset": 3,
      "end_offset": 6,
      "type": "CN_WORD",
      "position": 3
    },
    {
      "token": "湘潭",
      "start_offset": 3,
      "end_offset": 5,
      "type": "CN_WORD",
      "position": 4
    },
    {
      "token": "市",
      "start_offset": 5,
      "end_offset": 6,
      "type": "CN_CHAR",
      "position": 5
    },
    {
      "token": "江山",
      "start_offset": 6,
      "end_offset": 8,
      "type": "CN_WORD",
      "position": 6
    },
    {
      "token": "山路",
      "start_offset": 7,
      "end_offset": 9,
      "type": "CN_WORD",
      "position": 7
    },
    {
      "token": "96",
      "start_offset": 9,
      "end_offset": 11,
      "type": "ARABIC",
      "position": 8
    },
    {
      "token": "號",
      "start_offset": 11,
      "end_offset": 12,
      "type": "COUNT",
      "position": 9
    },
    {
      "token": "11-8",
      "start_offset": 13,
      "end_offset": 17,
      "type": "LETTER",
      "position": 10
    },
    {
      "token": "11",
      "start_offset": 13,
      "end_offset": 15,
      "type": "ARABIC",
      "position": 11
    },
    {
      "token": "8",
      "start_offset": 16,
      "end_offset": 17,
      "type": "ARABIC",
      "position": 12
    }
  ]
}

ik_max_word produces more tokens at a finer granularity, while ik_smart produces fewer, coarser tokens. A common strategy is to index with ik_max_word and search with ik_smart, which lets as many documents as possible be found. And, as noted above, a match query is ultimately converted into term queries, so a document shows up in the results as long as any one of its tokens matches.
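For example, with the default operator (which is or), the query below returns every document whose commonText field contains at least one of the tokens that ik_smart produces from the query string (a minimal sketch against the mail_test3 index above):

GET mail_test3/_search
{
  "query": {
    "match": {
      "commonText": "湖北省宜昌市天台東二街"
    }
  }
}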

If higher precision is required for the search results, the operator parameter can be added to the query so that the results of the individual term queries are intersected (every token has to match), which raises precision as much as possible.

The difference between setting operator to or and to and is considerable; run both queries and compare:

GET mail_test3/_search
{
  "query": {
    "match": {
      "commonText": {
         "query": "湖北省宜昌市天台東二街",
         "operator": "and"
      }
    }
  }
}
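With operator set to and, every token that ik_smart extracts from 湖北省宜昌市天台東二街 must appear in the document's commonText field, so the result set is smaller but noticeably more precise than with the default or.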