Token Filters
Analyzers reference token filters by name. Use an existing token filter, or create a variant with IndexMapping.AddCustomTokenFilter:
var m *IndexMapping = index.Mapping()
err := m.AddCustomTokenFilter("color_stop_filter", map[string]interface{}{
	"type": stop_tokens_filter.Name,
	"tokens": []interface{}{
		"red",
		"green",
		"blue",
	},
})
if err != nil {
	log.Fatal(err)
}
This creates a new Stop Token Filter named “color_stop_filter”, which removes all “red”, “green”, or “blue” tokens. Once registered, this filter can be referenced by a custom Analyzer.
Apostrophe
Configuration:
- type: apostrophe_filter.Name
The Apostrophe Token Filter removes all characters after an apostrophe.
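The behavior can be sketched in plain Go. This is an illustrative re-implementation, not bleve's own code, and it assumes the apostrophe itself is dropped along with what follows (as the analogous Lucene filter does); verify that detail against your bleve version.

```go
package main

import (
	"fmt"
	"strings"
)

// apostropheFilter drops everything from the first apostrophe onward.
// Assumption: the apostrophe itself is also removed.
func apostropheFilter(token string) string {
	if i := strings.IndexRune(token, '\''); i >= 0 {
		return token[:i]
	}
	return token
}

func main() {
	fmt.Println(apostropheFilter("John's")) // John
	fmt.Println(apostropheFilter("plain")) // plain
}
```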
Camel Case
The Camel Case Filter splits a token written in camel case into the set of tokens comprising it. For example, the token camelCase would produce camel and Case.
CLD2
The CLD2 Token Filter will take the text from each token and pass it to the Compact Language Detection 2 library. Each token is replaced with a new token corresponding to the ISO 639 language code detected. Input text should already be converted to lower case.
Compound Word Dictionary
The compound word dictionary filter lets you supply a dictionary of words that combine to form compound words and lets you index them individually.
Edge n-gram
The edge n-gram token filter will compute n-grams just like the n-gram token filter, but all the computed n-grams are rooted at one side (either the front or the back).
Elision
The elision filter identifies and removes articles prefixing a term and separated by an apostrophe.
For example, in French l'avion becomes avion.
The elision filter is configured with a reference to a token map containing the articles.
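A sketch of the elision step using the French example above. In bleve the article list comes from a registered token map; here it is passed in directly, and only the ASCII apostrophe is handled.

```go
package main

import (
	"fmt"
	"strings"
)

// elisionFilter drops a leading article and its apostrophe when the
// prefix before the apostrophe appears in the article set.
func elisionFilter(token string, articles map[string]bool) string {
	if i := strings.IndexRune(token, '\''); i >= 0 && articles[strings.ToLower(token[:i])] {
		return token[i+1:]
	}
	return token
}

func main() {
	articles := map[string]bool{"l": true, "d": true}
	fmt.Println(elisionFilter("l'avion", articles)) // avion
}
```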
Keyword Marker
The keyword marker filter will identify keywords and mark them as such. Keywords are then ignored by any downstream stemmer.
The keyword marker filter is configured with a token map containing the keywords.
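Conceptually, marking attaches a flag that downstream stemmers honor. The token struct below is hypothetical, purely for illustration; bleve's own token type differs.

```go
package main

import "fmt"

// token pairs a term with a keyword flag; a downstream stemmer would
// skip tokens whose Keyword flag is set. Hypothetical struct.
type token struct {
	Term    string
	Keyword bool
}

// markKeywords sets the Keyword flag for terms found in the map.
func markKeywords(tokens []token, keywords map[string]bool) []token {
	for i := range tokens {
		if keywords[tokens[i].Term] {
			tokens[i].Keyword = true
		}
	}
	return tokens
}

func main() {
	out := markKeywords([]token{{Term: "running"}, {Term: "walking"}},
		map[string]bool{"running": true})
	fmt.Println(out[0].Keyword, out[1].Keyword) // true false
}
```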
Length
The length filter identifies tokens which are either too long or too short. There are two parameters, the minimum token length and the maximum token length. Tokens that are either too long or too short are removed from the token stream.
Lowercase
The Lowercase Token Filter will examine each input token and map all Unicode letters to their lower case.
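Go's strings.ToLower already applies Unicode case mappings, so the filter's effect can be sketched with it directly (illustration only, not bleve's implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// lowercaseFilter maps each Unicode letter to its lower case form.
func lowercaseFilter(token string) string {
	return strings.ToLower(token)
}

func main() {
	fmt.Println(lowercaseFilter("München")) // münchen
	fmt.Println(lowercaseFilter("QUICK"))   // quick
}
```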
n-gram
The n-gram token filter computes n-grams from each input token. There are two parameters, the minimum and maximum n-gram length.
Porter Stemmer
The porter stemmer filter applies the Porter Stemming Algorithm to the input tokens.
Shingle
The Shingle filter computes multi-token shingles from the input token stream. For example, the token stream the quick brown fox, when configured with a shingle minimum and maximum length of 2, would produce the tokens the quick, quick brown and brown fox.
Stemmer
The stemmer token filter takes input terms and applies a stemming process to them.
This implementation uses libstemmer.
The supported languages are:
- Danish
- Dutch
- English
- Finnish
- French
- German
- Hungarian
- Italian
- Norwegian
- Porter
- Portuguese
- Romanian
- Russian
- Spanish
- Swedish
- Turkish
Stop Token
Configuration:
- type: stop_tokens_filter.Name
- stop_token_map (string): the name of the token map identifying tokens to remove.
The Stop Token Filter is configured with a map of tokens that should be removed from the token stream.
Truncate Token
The truncate token filter truncates each input token to a maximum token length.
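A sketch of the truncation, cutting by runes rather than bytes so multi-byte characters stay intact (an assumption about the real filter worth verifying):

```go
package main

import "fmt"

// truncateToken cuts a token down to at most max runes.
func truncateToken(token string, max int) string {
	if runes := []rune(token); len(runes) > max {
		return string(runes[:max])
	}
	return token
}

func main() {
	fmt.Println(truncateToken("internationalization", 5)) // inter
	fmt.Println(truncateToken("fox", 5))                  // fox
}
```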
Unicode Normalize
The Unicode normalization filter converts the input terms into the specified Unicode Normalization Form.
The supported forms are:
- nfc
- nfd
- nfkc
- nfkd