Similarity Queries for Security Names with Gensim

Introduction to Gensim

Gensim is a free Python library designed to automatically extract semantic topics from documents, as efficiently (computer-wise) and painlessly (human-wise) as possible.

Gensim is designed to process raw, unstructured digital texts (“plain text”). The algorithms in Gensim, such as Latent Semantic Analysis, Latent Dirichlet Allocation and Random Projections, discover the semantic structure of documents by examining statistical co-occurrence patterns of the words within a corpus of training documents. These algorithms are unsupervised, meaning no human input is necessary – you only need a corpus of plain-text documents.

Once these statistical patterns are found, any plain text documents can be succinctly expressed in the new, semantic representation and queried for topical similarity against other documents.

Flowchart Diagram

(Original flowchart diagram; there is no comparable diagram on the official Gensim website.)



Code Example

Train data sample:
F1234567OX~Undrly Alba (Crus) Gth Prop 2 Life~Undrly Alba (Crus) Gth Prop 2 Life
F7654321OY~Undrly Alba (Crus) Mixed Pen~Undrly Alba (Crus) Mixed Pen
FABCDEF9P0~Undrly Alba (Crus) Nth Am Pen~Undrly Alba (Crus) Nth Am Pen
FFEDCBA9P4~Undrly Alba (Crus) Secure Inc Pen~Undrly Alba (Crus) Secure Inc Pen
F1234567P5~Undrly Alba (Crus) UK Pen~Undrly Alba (Crus) UK Pen
Each line has the format: security id~security name~security legal name.
The code splits each line on the '~' character and uses only the security legal name to build the dictionary and model.

# stoplist is defined in generatemodel() below; repeated here so the excerpt stands alone
stoplist = set('for a of the and to in'.split())
print('Begin read data source')
data_train = []
for security in open(securitynamepath, encoding='utf-8'):
    if len(security.split('~')) == 3:
        data_train.append([word for word in security.split('~')[2].lower().split()
                           if word not in stoplist])
print('End read data source')

To measure the similarity between security names, the POC builds a tf-idf model.

The sample code is less than 100 lines. Once the dictionary and model are initialised this way, a query result comes back in less than one second.
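As a rough sketch of what the tf-idf weighting computes, here is a plain-Python version matching Gensim's default scheme (raw term frequency × log2(N/df), then cosine normalisation); the toy token lists are invented for illustration:

```python
import math

# Toy tokenised documents (invented security-name fragments).
docs = [
    ["undrly", "alba", "mixed", "pen"],
    ["undrly", "alba", "uk", "pen"],
    ["undrly", "secure", "inc", "pen"],
]
N = len(docs)

# Document frequency: in how many documents does each token occur?
df = {}
for doc in docs:
    for token in set(doc):
        df[token] = df.get(token, 0) + 1

def tfidf(doc):
    # weight(t, d) = tf(t, d) * log2(N / df(t)); tokens occurring in
    # every document get idf 0 and are dropped.
    weights = {t: doc.count(t) * math.log2(N / df[t])
               for t in set(doc) if df[t] < N}
    # Cosine-normalise so vectors of different lengths are comparable.
    norm = math.sqrt(sum(w * w for w in weights.values())) or 1.0
    return {t: w / norm for t, w in weights.items()}

# "undrly" and "pen" appear everywhere, so only the distinguishing
# words ("alba", "mixed") carry weight for the first document.
print(tfidf(docs[0]))
```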

import time
from gensim import corpora, models, similarities
from collections import defaultdict
import os
 
dictpath = './data/model/security.dict'
modelpath = './data/model/security.mm'
securitynamepath = './data/security/securityname.txt'
start = time.time()
alltext = [security for security in open(securitynamepath, encoding='utf-8')]
end = time.time()
print('Read security name list cost: ', end - start)
 
def startjob(regeneratemodel=False, usertext='DSP BlackRock FMP Sr 229 51 Mn Dir Gr'):
    if regeneratemodel or (not os.path.exists(dictpath) or not os.path.exists(modelpath)):
        generatemodel()
 
    time_start = time.time()
    print('Load model start')
    load_start = time.time()
    corpus = corpora.MmCorpus(modelpath)
    dictionary = corpora.Dictionary.load(dictpath)
    tfidf_model = models.TfidfModel(corpus)
    index = similarities.SparseMatrixSimilarity(
        tfidf_model[corpus],
        num_features=len(dictionary.keys()))
    load_end = time.time()
    print('Load model cost: ', load_end - load_start)
    print('Load model end')
    ###############By LSI#####################
    # corpus_tfidf = tfidf_model[corpus]
    # dictionary = corpora.Dictionary.load(dictpath)
    # lsi_model = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=2)
    # corpus_lsi = lsi_model[corpus_tfidf]
    # corpus_simi_matrix = similarities.MatrixSimilarity(corpus_lsi)
    # Compute the similarity between a new text and the existing corpus
    # test_text = usertext.lower().split()
    # test_bow = dictionary.doc2bow(test_text)
    # test_tfidf = tfidf_model[test_bow]
    # test_lsi = lsi_model[test_tfidf]
    # test_simi = corpus_simi_matrix[test_lsi]
    # test_simi = sorted(enumerate(test_simi), key=lambda item: -item[1])
    ###############By LSI#####################
 
    ###############By tfidf#####################
    print('Query start')
    query_start = time.time()
    test_text = usertext.lower().split()
    doc_test_vec = dictionary.doc2bow(test_text)
    
    test_simi = index[tfidf_model[doc_test_vec]]
    test_simi = sorted(enumerate(test_simi), key=lambda item: -item[1])
    ###############By tfidf#####################
 
    outputlist = [test for test in test_simi if test[1] > 0.3]
    for output in outputlist:
        print(alltext[output[0]], output[1])
        if len(alltext[output[0]].split('~')) == 3 and alltext[output[0]].split('~')[1] == usertext:
            print("Congratulations, you find the right answer!")
            break
    time_end = time.time()
    print('Query cost: ', time_end - query_start)
    print('Totally cost: ', time_end - time_start)
    print('Query end')
 
def generatemodel():
    print('Begin generate model')
    stoplist = set('for a of the and to in'.split())
    print('Begin read data source')
    data_train = []
    count = 0
    for security in open(securitynamepath, encoding='utf-8'):
        if len(security.split('~')) == 3:
            data_train.append([word for word in security.split('~')[2].lower().split()
                               if word not in stoplist])
        count += 1
    print('Read %d lines' % count)
    print('End read data source')
    # Remove words that appear only once; the security-name similarity
    # use case does not need this filtering step
    # frequency = defaultdict(int)
    # for text in data_train:
    #     for token in text:
    #         frequency[token] += 1
    # data_train = [[token for token in text if frequency[token] > 1]
    #               for text in data_train]
    dictionary = corpora.Dictionary(data_train)
    dictionary.save(dictpath)
    corpus = [dictionary.doc2bow(text) for text in data_train]
    corpora.MmCorpus.serialize(modelpath, corpus)
    print('End generate model')
 
if __name__ == '__main__':
    startjob(False, u'Undrly Alba LASPEN Property')

Output Analysis

The console output of a run is:


[Screenshot: PyCharm console output of main.py]

Notes on the output:

"Using TensorFlow backend": does this mean Gensim uses TensorFlow? There is no official statement to that effect; this message is normally printed by Keras on import, so it most likely comes from another library in the environment rather than from Gensim.

The output also reports timing information:

Load security name list (599,214 records): 0.26 seconds

Load dictionary and model: 8.94 seconds

Similarity query for the test text: 19.87 seconds

User test text:

Undrly Alba LASPEN Property

Similarity query result:

The result from Gensim is a key:value structure of index:similarity score. For example, 10:0.9348 means the security name at index 10 has a similarity score of 0.9348. (Note this is a cosine similarity, not a probability.)

To make the full record easy to retrieve, the security id, abbreviated security name and security legal name are printed for each result index.

The results are sorted in descending order of similarity score, for example:

F1234567PM~Undrly Alba LASPEN Property PP~Undrly Alba LASPEN Property PP
0.9348221
F7654321PI~Undrly Alba LASPEN UK Equity PP~Undrly Alba LASPEN UK Equity PP
0.83671427
