A First Try at NLP with the Naive Bayes Algorithm

Naive Bayes is a commonly used algorithm in the NLP field. Here we use a simple example to see how it can be applied to an NLP classification task. (This post is practical in focus; if you want to understand the underlying theory, look it up separately.)

As with most model-building workflows, there are four main steps:

  1. Data preprocessing
  2. Labeling the dataset
  3. Feature extraction, model building, and training
  4. Testing
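The four steps above can be sketched end to end on a tiny toy corpus (the English documents and labels below are hypothetical stand-ins, not the actual hotel/travel dataset used later):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical pre-tokenized training documents with their class labels
train_docs = [
    "cheap room hotel booking breakfast",
    "hotel room service lobby checkin",
    "mountain hiking tour scenery travel",
    "beach travel tour guide scenery",
]
labels = ["hotel", "hotel", "travel", "travel"]

# Turn documents into token-count vectors, then TF-normalized features
vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(train_docs)
tfidf = TfidfTransformer(use_idf=False).fit_transform(counts)

# Train a multinomial Naive Bayes classifier
clf = MultinomialNB().fit(tfidf, labels)

# Classify a new document (note: transform, not fit_transform)
print(clf.predict(vectorizer.transform(["hotel room with breakfast"]))[0])  # prints "hotel"
```

The real example below follows the same pipeline, with jieba supplying the tokenization that the whitespace in these toy documents provides for free.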

For this small example I used sklearn. There are two text collections, hotel and travel: one contains hotel documents, the other travel information.


The full code is as follows:

import os
import jieba
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

"""
1.數據的預處理
"""


def preprocess(path):
    # Read the file and segment it with jieba, joining the words with
    # spaces so that CountVectorizer can tokenize on whitespace.
    with open(path, "r", encoding="utf8") as f:
        textfile = f.read()
    seg_words = jieba.cut(textfile)
    return " ".join(seg_words)


"""
2. 數據集分類標記
"""


def loadtrainset(path, classtag):
    # Walk the directory, preprocess every file, and pair each
    # document with its class label.
    allfiles = os.listdir(path)
    processed_textset = []
    allclasstags = []
    for thisfile in allfiles:
        path_name = os.path.join(path, thisfile)
        processed_textset.append(preprocess(path_name))
        allclasstags.append(classtag)
    return processed_textset, allclasstags


processed_textdata1, class1 = loadtrainset("/Users/fengyang/PycharmProjects/NLP/dataset/train/hotel", "賓館")
processed_textdata2, class2 = loadtrainset("/Users/fengyang/PycharmProjects/NLP/dataset/train/travel", "旅游")

train_data = processed_textdata1 + processed_textdata2
classtags_list = class1 + class2
# Convert the documents into a matrix of token counts
count_vector = CountVectorizer()
vector_matrix = count_vector.fit_transform(train_data)

"""
3. Feature extraction and training
"""
# With use_idf=False, TfidfTransformer only applies normalized term
# frequencies; no IDF weights are learned from the corpus.
train_tfidf = TfidfTransformer(use_idf=False).fit_transform(vector_matrix)
# Train a multinomial Naive Bayes classifier on the features
clf = MultinomialNB().fit(train_tfidf, classtags_list)
"""
4. 測試
"""
testset = []

path = "/Users/fengyang/PycharmProjects/NLP/dataset/test/hotel"
allfiles = os.listdir(path)

hotel = 0
travel = 0

for thisfile in allfiles:
    path_name = os.path.join(path, thisfile)
    # Reuse the fitted vectorizer: transform, not fit_transform
    new_count_vector = count_vector.transform([preprocess(path_name)])
    # Because use_idf=False, no corpus statistics are learned here, so
    # this scaling matches the one applied to the training data.
    new_tfidf = TfidfTransformer(use_idf=False).fit_transform(new_count_vector)
    predict_result = clf.predict(new_tfidf)
    print(predict_result)
    print(thisfile)

    if predict_result[0] == "賓館":
        hotel += 1
    if predict_result[0] == "旅游":
        travel += 1

print("賓館" + str(hotel))
print("旅游" + str(travel))

Results:

['賓館']
三亞市春節(jié)賓館房價不亂漲價違者將受到嚴處_seg_pos.txt
['賓館']
住宿-賓館名錄_seg_pos.txt
['賓館']
nj7_seg_pos.txt
['賓館']
dali09_seg_pos.txt
['賓館']
bj6_seg_pos.txt
['賓館']
xm7_seg_pos.txt
['賓館']
dujiangyan09_seg_pos.txt
['賓館']
wuyishan12_seg_pos.txt
['賓館']
zhuhai06_seg_pos.txt
['賓館']
kuerle01_seg_pos.txt
['賓館']
xm3_seg_pos.txt
賓館11
旅游0

From the results we can see that all of the test documents, 11 in total, were classified correctly.
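To reuse the trained vectorizer and classifier without retraining, both can be persisted to disk. A minimal sketch using the standalone joblib package (bundled as a scikit-learn dependency; the toy data and the file name nb_model.joblib are hypothetical):

```python
import joblib
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical toy training data standing in for the real corpus
docs = ["hotel room booking lobby", "travel tour scenery guide"]
labels = ["hotel", "travel"]

vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(docs), labels)

# Persist the fitted vectorizer and classifier together: the model is
# only usable with the exact vocabulary it was trained against.
joblib.dump((vec, clf), "nb_model.joblib")

# Later (or in another process): load and predict without retraining
vec2, clf2 = joblib.load("nb_model.joblib")
print(clf2.predict(vec2.transform(["hotel room"]))[0])  # prints "hotel"
```

Note that older tutorials import joblib via sklearn.externals.joblib; that path was removed in newer scikit-learn versions, so the standalone package is the safer choice.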

Full code and dataset: https://github.com/fredfeng0326/NLP/tree/master/nb_test