1. Information Entropy
What is information entropy? We all learned the second law of thermodynamics in high school: entropy measures how disordered a system is, and the larger the entropy, the more disordered the system. Information entropy, in the same spirit, measures how "pure" a set of data is: for a random variable X taking values with probabilities p_i, H(X) = -Σ p_i·log2(p_i), which is 0 when all samples belong to one class and grows as the classes mix.
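As a quick sanity check (a minimal sketch, not part of the original post), entropy is 0 for a perfectly pure label set and maximal for an even split:

```python
from math import log2

def entropy(labels):
    """Shannon entropy in bits of a list of class labels."""
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    # p * log2(1/p) is 0 when p == 1, so a pure set has entropy 0
    return sum(p * log2(1 / p) for p in probs)

print(entropy(['Y', 'Y', 'Y', 'Y']))  # 0.0 (pure: no disorder)
print(entropy(['Y', 'Y', 'N', 'N']))  # 1.0 (50/50: maximal for two classes)
```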

2. Feature Selection
We know ID3 builds a decision tree using entropy as its criterion, but how exactly should the splitting feature be chosen at each step? This is where information gain comes in.
Let's work through an example. Suppose the dataset is as follows:
| outlook | temperature | humidity | windy | target |
|---|---|---|---|---|
| 0 | 0 | 0 | 0 | N |
| 0 | 0 | 0 | 1 | N |
| 1 | 0 | 0 | 0 | Y |
| 2 | 1 | 0 | 0 | Y |
| 2 | 2 | 1 | 0 | Y |
| 2 | 2 | 1 | 1 | N |
| 1 | 2 | 1 | 1 | Y |
There are 3 rows with target "N" and 4 with "Y", so the entropy of the dataset is:
H(x) = -(3/7·log2(3/7) + 4/7·log2(4/7)) = 0.985228136034
Conditional entropy of each feature:
H(outlook) = 2/7·H(outlook=0) + 2/7·H(outlook=1) + 3/7·H(outlook=2) = 0.393555357452
H(temperature) = 3/7·H(temperature=0) + 1/7·H(temperature=1) + 3/7·H(temperature=2) = 0.787110714904
H(humidity) = 4/7·H(humidity=0) + 3/7·H(humidity=1) = 0.96498392888
H(windy) = 4/7·H(windy=0) + 3/7·H(windy=1) = 0.857142857143
Information gain:
InfoGain(outlook)=H(x)-H(outlook)=0.591672778582
InfoGain(temperature)=H(x)-H(temperature)=0.19811742113
InfoGain(humidity)=H(x)-H(humidity)=0.0202442071538
InfoGain(windy)=H(x)-H(windy)=0.128085278891
From the calculations above, outlook has the largest information gain, so it is chosen as the first split. Subsequent features are selected the same way, and splitting stops once all features have been used or every sample in a node shares the same target.
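The hand calculation above can be reproduced in a few lines. This is a standalone sketch (column names follow the table; it is not the post's own implementation):

```python
from math import log2

import pandas as pd

rows = [[0, 0, 0, 0, 'N'], [0, 0, 0, 1, 'N'], [1, 0, 0, 0, 'Y'],
        [2, 1, 0, 0, 'Y'], [2, 2, 1, 0, 'Y'], [2, 2, 1, 1, 'N'],
        [1, 2, 1, 1, 'Y']]
df = pd.DataFrame(rows, columns=['outlook', 'temperature',
                                 'humidity', 'windy', 'target'])

def entropy(series):
    # Shannon entropy (bits) of a pandas Series of labels
    probs = series.value_counts(normalize=True)
    return sum(p * log2(1 / p) for p in probs)

base = entropy(df['target'])  # H(x) ≈ 0.9852
for col in ['outlook', 'temperature', 'humidity', 'windy']:
    # conditional entropy = subset-size-weighted average of subset entropies
    cond = sum(len(g) / len(df) * entropy(g['target'])
               for _, g in df.groupby(col))
    print(col, round(base - cond, 4))
# outlook 0.5917, temperature 0.1981, humidity 0.0202, windy 0.1281
```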
3. Limitations of ID3
- ID3 can only handle discrete (categorical) features; it cannot split on continuous values.
- ID3 has no built-in way to handle missing values in a feature.
- Information gain is biased: a feature with many distinct values tends to score a higher gain than one with few, so ID3 favors features that produce many branches, and such features are not necessarily the best choice.
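The many-values bias is easy to demonstrate with a hypothetical `id` column that is unique per row (an assumption for illustration, not in the original data): every branch then holds a single, pure row, so the conditional entropy is 0 and the gain equals H(x) itself, even though a row ID has no predictive value:

```python
from math import log2

import pandas as pd

# same 3-N / 4-Y label split as the example, keyed by a unique row id
rows = [[0, 'N'], [1, 'N'], [2, 'Y'], [3, 'Y'], [4, 'Y'], [5, 'N'], [6, 'Y']]
df = pd.DataFrame(rows, columns=['id', 'target'])

def entropy(series):
    probs = series.value_counts(normalize=True)
    return sum(p * log2(1 / p) for p in probs)

base = entropy(df['target'])  # ≈ 0.9852
cond = sum(len(g) / len(df) * entropy(g['target']) for _, g in df.groupby('id'))
print(cond)         # 0.0 -- every one-row subset is pure
print(base - cond)  # gain equals H(x): the maximum possible
```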
4. Python Code (GitHub)
Talk is cheap, so here's the code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time : 2018/6/4 下午10:12
# @Author : liujiatian
# @File : decisionTree.py
from math import log

import pandas as pd


def createDataSet():
    dataSet = [[0, 0, 0, 0, 'N'],
               [0, 0, 0, 1, 'N'],
               [1, 0, 0, 0, 'Y'],
               [2, 1, 0, 0, 'Y'],
               [2, 2, 1, 0, 'Y'],
               [2, 2, 1, 1, 'N'],
               [1, 2, 1, 1, 'Y']]
    df = pd.DataFrame(data=dataSet, columns=[
        'outlook', 'temperature', 'humidity', 'windy', 'target'])
    return df


def calcShannonEnt(dataSet):
    '''
    Compute the information entropy of the target (last) column.
    :return: entropy in bits
    '''
    entropy = 0.0
    length = len(dataSet)
    # e.g. counter = {'N': 3, 'Y': 4}
    counter = dict(dataSet.iloc[:, -1].value_counts())
    for count in counter.values():
        prob = count / length
        entropy -= prob * log(prob, 2)
    return entropy


def chooseFeatureByInfoGain(dataSet):
    '''
    ID3: choose the best split feature by maximum information gain.
    :param dataSet:
    :return: column index of the chosen feature
    '''
    if dataSet.empty:
        return
    baseEntropy = calcShannonEnt(dataSet)
    bestInfoGain = 0.0
    bestFeature = -1
    featureCount = dataSet.shape[1] - 1  # number of feature columns
    dataSetRows = len(dataSet)
    for i in range(featureCount):
        # distinct values of feature i
        uniqueVals = set(dataSet.iloc[:, i])
        splitInfo = 0.0  # conditional entropy after splitting on feature i
        for value in uniqueVals:
            subDataFrame = dataSet[dataSet.iloc[:, i] == value]
            prob = len(subDataFrame) / dataSetRows
            splitInfo += prob * calcShannonEnt(subDataFrame)
        infoGain = baseEntropy - splitInfo
        if infoGain > bestInfoGain:
            bestInfoGain = infoGain
            bestFeature = i
    return bestFeature


def createTree(dataSet):
    '''
    Build the decision tree recursively.
    :param dataSet:
    :return: e.g. {'outlook': {0: 'N', 1: 'Y', 2: {'windy': {0: 'Y', 1: 'N'}}}}
    '''
    # stop splitting if every remaining sample has the same label
    if len(dataSet.iloc[:, -1].value_counts()) == 1:
        target = dataSet.iloc[:, -1].value_counts().index[0]
        return target
    # if only one feature column remains, return the majority label
    if len(dataSet.columns) == 2:
        target = dataSet.iloc[:, -1].mode()[0]
        return target
    # ID3: split on the feature with the largest information gain
    bestFeatureIndex = chooseFeatureByInfoGain(dataSet)
    bestFeature = dataSet.columns[bestFeatureIndex]
    myTree = {bestFeature: {}}
    for item, subDataSet in dataSet.groupby(bestFeature):
        myTree[bestFeature][item] = createTree(subDataSet)
    return myTree


if __name__ == '__main__':
    dataSet = createDataSet()
    print(dataSet)
    print(createTree(dataSet))
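Once createTree returns, classifying a new sample is just a walk down the nested dict. The classify helper below is not part of the original post; it is a small sketch that assumes the sample is given as a feature-name-to-value mapping, using the example tree from the docstring above:

```python
def classify(tree, sample):
    """Walk the nested-dict tree; `sample` maps feature name -> value."""
    if not isinstance(tree, dict):  # reached a leaf label
        return tree
    feature = next(iter(tree))      # the feature this node splits on
    return classify(tree[feature][sample[feature]], sample)

# tree shape produced by createTree for the example dataset
tree = {'outlook': {0: 'N', 1: 'Y', 2: {'windy': {0: 'Y', 1: 'N'}}}}
print(classify(tree, {'outlook': 2, 'windy': 1}))  # N
```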