Loss functions: KLDivLoss

KL divergence

KL divergence, also called relative entropy, measures how far apart two distributions (discrete or continuous) are.

Let p(x) and q(x) be two probability distributions of a discrete random variable X. The KL divergence of p from q is:

D_{K L}(p \| q)=E_{p(x)} \log \frac{p(x)}{q(x)}=\sum_{i=1}^{N} p\left(x_{i}\right) \cdot\left(\log p\left(x_{i}\right)-\log q\left(x_{i}\right)\right)
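
As a quick sanity check, this sum can be evaluated directly in plain Python. A minimal sketch, with two arbitrary distributions chosen purely for illustration:

import math

# two arbitrary discrete distributions over three outcomes (illustrative values)
p = [0.8, 0.1, 0.1]
q = [0.5, 0.3, 0.2]

# D_KL(p || q) = sum_i p(x_i) * (log p(x_i) - log q(x_i))
d_kl = sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))
print(d_kl)  # ≈ 0.1968

Note that KL divergence is not symmetric: D_KL(p‖q) generally differs from D_KL(q‖p).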

KLDivLoss

For a batch D(x, y) of N samples, x is the neural network's output, normalized and log-transformed (i.e., log-probabilities); y is the ground-truth label (given as probabilities by default); x and y have the same shape.

The loss l_{n} for the n-th sample is computed as:

l_{n}=y_{n} \cdot\left(\log y_{n}-x_{n}\right)
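
In practice, the log-probabilities x typically come from F.log_softmax applied to the raw network output, while y is a probability distribution (e.g., from softmax). A minimal sketch of this convention; the shapes and values here are illustrative:

import torch
import torch.nn.functional as F

logits = torch.randn(4, 3)                     # raw network outputs
x = F.log_softmax(logits, dim=1)               # log-probabilities: the input KLDivLoss expects
y = torch.softmax(torch.randn(4, 3), dim=1)    # target probabilities
print(F.kl_div(x, y, reduction='mean'))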

class KLDivLoss(_Loss):
    __constants__ = ['reduction']

    def __init__(self, size_average=None, reduce=None, reduction='mean'):
        super(KLDivLoss, self).__init__(size_average, reduce, reduction)

    def forward(self, input, target):
        # input: log-probabilities; target: probabilities (by default)
        return F.kl_div(input, target, reduction=self.reduction)

In PyTorch this is implemented by the torch.nn.KLDivLoss class; the F.kl_div function can also be called directly. The size_average and reduce arguments in the code above are deprecated. reduction takes one of four values (mean, batchmean, sum, none), each giving a different return value \ell(x, y); the default is mean.

L=\left\{l_{1}, \ldots, l_{N}\right\}

\ell(x, y)=\left\{\begin{array}{ll}L, & \text { if reduction }=\text { 'none' } \\ \operatorname{mean}(L), & \text { if reduction }=\text { 'mean' } \\ \operatorname{sum}(L) / N, & \text { if reduction }=\text { 'batchmean' } \\ \operatorname{sum}(L), & \text { if reduction }=\text { 'sum' }\end{array}\right.
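
The four reductions can be cross-checked against the element-wise ('none') result. A small sketch, assuming the same shapes as in the example below:

import torch
import torch.nn.functional as F

x = torch.log_softmax(torch.randn(4, 3), dim=1)  # log-probabilities
y = torch.softmax(torch.randn(4, 3), dim=1)      # target probabilities

L = F.kl_div(x, y, reduction='none')  # element-wise loss matrix
assert torch.isclose(F.kl_div(x, y, reduction='sum'), L.sum())
assert torch.isclose(F.kl_div(x, y, reduction='mean'), L.mean())
# 'batchmean' divides the total by the batch size (first dimension)
assert torch.isclose(F.kl_div(x, y, reduction='batchmean'), L.sum() / x.size(0))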

Example:

import torch
import torch.nn as nn
import math

def validate_loss(output, target):
    # Manually compute the element-wise loss y * (log y - x),
    # then average over all elements (matches reduction='mean').
    val = 0
    for li_x, li_y in zip(output, target):
        for x, y in zip(li_x, li_y):
            val += y * (math.log(y) - x)
    return val / output.nelement()

torch.manual_seed(20)
loss = nn.KLDivLoss()  # default reduction='mean'
input = torch.Tensor([[-2, -6, -8], [-7, -1, -2], [-1, -9, -2.3], [-1.9, -2.8, -5.4]])  # log-probabilities
target = torch.Tensor([[0.8, 0.1, 0.1], [0.1, 0.7, 0.2], [0.5, 0.2, 0.3], [0.4, 0.3, 0.3]])  # probabilities
output = loss(input, target)
print("default loss:", output)

output = validate_loss(input, target)
print("validate loss:", output)

loss = nn.KLDivLoss(reduction="batchmean")
output = loss(input, target)
print("batchmean loss:", output)

loss = nn.KLDivLoss(reduction="mean")
output = loss(input, target)
print("mean loss:", output)

loss = nn.KLDivLoss(reduction="none")
output = loss(input, target)
print("none loss:", output)

Output:

default loss: tensor(0.6209)
validate loss: tensor(0.6209)
batchmean loss: tensor(1.8626)
mean loss: tensor(0.6209)
none loss: tensor([[1.4215, 0.3697, 0.5697],
        [0.4697, 0.4503, 0.0781],
        [0.1534, 1.4781, 0.3288],
        [0.3935, 0.4788, 1.2588]])
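
Note how the reductions relate here: the element-wise losses sum to about 7.450, and dividing by the batch size 4 gives 1.8626, i.e. mean × 3 (the number of columns), consistent with sum(L)/N above. Per the PyTorch documentation, reduction='batchmean' is the one that matches the mathematical definition of KL divergence per sample; 'mean' averages over every element instead.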