Variational Inference

I came across this well-written article on GitHub and am reposting it here so we can all learn from it:
https://github.com/keithyin/mynotes/tree/master/MachineLearning/algorithms

——————————————————————————————————————————
The author also recommends the following series of articles, which is likewise well written:

——————————————————————————————————————————

Variational Inference

Everyone should be familiar with Bayes' rule:
p(Z|X)=\frac{p(X,Z)}{\int_z p(X,Z=z)dz}

We call p(Z|X) the posterior distribution. Computing the posterior is usually very hard. Why?
Suppose Z is a high-dimensional random variable. To evaluate p(Z=z|X=x) we cannot avoid computing the integral \int_z p(X=x,Z=z)dz, and because Z is high-dimensional, this integral is extremely hard to compute.
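To make the difficulty concrete, here is a minimal sketch (a toy one-dimensional conjugate model of my own choosing, not from the original note) that computes the evidence integral by brute-force grid quadrature. In one dimension this works fine; the closing comment notes why the same approach collapses in high dimensions.

```python
import numpy as np

# Toy model (illustrative assumption): z ~ N(0, 1), x | z ~ N(z, 1),
# so p(x, z) = N(z; 0, 1) * N(x; z, 1) and p(x) = N(x; 0, 2) in closed form.
def log_joint(x, z):
    return -0.5 * (z**2 + (x - z)**2) - np.log(2 * np.pi)

x_obs = 1.3
grid = np.linspace(-10.0, 10.0, 2001)            # 1-D grid over z
dz = grid[1] - grid[0]
evidence = np.sum(np.exp(log_joint(x_obs, grid))) * dz
print(evidence)                                  # ~ N(1.3; 0, 2) ~ 0.185

# The catch: with d latent dimensions and n grid points per dimension,
# the same quadrature needs n**d evaluations of p(x, z) -- hopeless
# once d is even moderately large.
```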

Variational inference is a technique for approximating exactly this kind of posterior distribution.

core idea

The core idea of variational inference has two steps:

  • Posit a family of distributions q(z;\lambda) (one we can actually work with; an intractable family would defeat the purpose)
  • Adjust the parameters \lambda so that q(z;\lambda) moves close to p(z|x)

In one sentence: we introduce a parameterized model for the true posterior. That is, we fit the complicated distribution p(z|x) with a simple distribution q(z;\lambda).

This strategy turns the problem of computing p(z|x) into an optimization problem:
\lambda^* = \arg\min_{\lambda}~\text{divergence}(p(z|x),q(z;\lambda))
After convergence, q(z;\lambda^*) can be used in place of p(z|x).
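For example, one simple and common choice (an illustration; the method does not mandate any particular family) is a Gaussian:
q(z;\lambda)=\mathcal N(z;\mu,\sigma^2),\qquad \lambda=(\mu,\sigma)
so that "optimizing over distributions" reduces to optimizing two real parameters.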

Derivation

\begin{aligned} \log p(x) &= \log p(x,z)-\log p(z|x) \\ &=\log\frac{p(x,z)}{q(z;\lambda)}-\log\frac{p(z|x)}{q(z;\lambda)} \end{aligned}
Taking expectations of both sides with respect to q(z;\lambda) (the left side is unaffected, since \log p(x) does not depend on z) gives
\begin{aligned} \log p(x) &= \mathbb E_{q(z;\lambda)}\log\frac{p(x,z)}{q(z;\lambda)}-\mathbb E_{q(z;\lambda)}\log\frac{p(z|x)}{q(z;\lambda)} \\ &=\mathbb E_{q(z;\lambda)}\log\frac{p(x,z)}{q(z;\lambda)}+KL(q(z;\lambda)\|p(z|x)) \end{aligned}
Our goal is to make q(z;\lambda) close to p(z|x), i.e., to \min_\lambda KL(q(z;\lambda)\|p(z|x)).

The term KL(q(z;\lambda)\|p(z|x)) contains p(z|x), so it is very hard to evaluate directly. But viewed as a function of \lambda, \log p(x) is a constant, so by the identity above, \min_\lambda KL(q(z;\lambda)\|p(z|x)) is equivalent to \max_\lambda \mathbb E_{q(z;\lambda)}\log\frac{p(x,z)}{q(z;\lambda)}.

The quantity \mathbb E_{q(z;\lambda)}[\log p(x,z)-\log q(z;\lambda)] is called the Evidence Lower Bound (ELBO).

The goal of variational inference thus becomes \max_\lambda \mathbb E_{q(z;\lambda)}[\log p(x,z)-\log q(z;\lambda)].

Why is it called the ELBO?
The marginal p(x) is usually called the evidence, and since KL(q\|p)\ge 0, we have \log p(x)\ge \mathbb E_{q(z;\lambda)}[\log p(x,z)-\log q(z;\lambda)]: the ELBO is a lower bound on the log evidence, hence the name.
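Because the toy model used earlier is conjugate, \log p(x)=\log \mathcal N(x;0,2) is known exactly, so the bound can be checked numerically. A minimal sketch (same toy model, my own choice):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model as before: z ~ N(0, 1), x | z ~ N(z, 1); exact posterior
# p(z|x) = N(x/2, 1/2) and exact evidence log p(x) = log N(x; 0, 2).
def log_joint(x, z):
    return -0.5 * (z**2 + (x - z)**2) - np.log(2 * np.pi)

def log_q(z, m, s):  # log N(z; m, s^2)
    return -0.5 * ((z - m) / s)**2 - np.log(s) - 0.5 * np.log(2 * np.pi)

def elbo(x, m, s, n=200_000):
    z = m + s * rng.standard_normal(n)           # Monte Carlo samples from q
    return np.mean(log_joint(x, z) - log_q(z, m, s))

x = 1.3
log_evidence = -0.25 * x**2 - 0.5 * np.log(4 * np.pi)
for m, s in [(0.0, 1.0), (0.65, 0.3), (0.65, np.sqrt(0.5))]:
    print(f"ELBO({m}, {s:.3f}) = {elbo(x, m, s):.4f}  <=  {log_evidence:.4f}")
# The bound is tight exactly when q equals the posterior N(x/2, 1/2).
```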

ELBO

Let us take a closer look at the ELBO:
\begin{aligned} ELBO(\lambda) &= \mathbb E_{q(z;\lambda)}[\log p(x,z)-\log q(z;\lambda)] \\ &= \mathbb E_{q(z;\lambda)}\log p(x,z) -\mathbb E_{q(z;\lambda)}\log q(z;\lambda)\\ &= \mathbb E_{q(z;\lambda)}\log p(x,z) + H(q) \end{aligned}
The first term is an energy: it encourages q to put its probability mass where the model p(\mathbf{x}, \mathbf{z}) assigns high probability. The entropy term encourages q to spread its mass out, avoiding concentration at a single point.
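Continuing the same toy model (again an illustration of my own), the two terms can be estimated separately; for a Gaussian q = \mathcal N(m,s^2) the entropy is even available in closed form:

```python
import numpy as np

rng = np.random.default_rng(2)

def log_joint(x, z):  # same toy model: z ~ N(0, 1), x | z ~ N(z, 1)
    return -0.5 * (z**2 + (x - z)**2) - np.log(2 * np.pi)

x, m, s = 1.3, 0.65, 0.7
z = m + s * rng.standard_normal(100_000)             # samples from q
energy = np.mean(log_joint(x, z))                    # pulls q toward high p(x, z)
entropy = 0.5 * np.log(2 * np.pi * np.e * s**2)      # keeps q from collapsing
print("energy =", energy, " entropy =", entropy, " ELBO ~", energy + entropy)
```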

q(Z)

Suppose Z consists of K independent parts (K components; of course, the i-th component may itself be a high-dimensional vector). We assume:
q(Z;\lambda) = \prod_{k=1}^{K}q_k(Z_k;\lambda_k)
This is called the mean field approximation. (For more on the mean field approximation, see https://metacademy.org/graphs/concepts/mean_field.)
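One computational consequence of the factorization (the factor families below are arbitrary illustrative choices): the joint variational log-density is just a sum of per-factor log-densities, each cheap to evaluate on its own:

```python
from scipy.stats import norm, gamma

# Mean-field q(Z; lambda) = q_1(Z_1; mu, sigma) * q_2(Z_2; a, b), with
# q_1 Normal and q_2 Gamma -- arbitrary choices just to show the structure.
def log_q(z1, z2, lam):
    mu, sigma, a, b = lam
    return norm.logpdf(z1, loc=mu, scale=sigma) + gamma.logpdf(z2, a, scale=1.0 / b)

print(log_q(0.3, 1.7, (0.0, 1.0, 2.0, 1.0)))
```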

The ELBO then becomes
\begin{aligned} ELBO(\lambda) &= \mathbb E_{q(Z;\lambda)}\log p(X,Z) -\mathbb E_{q(Z;\lambda)}\log q(Z;\lambda) \\ &= \int q(Z;\lambda)\log p(X,Z)dZ-\int q(Z;\lambda)\log q(Z;\lambda)dZ\\ &=\int \Big[\prod_{k=1}^{K}q_k(Z_k;\lambda_k)\Big] \log p(X,Z)dZ-\int \Big[\prod_{k=1}^{K}q_k(Z_k;\lambda_k)\Big] \log q(Z;\lambda)dZ \end{aligned}
The first term is the energy; the second term is H(q).

energy

Notation:

Z = \{Z_j,\overline Z_j \}, \overline Z_j=Z\backslash Z_j
\lambda=\{\lambda_j, \overline\lambda_j\}, \overline \lambda_j=\lambda\backslash\lambda_j

First, the energy term:
\begin{aligned} &\int \Big[\prod_{k=1}^{K}q_k(Z_k;\lambda_k)\Big] \log p(X,Z)dZ \\ =\ &\int_{Z_j}q_j(Z_j;\lambda_j)\int_{\overline Z_j}\Big[\prod_{k=1,k\neq j}^K q_k(Z_k;\lambda_k)\Big]\log p(X,Z)d\overline Z_j\,dZ_j \\ =\ &\int_{Z_j}q_j(Z_j;\lambda_j)\Big[\mathbb E_{q(\overline Z_j;\overline \lambda_j)}\log p(X,Z)\Big]dZ_j\\ =\ &\int_{Z_j}q_j(Z_j;\lambda_j)\Big\{\log \exp\Big[\mathbb E_{q(\overline Z_j;\overline \lambda_j)}\log p(X,Z)\Big]\Big\}dZ_j\\ =\ &\int_{Z_j}q_j(Z_j;\lambda_j)\Big[\log q_j^* (Z_j;\lambda_j)+\log C\Big]dZ_j \end{aligned}
where q_j^* (Z_j;\lambda_j)=\frac{1}{C}\exp\big[\mathbb E_{q(\overline Z_j;\overline \lambda_j)}\log p(X,Z)\big], and the constant C normalizes q_j^* so that it is a proper distribution. Note that C depends on the parameters \overline \lambda_j but not on the variable Z_j!

H(q)

Next, the second term:
\begin{aligned} &\int \Big[\prod_{k=1}^{K}q_k(Z_k;\lambda_k)\Big] \log q(Z;\lambda)dZ \\ =\ &\int \Big[\prod_{k=1}^{K}q_k(Z_k;\lambda_k)\Big] \sum_{j=1}^K\log q_j(Z_j;\lambda_j)dZ \\ =\ &\sum_j\int \Big[\prod_{k=1}^{K}q_k(Z_k;\lambda_k)\Big] \log q_j(Z_j;\lambda_j)dZ\\ =\ &\sum_j\int_{Z_j} q_j(Z_j;\lambda_j)\log q_j(Z_j;\lambda_j)dZ_j\int \Big[\prod_{k=1,k\neq j}^{K}q_k(Z_k;\lambda_k)\Big]d\overline Z_j\\ =\ &\sum_j\int_{Z_j} q_j(Z_j;\lambda_j)\log q_j(Z_j;\lambda_j)dZ_j \end{aligned}
(the inner integral over \overline Z_j equals 1, since each factor is a normalized distribution)

Back to the ELBO

After the manipulations above, and singling out the factor q_i, the ELBO becomes
\begin{aligned} ELBO &= \int_{Z_i}q_i(Z_i;\lambda_i)\log q_i^* (Z_i;\lambda_i)dZ_i-\sum_j\int_{Z_j} q_j(Z_j;\lambda_j)\log q_j(Z_j;\lambda_j)dZ_j+\log C\\ &=\Big\{\int_{Z_i}q_i(Z_i;\lambda_i)\log q_i^* (Z_i;\lambda_i)dZ_i-\int_{Z_i} q_i(Z_i;\lambda_i)\log q_i(Z_i;\lambda_i)dZ_i\Big\} +H(q(\overline Z_i;\overline \lambda_i))+\log C \end{aligned}
Look at the term in braces:
\int_{Z_i}q_i(Z_i;\lambda_i)\log q_i^* (Z_i;\lambda_i)dZ_i-\int_{Z_i} q_i(Z_i;\lambda_i)\log q_i(Z_i;\lambda_i)dZ_i = -KL(q_i(Z_i;\lambda_i)\|q_i^* (Z_i;\lambda_i))
So the ELBO can be rewritten as:
ELBO=-KL(q_i(Z_i;\lambda_i)\|q_i^* (Z_i;\lambda_i))+H(q(\overline Z_i;\overline \lambda_i))+\log C
We want to maximize the ELBO; how should we update q_i(Z_i;\lambda_i)?

From this expression it is clear that when q_i(Z_i;\lambda_i)=q_i^* (Z_i;\lambda_i), the KL term vanishes, and the ELBO (as a function of q_i, with the other factors held fixed) attains its maximum.
So the parameter update strategy becomes a coordinate-wise sweep (a concrete sketch follows below):
\begin{aligned} &q_1(Z_1;\lambda_1)=q_1^* (Z_1;\lambda_1)\\ &q_2(Z_2;\lambda_2)=q_2^* (Z_2;\lambda_2)\\ &q_3(Z_3;\lambda_3)=q_3^* (Z_3;\lambda_3)\\ &... \end{aligned}
Regarding q_i^* (Z_i;\lambda_i):
\begin{aligned} q_i(Z_i;\lambda_i)&=q_i^* (Z_i;\lambda_i)\\ &=\frac{1}{C}\exp\big[\mathbb E_{q(\overline Z_i;\overline \lambda_i)}\log p(X,Z)\big]\\ &=\frac{1}{C}\exp\big[\mathbb E_{q(\overline Z_i;\overline \lambda_i)}\log p(X,Z_i,\overline Z_i)\big] \end{aligned}
Here q_i is the node being updated and X is the observed data. Because of the Markov blanket (introduced below), the update formula becomes:
\log q_i(Z_i;\lambda_i)=\int q(mb(Z_i))\log p(Z_i,mb(Z_i),X)\,d\,mb(Z_i)
Terms unrelated to Z_i are integrated away (absorbed into the normalizing constant), which is why the update can be written purely in terms of the Markov blanket.
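To make the update rule concrete we need an actual model, which the note does not specify; the following is a minimal CAVI sketch on the classic Normal-Gamma example (my own choice of model; see e.g. Bishop, PRML §10.1.3). Each sweep sets a factor q_j to the corresponding q_j^*, which here is available in closed form:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative model (not from the note):
#   x_i | mu, tau ~ N(mu, 1/tau)
#   mu | tau      ~ N(mu0, 1/(lam0 * tau))
#   tau           ~ Gamma(a0, b0)
# Mean-field assumption: q(mu, tau) = q(mu) * q(tau).
x = rng.normal(2.0, 1.5, size=500)          # synthetic data, true mu=2, sd=1.5
N, xbar = x.size, x.mean()
mu0, lam0, a0, b0 = 0.0, 1.0, 1.0, 1.0      # prior hyperparameters

E_tau = a0 / b0                              # initial guess for E_q[tau]
for _ in range(50):
    # update q(mu) = N(mu_N, 1/lam_N): exp of E_{q(tau)}[log p(X, mu, tau)]
    mu_N = (lam0 * mu0 + N * xbar) / (lam0 + N)
    lam_N = (lam0 + N) * E_tau
    # update q(tau) = Gamma(a_N, b_N): exp of E_{q(mu)}[log p(X, mu, tau)]
    a_N = a0 + (N + 1) / 2
    b_N = b0 + 0.5 * (np.sum((x - mu_N)**2) + N / lam_N
                      + lam0 * ((mu_N - mu0)**2 + 1.0 / lam_N))
    E_tau = a_N / b_N

print("E[mu] ~", mu_N, "  E[tau] ~", E_tau, "  (true tau =", 1 / 1.5**2, ")")
```

Each update uses only the current expectations under the other factor, which is exactly the q_i^* = \frac{1}{C}\exp[\mathbb E_{q(\overline Z_i;\overline \lambda_i)}\log p(X,Z)] recipe above.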

Markov Blanket

In machine learning, the Markov blanket for a node A in a Bayesian network is the set of nodes mb(A) composed of A's parents, its children, and its children's other parents. In a Markov random field, the Markov blanket of a node is its set of neighboring nodes.
Every set of nodes in the network is conditionally independent of A when conditioned on the set mb(A), that is, when conditioned on the Markov blanket of the node A . The probability has the Markov property; formally, for distinct nodes A and B:
Pr(A|mb(A),B)=Pr(A|mb(A))
The Markov blanket of a node contains all the variables that shield the node from the rest of the network. This means that the Markov blanket of a node is the only knowledge needed to predict the behavior of that node.
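A tiny worked example (my own, for concreteness): in the chain A \to B \to C \to D, node B has parent A and child C, and C has no other parents, so mb(B)=\{A,C\}; conditioning on \{A,C\} makes B independent of the remaining node, i.e. Pr(B|A,C,D)=Pr(B|A,C).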

References

https://en.wikipedia.org/wiki/Markov_blanket
http://edwardlib.org/tutorials/inference
http://edwardlib.org/tutorials/variational-inference
