The fifth week is already coming to an end. The task scheduled for this week was compressing and deploying the style transfer model. For deployment the plan is to use a library such as TensorFlow Lite, but my Android studies have fallen somewhat behind, so the deployment part is postponed until later; for now I will complete the model compression task.
At first I had no idea how to approach model compression. A web search turns up many methods, such as parameter pruning and sharing, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation, but knowing only the concepts gives no place to start. So I went and carefully read the style transfer survey paper, Neural Style Transfer: A Review. While doing so, I noticed that its author Yongcheng Jing mentioned in his Zhihu column that, with help from 黃真川 (Huang Zhenchuan) of the Taobao AI Team, he had compressed a TF model down to 0.99 MB. I asked in the comments how the model was optimized and compressed, and Yongcheng Jing's answer was: use smaller kernels.
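To see why smaller kernels help so much: a convolution layer's weight count is k² · C_in · C_out, so shrinking k from 3 to 1 cuts that layer's parameters by a factor of 9. A quick sanity check:

```python
def conv_params(k, c_in, c_out):
    """Weight count of a k x k convolution layer (bias ignored)."""
    return k * k * c_in * c_out

# One conv inside a residual block: 128 -> 128 channels
print(conv_params(3, 128, 128))  # 147456
print(conv_params(1, 128, 128))  # 16384, a 9x reduction
```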
????????由于沒有找到他說的具體模型架構(gòu),只能自己動(dòng)手調(diào)整。如下是圖像生成網(wǎng)絡(luò)的偽代碼,由3層conv、5層residual block以及3層deconv組成,其中參數(shù)主要集中于residual block中。
def net(image, training):
    # Encoder: conv2d(input, in_channels, out_channels, kernel_size, stride)
    conv1 = relu(instance_norm(conv2d(image, 3, 32, 9, 1)))
    conv2 = relu(instance_norm(conv2d(conv1, 32, 64, 3, 2)))
    conv3 = relu(instance_norm(conv2d(conv2, 64, 128, 3, 2)))
    # Transformer: residual(input, channels, kernel_size, stride)
    res1 = residual(conv3, 128, 3, 1)
    res2 = residual(res1, 128, 3, 1)
    res3 = residual(res2, 128, 3, 1)
    res4 = residual(res3, 128, 3, 1)
    res5 = residual(res4, 128, 3, 1)
    # Decoder: resize_conv2d upsamples then convolves instead of using a
    # transposed convolution
    deconv1 = relu(instance_norm(resize_conv2d(res5, 128, 64, 3, 2, training)))
    deconv2 = relu(instance_norm(resize_conv2d(deconv1, 64, 32, 3, 2, training)))
    # tanh maps the output back to a bounded pixel range
    deconv3 = tf.nn.tanh(instance_norm(conv2d(deconv2, 32, 3, 9, 1)))
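The `residual` helper is not shown above. Assuming the standard residual block from the Johnson et al. fast style transfer design (which this network otherwise follows), it would look roughly like this, where `filters` and `kernel` are the 128 and 3 passed in, and the two `conv2d` calls match the two `conv`/`conv_1` weight tensors per block listed below:

```
def residual(x, filters, kernel, stride):
    conv1 = relu(instance_norm(conv2d(x, filters, filters, kernel, stride)))
    conv2 = instance_norm(conv2d(conv1, filters, filters, kernel, stride))
    return x + conv2  # identity shortcut
```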
Listing the detailed information for each weight tensor:
<tf.Variable 'conv1/conv/weight:0' shape=(9, 9, 3, 32) dtype=float32_ref>
<tf.Variable 'conv2/conv/weight:0' shape=(3, 3, 32, 64) dtype=float32_ref>
<tf.Variable 'conv3/conv/weight:0' shape=(3, 3, 64, 128) dtype=float32_ref>
<tf.Variable 'res1/residual/conv/weight:0' shape=(3, 3, 128, 128) dtype=float32_ref>
<tf.Variable 'res1/residual/conv_1/weight:0' shape=(3, 3, 128, 128) dtype=float32_ref>
<tf.Variable 'res2/residual/conv/weight:0' shape=(3, 3, 128, 128) dtype=float32_ref>
<tf.Variable 'res2/residual/conv_1/weight:0' shape=(3, 3, 128, 128) dtype=float32_ref>
<tf.Variable 'res3/residual/conv/weight:0' shape=(3, 3, 128, 128) dtype=float32_ref>
<tf.Variable 'res3/residual/conv_1/weight:0' shape=(3, 3, 128, 128) dtype=float32_ref>
<tf.Variable 'res4/residual/conv/weight:0' shape=(3, 3, 128, 128) dtype=float32_ref>
<tf.Variable 'res4/residual/conv_1/weight:0' shape=(3, 3, 128, 128) dtype=float32_ref>
<tf.Variable 'res5/residual/conv/weight:0' shape=(3, 3, 128, 128) dtype=float32_ref>
<tf.Variable 'res5/residual/conv_1/weight:0' shape=(3, 3, 128, 128) dtype=float32_ref>
<tf.Variable 'deconv1/conv_transpose/conv/weight:0' shape=(3, 3, 128, 64) dtype=float32_ref>
<tf.Variable 'deconv2/conv_transpose/conv/weight:0' shape=(3, 3, 64, 32) dtype=float32_ref>
<tf.Variable 'deconv3/conv/weight:0' shape=(9, 9, 32, 3) dtype=float32_ref>
(Adam optimizer slot variables not shown)
The original saved model consists of 3 files: data (20.1 MB), index (2.5 KB), and meta (5.7 MB). The data file stores all of the network's variable values, while meta stores the graph structure and variable names, so the goal of compression is to shrink data. I therefore changed the kernels of the middle 5 residual blocks from 3x3 to 1x1 and counted each layer's parameters (ignoring the Adam slot variables), giving the following data:
| layer name | params (before) | params (after) | reduction |
|---|---|---|---|
| conv1 | 7776 | 7776 | 0% |
| conv2 | 18432 | 18432 | 0% |
| conv3 | 73728 | 73728 | 0% |
| res1 | 294912 | 32768 | 88.9% |
| res2 | 294912 | 32768 | 88.9% |
| res3 | 294912 | 32768 | 88.9% |
| res4 | 294912 | 32768 | 88.9% |
| res5 | 294912 | 32768 | 88.9% |
| deconv1 | 73728 | 73728 | 0% |
| deconv2 | 18432 | 18432 | 0% |
| deconv3 | 7776 | 7776 | 0% |
| total | 1674432 | 363712 | 78.3% |
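The table can be reproduced directly from the variable shapes listed earlier; a small script summing k · k · C_in · C_out over every weight tensor gives the same totals:

```python
# (kernel, c_in, c_out) for every weight tensor in the network
encoder_decoder = [(9, 3, 32), (3, 32, 64), (3, 64, 128),    # conv1-3
                   (3, 128, 64), (3, 64, 32), (9, 32, 3)]    # deconv1-3
res_before = [(3, 128, 128)] * 10   # 5 residual blocks x 2 convs each
res_after  = [(1, 128, 128)] * 10   # same blocks with 1x1 kernels

count = lambda layers: sum(k * k * ci * co for k, ci, co in layers)

before = count(encoder_decoder) + count(res_before)
after  = count(encoder_decoder) + count(res_after)
print(before, after)                # 1674432 363712
print(f"{1 - after / before:.1%}")  # 78.3%
```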
As the table shows, the model's parameter count dropped by 78.3%, and experimentally the data file shrank from 20.1 MB to 4.4 MB, matching the calculation. I also found that training time decreased: the total loss now converges to around 200,000 in only 1-2 hours.
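The 20.1 MB → 4.4 MB change is also consistent with a back-of-the-envelope estimate. If the data file stores each float32 weight plus Adam's two float32 moment slots per variable (my assumption, since the Adam variables were omitted from the listing above), that is roughly 12 bytes per parameter:

```python
def ckpt_mb(params, bytes_per_param=12):
    # Assumed: float32 weight + two float32 Adam moment slots = 12 bytes/param
    return params * bytes_per_param / 1e6

print(f"{ckpt_mb(1674432):.1f} MB")  # 20.1 MB, matching the original data file
print(f"{ckpt_mb(363712):.1f} MB")   # 4.4 MB, matching the compressed one
```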
