
(demo animation: face2face.gif)
This implementation relies on Python 2.7, TensorFlow, OpenCV, and dlib to train a generative adversarial network (GAN) for image synthesis.
face2face is one of the many fun applications of image2image, also known as pix2pix.
For more application examples and the underlying papers, see:
- Image-to-Image Translation with Conditional Adversarial Nets
- Image-to-Image Translation in Tensorflow by Christopher Hesse
- Dat Tran's blog post Face2face

The pipeline consists of four steps:
- Step 1: Prepare the training set with OpenCV and dlib
- Step 2: Train the model with TensorFlow
- Step 3: Export Model & Freeze Model
- Step 4: Run the model
Step 1 Prepare the training set
- Create the original and landmarks folders in the current directory. After running the script below, each folder will contain 400 images with faces.
- Mind the video format. The training data is a video of an Angela Merkel speech (net-disk link). The net-disk video is in MP4 format and must be converted to AVI before the code below will run. You can use another video instead, but the face should be close to the center of the frame, otherwise it may get cropped when the images are resized.
- The facial-landmark model shape_predictor_68_face_landmarks.dat (net-disk link) has to be downloaded so it can be loaded.
# -*- coding: utf-8 -*-
from __future__ import division
import cv2
import dlib
import numpy as np
import os

os.makedirs('original')   # folder for the raw frames captured from the video
os.makedirs('landmarks')  # folder for the images with the facial landmarks drawn on them

DOWNSAMPLE_RATIO = 4  # shrink factor; smaller images speed up face detection and landmark extraction
photo_number = 400    # extract 400 frames containing facial landmarks from the video
video_path = 'angela_merkel_speech.avi'  # path to the training video containing a face

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

def reshape_for_polyline(array):
    return np.array(array, np.int32).reshape((-1, 1, 2))

def prepare_training_data():
    cap = cv2.VideoCapture(video_path)
    count = 0
    while cap.isOpened() and count < photo_number:
        ret, frame = cap.read()  # read one frame from the video
        if not ret:              # stop when the video runs out of frames
            break
        frame_resize = cv2.resize(frame, (0, 0), fx=1 / DOWNSAMPLE_RATIO, fy=1 / DOWNSAMPLE_RATIO)
        gray = cv2.cvtColor(frame_resize, cv2.COLOR_BGR2GRAY)
        faces = detector(gray, 1)  # detect face locations
        black_image = np.zeros(frame.shape, np.uint8)  # black canvas on which the landmarks are drawn
        if len(faces) == 1:
            for face in faces:
                detected_landmarks = predictor(gray, face).parts()  # extract the 68 facial landmarks
                landmarks = [[p.x * DOWNSAMPLE_RATIO, p.y * DOWNSAMPLE_RATIO] for p in detected_landmarks]
                jaw = reshape_for_polyline(landmarks[0:17])
                left_eyebrow = reshape_for_polyline(landmarks[22:27])
                right_eyebrow = reshape_for_polyline(landmarks[17:22])
                nose_bridge = reshape_for_polyline(landmarks[27:31])
                lower_nose = reshape_for_polyline(landmarks[30:35])
                left_eye = reshape_for_polyline(landmarks[42:48])
                right_eye = reshape_for_polyline(landmarks[36:42])
                outer_lip = reshape_for_polyline(landmarks[48:60])
                inner_lip = reshape_for_polyline(landmarks[60:68])
                color = (255, 255, 255)  # draw the landmarks in white
                thickness = 3            # line thickness
                # open polylines: jaw, eyebrows, nose bridge
                cv2.polylines(img=black_image,
                              pts=[jaw, left_eyebrow, right_eyebrow, nose_bridge],
                              isClosed=False,
                              color=color,
                              thickness=thickness)
                # closed polylines: lower nose, eyes, lips
                cv2.polylines(img=black_image,
                              pts=[lower_nose, left_eye, right_eye, outer_lip, inner_lip],
                              isClosed=True,
                              color=color,
                              thickness=thickness)
            # save the frame and its landmark sketch under the same index
            cv2.imwrite("original/{}.png".format(count), frame)
            cv2.imwrite("landmarks/{}.png".format(count), black_image)
            count += 1
    cap.release()

# run the data-preparation function
prepare_training_data()
- Resize the images (make them square) and stitch them together in pairs (for training).
This step involves quite a few functions, mainly TensorFlow routines for reading, saving, cropping, resizing, and concatenating JPEG and PNG images; just follow the steps below. That said, if you are interested in the details of image processing in TensorFlow, I recommend reading the source code, there is a lot to learn from it.
github repo affinelayer/pix2pix-tensorflow
# Clone the repo from Christopher Hesse's pix2pix TensorFlow implementation
git clone https://github.com/affinelayer/pix2pix-tensorflow.git
# Move the original and landmarks folders into the pix2pix-tensorflow folder
mv face2face-demo/landmarks face2face-demo/original pix2pix-tensorflow/photos
# Go into the pix2pix-tensorflow folder
cd pix2pix-tensorflow/
# Resize original images
python tools/process.py \
--input_dir photos/original \
--operation resize \
--output_dir photos/original_resized
# Resize landmark images
python tools/process.py \
--input_dir photos/landmarks \
--operation resize \
--output_dir photos/landmarks_resized
# Combine both resized original and landmark images
python tools/process.py \
--input_dir photos/landmarks_resized \
--b_dir photos/original_resized \
--operation combine \
--output_dir photos/combined
# Split into train/val set
python tools/split.py \
--dir photos/combined
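If you want to sanity-check the combine step, here is a minimal sketch of my own (it assumes photos/combined/train/0.png exists and the default 256x256 resize) that loads one combined image and splits it back into its two halves:

import cv2

pair = cv2.imread('photos/combined/train/0.png')  # one combined training pair (file name is an assumption)
print(pair.shape)            # expected (256, 512, 3): two 256x256 images side by side
a_landmarks = pair[:, :256]  # left half: the landmark sketch (input A)
b_original = pair[:, 256:]   # right half: the original frame (target B)
cv2.imwrite('pair_A.png', a_landmarks)
cv2.imwrite('pair_B.png', b_original)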
After running the commands above, the training data is ready. The whole process.py file basically has the structure shown below. I think it is worth writing down, in case it comes in handy later.
import tensorflow as tf

# create a general-purpose create_op helper
def create_op(func, **placeholders):
    op = func(**placeholders)

    def f(**kwargs):
        feed_dict = {}
        for argname, argvalue in kwargs.items():
            placeholder = placeholders[argname]
            feed_dict[placeholder] = argvalue
        return tf.get_default_session().run(op, feed_dict=feed_dict)

    return f

# create your operation function
encode_jpeg = create_op(
    func=tf.image.encode_jpeg,
    image=tf.placeholder(tf.uint8),
)

# call your operation function
encode_jpeg(image=image)
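For example, here is a usage sketch of my own (img is a stand-in uint8 NumPy array; create_op runs the op against the default session, so the closure must be called inside a with tf.Session() block):

import numpy as np
import tensorflow as tf

img = np.zeros((256, 256, 3), np.uint8)  # stand-in image for illustration
with tf.Session():                       # the with-block installs this session as the default
    jpeg_bytes = encode_jpeg(image=img)  # feeds img into the placeholder and runs encode_jpeg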
Step 2 Train the model
- A quick look at the network structure
My earlier slide deck, 人臉識別原理與pix2pix分享 (net-disk link), covers pix2pix starting from page 23. The first figure shows the theoretical structure, but to speed up training the code implements the network in the second figure.
(figures: the theoretical network structure with the details of each encode/decode block, and the structure actually used with the details of its encode/decode blocks)
- If you are in a hurry, just run the code below to start training. On my NVIDIA Titan X GPU this took 90 minutes.
python pix2pix.py \
--mode train \
--output_dir face2face-model \
--max_epochs 200 \
--input_dir photos/combined/train \
--which_direction AtoB
- If you want to understand the details, read the code below; you do not need to run it to train the model :) Be warned: without some background on convolutional neural networks (CNNs), the code below can make things a bit awkward.
- Defining the convolution
def conv(batch_input, out_channels, stride):
    """Input shape:  [batch, in_height, in_width, in_channels]
    Filter shape: [filter_width, filter_height, in_channels, out_channels]
    Output shape: [batch, out_height, out_width, out_channels]
    A 4x4 filter + padding 1 + the given stride, with VALID convolution;
    e.g. with stride 2: out = (in + 2*1 - 4) / 2 + 1 = in / 2."""
    with tf.variable_scope("conv"):
        in_channels = batch_input.get_shape()[3]  # number of channels of the input images
        # initialize the 4x4 filter with a random_normal_initializer
        filter = tf.get_variable("filter",
                                 [4, 4, in_channels, out_channels],
                                 dtype=tf.float32,
                                 initializer=tf.random_normal_initializer(0, 0.02))
        # pad height and width by 1 pixel on each side
        padded_input = tf.pad(batch_input,
                              [[0, 0], [1, 1], [1, 1], [0, 0]],
                              mode="CONSTANT")
        # 2-D convolution with the stride passed in
        conv = tf.nn.conv2d(padded_input, filter, [1, stride, stride, 1], padding="VALID")
        return conv
- Defining the activation function
We use the leaky ReLU activation; the figure below compares leaky ReLU with ReLU.
- Advantages of ReLU:
a) No zero-gradient problem in the region where the input is greater than 0.
b) Computationally efficient.
c) The loss converges quickly, roughly 6x as fast as with tanh or sigmoid.
- Advantages of leaky ReLU:
a) All the advantages of ReLU.
b) No zero-gradient problem anywhere.
c) Neurons are activated no matter what.

(figure: comparison of leaky ReLU and ReLU)
If you are wondering what tf.identity(x) does here, see: what is tf.identity used for?
def lrelu(x, a):
    with tf.name_scope("lrelu"):
        # leak: a*x/2 - a*abs(x)/2; linear: x/2 + abs(x)/2
        x = tf.identity(x)
        return (0.5 * (1 + a)) * x + (0.5 * (1 - a)) * tf.abs(x)
- Defining batchnorm
def batchnorm(input):
    with tf.variable_scope("batchnorm"):
        input = tf.identity(input)
        # the two trainable batch-norm parameters: offset and scale
        channels = input.get_shape()[3]
        offset = tf.get_variable("offset",
                                 [channels],
                                 dtype=tf.float32,
                                 initializer=tf.zeros_initializer())
        scale = tf.get_variable("scale",
                                [channels], dtype=tf.float32,
                                initializer=tf.random_normal_initializer(1.0, 0.02))
        mean, variance = tf.nn.moments(input, axes=[0, 1, 2], keep_dims=False)
        variance_epsilon = 1e-5
        normalized = tf.nn.batch_normalization(input,
                                               mean, variance,
                                               offset, scale,
                                               variance_epsilon=variance_epsilon)
        return normalized
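To see how conv, lrelu, and batchnorm are combined, here is a sketch of a single encoder layer in the spirit of pix2pix.py (the scope name and the encoder_block helper are my own illustration, not the exact source):

def encoder_block(layer_input, out_channels, scope_name):
    # one generator encoder step: each layer halves the spatial resolution
    with tf.variable_scope(scope_name):       # e.g. "encoder_2"; each layer gets its own scope
        rectified = lrelu(layer_input, 0.2)   # leaky ReLU with negative slope 0.2
        convolved = conv(rectified, out_channels, stride=2)  # 4x4 conv, stride 2: halves H and W
        return batchnorm(convolved)           # per-channel normalization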
Step 3 Export Model & Freeze Model
- Reduce the model. We only need the generator to synthesize images; the discriminator can be dropped to cut down the number of parameters. I won't paste the generator code again here; for details see github repo datitran/face2face-demo/reduce_model.py. The idea is:
- first copy the generator-related parts of pix2pix.py,
- then load the trained model,
- and finally save a new model.
What is worth noting in reduce_model.py: the new generate_output function takes an image as input and generates an image. Every tf.variable_scope('name') in reduce_model.py is identical to the ones in the trained model, so when the checkpoint is loaded its parameters are matched one-to-one to the tf.variable_scope('name') in the new model. Since the new model only keeps the generator-related tf.variable_scope('name'), it has far fewer parameters, which is what reduces the model.
x = tf.placeholder(tf.uint8, shape=(256, 512, 3), name='image_tensor')  # input tensor
y = generate_output(x)  # takes an image, returns the generated image

with tf.Session() as sess:
    # load the trained model (args.input_folder comes from argparse in reduce_model.py)
    saver = tf.train.Saver()
    checkpoint = tf.train.latest_checkpoint(args.input_folder)
    saver.restore(sess, checkpoint)
    # save the new, reduced model
    saver = tf.train.Saver()
    saver.save(sess, './reduced_model')
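For orientation, generate_output has roughly the following shape. This is a sketch under assumptions, not the actual reduce_model.py source: it assumes the left half of the (256, 512, 3) input is the landmark image, as in pix2pix's combined format, and that create_generator is the generator copied from pix2pix.py. The essential points are that the generator lives under the same variable scope name as in pix2pix.py and that the graph ends in a node called generate_output/output, which the freeze step below relies on.

def generate_output(x):
    # sketch: pre/post-processing details are simplified; see reduce_model.py for the real code
    with tf.name_scope('generate_output'):
        landmarks = tf.image.convert_image_dtype(x[:, :256, :], dtype=tf.float32)  # assumed: left half = input A
        batch_input = tf.expand_dims(landmarks * 2 - 1, axis=0)  # scale to [-1, 1], add a batch dimension
        with tf.variable_scope('generator'):                 # must match the scope name in pix2pix.py
            batch_output = create_generator(batch_input, 3)  # the generator copied from pix2pix.py
        deprocessed = (batch_output[0] + 1) / 2              # back from [-1, 1] to [0, 1]
        output_image = tf.image.convert_image_dtype(deprocessed, dtype=tf.uint8)
        return tf.identity(output_image, name='output')      # creates the node 'generate_output/output'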
- Freeze the model: we save it as a single .pb file to make it easy to call.
import tensorflow as tf
from tensorflow.python.framework import graph_util

def freeze_graph(model_folder):
    # locate the checkpoint
    checkpoint = tf.train.get_checkpoint_state(model_folder)
    input_checkpoint = checkpoint.model_checkpoint_path
    output_graph = './frozen_model.pb'
    output_node_names = 'generate_output/output'
    # load the graph
    saver = tf.train.import_meta_graph(input_checkpoint + '.meta',
                                       clear_devices=True)
    # grab the graph definition
    graph = tf.get_default_graph()
    input_graph_def = graph.as_graph_def()
    # open a new session, load the weights, keep only the nodes we need, write the model file
    with tf.Session() as sess:
        saver.restore(sess, input_checkpoint)  # restore the graph's weights
        # built-in TensorFlow helper that turns variables into constants
        output_graph_def = graph_util.convert_variables_to_constants(
            sess,                 # used to fetch the weights
            input_graph_def,      # used to fetch the nodes
            [output_node_names])  # the node names we want to keep
        # write the model to a .pb file
        with tf.gfile.GFile(output_graph, 'wb') as f:
            f.write(output_graph_def.SerializeToString())
        print('%d ops in the final graph.' % len(output_graph_def.node))

freeze_graph('.')  # the reduced model's checkpoint was saved in the current directory (prefix './reduced_model')
Step 4 Run the model
The frozen model is about 200 MB; it was trained on 400 images for 200 epochs.
import cv2
import tensorflow as tf

def load_graph(frozen_graph_filename):
    """Load the frozen model."""
    graph = tf.Graph()
    with graph.as_default():
        od_graph_def = tf.GraphDef()
        with tf.gfile.GFile(frozen_graph_filename, 'rb') as fid:
            serialized_graph = fid.read()
            od_graph_def.ParseFromString(serialized_graph)
            tf.import_graph_def(od_graph_def, name='')
    return graph

frozen_model_file = './frozen_model.pb'  # the .pb file produced by freeze_graph above
graph = load_graph(frozen_model_file)
image_tensor = graph.get_tensor_by_name('image_tensor:0')
output_tensor = graph.get_tensor_by_name('generate_output/output:0')
sess = tf.Session(graph=graph)

# the input must match image_tensor's shape (256, 512, 3): two 256x256 halves side by side,
# with the face close to the center
image = cv2.imread('photos/combined/val/0.png')  # example input pair (file name is an assumption)
generated_image = sess.run(output_tensor,
                           feed_dict={image_tensor: image})
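Putting everything together, the live demo in the gif at the top is essentially the following loop. This is a sketch under assumptions: draw_landmarks is a hypothetical helper wrapping the landmark-drawing code from Step 1, and sess, image_tensor, and output_tensor come from the snippet above; see datitran/face2face-demo for the real webcam script.

import numpy as np

cap = cv2.VideoCapture(0)  # open the webcam
while True:
    ret, frame = cap.read()
    if not ret:
        break
    black_image = draw_landmarks(frame)         # hypothetical helper: Step 1's landmark drawing
    left = cv2.resize(black_image, (256, 256))  # landmark half (input A)
    right = cv2.resize(frame, (256, 256))       # original-frame half
    combined = np.concatenate([left, right], axis=1)  # shape (256, 512, 3), as image_tensor expects
    generated = sess.run(output_tensor, feed_dict={image_tensor: combined})
    cv2.imshow('face2face', generated)
    if cv2.waitKey(1) & 0xFF == ord('q'):       # press q to quit
        break
cap.release()
cv2.destroyAllWindows()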


