While experimenting with eager execution, I suddenly ran into a case where tape.gradient() returned None no matter what I tried.
The code at the time looked like this:
import tensorflow as tf

def total_loss(pred, images, labels, bboxes, landmarks):
    """
    Returns
    --------------------
    total_loss (weighted sum of cls_loss, bbox_loss and landmark_loss)
    """
    c_loss = cls_loss(pred[0], labels)
    b_loss = bbox_loss(pred[1], bboxes, labels)
    l_loss = landmark_loss(pred[2], landmarks, labels)
    return c_loss + 0.5 * b_loss + 0.5 * l_loss  # , c_loss, b_loss, l_loss

def grad(model, images, labels, bboxes, landmarks):
    pred = model(images)
    with tf.GradientTape() as tape:
        loss_value = total_loss(pred, images, labels, bboxes, landmarks)
    return loss_value, tape.gradient(loss_value, model.trainable_variables)
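In hindsight, the tape itself could have told me what was wrong. The diagnostic below is my own addition, not part of the original post; it reuses pred and total_loss from the buggy version above:

with tf.GradientTape() as tape:
    loss_value = total_loss(pred, images, labels, bboxes, landmarks)
# The tape watched no variables at all, because model(images) ran outside it:
print(tape.watched_variables())  # ()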
After many attempts, I finally found that changing the code to the following produces gradients:
def total_loss(model, images, labels, bboxes, landmarks):
    """
    Returns
    --------------------
    total_loss (weighted sum of cls_loss, bbox_loss and landmark_loss)
    """
    pred = model(images)
    c_loss = cls_loss(pred[0], labels)
    b_loss = bbox_loss(pred[1], bboxes, labels)
    l_loss = landmark_loss(pred[2], landmarks, labels)
    return c_loss + 0.5 * b_loss + 0.5 * l_loss  # , c_loss, b_loss, l_loss

def grad(model, images, labels, bboxes, landmarks):
    with tf.GradientTape() as tape:
        # must execute model(x) in the context of tf.GradientTape()
        loss_value = total_loss(model, images, labels, bboxes, landmarks)
    return loss_value, tape.gradient(loss_value, model.trainable_variables)
Do you see the only difference? That is the root cause: the model's forward pass must be executed inside the tf.GradientTape() context. The tape only records operations that run while it is active; when model(images) runs outside of it, the tape never sees the path from the trainable variables to the loss, so tape.gradient() has nothing to differentiate through and returns None.
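To see the behaviour in isolation, here is a minimal, self-contained repro; the tiny Dense model and the mean-squared-error loss are stand-ins I chose, not the original network:

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
x = tf.random.normal([4, 3])
y = tf.random.normal([4, 1])

# Forward pass outside the tape: nothing connecting the loss to the
# model's variables gets recorded.
pred = model(x)
with tf.GradientTape() as tape:
    loss = tf.reduce_mean(tf.square(pred - y))
print(tape.gradient(loss, model.trainable_variables))  # [None, None]

# Forward pass inside the tape: the ops are recorded and gradients flow.
with tf.GradientTape() as tape:
    pred = model(x)
    loss = tf.reduce_mean(tf.square(pred - y))
print(tape.gradient(loss, model.trainable_variables))  # actual tensors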
There is a question on StackOverflow about this exact problem, with a good explanation.
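For completeness, a sketch of how the fixed grad() plugs into a training step; the Adam optimizer and the batch variables (images, labels, bboxes, landmarks) are assumptions of mine, not part of the original code:

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)  # assumed optimizer choice
loss_value, grads = grad(model, images, labels, bboxes, landmarks)
optimizer.apply_gradients(zip(grads, model.trainable_variables))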