TensorBoard operates by reading TensorFlow event files, which contain the summary data that your code generates as it runs. The rest of this article introduces TensorBoard in detail.
1. Lifecycle
First, create the TensorFlow graph that you'd like to collect summary data from, and decide which nodes you want to annotate with summary operations. A summary is itself an operation: a data-collection op.
For example, suppose you are training a convolutional neural network on the MNIST dataset. You'd like to record how the learning rate varies as training progresses, and how the objective function changes. Collect this data by attaching tf.summary.scalar ops to the nodes that output these values, then give each scalar summary a meaningful tag, such as 'learning rate' or 'loss function'.
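A minimal sketch of those two scalar summaries; the placeholder tensors below are illustrative stand-ins for whatever nodes produce these values in your own graph (note that TF 1.x replaces spaces in tag names with underscores, so underscores are used directly here):

import tensorflow as tf

# Stand-ins for the real learning-rate and loss nodes in your graph.
learning_rate = tf.placeholder(tf.float32, name='lr')
loss = tf.placeholder(tf.float32, name='loss')

# Attach scalar summaries under meaningful tags.
tf.summary.scalar('learning_rate', learning_rate)
tf.summary.scalar('loss_function', loss)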
Perhaps you'd also like to visualize the activations coming out of a particular layer, or the distribution of gradients or weights. Collect this data by attaching tf.summary.histogram ops to those tensors. For a deeper look at all the available summary ops, see the documentation on summary operations.
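For instance, to record the distribution of a layer's weights and activations (the variables here are toy stand-ins, not part of the tutorial's model):

import tensorflow as tf

# Toy stand-ins for a real layer's weights and activations.
weights = tf.Variable(tf.truncated_normal([784, 500], stddev=0.1))
activations = tf.nn.relu(tf.matmul(tf.ones([1, 784]), weights))

# Histogram summaries record a whole distribution per step,
# not just a single scalar.
tf.summary.histogram('layer1/weights', weights)
tf.summary.histogram('layer1/activations', activations)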
Operations in TensorFlow don't do anything until you run them, or run an op that depends on their output. The summary nodes we just created are peripheral to the graph: none of the ops you are currently running depend on them. So, to generate summaries, we would have to run every summary node individually, which is tedious. Happily, there's a shortcut: tf.summary.merge_all combines them into a single op. Running that merged node generates a serialized Summary protobuf containing this step's summary data, no more and no less. To write the data to a file, pass the protobuf to a tf.summary.FileWriter via writer.add_summary; each call writes out one batch of summary data.
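A minimal, self-contained sketch of that flow (the log directory /tmp/demo_logs is just an illustrative path):

import tensorflow as tf

# Any summary op will do for demonstration purposes.
tf.summary.scalar('some_value', tf.constant(3.0))

# One merged node replaces running every summary op individually.
merged = tf.summary.merge_all()
writer = tf.summary.FileWriter('/tmp/demo_logs')

with tf.Session() as sess:
    summary = sess.run(merged)      # one serialized Summary protobuf
    writer.add_summary(summary, 0)  # second argument is the global step
    writer.close()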
FileWriter takes a logdir in its constructor, and this parameter is quite important: it is the directory where all of the events will be written out. FileWriter can optionally take a Graph as well. If it receives one, TensorBoard will visualize your whole graph, along with tensor shape information.
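A short sketch (the logdir below matches the path used later in this article, but any directory works):

import tensorflow as tf

with tf.Session() as sess:
    # logdir is where every event file lands; passing sess.graph as well
    # lets TensorBoard draw the graph itself.
    writer = tf.summary.FileWriter('/tmp/tensorflow/mnist', sess.graph)
    writer.close()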
Now that you've modified your graph and have a FileWriter, you're ready to run your network. If you wanted, you could run the merged summary op every single step and record every step's data, but that's likely more data than you need; it's better to run the summary op every n steps instead. The code below is a simple adaptation of the simple MNIST tutorial, in which we have added some summary ops and run them every ten steps. If you run it and then launch tensorboard --logdir=/tmp/tensorflow/mnist, you can visualize the statistics. The source code is here.
def variable_summaries(var):
  """Attach a lot of summaries to a Tensor (for TensorBoard visualization)."""
  with tf.name_scope('summaries'):
    mean = tf.reduce_mean(var)
    tf.summary.scalar('mean', mean)
    with tf.name_scope('stddev'):
      stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
    tf.summary.scalar('stddev', stddev)
    tf.summary.scalar('max', tf.reduce_max(var))
    tf.summary.scalar('min', tf.reduce_min(var))
    tf.summary.histogram('histogram', var)

def nn_layer(input_tensor, input_dim, output_dim, layer_name, act=tf.nn.relu):
  """Reusable code for making a simple neural net layer.

  It does a matrix multiply, bias add, and then uses relu to nonlinearize.
  It also sets up name scoping so that the resultant graph is easy to read,
  and adds a number of summary ops.
  """
  # Adding a name scope ensures logical grouping of the layers in the graph.
  with tf.name_scope(layer_name):
    # This Variable will hold the state of the weights for the layer
    with tf.name_scope('weights'):
      weights = weight_variable([input_dim, output_dim])
      variable_summaries(weights)
    with tf.name_scope('biases'):
      biases = bias_variable([output_dim])
      variable_summaries(biases)
    with tf.name_scope('Wx_plus_b'):
      preactivate = tf.matmul(input_tensor, weights) + biases
      tf.summary.histogram('pre_activations', preactivate)
    activations = act(preactivate, name='activation')
    tf.summary.histogram('activations', activations)
    return activations

hidden1 = nn_layer(x, 784, 500, 'layer1')

with tf.name_scope('dropout'):
  keep_prob = tf.placeholder(tf.float32)
  tf.summary.scalar('dropout_keep_probability', keep_prob)
  dropped = tf.nn.dropout(hidden1, keep_prob)

# Do not apply softmax activation yet, see below.
y = nn_layer(dropped, 500, 10, 'layer2', act=tf.identity)

with tf.name_scope('cross_entropy'):
  # The raw formulation of cross-entropy,
  #
  #   tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(tf.softmax(y)),
  #                                 reduction_indices=[1]))
  #
  # can be numerically unstable.
  #
  # So here we use tf.losses.sparse_softmax_cross_entropy on the
  # raw logit outputs of the nn_layer above.
  with tf.name_scope('total'):
    cross_entropy = tf.losses.sparse_softmax_cross_entropy(
        labels=y_, logits=y)
tf.summary.scalar('cross_entropy', cross_entropy)

with tf.name_scope('train'):
  train_step = tf.train.AdamOptimizer(FLAGS.learning_rate).minimize(
      cross_entropy)

with tf.name_scope('accuracy'):
  with tf.name_scope('correct_prediction'):
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
  with tf.name_scope('accuracy'):
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
tf.summary.scalar('accuracy', accuracy)

# Merge all the summaries and write them out to /tmp/mnist_logs (by default)
merged = tf.summary.merge_all()
train_writer = tf.summary.FileWriter(FLAGS.summaries_dir + '/train',
                                     sess.graph)
test_writer = tf.summary.FileWriter(FLAGS.summaries_dir + '/test')
tf.global_variables_initializer().run()
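The training loop itself is elided above; roughly, following the tutorial, it runs the merged op on test data every tenth step and on training data otherwise (feed_dict(train) is the tutorial's helper that builds a feed dict from MNIST batches):

for i in range(FLAGS.max_steps):
  if i % 10 == 0:  # Record summaries and test-set accuracy.
    summary, acc = sess.run([merged, accuracy], feed_dict=feed_dict(False))
    test_writer.add_summary(summary, i)
  else:  # Record train-set summaries, and train.
    summary, _ = sess.run([merged, train_step], feed_dict=feed_dict(True))
    train_writer.add_summary(summary, i)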
Not much remains to cover:
1. name_scope groups variables into categories, which makes them much easier to browse in TensorBoard.
2. Sessions can be saved and restored; see the sketch after this list.
3. meta_data captures runtime statistics, such as memory usage; also shown in the sketch after this list.
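A minimal sketch of points 2 and 3, assuming illustrative names (w, /tmp/model.ckpt, /tmp/demo_logs) that are not part of the tutorial: tf.train.Saver handles saving and restoring, and a RunMetadata object collects runtime statistics that the FileWriter can record alongside the summaries.

import tensorflow as tf

w = tf.Variable(tf.zeros([10]), name='w')  # something to save
tf.summary.histogram('w', w)
merged = tf.summary.merge_all()
saver = tf.train.Saver()
writer = tf.summary.FileWriter('/tmp/demo_logs')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, '/tmp/model.ckpt')      # save the session's variables...
    saver.restore(sess, '/tmp/model.ckpt')   # ...and restore them later

    # Collect runtime statistics (memory usage, compute time) for one step.
    run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
    run_metadata = tf.RunMetadata()
    summary = sess.run(merged, options=run_options,
                       run_metadata=run_metadata)
    writer.add_run_metadata(run_metadata, 'step0')
    writer.add_summary(summary, 0)
    writer.close()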