
Purpose: Computes softmax cross entropy between `logits` and `labels`.
`logits` and `labels` must have the same shape, e.g. `[batch_size, num_classes]`, and the same dtype (either `float16`, `float32`, or `float64`).
Backpropagation will happen only into `logits`. To calculate a cross entropy loss that allows backpropagation into both `logits` and `labels`, see
`tf.nn.softmax_cross_entropy_with_logits_v2`.
_sentinel: Used to prevent positional parameters. Internal, do not use.
labels: Each row `labels[i]` must be a valid probability distribution. These are the actual (ground-truth) labels.
logits: Unscaled log probabilities. This is the output of the network's last layer; with a batch, its shape is `[batch_size, num_classes]`; for a single sample it is a vector of length `num_classes`.
dim: The class dimension. Defaults to -1, which is the last dimension.
name: A name for the operation (optional).
Returns: A 1-D `Tensor` of length `batch_size` of the same type as `logits` with the softmax cross entropy loss.
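
As a quick illustration (the tensors and values below are my own, not from the original; this assumes the TF 1.x-style session API described here), a call might look like:

```python
import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1],
                      [0.5, 2.5, 0.3]])   # [batch_size=2, num_classes=3]
labels = tf.constant([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])   # each row is a valid probability distribution

# Returns a 1-D tensor of shape [batch_size]: one cross-entropy value per example.
losses = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)

with tf.Session() as sess:
    print(sess.run(losses))   # ≈ [0.417 0.220]
```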
Step 1: apply a softmax to the network's last-layer output. This step produces the probability that the output belongs to each class; for a single sample, the output is a vector of size `num_classes` (`[Y1, Y2, Y3, ...]`, where Y1, Y2, Y3, ... are the probabilities of belonging to the respective classes). A minimal sketch of this step follows.
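
Here is a minimal NumPy sketch of step 1 (the helper name `softmax` and the max-shift trick are illustrative, not taken from the TF source):

```python
import numpy as np

def softmax(logits):
    # Shift by the max for numerical stability; the result is mathematically unchanged.
    shifted = logits - np.max(logits, axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / np.sum(exp, axis=-1, keepdims=True)

print(softmax(np.array([2.0, 1.0, 0.1])))   # ≈ [0.659 0.242 0.099]
```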
Step 2: compute the cross entropy between the softmax output vector [Y1, Y2, Y3, ...] and the sample's actual label, using the formula:

    H_{y'}(y) = -Σ_i y'_i · log(y_i)

where y_i is the i-th softmax output and y'_i is the i-th component of the actual label.
Clearly, the more accurate the prediction, the smaller this value; finally, taking the mean gives the loss we want.
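
To make the two steps concrete, here is a self-contained check (example numbers are my own) that reproduces the first per-example value from the call shown earlier:

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.1])
label = np.array([1.0, 0.0, 0.0])   # one-hot actual label

# Step 1: softmax of the logits.
y = np.exp(logits) / np.sum(np.exp(logits))

# Step 2: cross entropy H = -sum(label_i * log(y_i)).
h = -np.sum(label * np.log(y))
print(h)   # ≈ 0.417, matching the built-in op's per-example result
```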
Note that this function returns a vector, not a single number. To obtain the total cross entropy, apply a further `tf.reduce_sum`, which sums all elements of the vector; to obtain the loss, apply `tf.reduce_mean`, which averages the vector.
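
Continuing the earlier example (same illustrative tensors, TF 1.x-style session API), the two reductions might be used like this:

```python
import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1],
                      [0.5, 2.5, 0.3]])
labels = tf.constant([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])

per_example = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)

total_cross_entropy = tf.reduce_sum(per_example)   # sum over the batch
loss = tf.reduce_mean(per_example)                 # mean over the batch: the usual training loss

with tf.Session() as sess:
    print(sess.run([total_cross_entropy, loss]))   # ≈ [0.637, 0.319]
```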