Loss becomes NaN at the first training step in my TensorFlow CNN
The loss becomes NaN at the very first training step in my TensorFlow CNN. My setup:
1. Network:
3 hidden layers (2 convolutional layers + 1 fully connected hidden layer) + a readout layer.
2. The 3 hidden layers:
a) Weights:
    W = tf.Variable(tf.truncated_normal(wt, stddev=0.1), name='weights')
b) Bias:
    b = tf.Variable(tf.fill([W.get_shape().as_list()[-1]], 0.9), name='biases')
c) Activation:
ReLU
d) Dropout:
0.6
**Loss still becomes NaN even with dropout set to 0.0.
3. Readout layer:
softmax
4. Loss function:
    tf.reduce_mean(-tf.reduce_sum(_labels * tf.log(_logits), reduction_indices=[1]))
(A minimal sketch reproducing the NaN with this formula follows after this list.)
5. Optimizer:
tf.train.AdamOptimizer
learning_rate: 0.0005
**Loss still becomes NaN even with learning_rate = 0.
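For reference, here is a minimal sketch that reproduces the NaN with the manual cross-entropy formula from item 4. The logit and label values are made up purely for illustration, not taken from the original model: when the softmax saturates, one probability is exactly 0.0, log(0.0) is -inf, and 0.0 * -inf is nan.

    import tensorflow as tf

    # Hypothetical reproduction with made-up values (TF 1.x style, matching the
    # tf.train.AdamOptimizer / reduction_indices usage in the question).
    logits = tf.constant([[100.0, -100.0]])   # saturated logits for one example
    labels = tf.constant([[1.0, 0.0]])        # one-hot label

    probs = tf.nn.softmax(logits)             # -> [[1.0, 0.0]] in float32
    loss = tf.reduce_mean(
        -tf.reduce_sum(labels * tf.log(probs), reduction_indices=[1]))

    with tf.Session() as sess:
        print(sess.run(probs))  # [[1. 0.]]
        print(sess.run(loss))   # nan, because 0.0 * log(0.0) = 0.0 * -inf = nan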
Since we don't have the entire source code, it is hard to pinpoint the problem. However, you may try using tf.nn.softmax_cross_entropy_with_logits in your cost function. Note that it expects the raw, unscaled logits (it applies the softmax internally), not the softmax output. For example:
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
You can find a complete code example using tf.nn.softmax_cross_entropy_with_logits at https://github.com/nlintz/TensorFlow-Tutorials.
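As a hedged illustration (reusing the same made-up saturated values as the sketch above), the fused op stays finite on exactly the inputs where the hand-written softmax + log blows up, because it is computed directly from the unscaled logits and never evaluates log(0):

    import tensorflow as tf

    logits = tf.constant([[100.0, -100.0]])   # same saturated logits as before
    labels = tf.constant([[1.0, 0.0]])

    # Fused, numerically stable cross-entropy: softmax is applied internally.
    cost = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))

    with tf.Session() as sess:
        print(sess.run(cost))  # ~0.0 -- finite, no nan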
So far I've hit two cases where nan can result:
1) Overflow: for example, calling square on a number and the result is too large to represent.
2) sqrt and log don't take negative inputs, so they would return nan.
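A quick sketch of both failure modes with plain TensorFlow ops (the constants are arbitrary illustration values); note that the overflow case first produces inf, which then turns into nan in later arithmetic such as inf - inf:

    import tensorflow as tf

    with tf.Session() as sess:
        # Case 1: overflow -- squaring a large float32 exceeds the representable range.
        big = tf.square(tf.constant(1e30))
        print(sess.run(big))        # inf
        print(sess.run(big - big))  # nan (inf - inf)

        # Case 2: sqrt / log of a negative input is undefined and returns nan.
        print(sess.run(tf.sqrt(tf.constant(-1.0))))  # nan
        print(sess.run(tf.log(tf.constant(-1.0))))   # nan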