feed_dict issue

So I am new to TensorFlow, and I am trying to understand exactly when to use feed_dict and when it is unnecessary.

However, I am confused by how feed_dict works.

For example: Will 1 be the same as 2 and 3?

1. accuracy, cost = sess.run([accuracy, cost], feed_dict = {X:X_batch, Y:Y_batch})

2. accuracy  = sess.run(accuracy, feed_dict = {X:X_batch, Y: Y_batch})
   cost = sess.run(cost, feed_dict = {X:X_batch, Y:Y_batch})

3. accuracy = sess.run(accuracy, feed_dict = {X:X_batch, Y:Y_batch})
   cost = sess.run(cost)

What I don't know is: if TensorFlow receives the same feed_dict for cost, and the graph run that computes accuracy has already computed cost, does it go through the neural net again to evaluate the value, or does it return the value already computed without going through the net again?

Also, since cost was already computed in the graph, if I want to retrieve the latest computed cost, can I just do it the way 3 does?

Also, from Hvass-Labs/TensorFlow-Tutorials/TensorFlow Tutorial #02 Convolutional Neural Network,

in the function plot_conv_weights(weights, input_channel=0)

weights = sess.run(conv_weigh)

Training the weights requires that we fill the placeholders X and Y with values, but here I see no feed_dict.

So how exactly does feed_dict work?

P.S. I have already asked this question on the TensorFlow GitHub, but they closed it and pointed me to how tf.Session().run() works.

From what I understand from the documentation, a tf.Operation, if fetched, returns None. And a tf.Operation is a node in the TensorFlow graph that performs computation on tensors.
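For example, here is a minimal sketch of what I mean (just a made-up training op, not my actual model): fetching a tf.Operation returns None, while fetching a tensor returns a value.

import tensorflow as tf

x = tf.Variable(2.0)
loss = tf.square(x - 1.0)
# minimize() returns a tf.Operation, not a Tensor
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(train_op))  # prints None: a fetched Operation returns None
    print(sess.run(loss))      # prints the current loss value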

However, I don't think that documentation is related to my question...


To understand feed_dict, you need to understand how TensorFlow works. In TF everything is lazily evaluated.

Here is a simple example:

import tensorflow as tf

# Two placeholders: no values yet, just nodes in the graph.
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
c = tf.add(a, b)  # nothing is computed here, the graph is only defined

sess = tf.Session()
# The actual computation happens here; feed_dict supplies the placeholder values.
print(sess.run(c, feed_dict={a: 1, b: 2}))
sess.close()

From the code you can see that we have two placeholders that are not filled with any values. Our goal is to calculate the sum of a and b. Through feed_dict we fill them with concrete values at run time. The same concept applies to your question.

Regarding your question whether this would be the same:

1. accuracy, cost = sess.run([accuracy, cost], feed_dict = {X:X_batch, Y:Y_batch})

2. accuracy  = sess.run(accuracy, feed_dict = {X:X_batch, Y: Y_batch})
   cost = sess.run(cost, feed_dict = {X:X_batch, Y:Y_batch})

And the answer is yes. Another TensorFlow concept is that graph construction is separated from execution, which means that as long as your computation runs in the same session with the same feed, you will get the same results for both accuracy and cost. But of course option 1) is preferable, since it evaluates the graph only once instead of twice.
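To make the point concrete, here is a minimal sketch (the accuracy and cost tensors below are toy stand-ins for yours, built from the same placeholders): both ways of running produce identical numbers, as long as the feed is the same and nothing stateful such as a training op runs in between.

import tensorflow as tf
import numpy as np

# Toy stand-ins for your accuracy and cost, both depending on X and Y.
X = tf.placeholder(tf.float32, shape=[None, 3])
Y = tf.placeholder(tf.float32, shape=[None, 3])
cost = tf.reduce_mean(tf.square(X - Y))
accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(X, 1), tf.argmax(Y, 1)), tf.float32))

X_batch = np.random.rand(4, 3).astype(np.float32)
Y_batch = np.random.rand(4, 3).astype(np.float32)

with tf.Session() as sess:
    # Variant 1: one run call, both fetches share a single graph evaluation.
    acc1, cost1 = sess.run([accuracy, cost], feed_dict={X: X_batch, Y: Y_batch})
    # Variant 2: two run calls, the subgraph is evaluated twice,
    # but with the same feed the values come out identical.
    acc2 = sess.run(accuracy, feed_dict={X: X_batch, Y: Y_batch})
    cost2 = sess.run(cost, feed_dict={X: X_batch, Y: Y_batch})
    print(acc1 == acc2, cost1 == cost2)  # True True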

For your last question, concerning the weights function: in his notebook you can see that no calculation is involved:

# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = session.run(weights)

He's just plotting the current values of the weight variables, which were updated by the optimize function.
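In other words, a feed_dict is only needed for fetches that depend on placeholders; a variable's current value lives inside the session, so it can be fetched directly. A rough sketch of the difference (the names below are made up, not the tutorial's code):

import tensorflow as tf

X = tf.placeholder(tf.float32, shape=[None, 2])
weights = tf.Variable(tf.random_normal([2, 2]))  # stand-in for the conv weights
logits = tf.matmul(X, weights)                   # depends on the placeholder

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    w = sess.run(weights)  # no feed_dict needed: the variable's value is stored in the session
    # sess.run(logits)     # would fail: logits needs X, so a feed_dict is required
    out = sess.run(logits, feed_dict={X: [[1.0, 2.0]]})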
