Pre-loaded data in TensorFlow
I am using TensorFlow to run some Kaggle competitions. Since I don't have much training data, I am using TF constants to pre-load all of my training and test data into the graph for efficiency. My code looks like this:
```python
... lots of stuff ...

with tf.Graph().as_default():
    train_images = tf.constant(train_data[:36000, 1:], dtype=tf.float32)
    ... more stuff ...
    train_set = tf.train.slice_input_producer([train_images, train_labels])
    images, labels = tf.train.batch(train_set, batch_size=100)

    # this is where my model graph is built
    model = MLP(hidden=[512, 512])
    logits = model._create_model(images)
    loss = model._create_loss_op(logits, labels)
    train = model._create_train_op(loss)
    # I know I am not supposed to call _something() methods
    # from outside of the class. I used to call these internally
    # but refactoring is still in progress
```
Now, when I was using a feed dictionary to feed the data, I only had to build the model once but could easily switch the inputs between, for example, my training data and my validation data (and my test data). With pre-loading, however, it seems that I have to build a separate copy of the graph for every set of inputs I have. Currently I do exactly that, and I use variable reuse to make sure the same weights and biases are used by all the copies. But I cannot help but feel that this is a weird way of doing things. So, for example, here are some bits and pieces of my MLP class and my validation code:
```python
class MLP(object):
    ... lots of stuff happens here ...

    def _create_dense_layer(self, name, inputs, n_in, n_out, reuse=None, activation=True):
        with tf.variable_scope(name, reuse=reuse):
            weights = self._weights([n_in, n_out])
            self.graph.add_to_collection('weights', weights)
            layer = tf.matmul(inputs, weights)
            if self.biases:
                biases = self._biases([n_out])
                layer = layer + biases
            if activation:
                layer = self.activation(layer)
            return layer
```

... and back to the training code ...

```python
valid_images = tf.constant(train_data[36000:, 1:], dtype=tf.float32)
valid_logits = model._create_model(valid_images, reuse=True)
valid_accuracy = model._create_accuracy_op(valid_logits, valid_labels)
```
So, do I really have to create a complete copy of my model for each set of data I want to use it on, or am I missing something in TF and there is an easier way of doing it?