Hello, I have a question about TensorFlow. I have some trained LSTM models, and I can access the weights and biases of the synaptic connections, but I can't seem to access the input, new-input, output, and forget gate weights of the LSTM cell. I can get the gate tensors out, but when I try to .eval() them in a Session I get errors. In my network I'm using the BasicLSTMCell class found in tensorflow/python/ops/rnn_cell.py: `class BasicLSTMCell(RNNCell): """Basic LSTM recurrent network cell. The implement…`
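A sketch of one way to get at those weights, assuming a BasicLSTMCell driven by dynamic_rnn and a 1.x API; the toy graph, shapes, and variable-name filtering below are my assumptions, not the poster's model. The gate parameters live in the cell's trainable variables as one concatenated kernel, and evaluating them only works after the variables have been initialized (or restored) in the session.

```python
import numpy as np
import tensorflow as tf

# Toy graph standing in for the trained model: a BasicLSTMCell unrolled by
# dynamic_rnn (shapes are hypothetical). Depending on the 1.x release the cell
# class lives in tf.nn.rnn_cell or tf.contrib.rnn.
num_units = 8
inputs = tf.placeholder(tf.float32, [None, 5, 3])       # (batch, time, features)
cell = tf.nn.rnn_cell.BasicLSTMCell(num_units)
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)

# The gate weights are trainable variables, not per-step gate tensors, so they
# can be evaluated without feeding any inputs.
lstm_vars = [v for v in tf.trainable_variables() if 'lstm' in v.name.lower()]
kernel_var = [v for v in lstm_vars if 'kernel' in v.name or 'Matrix' in v.name][0]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())          # must run before eval()
    kernel = sess.run(kernel_var)                        # concatenated gate matrix
    # In the BasicLSTMCell source the concatenation order is input gate (i),
    # new input (j), forget gate (f), output gate (o), split along axis 1.
    w_i, w_j, w_f, w_o = np.split(kernel, 4, axis=1)
    print(w_i.shape, w_j.shape, w_f.shape, w_o.shape)
```

If .eval() raised a FailedPreconditionError, the usual cause is evaluating before the initializer (or a saver restore) has run in that same session; if it complained about missing feeds, the tensor being evaluated was likely one of the per-timestep gate ops rather than the underlying variables.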
I've modified a TensorFlow example to fit my data, given here: data. But my neural network is not learning at all; I tried different numbers of hidden layers, learning rates, and optimization functions, but it didn't help. My code is given below: from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow as tf from tensorflow.contrib import learn import matplotlib.pyplot as plt from sklearn.pipeline import Pipelin…
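Since the linked data and the rest of the script are not shown, here is only a generic sanity check for a network that "does not learn at all": make sure the loss can be driven down on a tiny batch first. Everything below (shapes, layer sizes, the random stand-in data) is hypothetical.

```python
import numpy as np
import tensorflow as tf

# Hypothetical tiny batch: 16 examples, 10 features, binary labels.
x_small = np.random.rand(16, 10).astype(np.float32)
y_small = np.random.randint(0, 2, size=(16, 1)).astype(np.float32)

x = tf.placeholder(tf.float32, [None, 10])
y = tf.placeholder(tf.float32, [None, 1])
hidden = tf.layers.dense(x, 32, activation=tf.nn.relu)
logits = tf.layers.dense(hidden, 1)                      # raw logits, no sigmoid

loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(500):
        _, l = sess.run([train_op, loss], feed_dict={x: x_small, y: y_small})
        if step % 100 == 0:
            print(step, l)       # should fall steadily on a batch this small
```

If even this kind of overfitting test fails on a slice of the real data, the problem is usually in the data pipeline, the label encoding, or the loss wiring rather than in the number of hidden layers.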
I am trying to write a two-layer neural network to train a class labeler. The input to the network is a list of 150 features for about 1000 examples; all features on all examples have been L2 normalized. I only have two outputs, and they should be disjoint: I am just attempting to predict whether the example is a one or a zero. My code is relatively simple; I feed the input data into a hidden layer, and the hidden layer into the output. Since I really just want to see this working, I am training on the entire dataset at each step. My code is below. Based on the other NN implementations I have referred to…
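For reference, a minimal sketch of the setup described (150 inputs, one hidden layer, two mutually exclusive classes, full-batch training); the layer sizes, initializers, and learning rate are my own choices. One detail worth checking in this situation: the softmax cross-entropy ops expect raw logits, so applying softmax or sigmoid to the output layer before the loss is a common reason such a network appears not to learn.

```python
import tensorflow as tf

n_features, n_hidden, n_classes = 150, 64, 2

x = tf.placeholder(tf.float32, [None, n_features])
y = tf.placeholder(tf.int64, [None])                     # class ids: 0 or 1

w1 = tf.Variable(tf.truncated_normal([n_features, n_hidden], stddev=0.1))
b1 = tf.Variable(tf.zeros([n_hidden]))
hidden = tf.nn.relu(tf.matmul(x, w1) + b1)

w2 = tf.Variable(tf.truncated_normal([n_hidden, n_classes], stddev=0.1))
b2 = tf.Variable(tf.zeros([n_classes]))
logits = tf.matmul(hidden, w2) + b2                      # no softmax here

# sparse_softmax_cross_entropy_with_logits applies the softmax internally,
# so feeding it already-softmaxed outputs flattens the gradients.
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits))
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(logits, 1), y), tf.float32))
```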
I am creating a computational graph in TensorFlow and I want to use pretrained vectors. I have a method that preloads the vectors of all the words in my dataset into a matrix: def preload_vectors(word2vec_path, word2id, vocab_size, emb_dim): if word2vec_path: print('Load word2vec_norm file {}'.format(word2vec_path)) with open(word2vec_path,'r') as f: header=f.readline() print(vocab_size, emb_dim)…
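Without the rest of preload_vectors(), here is one common way to wire such a preloaded matrix into the graph (a sketch under my own assumptions about sizes and variable names): create the embedding variable, assign the NumPy matrix to it through a placeholder, and look tokens up with tf.nn.embedding_lookup.

```python
import numpy as np
import tensorflow as tf

vocab_size, emb_dim = 10000, 300                         # hypothetical sizes
# Random stand-in for the matrix returned by preload_vectors().
pretrained = np.random.rand(vocab_size, emb_dim).astype(np.float32)

# A fixed-shape variable that will hold the pretrained matrix.
embedding = tf.Variable(tf.constant(0.0, shape=[vocab_size, emb_dim]),
                        trainable=False, name="embedding")
embedding_ph = tf.placeholder(tf.float32, [vocab_size, emb_dim])
embedding_init = embedding.assign(embedding_ph)

token_ids = tf.placeholder(tf.int32, [None, None])       # (batch, sequence)
embedded = tf.nn.embedding_lookup(embedding, token_ids)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(embedding_init, feed_dict={embedding_ph: pretrained})
```

Assigning through a placeholder keeps the full matrix out of the serialized GraphDef, which an initializer built from a tf.constant of the real values would not.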
I've built some neural networks with TensorFlow, like basic MLPs and convolutional neural networks. Now I want to move on to recurrent neural networks. However, I'm not experienced in natural language processing, so the TensorFlow NLP tutorials for RNNs are not easy for me to read (and not really interesting to me either). Basically, I want to start off with something simple, not an LSTM. How do I build a simple recurrent neural network, such as an Elman network, in TensorFlow? I could only find GRU or LSTM RNN examples for TensorFlow, mostly for NLP. Does anyone know of a simple recurrent…
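A minimal non-NLP sketch, assuming BasicRNNCell is acceptable as an Elman-style cell (its output is a tanh of a learned combination of the current input and the previous hidden state); the toy shapes and the last-state regression head are my own.

```python
import tensorflow as tf

# Hypothetical shapes for a toy sequence-to-one regression task.
timesteps, features, state_size = 20, 1, 16

inputs = tf.placeholder(tf.float32, [None, timesteps, features])
targets = tf.placeholder(tf.float32, [None, 1])

# BasicRNNCell: an Elman-style recurrence. Older 1.x releases expose it as
# tf.contrib.rnn.BasicRNNCell instead of tf.nn.rnn_cell.BasicRNNCell.
cell = tf.nn.rnn_cell.BasicRNNCell(state_size)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)

# Regress a single value (e.g. the next element of a sine wave) from the
# final hidden state.
prediction = tf.layers.dense(final_state, 1)
loss = tf.reduce_mean(tf.square(prediction - targets))
train_op = tf.train.AdamOptimizer(1e-2).minimize(loss)
```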
I have the following code in TensorFlow: def func(a): b = tf.Variable(10) * a return a with tf.Session() as sess: sess.run(tf.global_variables_initializer()) print(sess.run(func(tf.constant(4)))) It works well. But when I return b instead of a, as follows: def func(a): b = tf.Variable(10) * a return b with tf.Session() as sess: sess.run(tf.global_variables_initializer()) print(sess.run(func(tf.constan…
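Presumably the second variant fails with an uninitialized-variable error: func() creates the tf.Variable only when it is called, which in that snippet happens inside the sess.run(...) argument, after global_variables_initializer() has already been built and run. A sketch of the usual rearrangement, building the graph first:

```python
import tensorflow as tf

def func(a):
    b = tf.Variable(10) * a
    return b

# Build the graph (and therefore the variable) before constructing and
# running the initializer.
out = func(tf.constant(4))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())   # now covers the new variable
    print(sess.run(out))                          # prints 40
```

The first variant only "works" because the returned tensor a is a constant and does not depend on the variable at all.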
I am trying to use make_template() to avoid passing a reuse flag throughout my model, but it seems that make_template() doesn't work correctly when it is used inside a Python class. I've pasted my model code and the error I am getting below. It is a simple MLP trained on the MNIST dataset. Since the code is kind of long, the main part here is the _weights() function. I try to wrap it with make_template() and then use get_variable() inside it to create and reuse the weights throughout the model. _weights() is used by _create_dense_layer() and by _cr…
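Without the full class and traceback it is hard to say exactly where it breaks, but a usual pitfall is wrapping the function with make_template() on every call instead of once per object, which produces a fresh (unshared) set of variables each time. A small sketch of make_template() inside a class; all names here (Encoder, _encode_impl, the 784-wide placeholders) are mine, not the poster's.

```python
import tensorflow as tf

class Encoder(object):
    """Toy example of weight sharing via make_template inside a class."""

    def __init__(self, units=64):
        self.units = units
        # Wrap ONCE here; wrapping inside __call__ would create a new
        # template (and new variables) on every invocation.
        self._encode = tf.make_template('encoder', self._encode_impl)

    def _encode_impl(self, x):
        w = tf.get_variable('w', [int(x.get_shape()[-1]), self.units])
        b = tf.get_variable('b', [self.units],
                            initializer=tf.zeros_initializer())
        return tf.nn.relu(tf.matmul(x, w) + b)

    def __call__(self, x):
        return self._encode(x)

left = tf.placeholder(tf.float32, [None, 784])
right = tf.placeholder(tf.float32, [None, 784])
enc = Encoder()
h_left = enc(left)     # creates encoder/w and encoder/b
h_right = enc(right)   # reuses them; no reuse flag needed anywhere
```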
I am using TensorFlow to run some Kaggle competitions. Since I don't have much training data, I am using TF constants to preload all of my training and test data into the graph for efficiency. My code looks like this: ... lots of stuff ... with tf.Graph().as_default(): train_images = tf.constant(train_data[:36000,1:], dtype=tf.float32) ... more stuff ... train_set = tf.train.slice_input_producer([train_images, train_labels]) images, labels =…
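For completeness, a sketch of how this constant-preload pipeline usually continues from slice_input_producer (the stand-in data, batch size, and shapes are hypothetical): the per-example slices are regrouped with tf.train.batch, and the queue runners have to be started before the first sess.run on the batched tensors.

```python
import numpy as np
import tensorflow as tf

# Stand-in data; the real code would load the Kaggle CSVs instead.
train_images_np = np.random.rand(1000, 784).astype(np.float32)
train_labels_np = np.random.randint(0, 10, size=1000).astype(np.int32)

with tf.Graph().as_default():
    train_images = tf.constant(train_images_np, dtype=tf.float32)
    train_labels = tf.constant(train_labels_np, dtype=tf.int32)

    # slice_input_producer yields one (image, label) pair at a time from the
    # constants; tf.train.batch regroups them into minibatches.
    image, label = tf.train.slice_input_producer(
        [train_images, train_labels], shuffle=True)
    images, labels = tf.train.batch([image, label], batch_size=100)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)
        batch_images, batch_labels = sess.run([images, labels])
        print(batch_images.shape, batch_labels.shape)   # (100, 784) (100,)
        coord.request_stop()
        coord.join(threads)
```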
I am just at the beginning of my career in machine learning and wanted to create a simple CNN to classify two different kinds of leaves (belonging to two different species of trees). Before gathering a huge number of pictures of leaves, I decided to create a very small, simple CNN in TensorFlow and train it on only one image, to check whether the code is OK. I normalized the 256x256 (x3 channels) photo to <0,1> and created a four-layer network (2 convolutional and 2 dense). Unfortunately, from the very beginning, the loss almost always tends toward some constant value (usually some integer). I thought something was wrong with the picture…
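A small 2-conv / 2-dense sketch for a 256x256x3 input and two classes, with layer sizes chosen by me. One frequent cause of a loss that is constant from the start is applying softmax to the output layer and then feeding the result to a *_with_logits loss (or taking -log of clipped probabilities), so the logits below are passed to the loss raw.

```python
import tensorflow as tf

images = tf.placeholder(tf.float32, [None, 256, 256, 3])    # already in <0, 1>
labels = tf.placeholder(tf.int64, [None])                   # species id: 0 or 1

conv1 = tf.layers.conv2d(images, 16, 3, activation=tf.nn.relu)
pool1 = tf.layers.max_pooling2d(conv1, 2, 2)
conv2 = tf.layers.conv2d(pool1, 32, 3, activation=tf.nn.relu)
pool2 = tf.layers.max_pooling2d(conv2, 2, 2)

# tf.contrib.layers.flatten in older 1.x releases.
flat = tf.layers.flatten(pool2)
dense = tf.layers.dense(flat, 64, activation=tf.nn.relu)
logits = tf.layers.dense(dense, 2)                           # raw logits

loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))
train_op = tf.train.AdamOptimizer(1e-4).minimize(loss)
```

On a single training image this should drive the loss essentially to zero within a few dozen steps; if it does not, the label feed or the loss wiring is the first place to look, before blaming the picture itself.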
I'm trying to train a regressor model that can predict 4 scalar float outputs. As it currently stands, the network very quickly diverges, with the loss increasing to NaN, and I can't figure out what's going on. Below is a self-contained sample tested with TensorFlow 1.1.0 on Windows 10 with an NVidia GPU: from __future__ import absolute_import from __future__ import division from __future__ import print_function import numpy import tensorflow as tf IMAGE_HEIGHT = 320 IMAGE_WIDTH = 160 N…
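Without the rest of the script one can only gesture at the usual suspects for a regression loss blowing up to NaN: a learning rate that is too high, unscaled targets, or exploding gradients. Below is a sketch with a deliberately small learning rate and explicit gradient clipping; the network body and the 3-channel input assumption are mine, only IMAGE_HEIGHT and IMAGE_WIDTH come from the question.

```python
import tensorflow as tf

IMAGE_HEIGHT = 320
IMAGE_WIDTH = 160

images = tf.placeholder(tf.float32, [None, IMAGE_HEIGHT, IMAGE_WIDTH, 3])
targets = tf.placeholder(tf.float32, [None, 4])             # 4 scalar outputs

# Stand-in body; the real model would go here.
flat = tf.reshape(images, [-1, IMAGE_HEIGHT * IMAGE_WIDTH * 3])
hidden = tf.layers.dense(flat, 64, activation=tf.nn.relu)
predictions = tf.layers.dense(hidden, 4)

loss = tf.reduce_mean(tf.square(predictions - targets))

# A small learning rate plus per-gradient norm clipping; if the loss still
# hits NaN, the next things to check are the scale of the targets and any
# log/sqrt ops that can see zero or negative inputs.
optimizer = tf.train.AdamOptimizer(1e-4)
grads_and_vars = optimizer.compute_gradients(loss)
clipped = [(tf.clip_by_norm(g, 5.0), v)
           for g, v in grads_and_vars if g is not None]
train_op = optimizer.apply_gradients(clipped)
```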