Matrix exponentiation in Python

I'm trying to exponentiate a complex matrix in Python and am running into some trouble. I'm using the scipy.linalg.expm function, and get a rather strange error message when I try the following code:

    import numpy as np
    from scipy import linalg

    hamiltonian = np.mat('[1,0,0,0;0,-1,0,0;0,0,-1,0;0,0,0,1]')

    # This works
    t_list = np.linspace(0,1,10)
    unitary = [linalg.expm(-(1j)*t*hamiltonian) for t in t_list]

    # This doesn't
    t_list = np.linspace(0,10,100)
    unitary = [linalg.expm(-(1j)*t*hamiltonian) for t in t_list]
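
Since the excerpt cuts off before the actual error message, here is a hedged guess at a fix: np.mat is deprecated, and scipy.linalg.expm is happiest with a plain complex ndarray. A minimal sketch of the same computation on an ndarray (the diagonal Hamiltonian is taken from the question):

    import numpy as np
    from scipy import linalg

    # Plain ndarray instead of np.mat; expm accepts any square array-like.
    hamiltonian = np.array([[1, 0, 0, 0],
                            [0, -1, 0, 0],
                            [0, 0, -1, 0],
                            [0, 0, 0, 1]], dtype=complex)

    t_list = np.linspace(0, 10, 100)
    unitary = [linalg.expm(-1j * t * hamiltonian) for t in t_list]
    print(unitary[-1].shape)  # (4, 4)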

Confused about weight and bias dependencies affecting learning

I had a working LSTM model that had one weight/bias layer from its recurrent state to the output. I then also coded up the same system, but with two layers. This means I would have the LSTM, then a hidden layer, and then the output. I wrote the lines to define this double-layer model, but did not use them a single time. But now that those layers exist, even though they are not used at all, it won't learn! My weights and biases are defined like this:

    weights = {
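
The excerpt ends at the weights dict, so here is a hedged sketch of what a two-layer head from the LSTM state to the output might look like in TF 1.x; all sizes and names (n_hidden, n_mid, n_out, output_head) are hypothetical, not taken from the question:

    import tensorflow as tf

    n_hidden, n_mid, n_out = 128, 64, 10  # hypothetical sizes

    weights = {
        'hidden': tf.Variable(tf.random_normal([n_hidden, n_mid])),
        'out': tf.Variable(tf.random_normal([n_mid, n_out])),
    }
    biases = {
        'hidden': tf.Variable(tf.zeros([n_mid])),
        'out': tf.Variable(tf.zeros([n_out])),
    }

    def output_head(lstm_state):
        # LSTM state -> hidden layer -> output logits.
        h = tf.nn.relu(tf.matmul(lstm_state, weights['hidden']) + biases['hidden'])
        return tf.matmul(h, weights['out']) + biases['out']

A variable that never feeds into the loss receives no gradient, so merely defining these layers should not, by itself, stop learning; it is worth checking that the training op really consumes the intended output tensor.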

Tensorflow DNN Multiple Classification

I'm trying to create a DNN in Python 3.5 with Tensorflow for classifying a tuple into one of 3 classes.

    # define initial hyperparameters
    batch_size = 100
    train_steps = 5000
    hidden_units = [10,20,10]

    # build model
    dnn = tf.contrib.learn.DNNClassifier(hidden_units=hidden_units,
                                         feature_columns=feature_cols,
                                         n_classes=3)
    input_fn = tf.estimator.inputs.pandas_input_fn(x=X_train, y=y_train,
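
The excerpt cuts off mid-call; below is a self-contained sketch of how these pieces can fit together under the TF 1.x contrib API the question uses. The toy DataFrame stands in for the question's X_train / y_train, and the small step count is just to keep the sketch quick:

    import numpy as np
    import pandas as pd
    import tensorflow as tf

    # Toy data standing in for the question's training set (hypothetical).
    X_train = pd.DataFrame({'a': np.random.rand(300), 'b': np.random.rand(300)})
    y_train = pd.Series(np.random.randint(0, 3, 300))

    feature_cols = [tf.contrib.layers.real_valued_column(k) for k in X_train.columns]

    dnn = tf.contrib.learn.DNNClassifier(hidden_units=[10, 20, 10],
                                         feature_columns=feature_cols,
                                         n_classes=3)

    input_fn = tf.estimator.inputs.pandas_input_fn(
        x=X_train, y=y_train, batch_size=100, num_epochs=None, shuffle=True)

    dnn.fit(input_fn=input_fn, steps=100)  # the question trains for 5000 steps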

Tensorflow using feed

I am using the following code to make a neural network for classification of some data (https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/multilayer_perceptron.py). I want to compare the output of my prediction to the labels to better visualize how the NN works. So I am using this piece of code:

    # y : Labels
    tf_y = y
    [yp] = sess.run([tf_y], feed_dict = { x : test_input, y : test_output } )
    ypp = tf.argm
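
Evaluating the label placeholder just echoes back whatever the feed_dict supplies; to compare predictions with labels, the prediction tensor is what needs to be run. A self-contained sketch (the tiny linear model stands in for the multilayer perceptron in the linked example; all names are hypothetical):

    import numpy as np
    import tensorflow as tf

    x = tf.placeholder(tf.float32, [None, 4])
    y = tf.placeholder(tf.float32, [None, 3])  # one-hot labels
    w = tf.Variable(tf.random_normal([4, 3]))
    pred = tf.matmul(x, w)                     # logits

    # Compare predicted class indices with the label class indices.
    pred_class = tf.argmax(pred, 1)
    true_class = tf.argmax(y, 1)
    accuracy = tf.reduce_mean(tf.cast(tf.equal(pred_class, true_class), tf.float32))

    test_input = np.random.rand(5, 4).astype(np.float32)
    test_output = np.eye(3)[np.random.randint(0, 3, 5)].astype(np.float32)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        yp, acc = sess.run([pred_class, accuracy],
                           feed_dict={x: test_input, y: test_output})
        print(yp, acc)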

Tensorflow: The priority of value assigning operations

I am trying to understand, more deeply, how the Tensorflow computation graph operates. Assume that we have the following code:

    A = tf.truncated_normal(shape=(1,), stddev=0.1)
    B = tf.Variable([0.3], dtype=tf.float32)
    C = A * B
    grads = tf.gradients(C, [A, B])
    init = tf.global_variables_initializer()
    sess = tf.Session()
    sess.run(init)
    for i in range(1000):
        results = sess.run([C, grads], {A: [2], B: [5]})

As expected, I get the result 10 and gradient 5 for A and B. I
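
The behavior the loop shows is that a feed_dict entry overrides whatever the graph would otherwise compute or read for that tensor on that particular run. A short sketch contrasting the fed and unfed cases:

    import tensorflow as tf

    A = tf.truncated_normal(shape=(1,), stddev=0.1)
    B = tf.Variable([0.3], dtype=tf.float32)
    C = A * B
    grads = tf.gradients(C, [A, B])

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # The feed replaces A's random draw and B's stored value for this call:
        print(sess.run([C, grads], {A: [2.], B: [5.]}))  # C = [10.], dC/dA = [5.], dC/dB = [2.]
        # Without the feed, A is re-sampled and B keeps its stored 0.3:
        print(sess.run([C, grads]))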

How to add regularizations in TensorFlow?

I have found, in much of the available neural network code implemented using TensorFlow, that regularization terms are often implemented by manually adding an additional term to the loss value. My questions are:

Is there a more elegant or recommended way of regularization than doing it manually?

I also find that get_variable has an argument regularizer. How should it be used? According to my observation, if we pass a regularizer to it (such as tf.contrib.layers.l2_regularizer), a tensor representing the regularization term will be computed and added to a graph collection named tf.GraphKeys.REGULARIZATION_LOSSES
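
A minimal sketch of that collection-based pattern: the regularizer argument makes TensorFlow compute the penalty and append it to tf.GraphKeys.REGULARIZATION_LOSSES, and the collected terms are then summed into the total loss (the data loss below is just a stand-in):

    import tensorflow as tf

    # The regularizer is applied to the variable and its penalty tensor is
    # added to the REGULARIZATION_LOSSES graph collection automatically.
    w = tf.get_variable(
        'w', shape=[4, 3],
        regularizer=tf.contrib.layers.l2_regularizer(scale=0.1))

    data_loss = tf.reduce_sum(tf.square(w))  # stand-in for the real loss
    reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
    total_loss = data_loss + tf.add_n(reg_losses)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(total_loss))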

dict in C++ for Tensorflow models

This question is related to this one: Export Tensorflow graphs from Python for use in C++

I'm trying to export a Tensorflow model from Python to C++. The problem is, my neural net starts with a placeholder to receive input, which requires a feed_dict. I cannot find any C++ API to supply a feed_dict for my model. What can I do? If there's no API for supplying feed_dicts, how should I change my model so that it can be trained and then exported for C++ use without a placeholder? Is the tensorflow::Session::Run() method the Python tf.Sessio
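
For what it's worth, the C++ tensorflow::Session::Run() does accept a vector of name/tensor pairs that plays the role of feed_dict, so the placeholder can stay; the Python side mainly needs to give the input and output tensors stable names. A hedged sketch of naming and freezing a graph for export (the tiny net and all names are hypothetical):

    import tensorflow as tf

    # Explicit names let the C++ side feed and fetch these tensors by name.
    x = tf.placeholder(tf.float32, shape=[None, 4], name='input_x')
    w = tf.Variable(tf.random_normal([4, 2]))
    out = tf.identity(tf.matmul(x, w), name='output_y')

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # Freeze variables into constants and write a GraphDef C++ can load.
        graph = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, ['output_y'])
        tf.train.write_graph(graph, '.', 'model.pb', as_text=False)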

Set weight and bias tensors of tensorflow conv2d operation

I have been given a trained neural network in torch and I need to rebuild it exactly in tensorflow. I believe I have correctly defined the network's architecture in tensorflow, but I am having trouble transferring the weight and bias tensors. Using a third-party package, I converted all the weight and bias tensors from the torch network to numpy arrays, then wrote them to disk. I can load them back into my Python program, but I cannot find a way to assign them to the corresponding layers in my tensorflow network. For instance, I have a convolutional layer defined in tensorflow:

    kernel_1
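
One way to push a converted numpy array into a tensorflow variable is an explicit assign op; a hedged sketch with a hypothetical kernel shape. Note that torch's (out_ch, in_ch, h, w) kernel layout generally needs a transpose, e.g. arr.transpose(2, 3, 1, 0), to match tensorflow's (h, w, in_ch, out_ch):

    import numpy as np
    import tensorflow as tf

    # Hypothetical kernel converted from the torch model (already transposed).
    torch_kernel = np.random.rand(5, 5, 3, 16).astype(np.float32)

    kernel_1 = tf.get_variable('kernel_1', shape=[5, 5, 3, 16])
    assign_kernel = tf.assign(kernel_1, torch_kernel)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(assign_kernel)  # the variable now holds the torch weights
        print(np.allclose(sess.run(kernel_1), torch_kernel))  # True

Passing the array directly as the variable's initializer (tf.get_variable('kernel_1', initializer=torch_kernel)) is an alternative that avoids the extra op.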

Storing TensorFlow network weights in Python multi-dimensional arrays

I'm totally new to TensorFlow and Python, so please excuse me for posting such a basic question, but I'm a bit overwhelmed with learning both things at once.

EDIT: I found a solution myself and posted it below; however, more efficient solutions are welcome.

Short version of the question: How can I extract every weight and bias at any point from a neural network using TensorFlow and store them into a Python array of shape [layer][neuron-previous-layer][neuron-current-layer]? The goal is not to store them on the hard disk, but in variables shaped as explained below the last code snippet
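
A minimal sketch of the extraction itself: a single sess.run call turns variables into numpy arrays, and a weight matrix of shape [n_prev, n_curr] already indexes as [neuron-previous-layer][neuron-current-layer]. The two-layer net here is hypothetical:

    import tensorflow as tf

    # Tiny stand-in network (sizes hypothetical).
    w1 = tf.Variable(tf.random_normal([4, 8]), name='w1')
    b1 = tf.Variable(tf.zeros([8]), name='b1')
    w2 = tf.Variable(tf.random_normal([8, 3]), name='w2')
    b2 = tf.Variable(tf.zeros([3]), name='b2')

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # weights[layer] is a numpy array indexed [neuron_prev][neuron_curr].
        weights = sess.run([w1, w2])
        biases = sess.run([b1, b2])
        print(weights[0].shape, weights[1].shape)  # (4, 8) (8, 3)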

Tensorflow: define an operation that builds the product of all tensor components

I want to define an operation in tensorflow that calculates something like: x is provided by a tensor. Finally, the operation should be compared to a known value, and the parameters alpha, beta_i and b should be learned. (I guess) the product of all inputs causes trouble. This is one version that I tried to deploy, with no success:

    # input
    X = tf.placeholder(tf.float32, [None, 2], name="X")
    Y = tf.placeholder(tf.float32, [None, 1], name="Y")

    # hidden
    beta = tf.get_variable("beta", shape=[2], initializer=tf.contrib.layers
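
The product over all components of an input row is what tf.reduce_prod is for, and gradients flow through it to beta. A hedged sketch of one plausible reading of the model, y = alpha * prod_i(beta_i * x_i) + b, under the shapes in the question (the loss, optimizer, and toy data are assumptions):

    import numpy as np
    import tensorflow as tf

    X = tf.placeholder(tf.float32, [None, 2], name="X")
    Y = tf.placeholder(tf.float32, [None, 1], name="Y")

    alpha = tf.get_variable("alpha", shape=[1])
    beta = tf.get_variable("beta", shape=[2])
    b = tf.get_variable("b", shape=[1])

    # prod_i(beta_i * x_i) over the feature axis, shaped back to [None, 1].
    prod = tf.expand_dims(tf.reduce_prod(beta * X, axis=1), 1)
    pred = alpha * prod + b

    loss = tf.reduce_mean(tf.square(pred - Y))
    train = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        xs = np.random.rand(64, 2).astype(np.float32)
        ys = np.prod(xs, axis=1, keepdims=True).astype(np.float32)
        for _ in range(200):
            sess.run(train, {X: xs, Y: ys})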
