Scapy.all import * does not work

So, I wrote a little script on Ubuntu using Scapy:

    #!/usr/bin/env python
    import sys
    #from scapy.all import *
    try:
        import scapy
    except ImportError:
        del scapy
        from scapy import all as scapy

    i = IP()
    t = TCP()
    i.dst = '192.168.56.100'
    t.dport = 22
    pakket = i/t
    answered, unanswered = sr(pakket)
    answered.nsummary()

I wrote the try/except because of another topic here (I tried it as a solution).
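One likely reason the script fails is that with from scapy import all as scapy, names like IP, TCP and sr are only available as scapy.IP, scapy.TCP and scapy.sr, not as bare names. A minimal working sketch, assuming Scapy is installed and the script runs with privileges sufficient to send packets:

    #!/usr/bin/env python
    # Minimal sketch: bind IP, TCP and sr directly in the module
    # namespace. Assumes Scapy is installed and the script runs
    # with enough privileges to send raw packets.
    from scapy.all import IP, TCP, sr

    pakket = IP(dst='192.168.56.100') / TCP(dport=22)
    answered, unanswered = sr(pakket)
    answered.nsummary()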

Memory leak using tornado's gen.engine

I have code which, in a simplified form, looks like this:

    from tornado import gen, httpclient, ioloop

    io_loop = ioloop.IOLoop.instance()
    client = httpclient.AsyncHTTPClient(io_loop=io_loop)

    @gen.engine
    def go_for_it():
        while True:
            r = yield gen.Task(fetch)

    @gen.engine
    def fetch(callback):
        response = yield gen.Task(client.fetch, 'http://localhost:8888/')
        callback(response)

    io_loop.add_callback(go_for_it)
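For comparison, a sketch of the same loop in coroutine style, offered as an assumption rather than a confirmed fix: it presumes Tornado 3.0+, where gen.coroutine replaces gen.engine and client.fetch returns a Future that can be yielded directly, so no gen.Task wrapper or callback object is built on each iteration:

    # Sketch assuming Tornado 3.0+: gen.coroutine replaces gen.engine,
    # and AsyncHTTPClient.fetch returns a yieldable Future, so no
    # gen.Task wrapper or callback argument is created per iteration.
    from tornado import gen, httpclient, ioloop

    @gen.coroutine
    def go_for_it():
        client = httpclient.AsyncHTTPClient()
        while True:
            response = yield client.fetch('http://localhost:8888/')

    io_loop = ioloop.IOLoop.instance()
    io_loop.add_callback(go_for_it)
    io_loop.start()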

Python Program for Visual Reaction Time

I'm not a programmer. I am doing a project for Biology in which I will be conducting an experiment on reaction times. Briefly, the subject should click anywhere on the screen as soon as a dot or circle (some graphic) appears on the screen. Details:

- The program must start at a set clock time (e.g. 16:03:00), which will be typed in each time.
- A timer must start when the program starts (t = 0).
- Graphics appear at the same point (coordinates) at predetermined times relative to the start (e.g. 1.5 s, 2 s, 3.5 s, ...) over a period of 2 minutes.
- Each time the subject presses the mouse, the time relative to the timer must be recorded.
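A minimal sketch of the stimulus-and-click loop, assuming Python 3 with tkinter; the schedule, window size and dot coordinates are placeholder values, and the set-clock-time gate is left out:

    # Minimal reaction-time sketch, assuming Python 3 and tkinter.
    # Schedule, window size and dot position are placeholder values.
    import time
    import tkinter as tk

    SCHEDULE = [1.5, 2.0, 3.5]          # stimulus onsets in seconds

    root = tk.Tk()
    canvas = tk.Canvas(root, width=800, height=600, bg='white')
    canvas.pack()

    t0 = time.perf_counter()            # timer starts with the program
    onset = [None]                      # when the current dot appeared

    def show_dot():
        canvas.create_oval(390, 290, 410, 310, fill='red')
        onset[0] = time.perf_counter()

    def on_click(event):
        now = time.perf_counter()
        print('t = %.3f s since start' % (now - t0))
        if onset[0] is not None:
            print('reaction time: %.3f s' % (now - onset[0]))

    for t in SCHEDULE:
        root.after(int(t * 1000), show_dot)

    canvas.bind('<Button-1>', on_click)
    root.mainloop()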

Tracing from an effect back to its cause in a large Python codebase

I have a reasonably large project in Django, which is a reasonably large framework, and I'm using a reasonably large number of apps, middlewares, context processors, etc. The scale means that when a part of the codebase runs for requests where I don't want it to, identifying why it did is hard. Straight code inspection is much too time-consuming, as is single-stepping through an entire request in the debugger. In this particular case, my problem is that "Vary: Cookie" is being set on every response, including some content I want to cache heavily and for which I don't need cookies at all. I suspect, but don't know how to prove, that some middleware or context processor is responsible.
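One generic way to trace an effect like this back to its cause is to wrap the function that produces the effect and log a stack trace whenever it is called. A minimal sketch, under the assumption that the header is being added through django.utils.cache.patch_vary_headers (the helper Django's session and cache machinery use); it would be installed somewhere that runs early, such as settings:

    # Tracing sketch: wrap patch_vary_headers so every call site that
    # touches the Vary header prints a stack trace. Assumes the header
    # is set via django.utils.cache.patch_vary_headers.
    import traceback
    from django.utils import cache

    _original = cache.patch_vary_headers

    def traced_patch_vary_headers(response, newheaders):
        print('patch_vary_headers(%r) called from:' % (newheaders,))
        traceback.print_stack()
        return _original(response, newheaders)

    cache.patch_vary_headers = traced_patch_vary_headers

The caveat with this kind of monkeypatch is that modules which did from django.utils.cache import patch_vary_headers hold their own reference to the original, so the wrapper may need to be installed into those namespaces as well.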

The Difference between os.system and subprocess calls

I have created a program that creates a web architecture on a local server, then loads the necessary browser to display the HTML and PHP pages on localhost. The os.system call kills the Python process but doesn't kill the other processes -- for example, httpd.exe and mysqld.exe. The subprocess call kills the httpd.exe and mysqld.exe programs but continues to run the Python code, and no code executes after the subprocess call. How would I go about killing or hiding all the necessary processes after the Python code runs? Here is my code.
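For illustration, a sketch of the usual distinction (executable names are placeholders taken from the question): os.system blocks until its command exits and returns only a status code, while subprocess.Popen returns immediately with a handle that can be terminated later:

    # Sketch of the difference; executable names are placeholders
    # taken from the question.
    import os
    import subprocess

    # os.system blocks until the command finishes and returns only
    # an exit status; there is no process handle to clean up with.
    status = os.system('httpd.exe')

    # subprocess.Popen returns immediately with a handle, so the
    # Python code keeps running and can stop the children later.
    httpd = subprocess.Popen(['httpd.exe'])
    mysqld = subprocess.Popen(['mysqld.exe'])

    # ... serve pages, open the browser, etc. ...

    httpd.terminate()
    mysqld.terminate()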

Why can't pip find packages listed in `pip search` results?

First it's there:

    $ pip search pylibpcap
    pylibpcap - pylibpcap is a python module for the libpcap packet capture library.

Then it's not:

    $ pip install pylibpcap
    Downloading/unpacking pylibpcap
    Could not find any downloads that satisfy the requirement pylibpcap
    No distributions at all found for pylibpcap
    Storing complete log in /home/u0/riley/.pip/pip.log

What gives?
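A possible explanation, offered as an assumption about pip of this era rather than a confirmed diagnosis: pip search only queries the index's metadata, while pip install needs actual distribution files, and packages whose downloads are hosted off-PyPI are skipped by default from pip 1.5 on. If that is what is happening here, the old opt-in flags would allow the external, unverified link:

    $ pip install pylibpcap --allow-external pylibpcap --allow-unverified pylibpcap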

python unpack little endian

I'm trying to use Python to read a binary file. The file is in LSB (least-significant-byte-first) mode. I import the struct module and use unpack like this:

    import sys
    from struct import unpack

    f = open(sys.argv[1], 'rb')
    contents = unpack('<I', f.read(4))[0]
    print contents
    f.close()

The data in the file is 0xC0000500 in LSB mode, and the actual value is 0x000500C0. So you can see that in LSB mode the smallest unit swapped is the byte, not the bit. However, I use a Mac machine, possibly because of the version of my gcc or of the machine (I'm not quite sure; I've just read about sizeof and sys at http://docs.python.
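For reference, a small sketch showing how the two byte orders read the same four bytes (byte values taken from the question; Python 3 syntax):

    # The four bytes on disk, per the question: C0 00 05 00.
    import struct

    data = b'\xc0\x00\x05\x00'

    little = struct.unpack('<I', data)[0]   # least significant byte first
    big    = struct.unpack('>I', data)[0]   # most significant byte first

    print(hex(little))   # 0x500c0  -> 0x000500C0
    print(hex(big))      # 0xc0000500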

partial directory listing

Is it possible to get a partial directory listing? In Python, I have a process that tries to get the os.listdir of a directory containing more than 100,000 files, and it takes forever. I'd like to be able, let's say, to get a listing of the first 1,000 files quickly. How can I achieve this?

I found a solution that gives me a random order of the files :) (at least I can't see a pattern). First, I found this post in the Python mailing list. There are three files to copy to disk (opendir.pyx, setup.py, test.py). Next, you need Python...
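For what it's worth, a sketch of how this looks without an extension module, assuming Python 3.6+: os.scandir returns a lazy iterator over directory entries, so it can be abandoned after the first 1,000 names (the order is whatever the filesystem returns, which matches the "random order" observed above):

    # Sketch assuming Python 3.6+: os.scandir is lazy, so islice
    # stops reading the directory after n entries. Entry order is
    # whatever the filesystem returns, not sorted.
    import os
    from itertools import islice

    def first_n_files(path, n=1000):
        with os.scandir(path) as entries:
            return [entry.name for entry in islice(entries, n)]

    print(first_n_files('.', 5))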

Python's __reduce__

I want to implement pickling support for objects belonging to my extension library. There is a global instance of class Service, initialized at startup. All these objects are produced as a result of some Service method invocations and essentially belong to it. Service knows how to serialize them into binary buffers and how to deserialize buffers back into objects. It appeared that Python's __reduce__ should serve my purpose of implementing pickling support. I started implementing it and realized there is a problem with the unpickler (the first element of the tuple expected to be returned by __reduce__). This unpickler...
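A minimal sketch of the usual __reduce__ pattern, with hypothetical names throughout (Service, serialize, deserialize, get_service and _rebuild stand in for the real library's API): the first element of the returned tuple must be a picklable, module-level callable, which here looks up the global Service at unpickle time:

    # Sketch with hypothetical names: Service, serialize, deserialize,
    # get_service and _rebuild stand in for the real library's API.
    import pickle

    class Service(object):
        def serialize(self, obj):          # object -> binary buffer
            return obj.payload

        def deserialize(self, buf):        # binary buffer -> object
            return Handle(buf)

    _service = Service()                   # global instance, made at startup

    def get_service():
        return _service

    def _rebuild(buf):
        # Module-level, so pickle can store a reference to it as the
        # first element of the tuple returned by __reduce__.
        return get_service().deserialize(buf)

    class Handle(object):
        def __init__(self, payload):
            self.payload = payload

        def __reduce__(self):
            # (callable, args): at load time pickle calls
            # _rebuild(buffer) instead of instantiating Handle directly.
            return (_rebuild, (get_service().serialize(self),))

    copy = pickle.loads(pickle.dumps(Handle(b'data')))
    print(copy.payload)                    # b'data'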

How to correctly include uncertainties in fitting with python

I am trying to fit some data points with y uncertainties in Python. The data are labeled in Python as x, y and yerr. I need to do a linear fit on those data in log-log scale. As a reference for whether the fit results are proper, I compare the Python results with the ones from SciDAVis. I tried curve_fit with

    def func(x, a, b):
        return np.exp(a * np.log(x) + np.log(b))

    popt, pcov = curve_fit(func, x, y, sigma=yerr)

as well as kmpfit with

    def funcL(p, x):
        a, b = p
        return np.exp(a * np.log(x) + np.log(b))
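For comparison, a sketch of one standard alternative, offered as an assumption rather than the asker's method: do the linear fit directly in log space, propagating the uncertainties to first order as sigma(log y) = yerr / y (valid when all y > 0):

    # Sketch: linear fit in log-log space with propagated uncertainties.
    # Placeholder data stands in for the question's x, y, yerr; assumes
    # all y > 0 so sigma(log y) ~ yerr / y to first order.
    import numpy as np
    from scipy.optimize import curve_fit

    x = np.array([1.0, 2.0, 4.0, 8.0])
    y = np.array([3.0, 6.1, 11.8, 24.5])
    yerr = np.array([0.2, 0.3, 0.5, 1.0])

    def line(logx, a, logb):
        # Straight line in log-log space: log y = a*log x + log b.
        return a * logx + logb

    popt, pcov = curve_fit(line, np.log(x), np.log(y), sigma=yerr / y)
    a, logb = popt
    print('a = %.3f, b = %.3f' % (a, np.exp(logb)))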
