Comparing Performance in Python: equal versus not equal
When comparing two variables, you have the choice of checking equality or checking inequality. For single-element values (not a string/list/tuple/etc.), the difference is probably either non-existent or negligible.
The Question: When comparing two multi-element variables, is checking whether they are equal slower or faster than checking whether they are not equal?
My gut tells me that checking whether they are not equal should be faster. I'm curious whether anybody can tell me if this is true, and for which multi-element types it holds.
Note: I have looked and haven't found any posts here that answer my question. It might just be obvious, but I'd like more opinions than just my own.
You could always just check:
>>> from timeit import timeit
>>> timeit("{'a': 1, 'b': 2} == {'a': 2, 'b': 1}")
0.29072967777517983
>>> timeit("{'a': 1, 'b': 2} != {'a': 2, 'b': 1}")
0.2906114293159803
The difference seems to be negligible ... another test case perhaps?
>>> timeit("range(30) == range(35)")
0.7179841181163837
>>> timeit("range(30) != range(35)")
0.725536848004765
Again, negligible.
>>> timeit("a == b", "a = {'a': 1, 'b': 2}; b = {'a': 2, 'b': 1}")
0.06806470555693522
>>> timeit("a != b", "a = {'a': 1, 'b': 2}; b = {'a': 2, 'b': 1}")
0.06724365965146717
And that is with the object creation moved out into the setup. Admittedly these are small examples, but still, I imagine both operators short-circuit where appropriate, as soon as it becomes obvious that the operands differ.
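If you want to probe the short-circuiting idea further, here is a minimal sketch you could run yourself (the list sizes and repetition count are arbitrary choices of mine, not from the timings above):

from timeit import timeit

setup = (
    "a = list(range(10000)); "
    "b = list(range(10000)); "          # equal to a
    "c = [-1] + list(range(1, 10000))"  # differs from a at index 0
)

# Fully equal lists: both operators have to walk the whole list.
print(timeit("a == b", setup, number=10000))
print(timeit("a != b", setup, number=10000))

# Lists that differ at the first element: both operators can stop early.
print(timeit("a == c", setup, number=10000))
print(timeit("a != c", setup, number=10000))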
I think it comes down directly to object.__eq__() and object.__ne__(). These methods are invoked when you use == (equal) or != (not equal), and depending on the objects you are comparing, either one could be faster or slower, depending on how the method is written.
See the Basic customization section of the Data model chapter in the official documentation.
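As a hedged illustration of that point (the Point class below is my own toy example, not from the answer above): in Python 3 the default __ne__ simply negates whatever __eq__ returns, so however expensive your __eq__ is, both == and != pay that cost unless you also override __ne__.

class Point:
    """Toy class whose __eq__ does all of the comparison work."""

    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __eq__(self, other):
        if not isinstance(other, Point):
            return NotImplemented
        # Imagine something costly here: == and the default != both pay for it.
        return self.x == other.x and self.y == other.y

p, q = Point(1, 2), Point(1, 3)
print(p == q)  # False, computed by __eq__
print(p != q)  # True, computed by the default __ne__, which negates __eq__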
My experience with some Cortex-M3 assembly (or at least my professors say that's what it is) is that when you check equality or non-equality, there is one compare instruction that sets three bits, and one conditional instruction (if you can call it that) that looks at a particular bit. Essentially you compare A and B; the three bits are Smaller, Equal and Bigger. Checking non-equality then means either testing whether the result is bigger or smaller (two checks, so two cycles), or applying a NOT to the equality bit, which depending on the architecture may be two separate actions or a single cycle. So I speculate that it depends on the compiler, the assembler and the CPU architecture.
This should mean, however, that you can write two programs, each making an immense number of such checks, and time their execution, where "immense" runs into the tens of thousands (judging by execution times in C/C++). In my humble opinion this is a fairly feasible task. You can time it by hand so you don't have to mess with timers; timers have peculiarities regarding their precision in many languages and might not even register the execution time of a single statement. Or you can time the immense loops and see what the computer says.
Keep in mind, however, that if the not-equal loop takes 1.1x as long, that does not mean an inequality check takes 1.1x the time of an equality check: the loop itself has a far larger per-iteration cost, so the check alone could well be 2x slower. With more tests and more checks per loop it should be easy to separate the time taken by the loop from the time spent on the checks. Hope this helps.
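If you wanted to follow that suggestion in Python rather than assembly, a rough sketch might look like the following (the list size and loop count are arbitrary choices of mine):

import time

a = list(range(1000))
b = list(range(1000))

N = 100_000  # an "immense" number of checks; adjust to taste

start = time.perf_counter()
for _ in range(N):
    _ = a == b
eq_elapsed = time.perf_counter() - start

start = time.perf_counter()
for _ in range(N):
    _ = a != b
ne_elapsed = time.perf_counter() - start

# The loop overhead is identical in both cases, so only the difference
# between the two totals says anything about == versus != themselves.
print(f"== loop: {eq_elapsed:.4f}s   != loop: {ne_elapsed:.4f}s")

As the previous paragraph warns, most of each total is loop overhead, so a small gap between the two totals can correspond to a much larger relative gap between the checks themselves.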