Using bitwise operations

  • How often do you use bitwise operation "hacks" to do some kind of optimization? In what kinds of situations are they really useful?
  • Example: instead of using an if:

    if (data[c] >= 128) //in a loop
        sum += data[c];
    

    you write:

    int t = (data[c] - 128) >> 31;
    sum += ~t & data[c];
    

    Of course, assuming it produces the same result in this specific situation (an annotated sketch follows at the end of this list).

  • Is it worth it? I find it unreadable. How often do you come across this?
  • Note: I saw this code in the top answers to: Why is processing a sorted array faster than an unsorted array?
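  • For context, here is the same trick in a complete loop, with comments on why it works. This is only a sketch, using hypothetical sample values and assuming a 32-bit int with arithmetic right shift of signed values (the common, but implementation-defined, behaviour):

    #include <cstdio>

    int main() {
        int data[] = { 12, 200, 130, 7, 250 };   // hypothetical sample values in [0, 255]
        long sum = 0;
        for (int c = 0; c < 5; ++c) {
            // data[c] - 128 is negative exactly when data[c] < 128.
            // Shifting that right by 31 gives -1 (all ones) for negatives and 0 otherwise,
            // assuming arithmetic right shift of signed ints.
            int t = (data[c] - 128) >> 31;
            sum += ~t & data[c];   // ~t is all ones when data[c] >= 128, so the value is kept
        }
        printf("%ld\n", sum);      // same result as the if version: 200 + 130 + 250 = 580
        return 0;
    }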


    Bitwise operations are so useful that Prof. Knuth wrote a book about them: http://www.amazon.com/The-Computer-Programming-Volume-Fascicle/dp/0321580508

    Just to mention a few of the simplest ones: integer multiplication and division by a power of two (using left and right shifts), modulo with respect to a power of two, masking, and so on. When using bitwise ops, just be sure to provide sufficient comments about what's going on.
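
    For example, a minimal sketch of those power-of-two tricks (using unsigned values to sidestep the rounding and sign caveats of shifting signed numbers):

        #include <cstdio>

        int main() {
            unsigned x = 20;

            unsigned times8   = x << 3;    // 20 * 8  == 160
            unsigned div4     = x >> 2;    // 20 / 4  == 5
            unsigned mod16    = x & 15;    // 20 % 16 == 4 (works because 16 is a power of two)
            unsigned low_byte = x & 0xFF;  // masking: keep only the low 8 bits

            printf("%u %u %u %u\n", times8, div4, mod16, low_byte);
            return 0;
        }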

    However, your example, data[c] >= 128, is not a case where I'd apply them, IMO; just keep it the way it is. But if you want to compute data[c] % 128, then data[c] & 0x7f is much faster (where & is bitwise AND).
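
    One caveat worth noting: the mask form matches % only for non-negative values. A small sketch with hypothetical sample values:

        #include <cstdio>

        int main() {
            int values[] = { 300, 127, -5 };   // hypothetical sample values
            for (int v : values) {
                // For non-negative v the two agree; for negative v,
                // v % 128 is negative (truncated division) while v & 0x7f is not.
                printf("%4d %% 128 = %4d,   %4d & 0x7f = %4d\n", v, v % 128, v, v & 0x7f);
            }
            return 0;
        }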


    While that code was an excellent way to show what's going on, I usually wouldn't use code like that. If it had to be fast, there are usually even faster solutions, such as using SSE on x86 or NEON on ARM. If none of that is available, sure, I'll use it, provided it helps and it's necessary.

    By the way, I explain how it works in this answer.
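
    For comparison, a rough sketch of the SSE2 route for the same conditional sum (the function name is just illustrative; accumulation is done in 32-bit lanes, so very large arrays would need widening):

        #include <emmintrin.h>   // SSE2 intrinsics
        #include <cstdio>

        // Illustrative sketch: sum all data[i] >= 128 without branching, 4 ints at a time.
        long long sum_ge_128_sse2(const int* data, int n) {
            __m128i vsum = _mm_setzero_si128();
            const __m128i threshold = _mm_set1_epi32(127);          // v > 127 means v >= 128
            int i = 0;
            for (; i + 4 <= n; i += 4) {
                __m128i v    = _mm_loadu_si128(reinterpret_cast<const __m128i*>(data + i));
                __m128i mask = _mm_cmpgt_epi32(v, threshold);       // all ones where v > 127
                vsum = _mm_add_epi32(vsum, _mm_and_si128(v, mask)); // add only the kept lanes
            }
            int lanes[4];
            _mm_storeu_si128(reinterpret_cast<__m128i*>(lanes), vsum);
            long long sum = (long long)lanes[0] + lanes[1] + lanes[2] + lanes[3];
            for (; i < n; ++i)                                      // scalar tail
                if (data[i] >= 128) sum += data[i];
            return sum;
        }

        int main() {
            int data[] = { 12, 200, 130, 7, 250, 128, 127, 1 };
            printf("%lld\n", sum_ge_128_sse2(data, 8));             // 200 + 130 + 250 + 128 = 708
            return 0;
        }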

    Like Skylion, one thing I've used a lot is figuring out whether a number is a power of two. Think for a while about how you'd do that... then look at this: (x & (x - 1)) == 0 && x != 0

    It's tricky the first time you see it, I suppose, but once you get used to it, it's just so much simpler than any alternative that doesn't use bitmath. It works because subtracting 1 from a number makes the borrow start at the rightmost end, run through all the trailing zeroes, and stop at the first 1, which turns into a zero. ANDing that result with the original number then clears the rightmost 1. Powers of two have only a single 1 bit, which disappears, leaving zero. All other numbers keep at least one 1 bit, except zero itself, which is a special case. A common variant skips the test for zero, either accepting zero as a power of two or knowing that zero can't happen.
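
    As a small sketch of that test (the helper name is just illustrative):

        #include <cstdio>
        #include <cstdint>

        // x is a power of two iff clearing its lowest set bit leaves nothing, with zero excluded.
        bool is_power_of_two(uint32_t x) {
            return x != 0 && (x & (x - 1)) == 0;
        }

        int main() {
            uint32_t tests[] = { 0, 1, 2, 3, 4, 6, 8, 1024, 1025 };
            for (uint32_t x : tests)
                printf("%u -> %s\n", (unsigned)x, is_power_of_two(x) ? "power of two" : "not");
            return 0;
        }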

    Similarly there are other things that you can easily do with bitmath, but not so easy without. As they say, use the right tool for the job. Sometimes bitmath is the right tool.


    There are several instances where such hacks can be useful. For instance, they can remove some Java Virtual Machine "optimizations", such as branch prediction. I have found them useful only in a few cases. The main one is multiplying by -1: if you are doing it hundreds of times across a massive array, it is more efficient to simply flip the sign bit than to actually multiply. Another example where I have used them is checking whether a number is a power of 2 (since that is so easy to figure out in binary). Basically, bit hacks are useful when you want to cheat. Here is a human analogy: if you have a list of two-digit numbers and you need to know which are greater than 29, you know automatically that if the first digit is 3 or larger, the whole number is at least 30, and vice versa. Bitwise operations simply let you perform similar cheats in binary.
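
    As a sketch of the sign-flip idea (it applies cleanly to IEEE-754 floats, where the top bit is the sign; for two's-complement integers, negation is ~x + 1 instead):

        #include <cstdio>
        #include <cstring>
        #include <cstdint>

        // Negate an IEEE-754 float by toggling its sign bit (illustrative helper name).
        float negate_by_bit_flip(float f) {
            uint32_t bits;
            std::memcpy(&bits, &f, sizeof bits);   // reinterpret the float's bytes
            bits ^= 0x80000000u;                   // flip the sign bit
            std::memcpy(&f, &bits, sizeof f);
            return f;
        }

        int main() {
            printf("%f %f\n", negate_by_bit_flip(3.5f), negate_by_bit_flip(-2.0f));   // -3.500000 2.000000
            return 0;
        }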
