How does the + operator work in C?

While trying to understand how primitive operators such as +, -, * and / are implemented in C, I found the following snippet in an interesting answer.

// replaces the + operator
int add(int x, int y) {
    while(x) {
        int t = (x & y) << 1;
        y ^= x;
        x = t;
    }
    return y;
}

It seems that this function demonstrates how + actually works behind the scenes. However, it's too confusing for me to understand. For a long time I believed that such operations were done with assembly instructions generated by the compiler!

My question is: Is the + operator implemented like the posted code on MOST implementations? Does this take advantage of two's complement or other implementation-dependent features? I'd also appreciate it very much if someone could explain how it works.

Hmm... Maybe this question is a bit off-topic for SO, but I suppose it's worthwhile to look into how these operators work.


To be pedantic, the C specification does not specify how addition is implemented.

But to be realistic, the + operator on integer types smaller than or equal to the word size of your CPU gets translated directly into an addition instruction for the CPU, and larger integer types get translated into multiple addition instructions with some extra logic to propagate the carry between them.

The CPU internally uses logic circuits to implement the addition, and does not use loops, bitshifts, or anything that has a close resemblance to how C works.


When you add two bits, the result is the following (truth table):

a | b | sum (a^b) | carry bit (a&b) (goes to next)
--+---+-----------+--------------------------------
0 | 0 |    0      | 0
0 | 1 |    1      | 0
1 | 0 |    1      | 0
1 | 1 |    0      | 1

So if you do a bitwise xor, you get the sum without carry. And if you do a bitwise and, you get the carry bits.

Extending this observation to multi-bit numbers a and b:

a+b = sum_without_carry(a, b) + carry_bits(a, b) shifted by 1 bit left
    = a^b + ((a&b) << 1)

Once b is 0:

a+0 = a
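For example, repeatedly applying the identity a+b = (a^b) + ((a&b) << 1) to 5 + 3 (binary 101 + 011; the values are my own choice):

101 + 011  = (101 ^ 011) + ((101 & 011) << 1)  = 110 + 010
110 + 010  = (110 ^ 010) + ((110 & 010) << 1)  = 100 + 100
100 + 100  = (100 ^ 100) + ((100 & 100) << 1)  = 000 + 1000
000 + 1000 = 1000

The second operand eventually becomes 0 because each step shifts the carries one position left, and the result 1000 (binary) is 8 = 5 + 3.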

So the algorithm boils down to:

Add(a, b)
  if b == 0
    return a;
  else
    carry_bits = a & b;
    sum_bits = a ^ b;
    return Add(sum_bits, carry_bits << 1);

If you get rid of the recursion and convert it to a loop:

Add(a, b)
  while(b != 0) {
    carry_bits = a & b;
    sum_bits = a ^ b;

    a = sum_bits;
    b = carry_bits << 1;  // In next loop, add carry bits to a
  }
  return a;

With the above algorithm in mind, the explanation of the code should be simpler:

int t = (x & y) << 1;

Carry bits. A carry bit is 1 when the corresponding bit is 1 in both operands; the left shift moves each carry into the next higher bit position, where it belongs.

y ^= x;  // x is used now

Addition without carry (Carry bits ignored)

x = t;

Reuse x to hold the carry bits

while(x)

Repeat while there are more carry bits


A recursive implementation (easier to understand) would be:

int add(int x, int y) {
    return (y == 0) ? x : add(x ^ y, (x&y) << 1);
}

Seems that this function demonstrates how + actually works in the background

No. Usually (almost always) integer addition translates to the machine instruction add. This just demonstrates an alternative implementation using bitwise xor and and.


Seems that this function demonstrates how + actually works in the background

No. This is translated to the native add machine instruction, which uses the hardware adder in the ALU.

If you're wondering how the computer adds, here is a basic adder.

Everything in the computer is done using logic gates, which are mostly made of transistors. The full adder has half-adders in it.

For a basic tutorial on logic gates, and adders, see this. The video is extremely helpful, though long.

In that video, a basic half-adder is shown. If you want a brief description, this is it:

The half adder adds the two given bits. The possible combinations are:

  • Add 0 and 0 = 0
  • Add 1 and 0 = 1
  • Add 1 and 1 = 10 (binary)
  • So now how does the half adder work? It is made up of two logic gates: the xor and the and . The xor gives a positive output when exactly one input is positive, so it produces the sum bit: 0 for 0 and 0, 1 for 1 and 0 (or 0 and 1), and 0 for 1 and 1. The and gives a positive output only if both inputs are positive, so it produces the carry bit for the 1 and 1 case. So basically, we have now got our half-adder. But we can still only add single bits.

    Now we make our full-adder. A full adder is built from two half-adders chained together (plus an or gate to combine their carries). It has to deal with a carry: when we add 1 and 1, we get a carry of 1. So what the full-adder does is take the carry coming in from the previous stage and add it together with the two input bits, producing a sum bit and a carry out for the next stage.

    If you're confused about how the carry is passed along: you first add the two bits using one half-adder, and then add the carry-in to that sum with the second half-adder. So now you've added the carry with the two bits. You do this again and again, stage by stage, until you run out of bits to add, and then you have your result.

    Surprised? This is how it actually happens. It looks like a long process, but the computer does it in a fraction of a nanosecond, or more specifically, in half a clock cycle (sometimes even a single clock cycle). Basically, the computer has the ALU (a major part of the CPU), memory, buses, etc.

    If you want to learn computer hardware, from logic gates, memory and the ALU, and simulate a computer, you can see this course, from which I learnt all this: Build a Modern Computer from First Principles

    It's free if you do not want an e-certificate. Part two of the course is coming up this spring.
