Is 'switch' faster than 'if'?
Is a switch statement actually faster than an if statement?
I ran the code below on Visual Studio 2010's x64 C++ compiler with the /Ox flag:
#include <stdlib.h>
#include <stdio.h>
#include <time.h>

#define MAX_COUNT (1 << 29)
size_t counter = 0;

size_t testSwitch()
{
    clock_t start = clock();
    size_t i;
    for (i = 0; i < MAX_COUNT; i++)
    {
        switch (counter % 4 + 1)
        {
            case 1: counter += 4; break;
            case 2: counter += 3; break;
            case 3: counter += 2; break;
            case 4: counter += 1; break;
        }
    }
    return 1000 * (clock() - start) / CLOCKS_PER_SEC;
}
size_t testIf()
{
    clock_t start = clock();
    size_t i;
    for (i = 0; i < MAX_COUNT; i++)
    {
        const size_t c = counter % 4 + 1;
        if (c == 1) { counter += 4; }
        else if (c == 2) { counter += 3; }
        else if (c == 3) { counter += 2; }
        else if (c == 4) { counter += 1; }
    }
    return 1000 * (clock() - start) / CLOCKS_PER_SEC;
}
int main()
{
    printf("Starting...\n");
    printf("Switch statement: %u ms\n", (unsigned)testSwitch());
    printf("If statement: %u ms\n", (unsigned)testIf());
}
and got these results:
Switch statement: 5261 ms
If statement: 5196 ms
From what I've learned, switch
statements apparently use jump tables to optimize the branching.
Questions:
1. What would a basic jump table look like, in x86 or x64?
2. Is this code using a jump table?
3. Why is there no performance difference in this example?
4. Is there any situation in which there is a significant performance difference?
Disassembly of the code:
testIf:
13FE81B10 sub rsp,48h
13FE81B14 call qword ptr [__imp_clock (13FE81128h)]
13FE81B1A mov dword ptr [start],eax
13FE81B1E mov qword ptr [i],0
13FE81B27 jmp testIf+26h (13FE81B36h)
13FE81B29 mov rax,qword ptr [i]
13FE81B2E inc rax
13FE81B31 mov qword ptr [i],rax
13FE81B36 cmp qword ptr [i],20000000h
13FE81B3F jae testIf+0C3h (13FE81BD3h)
13FE81B45 xor edx,edx
13FE81B47 mov rax,qword ptr [counter (13FE835D0h)]
13FE81B4E mov ecx,4
13FE81B53 div rax,rcx
13FE81B56 mov rax,rdx
13FE81B59 inc rax
13FE81B5C mov qword ptr [c],rax
13FE81B61 cmp qword ptr [c],1
13FE81B67 jne testIf+6Dh (13FE81B7Dh)
13FE81B69 mov rax,qword ptr [counter (13FE835D0h)]
13FE81B70 add rax,4
13FE81B74 mov qword ptr [counter (13FE835D0h)],rax
13FE81B7B jmp testIf+0BEh (13FE81BCEh)
13FE81B7D cmp qword ptr [c],2
13FE81B83 jne testIf+89h (13FE81B99h)
13FE81B85 mov rax,qword ptr [counter (13FE835D0h)]
13FE81B8C add rax,3
13FE81B90 mov qword ptr [counter (13FE835D0h)],rax
13FE81B97 jmp testIf+0BEh (13FE81BCEh)
13FE81B99 cmp qword ptr [c],3
13FE81B9F jne testIf+0A5h (13FE81BB5h)
13FE81BA1 mov rax,qword ptr [counter (13FE835D0h)]
13FE81BA8 add rax,2
13FE81BAC mov qword ptr [counter (13FE835D0h)],rax
13FE81BB3 jmp testIf+0BEh (13FE81BCEh)
13FE81BB5 cmp qword ptr [c],4
13FE81BBB jne testIf+0BEh (13FE81BCEh)
13FE81BBD mov rax,qword ptr [counter (13FE835D0h)]
13FE81BC4 inc rax
13FE81BC7 mov qword ptr [counter (13FE835D0h)],rax
13FE81BCE jmp testIf+19h (13FE81B29h)
13FE81BD3 call qword ptr [__imp_clock (13FE81128h)]
13FE81BD9 sub eax,dword ptr [start]
13FE81BDD imul eax,eax,3E8h
13FE81BE3 cdq
13FE81BE4 mov ecx,3E8h
13FE81BE9 idiv eax,ecx
13FE81BEB cdqe
13FE81BED add rsp,48h
13FE81BF1 ret
testSwitch:
13FE81C00 sub rsp,48h
13FE81C04 call qword ptr [__imp_clock (13FE81128h)]
13FE81C0A mov dword ptr [start],eax
13FE81C0E mov qword ptr [i],0
13FE81C17 jmp testSwitch+26h (13FE81C26h)
13FE81C19 mov rax,qword ptr [i]
13FE81C1E inc rax
13FE81C21 mov qword ptr [i],rax
13FE81C26 cmp qword ptr [i],20000000h
13FE81C2F jae testSwitch+0C5h (13FE81CC5h)
13FE81C35 xor edx,edx
13FE81C37 mov rax,qword ptr [counter (13FE835D0h)]
13FE81C3E mov ecx,4
13FE81C43 div rax,rcx
13FE81C46 mov rax,rdx
13FE81C49 inc rax
13FE81C4C mov qword ptr [rsp+30h],rax
13FE81C51 cmp qword ptr [rsp+30h],1
13FE81C57 je testSwitch+73h (13FE81C73h)
13FE81C59 cmp qword ptr [rsp+30h],2
13FE81C5F je testSwitch+87h (13FE81C87h)
13FE81C61 cmp qword ptr [rsp+30h],3
13FE81C67 je testSwitch+9Bh (13FE81C9Bh)
13FE81C69 cmp qword ptr [rsp+30h],4
13FE81C6F je testSwitch+0AFh (13FE81CAFh)
13FE81C71 jmp testSwitch+0C0h (13FE81CC0h)
13FE81C73 mov rax,qword ptr [counter (13FE835D0h)]
13FE81C7A add rax,4
13FE81C7E mov qword ptr [counter (13FE835D0h)],rax
13FE81C85 jmp testSwitch+0C0h (13FE81CC0h)
13FE81C87 mov rax,qword ptr [counter (13FE835D0h)]
13FE81C8E add rax,3
13FE81C92 mov qword ptr [counter (13FE835D0h)],rax
13FE81C99 jmp testSwitch+0C0h (13FE81CC0h)
13FE81C9B mov rax,qword ptr [counter (13FE835D0h)]
13FE81CA2 add rax,2
13FE81CA6 mov qword ptr [counter (13FE835D0h)],rax
13FE81CAD jmp testSwitch+0C0h (13FE81CC0h)
13FE81CAF mov rax,qword ptr [counter (13FE835D0h)]
13FE81CB6 inc rax
13FE81CB9 mov qword ptr [counter (13FE835D0h)],rax
13FE81CC0 jmp testSwitch+19h (13FE81C19h)
13FE81CC5 call qword ptr [__imp_clock (13FE81128h)]
13FE81CCB sub eax,dword ptr [start]
13FE81CCF imul eax,eax,3E8h
13FE81CD5 cdq
13FE81CD6 mov ecx,3E8h
13FE81CDB idiv eax,ecx
13FE81CDD cdqe
13FE81CDF add rsp,48h
13FE81CE3 ret
Update:
Interesting results here. Not sure why one is faster and one is slower, though.
There are several optimizations a compiler can make on a switch. I don't think the oft-mentioned "jump table" is a very useful one, though, as it only works when the input can be bounded in some way.
C Pseudocode for a "jump table" would be something like this -- note that the compiler in practice would need to insert some form of if test around the table to ensure that the input was valid in the table. Note also that it only works in the specific case that the input is a run of consecutive numbers.
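A minimal sketch of the idea (the do_case_N handlers, the table name, and the bounds check are illustrative assumptions, not actual compiler output; a real compiler builds a table of code addresses and jumps through it, but an array of function pointers has the same shape):

    /* Hypothetical handlers, one per consecutive case value. */
    static void do_case_0(void) { /* ... */ }
    static void do_case_1(void) { /* ... */ }
    static void do_case_2(void) { /* ... */ }
    static void do_case_3(void) { /* ... */ }

    static void dispatch(unsigned value)
    {
        /* The "jump table": one entry per consecutive case value. */
        static void (*const jump_table[4])(void) = {
            do_case_0, do_case_1, do_case_2, do_case_3
        };

        /* The bounds check the compiler must insert before indexing. */
        if (value < 4)
            jump_table[value]();   /* one indirect transfer, no per-case comparisons */
        /* else: default case */
    }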
If the number of branches in a switch is extremely large, a compiler can do things like using a binary search over the values of the switch, which (in my mind) is a much more useful optimization: it significantly increases performance in some scenarios, it is as general as a switch is, and it does not increase the size of the generated code. But to see it, your test code would need a LOT more branches.
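A rough, hand-written illustration of that lowering (the case values and the handle_N helpers are made up; a real compiler performs this transformation on its intermediate representation):

    static void handle_1(void)   {}
    static void handle_5(void)   {}
    static void handle_10(void)  {}
    static void handle_20(void)  {}
    static void handle_50(void)  {}
    static void handle_100(void) {}

    /* switch (x) { case 1: ... case 5: ... case 10: ... case 20: ...
       case 50: ... case 100: ... } lowered to a binary search over the
       sorted case values instead of a linear chain of comparisons. */
    static void dispatch_sparse(int x)
    {
        if (x < 20) {
            if (x < 5) {
                if (x == 1)        handle_1();
            } else {
                if (x == 5)        handle_5();
                else if (x == 10)  handle_10();
            }
        } else {
            if (x < 50) {
                if (x == 20)       handle_20();
            } else {
                if (x == 50)       handle_50();
                else if (x == 100) handle_100();
            }
        }
    }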
To answer your specific questions:
Clang generates one that looks like this:
test_switch(char): # @test_switch(char)
movl %edi, %eax
cmpl $19, %edi
jbe .LBB0_1
retq
.LBB0_1:
jmpq *.LJTI0_0(,%rax,8)
jmp void call<0u>() # TAILCALL
jmp void call<1u>() # TAILCALL
jmp void call<2u>() # TAILCALL
jmp void call<3u>() # TAILCALL
jmp void call<4u>() # TAILCALL
jmp void call<5u>() # TAILCALL
jmp void call<6u>() # TAILCALL
jmp void call<7u>() # TAILCALL
jmp void call<8u>() # TAILCALL
jmp void call<9u>() # TAILCALL
jmp void call<10u>() # TAILCALL
jmp void call<11u>() # TAILCALL
jmp void call<12u>() # TAILCALL
jmp void call<13u>() # TAILCALL
jmp void call<14u>() # TAILCALL
jmp void call<15u>() # TAILCALL
jmp void call<16u>() # TAILCALL
jmp void call<17u>() # TAILCALL
jmp void call<18u>() # TAILCALL
jmp void call<19u>() # TAILCALL
.LJTI0_0:
.quad .LBB0_2
.quad .LBB0_3
.quad .LBB0_4
.quad .LBB0_5
.quad .LBB0_6
.quad .LBB0_7
.quad .LBB0_8
.quad .LBB0_9
.quad .LBB0_10
.quad .LBB0_11
.quad .LBB0_12
.quad .LBB0_13
.quad .LBB0_14
.quad .LBB0_15
.quad .LBB0_16
.quad .LBB0_17
.quad .LBB0_18
.quad .LBB0_19
.quad .LBB0_20
.quad .LBB0_21
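Judging from the symbol names in that listing (test_switch(char) and the tail calls to call<0u>() through call<19u>()), the source was roughly a switch over twenty consecutive values. A reconstruction along these lines (not the exact original) produces the same shape of code:

    template <unsigned N> void call();   // defined elsewhere, so the calls stay visible

    void test_switch(char x)
    {
        switch (x) {
            case 0:  call<0>();  break;
            case 1:  call<1>();  break;
            case 2:  call<2>();  break;
            case 3:  call<3>();  break;
            case 4:  call<4>();  break;
            case 5:  call<5>();  break;
            case 6:  call<6>();  break;
            case 7:  call<7>();  break;
            case 8:  call<8>();  break;
            case 9:  call<9>();  break;
            case 10: call<10>(); break;
            case 11: call<11>(); break;
            case 12: call<12>(); break;
            case 13: call<13>(); break;
            case 14: call<14>(); break;
            case 15: call<15>(); break;
            case 16: call<16>(); break;
            case 17: call<17>(); break;
            case 18: call<18>(); break;
            case 19: call<19>(); break;
        }
    }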
I can say that it is not using a jump table -- 4 comparison instructions are clearly visible:
13FE81C51 cmp qword ptr [rsp+30h],1
13FE81C57 je testSwitch+73h (13FE81C73h)
13FE81C59 cmp qword ptr [rsp+30h],2
13FE81C5F je testSwitch+87h (13FE81C87h)
13FE81C61 cmp qword ptr [rsp+30h],3
13FE81C67 je testSwitch+9Bh (13FE81C9Bh)
13FE81C69 cmp qword ptr [rsp+30h],4
13FE81C6F je testSwitch+0AFh (13FE81CAFh)
A jump-table-based solution does not use comparisons at all.
EDIT 2014: There has been some discussion elsewhere from people familiar with the LLVM optimizer saying that the jump table optimization can be important in many scenarios; e.g. when there is an enumeration with many values and many cases on values of that enumeration. That said, I stand by what I said above in 2011 -- too often I see people thinking "if I make it a switch, it'll take the same time no matter how many cases I have" -- and that's completely false. Even with a jump table you pay the cost of the indirect jump, and you pay for a table entry for each case; and memory bandwidth is a Big Deal on modern hardware.
Write code for readability. Any compiler worth its salt is going to see an if / else if ladder and transform it into equivalent switch or vice versa if it would be faster to do so.
To your questions:
1. What would a basic jump table look like, in x86 or x64?
A jump table is a block of memory that holds pointers to labels, laid out like an array. The following example shows what a jump table looks like in a memory dump:
00B14538 D8 09 AB 00 D8 09 AB 00 D8 09 AB 00 D8 09 AB 00 Ø.«.Ø.«.Ø.«.Ø.«.
00B14548 D8 09 AB 00 D8 09 AB 00 D8 09 AB 00 00 00 00 00 Ø.«.Ø.«.Ø.«.....
00B14558 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00B14568 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
Here 00B14538 is the address of the jump table, and a value like D8 09 AB 00 is a label pointer stored in little-endian order.
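A small sketch of how those bytes are used (the index value and the decoding code are illustrative; the entries in this dump are 32-bit little-endian addresses):

    #include <stdint.h>
    #include <stdio.h>

    int main()
    {
        /* Two entries copied from the dump above: D8 09 AB 00 decodes to
           the code address 0x00AB09D8. */
        const uint8_t table[] = { 0xD8, 0x09, 0xAB, 0x00,
                                  0xD8, 0x09, 0xAB, 0x00 };

        unsigned index = 1;                          /* the switch value selects the slot */
        const uint8_t *entry = table + index * 4;    /* 4 bytes per 32-bit pointer */
        uint32_t target = (uint32_t)entry[0]
                        | ((uint32_t)entry[1] << 8)
                        | ((uint32_t)entry[2] << 16)
                        | ((uint32_t)entry[3] << 24);

        printf("entry %u -> 0x%08X\n", index, (unsigned)target);   /* 0x00AB09D8 */
        return 0;
    }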
2. Is this code using a jump table? No, not in this case.
3. Why is there no performance difference in this example? Because the generated instructions look essentially the same in both cases; there is no jump table.
4. Is there any situation in which there is a significant performance difference? If you have a very long sequence of if checks, a jump table can reduce that cost, but it comes at the price of extra memory.
Moral: the compiler is smart enough to handle such cases :)
The compiler is free to compile a switch statement into code that is equivalent to an if statement, or to create a jump table. It will likely choose one over the other based on what will execute fastest or generate the smallest code, depending in part on what you have specified in your compiler options -- so in the worst case it will be the same speed as an if statement.
I would trust the compiler to do the best choice and focus on what makes the code most readable.
If the number of cases becomes very large, a jump table will be much faster than a series of ifs. However, if the gaps between the case values are very large, the jump table itself can become large, and the compiler may choose not to generate one.
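For instance (a made-up sketch), with case values this sparse a dense table indexed by the value would need about a million entries, so a compiler will typically fall back to comparisons or a binary search:

    // Sparse case values: a dense jump table covering 1..1000000 would waste
    // roughly a million entries for three real cases, so most compilers emit
    // compare-and-branch code (or a binary search) here instead of a table.
    int classify(long n)
    {
        switch (n) {
            case 1:       return 0;
            case 1000:    return 1;
            case 1000000: return 2;
            default:      return -1;
        }
    }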