"Observable behaviour" and compiler freedom to eliminate/transform pieces c++ code
After reading this discussion I realized that I had almost totally misunderstood the matter :)
Since the description of the C++ abstract machine is not as rigorous as, for instance, the JVM specification, a precise answer may not be possible; in that case I would at least like informal clarification of the rules that a reasonable, "good" (non-malicious) implementation should follow.
The key concept of section 1.9 of the Standard addressing implementation freedom is the so-called as-if rule:
an implementation is free to disregard any requirement of this Standard as long as the result is as if the requirement had been obeyed, as far as can be determined from the observable behavior of the program.
The term "observable behavior", according to the standard (I cite n3092), means the following:
— Access to volatile objects are evaluated strictly according to the rules of the abstract machine.
— At program termination, all data written into files shall be identical to one of the possible results that execution of the program according to the abstract semantics would have produced.
— The input and output dynamics of interactive devices shall take place in such a fashion that prompting output is actually delivered before a program waits for input. What constitutes an interactive device is implementation-defined.
So, roughly speaking, the order and operands of volatile accesses and I/O operations should be preserved; the implementation may make arbitrary changes to the program that preserve these invariants (compared to some allowed behaviour of the abstract C++ machine).
Is it reasonable to expect a non-malicious implementation to interpret "I/O operations" broadly (for instance, treating any system call made from user code as such an operation)? (E.g. an RAII mutex lock/unlock would not be thrown away by the compiler even though the RAII wrapper contains no volatiles.)
How deeply should this "behavioural observation" descend from the level of the user-written C++ program into library/system calls? The question is, of course, only about library calls that are not intended to perform I/O or volatile access from the user's viewpoint (e.g. the new/delete operations) but that may (and usually do) access volatiles or perform I/O in the library/system implementation. Should the compiler treat such calls from the user's viewpoint (and consider their side effects unobservable), or from the "library" viewpoint (and consider the side effects observable)?
If I need to prevent some code from being eliminated by the compiler, is it good practice to skip all the questions above and simply add (possibly fake) volatile accesses (wrap the actions in volatile member functions and call them on volatile instances of my own classes) in any case that seems suspicious?
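Roughly, what I have in mind is a sketch like the following (the class and function names are made up for illustration): the result of the work I want to keep is fed into a volatile member of my own class, so that the final store counts as an access to a volatile object.
// Hypothetical sketch: route a result into a volatile data member so the
// final store is an access to a volatile object (observable behaviour).
struct KeepAlive {
    volatile int sink = 0;
    void consume(int v) volatile { sink = v; }   // volatile member function
};

int expensive_computation() { return 6 * 7; }    // stand-in for real work

int main()
{
    volatile KeepAlive keep;                     // volatile instance of my own class
    keep.consume(expensive_computation());       // result ends in a volatile write
}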
Or am I totally wrong, and is the compiler disallowed from removing any C++ code except in cases explicitly mentioned by the standard (such as copy elision)?
The important bit is that the compiler must be able to prove that the code has no side effects before it can remove it (or determine which side effects it has and replace it with some equivalent piece of code). In general, and because of the separate compilation model, that means that the compiler is somewhat limited as to which library calls it can treat as having no observable behavior and therefore eliminate.
As to how deep this goes, it depends on the library implementation. In gcc, the C standard library uses compiler attributes to inform the compiler of potential side effects (or the absence of them). For example, strlen is tagged with the pure attribute, which allows the compiler to transform this code:
char p[] = "Hi theren";
for ( int i = 0; i < strlen(p); ++i ) std::cout << p[i];
into
char p[] = "Hi there";
const int __length = strlen(p);
for ( int i = 0; i < __length; ++i ) std::cout << p[i];
But without the pure attribute the compiler cannot know whether the function has side effects or not (unless it is inlining it, and gets to see inside the function), and cannot perform the above optimization.
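For illustration, a user-defined function can be tagged the same way (GCC-specific syntax; my_length is a made-up name): the pure attribute promises that the function has no side effects and depends only on its arguments and on memory, so repeated calls on unmodified memory may be folded into one.
#include <cstddef>

// GCC-specific: declare that my_length has no side effects, so the compiler
// may merge repeated calls when it can prove the pointed-to memory is unchanged.
__attribute__((pure)) std::size_t my_length(const char* s)
{
    std::size_t n = 0;
    while (s[n] != '\0')
        ++n;
    return n;
}

int main()
{
    char buf[] = "Hi there";
    std::size_t total = 0;
    for (std::size_t i = 0; i < my_length(buf); ++i)   // candidate for hoisting
        total += buf[i];
    return static_cast<int>(total % 7);
}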
That is, in general, the compiler will not remove code unless it can prove that it has no side effects, i.e. that it will not affect the outcome of the program. Note that this does not relate only to volatile and I/O, since any variable change might have observable behavior at a later time.
As to question 3, the compiler will only remove your code if the program behaves exactly as if the code were present (copy elision being an exception), so you should not even care whether the compiler removes it or not. Regarding question 4, the as-if rule stands: if the implicit refactoring made by the compiler yields the same result, then it is free to perform the change. Consider:
unsigned int fact = 1;
for ( unsigned int i = 1; i <= 5; ++i ) fact *= i;
The compiler can freely replace that code with:
unsigned int fact = 120; // 5! == 120
The loop is gone, but the behavior is the same: each loop iteration does not affect the outcome of the program, and the variable has the correct value at the end of the loop, i.e. if it is later used in some observable operation, the result will be as if the loop had been executed.
Don't worry too much about what observable behavior and the as-if rule mean; they basically mean that the compiler must yield the output that you programmed in your code, even though it is free to get to that outcome by a different path.
EDIT
@Konrad raises a really good point regarding the initial example I had with strlen: how can the compiler know that the strlen calls can be elided? And the answer is that in the original example it cannot, and thus it could not elide the calls. There was nothing telling the compiler that the pointer returned from the get_string() function did not refer to memory that was being modified elsewhere. I have corrected the example to use a local array.
In the modified example, the array is local, and the compiler can verify that there are no other pointers that refer to the same memory. strlen takes a const pointer and so promises not to modify the contained memory, and the function is pure so it promises not to modify any other state. The array is not modified inside the loop construct, and gathering all that information the compiler can determine that a single call to strlen suffices. Without the pure specifier, the compiler cannot know whether the result of strlen will differ between invocations, and has to call it each time.
The abstract machine defined by the standard will, given a specific input, produce one of a set of specific outputs. In general, all that is guaranteed is that, for that specific input, the compiled code will produce one of those possible outputs. The devil is in the details, however, and there are a number of points to keep in mind.
The most important of these is probably the fact that if the program has undefined behavior, the compiler can do absolutely anything. All bets are off. Compilers can and do use potential undefined behavior for optimizing: for example, if the code contains something like *p = (*q)++, the compiler can conclude that p and q aren't aliases to the same variable.
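A minimal sketch of that argument (the function name is made up): if p and q referred to the same int, evaluating the expression would modify that object twice without an intervening sequence point under the C++03 rules, which is undefined, so the compiler may assume they don't alias.
// Because *p = (*q)++ would be undefined if p and q aliased the same object
// (two unsequenced modifications), the compiler may assume the increment does
// not touch *p and return the value it just stored without reloading it.
int assign_and_bump(int* p, int* q)
{
    *p = (*q)++;
    return *p;
}

int main()
{
    int a = 0, b = 41;
    return assign_and_bump(&a, &b);   // returns 41; b becomes 42
}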
Unspecified behavior can have similar effects: the actual behavior may depend on the level of optimization. All that is required is that the actual output correspond to one of the possible outputs of the abstract machine.
With regard to volatile, the standard does say that access to volatile objects is observable behavior, but it leaves the meaning of "access" up to the implementation. In practice, you can't really count on volatile for much these days; actual accesses to volatile objects may appear to an outside observer in a different order than they occur in the program. (This is arguably in violation of the intent of the standard, at the very least. It is, however, the actual situation with most modern compilers, running on a modern architecture.)
Most implementations treat all system calls as "I/O". With regard to mutexes, of course: as far as C++03 is concerned, as soon as you start a second thread, you've got undefined behavior (from the C++ point of view; Posix or Windows do define it), and in C++11, synchronization primitives are part of the language and constrain the set of possible outputs. (The compiler can, of course, eliminate the synchronization if it can prove that it wasn't necessary.)
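For instance, in C++11 the RAII lock below compiles to calls into the standard library and ultimately the OS; those are synchronization operations (and, from the compiler's separate-compilation viewpoint, opaque calls), so it cannot simply discard them even though the wrapper contains no volatiles. The function names are just for illustration.
#include <mutex>

std::mutex m;
int shared_counter = 0;

void bump()
{
    // The constructor locks and the destructor unlocks; both are calls the
    // compiler cannot prove to be side-effect free, and in C++11 they are
    // synchronization operations that constrain the allowed outcomes.
    std::lock_guard<std::mutex> lock(m);
    ++shared_counter;
}

int main()
{
    bump();
    return shared_counter;
}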
The new and delete operators are special cases. They can be replaced by user-defined versions, and those user-defined versions may clearly have observable behavior. The compiler can only remove them if it has some means of knowing either that they haven't been replaced, or that the replacements have no observable behavior. In most systems, replacement is resolved at link time, after the compiler has finished its work, so no such changes are allowed.
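A sketch of why: a program may provide replacements like the following (printing is just one possible observable effect), and in general the compiler cannot see at compile time whether such replacements exist elsewhere in the program.
#include <cstdio>
#include <cstdlib>
#include <new>

// Replacement global allocation functions with observable behaviour (they
// print), so eliding a new/delete pair would change what the program outputs.
void* operator new(std::size_t n)
{
    if (n == 0) n = 1;
    std::printf("allocating %zu bytes\n", n);
    if (void* p = std::malloc(n))
        return p;
    throw std::bad_alloc{};
}

void operator delete(void* p) noexcept
{
    std::printf("deallocating\n");
    std::free(p);
}

int main()
{
    int* p = new int(42);
    delete p;
}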
With regard to your third question: I think you're looking at it from the wrong angle. Compilers don't "eliminate" code, and no particular statement in a program is bound to a particular block of code. Your program (the complete program) defines a particular semantics, and the compiler must produce an executable program having those semantics. The most obvious solution for the compiler writer is to take each statement separately and generate code for it, but that's the compiler writer's point of view, not yours. You put source code in and get an executable out; lots of statements don't result in any code, and even for those that do, there isn't necessarily a one-to-one relationship. In this sense, the idea of "preventing some code elimination" doesn't make sense: your program has a semantics, specified by the standard, and all you can ask for (and all that you should be interested in) is that the final executable have those semantics. (Your fourth point is similar: the compiler doesn't "remove" any code.)
I can't speak for what compilers should do, but here's what some compilers actually do:
#include <array>

int main()
{
    std::array<int, 5> a;
    for (size_t p = 0; p < 5; ++p)
        a[p] = 2 * p;
}
assembly output with gcc 4.5.2:
main:
xorl %eax, %eax
ret
Replacing the array with a vector shows that new/delete are not subject to elimination:
#include <vector>

int main()
{
    std::vector<int> a(5);
    for (size_t p = 0; p < 5; ++p)
        a[p] = 2 * p;
}
assembly output with gcc 4.5.2:
main:
subq $8, %rsp
movl $20, %edi
call _Znwm # operator new(unsigned long)
movl $0, (%rax)
movl $2, 4(%rax)
movq %rax, %rdi
movl $4, 8(%rax)
movl $6, 12(%rax)
movl $8, 16(%rax)
call _ZdlPv # operator delete(void*)
xorl %eax, %eax
addq $8, %rsp
ret
My best guess is that if the implementation of a function call is not available to the compiler, it has to treat it as possibly having observable side-effects.
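As a sketch of that (the helper name is made up, and the second file is only indicated in a comment): with only the declaration visible, the compiler must assume the call may perform I/O or touch volatile state, so neither the call nor the computation feeding it can be removed.
// other.cpp (separate translation unit, body not visible here):
//     void opaque_helper(int v) { /* might log v, write a file, ... */ }

void opaque_helper(int);   // only the declaration is visible in this file

int main()
{
    int x = 0;
    for (int i = 0; i < 5; ++i)
        x += i;            // cannot be discarded...
    opaque_helper(x);      // ...because this opaque call might observe x
}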