Do you (really) write exception safe code?
Exception handling (EH) seems to be the current standard, and by searching the web I cannot find any novel ideas or methods that try to improve or replace it (well, some variations exist, but nothing novel).
Though most people seem to ignore it or just accept it, EH has some huge drawbacks: exceptions are invisible in the code and they create many, many possible exit points. Joel on Software wrote an article about it. The comparison to goto fits perfectly; it made me think again about EH.
I try to avoid EH and just use return values, callbacks or whatever fits the purpose. But when you have to write reliable code, you just can't ignore EH these days: it starts with new, which may throw an exception instead of just returning 0 (like in the old days). That makes just about any line of C++ code vulnerable to an exception. And then more places in the C++ foundation code throw exceptions... the standard library does it, and so on.
This feels like walking on shaky ground. So now we are forced to care about exceptions!
But it's hard, really hard. You have to learn to write exception-safe code, and even if you have some experience with it, you still have to double-check every single line of code to be safe! Or you start to put try/catch blocks everywhere, which clutters the code until it reaches a state of unreadability.
EH replaced the old, clean, deterministic approach (return values...), which had only a few, but understandable and easily solvable, drawbacks, with an approach that creates many possible exit points in your code. And if you start writing code that catches exceptions (which you are forced to do at some point), it even creates a multitude of paths through your code (code in the catch blocks; think about a server program where you need logging facilities other than std::cerr...). EH has advantages, but that's not the point.
My actual questions:
Do you really write exception safe code?
Are you sure your last "production ready" code is exception safe?
Can you even be sure, that it is?
Do you know and/or actually use alternatives that work?
Your question makes the assertion that "writing exception-safe code is very hard". I will answer your questions first, and then answer the hidden question behind them.
Answering questions
Do you really write exception safe code?
Of course I do.
This is the reason Java lost a lot of its appeal to me as a C++ programmer (lack of RAII semantics), but I am digressing: This is a C++ question.
It is in fact necessary when you need to work with STL or Boost code. For example, C++ threads (boost::thread or std::thread) will throw an exception to exit gracefully.
Are you sure your last "production ready" code is exception safe?
Can you even be sure, that it is?
Writing exception-safe code is like writing bug-free code.
You can't be 100% sure your code is exception safe. But then, you strive for it, using well known patterns, and avoiding well known anti-patterns.
Do you know and/or actually use alternatives that work?
There are no viable alternatives in C++ (i.e. you would need to revert to C and avoid C++ libraries, as well as external surprises like Windows SEH).
Writing exception safe code
To write exception-safe code, you must first know what level of exception safety each instruction you write offers.
For example, a new can throw an exception, but assigning a built-in (e.g. an int, or a pointer) won't fail. A swap will never fail (don't ever write a throwing swap), a std::list::push_back can throw...
Exception guarantee
The first thing to understand is that you must be able to evaluate the exception guarantee offered by all of your functions:
none: your code offers no guarantee; on an exception, resources can leak and objects can be left in invalid states. This should never happen.
basic: on an exception, nothing leaks and every object is still in a consistent, usable state (though possibly a modified one).
strong: on an exception, the operation has no effect at all, as if it had never been called (commit-or-rollback semantics).
nothrow/nofail: the operation cannot fail and will never throw.
Example of code
The following code seems like correct C++, but in truth, offers the "none" guarantee, and thus, it is not correct:
void doSomething(T & t)
{
    if(std::numeric_limits<int>::max() > t.integer)  // 1. nothrow/nofail
        t.integer += 1 ;                              // 1'. nothrow/nofail
    X * x = new X() ;                  // 2. basic : can throw with new and X constructor
    t.list.push_back(x) ;              // 3. strong : can throw
    x->doSomethingThatCanThrow() ;     // 4. basic : can throw
}
I write all my code with this kind of analysis in mind.
The lowest guarantee offered is basic, but then, the ordering of each instruction makes the whole function "none", because if 3. throws, x will leak.
The first thing to do would be to make the function "basic", that is, putting x in a smart pointer until it is safely owned by the list:
void doSomething(T & t)
{
    if(std::numeric_limits<int>::max() > t.integer)  // 1. nothrow/nofail
        t.integer += 1 ;                              // 1'. nothrow/nofail
    std::auto_ptr<X> x(new X()) ;      // 2. basic : can throw with new and X constructor
    X * px = x.get() ;                 // 2'. nothrow/nofail
    t.list.push_back(px) ;             // 3. strong : can throw
    x.release() ;                      // 3'. nothrow/nofail
    px->doSomethingThatCanThrow() ;    // 4. basic : can throw
}
Now, our code offers a "basic" guarantee. Nothing will leak, and all objects will be in a correct state. But we could offer more, that is, the strong guarantee. This is where it can become costly, and this is why not all C++ code is strong. Let's try it:
void doSomething(T & t)
{
    // we create "x"
    std::auto_ptr<X> x(new X()) ;      // 1. basic : can throw with new and X constructor
    X * px = x.get() ;                 // 2. nothrow/nofail
    px->doSomethingThatCanThrow() ;    // 3. basic : can throw
    // we copy the original container to avoid changing it
    T t2(t) ;                          // 4. strong : can throw with T copy-constructor
    // we put "x" in the copied container
    t2.list.push_back(px) ;            // 5. strong : can throw
    x.release() ;                      // 6. nothrow/nofail
    if(std::numeric_limits<int>::max() > t2.integer)  // 7. nothrow/nofail
        t2.integer += 1 ;                              // 7'. nothrow/nofail
    // we swap both containers
    t.swap(t2) ;                       // 8. nothrow/nofail
}
We re-ordered the operations, first creating and setting X to its right value. If any operation fails, then t is not modified, so operations 1 to 3 can be considered "strong": if something throws, t is not modified, and X will not leak because it's owned by the smart pointer.
Then, we create a copy t2 of t, and work on this copy from operations 4 to 7. If something throws, t2 is modified, but t is still the original. We still offer the strong guarantee.
Then, we swap t and t2. Swap operations should be non-throwing in C++, so let's hope the swap you wrote for T is nothrow (if it isn't, rewrite it so it is nothrow).
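As an illustration, here is a minimal sketch of what such a non-throwing member swap for T could look like, assuming T holds only the integer and the list of X* used in this example (the real T is not shown in the original):
#include <list>
#include <utility>

struct X ;                               // forward declaration; X is assumed from the example

struct T
{
    int integer ;
    std::list<X *> list ;

    void swap(T & other)                 // nothrow: swaps only a built-in and the list internals
    {
        std::swap(integer, other.integer) ;  // nothrow: swapping ints cannot fail
        list.swap(other.list) ;              // nothrow: std::list::swap does not throw
    }
} ;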
So, if we reach the end of the function, everything succeeded (no need for a return type) and t has its expected value. If it fails, then t still has its original value.
Now, offering the strong guarantee can be quite costly, so don't strive to offer it for all your code; but if you can do it at no cost (and C++ inlining and other optimizations can make all the code above cost-free), then do it. The function's user will thank you for it.
Conclusion
It takes some practice to write exception-safe code. You'll need to evaluate the guarantee offered by each instruction you use, and then evaluate the guarantee offered by a sequence of instructions.
Of course, the C++ compiler won't enforce the guarantee (in my code, I document the guarantee in a @warning doxygen tag), which is kind of sad, but it should not stop you from trying to write exception-safe code.
Normal failure vs. bug
How can a programmer guarantee that a nofail function will always succeed? After all, the function could have a bug.
This is true. The exception guarantees are supposed to be offered by bug-free code. But then, in any language, calling a function supposes the function is bug-free. No sane code protects itself against the possibility of it having a bug. Write code the best you can, and then, offer the guarantee with the supposition it is bug free. And if there is a bug, correct it.
Exceptions are for exceptional processing failure, not for code bugs.
Last words
Now, the question is "Is this worth it ?".
Of course it is. Having a "nothrow/nofail" function, knowing that the function won't fail, is a great boon. The same can be said for a "strong" function, which lets you write code with transactional semantics, like databases with commit/rollback features: the commit is the normal execution of the code, and a thrown exception is the rollback.
Then, "basic" is the very least guarantee you should offer. C++ is a very strong language there, with its scopes enabling you to avoid any resource leaks (something a garbage collector would find difficult to offer for database handles, connections or file handles).
So, as far as I see it, it is worth it.
Edit 2010-01-29: About non-throwing swap
nobar made a comment that, I believe, is quite relevant, because it is part of "how do you write exception safe code": it concerns swap() functions. It should be noted, however, that std::swap() can fail based on the operations that it uses internally: the default std::swap will make copies and assignments, which, for some objects, can throw. Thus, the default swap could throw, whether used for your classes or even for STL classes. As far as the C++ standard is concerned, the swap operations for vector, deque, and list won't throw, whereas the one for map could if the comparison functor can throw on copy construction (see The C++ Programming Language, Special Edition, appendix E, E.4.3 Swap).
Looking at the Visual C++ 2008 implementation of vector's swap, it won't throw if the two vectors have the same allocator (i.e. the normal case), but it will make copies if they have different allocators. And thus, I assume it could throw in that last case.
So, the original text still holds: Don't ever write a throwing swap, but nobar's comment must be remembered: Be sure the objects you're swapping have a non-throwing swap.
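One common way to make sure of that is to give the class a member swap and a free swap overload in the same namespace, so that unqualified calls to swap(a, b) find the non-throwing version through argument-dependent lookup instead of falling back to the copying default std::swap. A minimal sketch (the Widget class and its members are hypothetical):
#include <string>
#include <vector>

namespace lib
{
    class Widget
    {
    public :
        void swap(Widget & other)        // nothrow: swaps only internals
        {
            name_.swap(other.name_) ;    // std::string::swap does not throw
            data_.swap(other.data_) ;    // std::vector::swap does not throw
        }
    private :
        std::string name_ ;
        std::vector<int> data_ ;
    } ;

    inline void swap(Widget & left, Widget & right)   // picked up by argument-dependent lookup
    {
        left.swap(right) ;
    }
}
Generic code written with the usual idiom (using std::swap; swap(a, b);) will then call this non-throwing overload instead of making copies.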
Edit 2011-11-06: Interesting article
Dave Abrahams, who gave us the basic/strong/nothrow guarantees, described in an article his experience about making the STL exception safe:
http://www.boost.org/community/exception_safety.html
Look at the 7th point (Automated testing for exception-safety), where he relies on automated unit testing to make sure every case is tested. I guess this part is an excellent answer to the question author's "Can you even be sure, that it is?".
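The core idea can be sketched roughly as follows: a helper type whose copy constructor is made to fail on the n-th copy, driven by a loop that injects a failure at every possible point and then checks the invariants. This is only my rough illustration of the technique, not Abrahams' actual test harness; the names are made up, and the tested operation is std::vector::push_back, which must have no effect if it throws while inserting a single element at the end:
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <vector>

// Helper whose copy constructor throws on the n-th copy, so each exception
// path of the code under test can be exercised in turn.
struct ThrowingCopy
{
    static int countdown ;               // copies allowed before the injected failure
    int value ;

    explicit ThrowingCopy(int v) : value(v) {}
    ThrowingCopy(const ThrowingCopy & other) : value(other.value)
    {
        if(--countdown < 0)
            throw std::runtime_error("injected failure") ;
    }
} ;
int ThrowingCopy::countdown = 0 ;

void testPushBackGuarantee()
{
    for(int failAt = 0 ; failAt < 20 ; ++failAt)
    {
        ThrowingCopy::countdown = 1000000 ;           // build the initial state without failures
        std::vector<ThrowingCopy> v ;
        for(int i = 0 ; i < 5 ; ++i)
            v.push_back(ThrowingCopy(i)) ;
        const std::size_t sizeBefore = v.size() ;

        ThrowingCopy::countdown = failAt ;            // inject a failure on the failAt-th copy
        try
        {
            v.push_back(ThrowingCopy(42)) ;
            assert(v.size() == sizeBefore + 1) ;      // no failure injected: one more element
        }
        catch(const std::runtime_error &)
        {
            assert(v.size() == sizeBefore) ;          // push_back threw: no effect (strong guarantee)
        }
    }
}

int main()
{
    testPushBackGuarantee() ;
}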
Edit 2013-05-31: Comment from dionadar
t.integer += 1; is, without the guarantee that overflow will not happen, NOT exception safe, and in fact may technically invoke UB! (Signed overflow is UB: C++11 5/4 "If during the evaluation of an expression, the result is not mathematically defined or not in the range of representable values for its type, the behavior is undefined.") Note that unsigned integers do not overflow, but do their computations in an equivalence class modulo 2^#bits.
Dionadar is referring to the following line, which indeed has undefined behaviour:
t.integer += 1 ; // 1. nothrow/nofail
The solution here is to verify whether the integer is already at its max value (using std::numeric_limits<int>::max()) before doing the addition.
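For reference, the corrected lines as they now appear in the examples above:
if(std::numeric_limits<int>::max() > t.integer)   // nothrow/nofail : no increment once at the max
    t.integer += 1 ;                               // nothrow/nofail : overflow can no longer happen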
My error would go in the "Normal failure vs. bug" section, that is, it is a bug. It doesn't invalidate the reasoning, and it does not mean exception-safe code is useless because it is impossible to attain. You can't protect yourself against the computer switching off, or compiler bugs, or even your own bugs, or other errors. You can't attain perfection, but you can try to get as near as possible.
I corrected the code with Dionadar's comment in mind.
Writing exception-safe code in C++ is not so much about using lots of try { } catch { } blocks. It's about documenting what kind of guarantees your code provides.
I recommend reading Herb Sutter's Guru Of The Week series, in particular installments 59, 60 and 61.
To summarize, there are three levels of exception safety you can provide:
The basic guarantee: no resources are leaked and all objects are left in a valid (though possibly changed) state.
The strong guarantee: the operation either completes successfully or has no effect at all.
The no-throw guarantee: the operation never throws an exception.
Personally, I discovered these articles quite late, so much of my C++ code is definitely not exception-safe.
Some of us have been using exceptions for over 20 years. PL/I has them, for example. The premise that they are a new and dangerous technology seems questionable to me.