Is there an acceptable limit for memory leaks?
I've just started experimenting with SDL in C++, and I thought checking for memory leaks regularly may be a good habit to form early on.
With this in mind, I've been running my 'Hello world' programs through Valgrind to catch any leaks, and although I've removed everything except the most basic SDL_Init() and SDL_Quit() statements, Valgrind still reports 120 bytes lost and 77k still reachable.
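For reference, the stripped-down test is essentially the following (a minimal sketch, assuming SDL2 and its sdl2-config helper; SDL_INIT_VIDEO is just one plausible choice of subsystem):

    // Minimal sketch: nothing but SDL startup and shutdown.
    // Build:  g++ main.cpp $(sdl2-config --cflags --libs)
    // Run:    valgrind --leak-check=full ./a.out
    #include <SDL.h>

    int main(int argc, char* argv[]) {
        if (SDL_Init(SDL_INIT_VIDEO) != 0) {  // SDL_INIT_VIDEO is an assumption
            return 1;
        }
        SDL_Quit();  // Valgrind still reports "still reachable" bytes after this
        return 0;
    }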
My question is: Is there an acceptable limit for memory leaks, or should I strive to make all my code completely leak-free?
Be careful that Valgrind isn't picking up false positives in its measurements.
Many naive implementations of memory analyzers flag lost memory as a leak when it isn't really.
Maybe have a read of some of the papers in the external links section of the Wikipedia article on Purify. The documentation that comes with Purify describes several scenarios that produce false positives when detecting memory leaks, and then describes the techniques Purify uses to work around them.
BTW I'm not affiliated with IBM in any way. I've just used Purify extensively and will vouch for its effectiveness.
Edit: Here's an excellent introductory article covering memory monitoring. It's Purify-specific, but the discussion of the types of memory errors is very interesting.
HTH.
cheers,
Rob
You have to be careful with the definition of "memory leak". Something which is allocated once on first use, and freed on program exit, will sometimes be shown up by a leak-detector, because it started counting before that first use. But it's not a leak (although it may be bad design, since it may be some kind of global).
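For example, something like this hypothetical sketch triggers that kind of report:

    // Allocated once on first use, never freed: Memcheck lists this block
    // as "still reachable" at exit, but it is not a creeping leak.
    #include <cstdlib>

    int* lookup_table() {               // hypothetical helper
        static int* table = nullptr;
        if (table == nullptr) {
            table = static_cast<int*>(std::malloc(1024 * sizeof(int)));
        }
        return table;                   // reclaimed by the OS at process exit
    }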
To see whether a given chunk of code leaks, you might reasonably run it once, then clear the leak-detector, then run it again (this of course requires programmatic control of the leak detector). Things which "leak" once per run of the program usually don't matter. Things which "leak" every time they're executed usually do matter eventually.
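With Valgrind, that programmatic control is available through Memcheck's client requests in valgrind/memcheck.h. A sketch of the run-it-twice idea, where do_work() is a hypothetical stand-in for the code under test:

    // Requires the Valgrind development headers; the macros are no-ops
    // when the program runs outside Valgrind.
    #include <cstdlib>
    #include <valgrind/memcheck.h>

    // Hypothetical stand-in for the code under test: leaks 16 bytes per call.
    void do_work() {
        std::malloc(16);
    }

    int main() {
        do_work();                     // first run: one-time allocations happen
        VALGRIND_DO_LEAK_CHECK;        // baseline leak report
        do_work();                     // second run
        VALGRIND_DO_ADDED_LEAK_CHECK;  // report only leaks added since the baseline
        return 0;
    }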
I've rarely found it too difficult to hit zero on this metric, which is equivalent to observing creeping memory usage as opposed to lost blocks. I had one library where it got so fiddly, with caches and UI furniture and whatnot, that I just ran my test suite three times over, and ignored any "leaks" which didn't occur in multiples of three blocks. I still caught all or almost all the real leaks, and analysed the tricky reports once I'd got the low-hanging fruit out of the way. Of course the weaknesses of using the test suite for this purpose are (1) you can only use the parts of it that don't require a new process, and (2) most of the leaks you find are the fault of the test code, not the library code...
Living with memory leaks (and other careless issues) is, in my opinion, bad programming at best. At worst it makes software unusable.
You should avoid introducing them in the first place and run the tools you and others have mentioned to try to detect them.
Avoid sloppy programming - there are enough bad programmers out there already - the world doesn't need another one.
EDIT
I agree - many tools can provide false positives.