Should I make stack segment large or heap segment large?
I'm programming a design for a microprocessor with very limited memory, and I must use "a lot" of memory in different functions. I can't have a large stack segment, heap segment, and data segment all at once; I must choose which to make big and which to make small. I have about 32KB total,
and I use about 20KB for the text segment, which leaves 12KB for the rest. I also need a 4KB buffer (the SPI flash sector size) to pass to different functions. Where should I initialize that large buffer?
So my choices are:
1) If I declare the buffer at the beginning of the function, the stack needs to be made large:
void spiflash_read(...)
{
    u8 buffer[4096];                /* allocated on the stack */
    syscall_read_spi(buffer, ...);
}
2) If I allocate dynamically, the heap needs to be made large:
void spiflash_read(...)
{
    u8 *buffer = (u8 *)malloc(4096);  /* allocated on the heap */
    syscall_read_spi(buffer, ...);
    free(buffer);
}
3) If I allocate statically, the data segment needs to be made large; the huge downside is that the buffer can't be used outside the "SPI library":
static u8 buffer[4096];             /* allocated in the data section */
void spiflash_read(...)
{
    syscall_read_spi(buffer, ...);
}
My question is: which is the best way to implement this design? Can someone please explain the reasoning?
Static allocation is always run-time safe: if you have run out of memory, your linker will tell you at build time rather than the code crashing at run-time. However, unless the memory is required permanently during execution, it can be wasteful, since the allocated memory cannot be re-used for multiple purposes unless you explicitly code it that way.
Dynamic memory allocation is run-time checkable: if you run out of heap, malloc() returns a null pointer. It is, however, up to you to test the return value and to release memory as necessary. Dynamic memory blocks are typically 4- or 8-byte aligned and carry a heap-management overhead that makes them inefficient for very small allocations. Also, frequent allocation and deallocation of widely varying block sizes can lead to heap fragmentation and wasted memory; this can be disastrous for "always-on" applications.

If you never intend to release the memory, it will always be allocated, and you know a priori how much you need, then you may be better off with static allocation. If you have the library source, you could modify malloc() to halt immediately on allocation failure, to avoid having to check every allocation.

If the allocation sizes fall into a few common sizes, a fixed-block allocator rather than the standard malloc() might be preferable. It would be more deterministic, and you could implement usage monitoring to aid optimisation of block sizes and the number of blocks of each size.
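To make the fixed-block idea concrete, here is a minimal sketch of one: a static pool carved into equal blocks and threaded onto a free list, so allocation and release are O(1) and deterministic. All names, the block size, and the block count are illustrative, not part of any standard API.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative fixed-block allocator: one static pool, one block size.
 * The pool itself is statically allocated, so the linker accounts for it. */
#define BLOCK_SIZE  64
#define BLOCK_COUNT 16

static union block {
    union block *next;            /* free-list link while the block is free */
    uint8_t data[BLOCK_SIZE];     /* payload while the block is allocated */
} pool[BLOCK_COUNT];

static union block *free_list;

void pool_init(void)
{
    for (int i = 0; i < BLOCK_COUNT - 1; i++)
        pool[i].next = &pool[i + 1];
    pool[BLOCK_COUNT - 1].next = NULL;
    free_list = pool;
}

void *pool_alloc(void)
{
    union block *b = free_list;
    if (b != NULL)
        free_list = b->next;      /* pop the head; NULL means pool exhausted */
    return b;
}

void pool_free(void *p)
{
    union block *b = (union block *)p;
    b->next = free_list;          /* push the block back onto the free list */
    free_list = b;
}
```

Because the pool is a fixed array, exhaustion is still detectable at run time (pool_alloc() returns NULL), but there is no fragmentation and no per-block heap header.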
Stack allocation is the most efficient, since it automatically claims and returns memory as needed. However, it also has little or no run-time checking support. Typically, when a stack overflow occurs the code fails non-deterministically, and not necessarily anywhere near the root cause. Some linkers can generate stack-analysis output that calculates worst-case stack usage through the call tree; you should use this if you have that facility, but remember that in a multithreaded system there will be multiple stacks, and you need to check the worst case for the entry point of each. Also, the linker will not analyse interrupt stack usage, and your system may have a separate interrupt stack or share the system stack.
The way I would tackle this is certainly not to place large arrays or objects on the stack, but to follow this process:
Use the linker's stack analysis to calculate worst-case stack usage, allowing additional stack for ISRs if necessary, and allocate that much stack.
Allocate all objects required for the duration of execution statically.
If your library includes heap diagnostic functions, you might use them within your code to monitor heap usage to check how close you are to exhaustion.
The linker analysis "worst case" is likely to be larger than what you see in practice, since the worst-case paths may never be executed. You could pre-fill the stack with a specific byte (say 0xEE) or pattern, then after extensive testing and operation check the "high-tide" mark and optimise the stack size that way. Use this technique with caution; your testing may not cover all foreseeable circumstances.
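The pre-fill/high-tide technique above can be sketched as follows. Here a static array stands in for the real stack region so the example is self-contained; on a real target you would instead use the linker-script symbols that bound your stack (names vary per toolchain), and call the fill routine very early at boot.

```c
#include <stddef.h>
#include <stdint.h>

#define STACK_FILL 0xEE
#define STACK_SIZE 1024

/* Stand-in for the real stack region (assumed to grow downward, i.e.
 * toward index 0, so untouched headroom sits at the low indices). */
static uint8_t stack_region[STACK_SIZE];

/* At boot: paint the whole stack with the fill byte. */
void stack_prefill(void)
{
    for (size_t i = 0; i < STACK_SIZE; i++)
        stack_region[i] = STACK_FILL;
}

/* After a long soak test: count the run of still-untouched fill bytes.
 * That run is the margin ("headroom") you never used. */
size_t stack_headroom(void)
{
    size_t n = 0;
    while (n < STACK_SIZE && stack_region[n] == STACK_FILL)
        n++;
    return n;
}
```

The headroom figure only reflects the paths your testing actually exercised, which is exactly the caveat above.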
It depends on whether you need the buffer all the time. If 90% of your work is spent on that buffer, then I would put it in the data segment.
If it is only needed transiently within a given function, then put it on the stack. This is cheap and means you can reuse the space, but it does mean you must have a large stack.
Otherwise, put it on the heap.
Really, if you are this memory-constrained, you should do a detailed analysis of your memory consumption. Once you get this small, you cannot treat development like "normal" and just throw it at the OS/runtime. I have seen embedded dev shops that are not allowed to do any dynamic memory allocation; everything is pre-calculated and allocated statically, although they might have multi-purpose memory areas (a common I/O buffer, for example). Back in my COBOL days that was the only way you could work (youngsters today..., grumble, grumble....)
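A multi-purpose memory area like that can be expressed as a union over one statically allocated region, so the linker accounts for the space exactly once and the sharing is explicit in the code. The member names and sizes here are purely illustrative, and the callers must never use two members at the same time.

```c
#include <stdint.h>

/* Hypothetical shared work area: one static region reused by mutually
 * exclusive subsystems. sizeof(workbuf) is the largest member (4096). */
static union {
    uint8_t spi_sector[4096];   /* SPI flash sector buffer */
    uint8_t uart_frame[512];    /* UART receive frame */
} workbuf;

/* Accessor functions make the aliasing visible at the call sites. */
uint8_t *get_spi_buffer(void)  { return workbuf.spi_sector; }
uint8_t *get_uart_buffer(void) { return workbuf.uart_frame; }
```

This also answers the questioner's objection to option 3: a static buffer can be used outside the "SPI library" if you export an accessor for it, at the cost of having to enforce the no-overlap rule yourself.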
The traditional answer is that you should rig your runtime so your stack and your heap grow toward each other. This allows you to ignore which one needs to be "bigger" and just worry about what happens if you didn't allocate enough space TOTAL.