memory segments and physical RAM
The memory map of a process appears to be fragmented into segments (stack, heap, bss, data, and text).
In a modern OS supporting virtual memory, it is the address space of the process that is divided into these segments. In the general case, that address space is mapped onto physical RAM in an essentially arbitrary fashion (with some fixed granularity, typically 4K). Address space pages located next to each other do not have to be mapped to neighboring physical pages of RAM, and physical pages of RAM do not have to preserve the relative order of the process's address space pages. All of this means that there is no such separation into segments in RAM, and there can't possibly be.
To optimize memory access, an OS might (and typically will) try to map sequential pages of the process address space to sequential pages in RAM, but that's just an optimization; in the general case, the mapping is unpredictable. On top of that, RAM is shared by all processes in the system, with RAM pages belonging to different processes arbitrarily interleaved, which eliminates any possibility of having such "segments" in RAM. There's no process-specific ordering or segmentation in RAM. RAM is essentially just a cache for the virtual memory mechanism.
Again, every process works with its own virtual address space. This is where these segments can exist. The process has no direct access to RAM. The process doesn't even need to know that RAM exists.
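On Linux this is directly observable: /proc/self/pagemap exposes, for each virtual page of the calling process, the physical frame it currently maps to. The following is a minimal sketch, Linux-specific and assuming the pagemap layout documented by the kernel; note that since Linux 4.0 the frame number reads as 0 unless the process has CAP_SYS_ADMIN. Consecutive virtual pages will usually not land in consecutive physical frames.

```c
/* Sketch (Linux-specific): print the physical frame backing each of a few
 * consecutive virtual pages.  Without CAP_SYS_ADMIN the PFN field is 0. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    long pagesize = sysconf(_SC_PAGESIZE);
    size_t npages = 8;

    /* Allocate and touch a few pages so they are actually present in RAM. */
    unsigned char *buf = malloc(npages * (size_t)pagesize);
    memset(buf, 0xAA, npages * (size_t)pagesize);

    int fd = open("/proc/self/pagemap", O_RDONLY);
    if (fd < 0) { perror("open pagemap"); return 1; }

    for (size_t i = 0; i < npages; i++) {
        uintptr_t vaddr = (uintptr_t)buf + i * (uintptr_t)pagesize;
        uint64_t entry;
        /* One 64-bit entry per virtual page, indexed by virtual page number. */
        off_t offset = (off_t)(vaddr / (uintptr_t)pagesize) * (off_t)sizeof(entry);
        if (pread(fd, &entry, sizeof(entry), offset) != (ssize_t)sizeof(entry))
            break;
        int present = (int)((entry >> 63) & 1);          /* bit 63: page present */
        uint64_t pfn = entry & ((1ULL << 55) - 1);       /* bits 0-54: frame no. */
        printf("virtual 0x%lx -> %s physical frame %llu\n",
               (unsigned long)vaddr,
               present ? "present," : "not present,",
               (unsigned long long)pfn);
    }
    close(fd);
    free(buf);
    return 0;
}
```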
These segments are largely a convenience for the program loader and operating system (though they also provide a basis for coarse-grained protection; execution permission can be limited to text and writes can be prohibited to rodata).[1]
The physical memory address space might be fragmented but not for the sake of such application segments. For example, in a NUMA system it might be convenient for hardware to use specific bits to indicate which node owns a given physical address.
For a system using address translation, the OS can somewhat arbitrarily place the segments in physical memory. (With segmented translation, external fragmentation can be a problem; a contiguous range of physical memory addresses may not be available, requiring expensive moving of memory segments. With paged translation, external fragmentation is not possible. Segmented translation has the advantage of requiring less translation information: each segment needs only a base and a bound plus some metadata, whereas a paged memory section would typically span many more than two pages, each of which has its own base address and metadata.)
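To put rough numbers on that trade-off, here is a trivial sketch; the 1 MiB region and 4 KiB page size are purely illustrative assumptions, not tied to any particular architecture.

```c
/* Back-of-the-envelope comparison of translation metadata for one region. */
#include <stdio.h>

int main(void) {
    unsigned long region = 1UL << 20;   /* 1 MiB of address space (assumed)  */
    unsigned long page   = 4096;        /* typical page size (assumed)       */

    printf("segmented: 1 descriptor (base + bound + metadata)\n");
    printf("paged:     %lu page-table entries, each with a frame number "
           "and permission bits\n", region / page);
    return 0;
}
```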
Without address translation, placement of segments would necessarily be less arbitrary. Fortunately, most programs do not care about the specific addresses where segments are placed. (Single address space OSes are a notable exception, since all programs share one address space and placement must be unique system-wide.)
(Note that it can be convenient for sharable sections to be in fixed locations. For code this can be used to avoid indirection through a global offset table without requiring binary rewriting in the program loader/dynamic linker. This can also reduce address translation overhead.)
Application-level programming is generally sufficiently abstracted from such segmentation that its existence is not noticeable. However, pure abstractions are naturally unfriendly to intense optimization for physical resource use, including execution time.
In addition, a programming system may choose to use a more complex placement of data (without the application programmer needing to know the implementation details). For example, use of coroutines may encourage using a cactus/spaghetti stack where contiguity is not expected. Similarly, a garbage collecting runtime might provide additional divisions of the address space, not only for nurseries but also for separating leaf objects, which have no references to collectable memory, from non-leaf objects (reducing the overhead of mark/sweep). It is also not especially unusual to provide two stack segments, one for data whose address is not taken (or at least is fixed in size) and one for other data.
[1] One traditional layout of these segments (with a downward growing stack) in a flat virtual address space for Unix-like OSes places text at the lowest address, rodata immediately above that, initialized data immediately above that, zero-initialized data (bss) immediately above that, heap growing upward from the top of bss, and stack growing downward from the top of the application's portion of the virtual address space.
Having heap and stack growing toward each other allows arbitrary growth of each (for a single thread using that address space!). This placement also allows a program loader to simply copy the program file into memory starting at the lowest address, groups memory by permission, and can sometimes allow a single global pointer to address all of the global/static data range (rodata, data, and bss).
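A minimal C sketch of that layout: it prints the address of one object from each region. On a typical Unix-like system the addresses usually come out in roughly the order described above, though ASLR and linker specifics can perturb them, and casting a function pointer to void * is a common but non-standard idiom.

```c
/* Print one representative address from each traditional segment. */
#include <stdio.h>
#include <stdlib.h>

const int  ro   = 1;   /* rodata: initialized, read-only  */
int        init = 2;   /* data:   initialized, writable   */
int        zero;       /* bss:    zero-initialized        */

int main(void) {
    int  local = 3;                     /* stack */
    int *dyn   = malloc(sizeof *dyn);   /* heap  */

    printf("text   (main)  %p\n", (void *)main);
    printf("rodata (ro)    %p\n", (void *)&ro);
    printf("data   (init)  %p\n", (void *)&init);
    printf("bss    (zero)  %p\n", (void *)&zero);
    printf("heap   (dyn)   %p\n", (void *)dyn);
    printf("stack  (local) %p\n", (void *)&local);

    free(dyn);
    return 0;
}
```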
The memory map of a process appears fragmented into segments (stack, heap, bss, data, and text)
That's the basic mapping used by Unix; other operating systems use different schemes. Generally, though, they split the process memory space into separate segments for executing code, stack, data, and heap data.
I was wondering: are these segments just an abstraction for the processes' convenience, with the physical RAM being just a linear array of addresses, or is the physical RAM also fragmented into these segments?
Depends. Yes, these segments are created and managed by the OS for the benefit of the process. But physical memory can be arranged as linear addresses, or banked segments, or non-contiguous blocks of RAM. It's up to the OS to manage the total system memory space so that each process can access its own portion of it.
Virtual memory adds yet another layer of abstraction, so that what looks like linear memory locations are in fact mapped to separate pages of RAM, which could be anywhere in the physical address space.
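On Linux, that per-process view is directly visible: /proc/self/maps lists each region of the calling process's virtual address space along with its permissions and backing file. A minimal, Linux-specific sketch that simply prints it:

```c
/* Dump this process's own memory map; each line is one mapped region. */
#include <stdio.h>

int main(void) {
    FILE *f = fopen("/proc/self/maps", "r");
    if (!f) { perror("fopen"); return 1; }

    int c;
    while ((c = fgetc(f)) != EOF)
        putchar(c);

    fclose(f);
    return 0;
}
```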
Also, if the RAM is not fragmented and is just a linear array, then how does the OS provide the process with the abstraction of these segments?
The OS manages all of this by using virtual memory mapping hardware. Each process sees contiguous memory areas for its code, data, stack, and heap segments. But in reality, the OS maps the pages within each of these segments to physical pages of RAM. So two identical running processes will see the same virtual address space composed of contiguous memory segments, but the memory pages comprising these segments will be mapped to entirely different physical RAM pages.
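A minimal sketch of that point using fork(): parent and child print the same virtual address for a global variable, yet each sees its own value, because once either process writes to that page the OS backs the same virtual page with different physical pages (copy-on-write).

```c
/* Same virtual address in two processes, backed by different physical pages. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int value = 0;   /* lives in the data segment of each process's address space */

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {                              /* child */
        value = 111;
        printf("child : &value=%p value=%d\n", (void *)&value, value);
        return 0;
    }
    value = 222;                                 /* parent */
    waitpid(pid, NULL, 0);
    printf("parent: &value=%p value=%d\n", (void *)&value, value);
    return 0;
}
```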
But bear in mind that physical RAM may not actually be one contiguous block of memory, but may in fact be split across multiple non-adjacent blocks or memory banks. It is up to the OS to manage all of this in a way that is transparent to the processes.
Also, how would programming change if the memory map of a process appeared as just a linear array not divided into segments, with the MMU simply translating these virtual addresses into physical ones?
The MMU always operates that way, translating virtual memory addresses into physical memory addresses. The OS sets up and manages the mapping of each page of each segment for each process. Each time the process exceeds its stack allocation, for example, the OS traps the resulting page fault and adds another page to the process's stack segment, mapping the new virtual page to a physical page selected from available memory.
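That lazy, fault-driven mapping is observable from user space. Here is a Linux/POSIX sketch (using the heap rather than the stack, but the mechanism is the same): getrusage() reports minor page faults, and touching a large, freshly allocated region makes the count jump roughly once per page as the OS wires up physical pages on demand. Exact counts vary with the allocator and the run.

```c
/* Count minor page faults taken while first touching a fresh allocation. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>

static long minor_faults(void) {
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt;
}

int main(void) {
    size_t size = 16UL << 20;           /* 16 MiB (illustrative size)        */
    char *p = malloc(size);
    if (!p) return 1;

    long before = minor_faults();
    memset(p, 1, size);                 /* first touch of each page faults   */
    long after = minor_faults();

    printf("minor faults while touching %zu bytes: %ld\n", size, after - before);
    free(p);
    return 0;
}
```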
Virtual memory also allows the OS to swap out process pages temporarily to disk, so that the total amount of virtual memory occupied by all of the running processes can easily exceed the actual physical RAM of the system. Only the currently active, executing processes actually have their pages resident in real physical RAM.