Does initrd really reduce kernel image size in the case of bootpImage?
According to the Wikipedia article about initrd: "Many Linux distributions ship a single, generic kernel image - one that the distribution's developers intend will boot on as wide a variety of hardware as possible. The device drivers for this generic kernel image are included as loadable modules because statically compiling many drivers into one kernel causes the kernel image to be much larger, perhaps too large to boot on computers with limited memory. This then raises the problem of detecting and loading the modules necessary to mount the root file system at boot time, or for that matter, deducing where or what the root file system is. To avoid having to hardcode handling for so many special cases into the kernel, an initial boot stage with a temporary root file-system - now dubbed early user space - is used. This root file-system can contain user-space helpers which do the hardware detection, module loading and device discovery necessary to get the real root file-system mounted."
My question is: if we put the modules needed to mount the actual root file system in the initrd rather than in the kernel image itself in order to save space, what do we gain in the case of a bootpImage, where the kernel and the initrd are combined into a single image? The size of that combined image would actually increase by using an initrd.
Can someone clarify?
Define "the size of the kernel".
Yes, if you have a minimal kernel image plus an initrd full of hundreds of modules, the pair will probably take up more total storage space than the equivalent kernel with everything compiled in, what with all the module headers and such. However, once it has booted, determined what hardware it is on, loaded the few modules it needs and thrown all the rest away (which is the job of the init in the initrd), it will take up considerably less memory. The all-built-in kernel image, on the other hand, is still just as big in memory once booted as it is on disk, wasting space on all that unneeded driver code.
Storage is almost always considerably cheaper and more abundant than RAM, so optimising for storage space at the cost of reducing available memory once the system is running would generally be a bit silly. Even for network booting, sacrificing runtime capability to shrink the total image for the sake of booting slightly faster makes little sense. The few kinds of systems where such considerations might have any merit almost certainly wouldn't be using generic multiplatform kernels in the first place.
There are several aspects to size, and this may be confusing.
Run time size
tl;dr: Using an initrd with modules gives a generic image a minimal run-time memory footprint with current (3.17) Linux kernel code.
"My question is: if we put the modules needed to mount the actual root file system in the initrd rather than in the kernel image itself in order to save space, what do we gain in the case of a bootpImage, where the kernel and the initrd are combined into a single image? The size of that combined image would actually increase by using an initrd."
You are correct that the same amount of data will be transferred no matter which mechanism you choose. In fact, the initrd with module loading will be bigger than a fully statically linked kernel, and the boot time will be slower. Sounds bad.
A customized kernel, built specifically for the device with no extra hardware drivers and no module support, is always the best. The Debian handbook on kernel compilation gives two reasons why a user may want to build a custom kernel.
The second reason is often the most important: to minimize the amount of memory that the running kernel consumes. The initrd (or initramfs) is a binary disk image that is loaded as a RAM disk. It is all user-space code with the single task of probing the devices and loading the modules that provide the correct drivers for the system. After this job is done, it mounts the real boot device or the normal root file system. When this happens, the initrd image is discarded.
So the initrd does not consume run-time memory once the system is up. You get both a generic image and one that has a fairly minimal run-time footprint.
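To make this concrete, here is a minimal C sketch of what the /init program inside an initrd/initramfs does. The module path (/lib/modules/ahci.ko) and the root device (/dev/sda1) are illustrative assumptions, and real implementations (busybox, dracut, etc.) add hardware probing and error handling:

/* Minimal sketch of an initramfs /init: load a driver module,
 * mount the real root, switch to it, and exec the real init.
 * Paths and device names are assumptions for illustration. */
#include <fcntl.h>
#include <sys/mount.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    /* Load the driver needed for the real root device. */
    int fd = open("/lib/modules/ahci.ko", O_RDONLY);
    if (fd >= 0) {
        syscall(SYS_finit_module, fd, "", 0);  /* insmod equivalent */
        close(fd);
    }

    /* Mount the real root file system. */
    mount("/dev/sda1", "/newroot", "ext4", MS_RDONLY, NULL);

    /* Make the new root the current root; the kernel frees the
     * initramfs contents once nothing references them any more. */
    chdir("/newroot");
    mount(".", "/", NULL, MS_MOVE, NULL);
    chroot(".");
    chdir("/");

    /* Hand control over to the real init. */
    execl("/sbin/init", "init", (char *)NULL);
    return 1;
}

Once execl() succeeds, nothing in the initramfs is referenced any longer, which is why its memory is reclaimed and the running system pays no cost for the hundreds of modules it shipped with.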
I will say that the efforts made by distro people have on occasion created performance issues. ARM drivers were typically compiled for only one SoC: although the source supported an SoC family, only one member could be selected through build-time conditionals. In more recent kernels the ARM drivers always support the whole SoC family. The memory overhead of this is minimal, but using a function pointer for a low-level driver transfer routine can limit the bandwidth of the controller.
The cache-flush routines have an option for multi-cache support. The function pointers force the compiler to spill registers; if you compile for a specific cache type instead, the compiler can inline the functions, which often generates much better and smaller code. Most drivers do not have this type of infrastructure, but you will get better run-time behaviour if you compile a monolithic kernel tuned for your CPU, since several critical kernel functions will use inlined versions.
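To illustrate the inlining point, here is a hedged C sketch (not actual kernel code) contrasting an indirect call through a boot-time-selected ops table with a compile-time-known inline function; all names (cache_ops, v7_flush_range, etc.) are invented for this example:

/* Illustrative sketch of why a multi-cache kernel pays a cost:
 * a call through a function-pointer table cannot be inlined,
 * while a build for one known cache type can use static inline. */
#include <stddef.h>

/* Multi-cache build: ops are selected at boot, every call is indirect. */
struct cache_ops {
    void (*flush_range)(void *start, size_t len);
};
static struct cache_ops *cache;           /* set at boot for the SoC */

void flush_buffer_generic(void *buf, size_t len)
{
    cache->flush_range(buf, len);         /* indirect call, never inlined */
}

/* Single-cache build: the implementation is known at compile time. */
static inline void v7_flush_range(void *start, size_t len)
{
    /* ... cache maintenance instructions for this one CPU ... */
    (void)start; (void)len;
}

void flush_buffer_tuned(void *buf, size_t len)
{
    v7_flush_range(buf, len);             /* inlined: smaller, faster code */
}

The generic version must load the pointer, make an indirect branch, and honour the full calling convention on every flush; the tuned version lets the compiler optimize across the call boundary.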
Drivers will not usually be faster when compiled into the kernel. Many systems support hot-plug via USB, PCMCIA, SDIO, etc. These systems have a memory advantage with module loading as well.