The Linux kernel has, by design, always been a monolithic kernel. While others such as the GNU project and Apple bucked the tried-and-tested trend, Linux was developed from the beginning to be monolithic. Some 21-odd years on, that approach has proved very successful: the GNU project’s Hurd never took off as quickly as it should have, and further conceptual issues with micro- and hybrid-kernel designs were discovered as research continued.
Be that as it may, a monolithic kernel is not without its disadvantages. The benefit of incredible performance comes at the cost of a greater likelihood that a bug in a critical part of the kernel will bring the entire thing down. Even a graphics driver compiled into the kernel could lock the whole thing up if it were buggy (at least, that is what I have read; I am nowhere near the level of expertise that kernel developers have).
Therefore, the trade-off is quite simple: you get a stable, extremely fast kernel at the expense of having to be extra cautious about bugs. Or something like that.
For those of us who compile our kernels from source, the growing size of the kernel doesn’t really matter; we only build what we need into it. But that does not sidestep the issue that the kernel is still monolithic.
But what is so wrong with monolithic kernels? Is it all hype and hope about microkernels and exokernels with all their features? Are monolithic kernels shunned simply because they are built on an ancient kernel design concept? My answer is no, and no. While microkernels such as Mach proved troublesome in terms of performance, exokernels such as MIT’s XOK have been reported to yield speed improvements of up to eightfold. That’s right, an 8-times performance increase over UNIX and UNIX-like operating systems, according to their reports. As for monolithic kernels being shunned simply because they resemble an old technology, that is simply false. The real problems are the growing code complexity as time goes on, with even the creator of Linux, Linus Torvalds, calling Linux bloated. More fundamentally, though, the issue lies in the central notion of the Linux kernel: it acts as a single point of failure.
Now, that is a problem. Almost anything can cause a kernel ‘oops’, as it is called, and an oops can bring the entire system down.
There have been efforts to make Linux far more modular, and these efforts have largely succeeded. You can even compile graphics drivers and file systems as loadable kernel modules, so long as you know what you are doing.
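To make that concrete, here is a rough, hypothetical sketch of what a loadable kernel module looks like (it is not taken from my own setup, and the names are made up). The important point is that once loaded, this code runs in kernel mode, so a single bad pointer dereference in it would produce an oops rather than a harmless userspace crash.

```c
/* hello_mod.c -- hypothetical minimal loadable kernel module.
 * Build out-of-tree against your kernel headers, then load with
 * insmod/modprobe and unload with rmmod. */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

static int __init hello_mod_init(void)
{
	pr_info("hello_mod: loaded into the running kernel\n");

	/* Everything here executes in kernel mode. Uncommenting the next
	 * two lines would dereference a NULL pointer and trigger an oops,
	 * illustrating the single-point-of-failure problem. */
	/* int *p = NULL; */
	/* *p = 42;       */

	return 0;
}

static void __exit hello_mod_exit(void)
{
	pr_info("hello_mod: unloaded\n");
}

module_init(hello_mod_init);
module_exit(hello_mod_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example module");
```

Loading it prints the message to the kernel log (visible via dmesg), and unloading it runs the exit function; real drivers and file systems use the same mechanism, just with far more code behind the init and exit paths.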
I recently gave kernel modules another go after being inspired by the conceptual benefits of modular kernel design. Since GNU Hurd, or any other microkernel, isn’t yet ready for prime time, I decided to wade through my kernel config and modularise whatever could be done simply, to reap the perceived benefits of modularity. As it turns out, I only modularised radeon (my graphics driver) and my networking drivers. Of course, I then added these to /etc/conf.d/modules and rebooted.
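For anyone wondering what that involves in practice, the gist is flipping the relevant options from built-in (y) to module (m) in the kernel config, then listing the modules for autoloading. A rough sketch follows; CONFIG_DRM_RADEON is the real option for radeon, but r8169 is purely a hypothetical stand-in for whatever your networking hardware actually needs, and the exact /etc/conf.d/modules syntax can vary between OpenRC versions.

```
# .config fragment: build these as modules (=m) rather than built-in (=y).
# CONFIG_R8169 is only an example; pick the options for your own hardware.
CONFIG_DRM_RADEON=m
CONFIG_R8169=m

# /etc/conf.d/modules (OpenRC): load them automatically at boot.
modules="radeon r8169"
```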
The result? My kernel went from taking anywhere between 6 and 10 seconds to boot down to around 3.5 seconds. Now, that is a major improvement considering I only modularised three simple components of my kernel. What about real-world performance? I saw no difference: running Xonotic gave me the same frame rates I had expected to see, and networking behaved as usual.
Some will claim that modularising the kernel only complicates things. Others say it is a security issue. Some aren’t even aware that the kernel can be modularised at all. What about me? I’m not going to give any advice, because I believe people have the right to make their own decisions without being told or forced what to do (read: why I think the GPL sucks for social cohesion). But if it means anything to you, take my measurements above on board. It may well be something to consider for the future.