Thread: SOS - Can a monolithic kernel be used in a micro-kernel style OS?

  1. #1
    Registered User
    Join Date
    Sep 2001
    Posts
    4,912

    SOS - Can a monolithic kernel be used in a micro-kernel style OS?

    (Oh - the title stands for Sean's Operating System - a name which I certainly never want to actually GIVE to this project)

    So as some of you know, I've been looking at making my own OS for a while now. I am aware this may be a very long-term project - I'm happy spending 4 years on it before I call it "done for the time being". The more I look at what I actually want to do, the more I like the idea of taking an existing kernel, stripping it down, and building it back up to where I want it. I'm really not that interested in how the memory manager and scheduler work - so long as they work. What I want this project to help me learn is how operating systems communicate with the rest of the world - so I'd rather spend my time redoing the device drivers and interface!

    When I was thinking about doing this from scratch, I really liked the microkernel concept. To me it seems safer and more stable - and the ability to add and remove functionality at run-time is something I'd really like to have. Since the only microkernel with a lot of documentation and a decent community for the x86 right now seems to be MINIX (I haven't heard good things about Mach - plus it's not open source - and the GNU version isn't ported to x86 yet), I'm willing to consider monolithic OSes now!

    So it basically comes down to Linux or BSD. Linux is winning for a variety of reasons. I like NetBSD's emphasis on portability - but uClinux doesn't need an MMU, and I've heard that the Linux kernel itself has been ported to more architectures - it just isn't taken advantage of as much in any single distribution.

    In any case, I'm going to go through the Linux From Scratch book to learn more about the process and the kernel - I may very well start over after going through it. But one thing I'd like to do is use run-time-loadable kernel modules for anything I can: have just a minimal kernel running, and then have the drivers and other processes run outside of that. Is there any huge drawback to that? I'm sure it would take some time to load all the modules, but I'm okay with that.

    And considering what I've said - does anyone have any recommendations on a better kernel to start with? The other advantage to Linux is the massive amount of resources available for it. Regardless of how good it is - software and support are so readily available.
    Last edited by sean; 11-19-2008 at 04:21 PM. Reason: Edited to make my point clearer

  2. #2
    l'Anziano DavidP
    Join Date
    Aug 2001
    Location
    Plano, Texas, United States
    Posts
    2,743
    Operating systems aren't my cup of tea, although I do think there are some interesting aspects to them.

    Being at BYU, have you taken (or are you taking) CS 345? The one thing I don't like about that class is that they focus so much on the scheduler and memory management when what I really wanted to know is exactly what you want to find out more about: the OS communicating with devices and hardware. I personally think that's more interesting.

    But anyways... sorry... no kernel recommendations here. Hopefully someone more knowledgeable than me will reply to your post.
    My Website

    "Circular logic is good because it is."

  3. #3
    Registered User
    Join Date
    Sep 2001
    Posts
    4,912
    Being off my mission a mere 4 months, I'm actually taking 142 from Professor Burton. Yes, I know that's a total joke, but try telling the department that! Several of the guys I work with have taken it - I talk to them regularly about it - really the only thing I'd be interested in from that class is inter-process messaging and making shells (maybe...)

  4. #4
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    I'm not sure what your actual goal is. If you want to play with the OS and just mess about with drivers, then take Linux - it's got SO many drivers that you could have a go at playing with. Or you can write your own "from scratch" drivers to replace existing ones.

    If, on the other hand, you really want a micro-/nano-kernel à la QNX or Symbian, then you are probably better off starting with something else - not quite sure what. Or maybe writing your own. You don't HAVE to use the MMU if you don't want to - but it probably helps with debugging those bad drivers that you will produce at some point (writing drivers is not easy, and no doubt sooner or later you will crash your system completely - having an MMU that at least prevents you from writing to completely random locations in memory is a good start towards detecting that you've got a bug there!). The MMU setup doesn't have to support all forms of memory management - perhaps just a fairly static physical-to-virtual mapping per process, with no swapping in/out from/to disk, etc.
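
    To give you an idea of how little "memory management" you'd actually need: here is a rough sketch of a completely static identity mapping on 32-bit x86, using 4 MB pages so you don't even need second-level page tables. Treat it as a sketch, not tested code - it assumes a CPU with PSE (Pentium or later) and that the kernel is still running at physical addresses when this is called:

        /* Static identity paging for 32-bit x86 - a sketch, not
         * production code. One page directory of 4 MB "large" pages
         * covers all 4 GB with no second-level page tables.
         */
        #include <stdint.h>

        #define PAGE_PRESENT 0x001
        #define PAGE_WRITE   0x002
        #define PAGE_LARGE   0x080   /* 4 MB page - needs CR4.PSE */

        /* The page directory must be 4 KB aligned. */
        static uint32_t page_dir[1024] __attribute__((aligned(4096)));

        void setup_flat_paging(void)
        {
            /* Entry i identity-maps the 4 MB starting at i * 4 MB. */
            for (uint32_t i = 0; i < 1024; i++)
                page_dir[i] = (i << 22) | PAGE_PRESENT | PAGE_WRITE | PAGE_LARGE;

            asm volatile(
                "mov %0, %%cr3\n\t"          /* point the MMU at it       */
                "mov %%cr4, %%eax\n\t"
                "or  $0x10, %%eax\n\t"       /* CR4.PSE: allow 4 MB pages */
                "mov %%eax, %%cr4\n\t"
                "mov %%cr0, %%eax\n\t"
                "or  $0x80000000, %%eax\n\t" /* CR0.PG: paging on         */
                "mov %%eax, %%cr0"
                : : "r"(page_dir) : "eax", "memory");
        }

    Per-process protection is then just a matter of giving each process its own directory and reloading CR3 on the context switch - no demand paging, no swap.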

    Edit: An extra challenge would be to actually write the kernel itself in C++, using only signals (messages) à la OSE.
    http://www.enea.com/EPiBrowser/Liter...e%20kernel.pdf
    Look at the box of "8 system calls".

    Unfortunately, I can't find a good API document on the web - I found some stale links to the API spec, but no current ones.
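
    From memory, though, the core of the signal interface looks roughly like this - and I stress "roughly", since I'm reconstructing the names and signatures without the spec in front of me:

        /* OSE-style signal passing, reconstructed from memory - treat
         * the exact names and signatures as approximate. The key
         * convention: every signal struct starts with its SIGSELECT
         * number, and all of them are folded into one union SIGNAL.
         */
        #include <stddef.h>

        typedef unsigned long SIGSELECT;
        typedef unsigned long PROCESS;      /* process id */

        struct ping_sig {
            SIGSELECT sig_no;               /* always the first member */
            int       sequence;
        };

        union SIGNAL {
            SIGSELECT       sig_no;
            struct ping_sig ping;
        };

        /* Approximately the core calls: */
        union SIGNAL *alloc(size_t size, SIGSELECT sig_no);
        void          send(union SIGNAL **sig, PROCESS dest);
        union SIGNAL *receive(const SIGSELECT *want);  /* want[0] = count */
        void          free_buf(union SIGNAL **sig);

    Note that send() takes the address of your pointer and NULLs it - once a signal is sent, it belongs to the receiver. That one convention kills a whole class of shared-ownership bugs.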

    --
    Mats
    Last edited by matsp; 11-19-2008 at 05:23 PM.
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  5. #5
    Registered User
    Join Date
    Sep 2001
    Posts
    4,912
    if, on the other hand, you really want a micro-/nano-kernel
    Size wasn't my motive for going for a microkernel - it was the design. Monolithic systems actually seem really counterintuitive to me - it's not my style. A small OS would be nice - but LFS claims they got an Apache server running on a 5 MB Linux system. That's alright! And with all the ports to embedded systems and gaming consoles, I'll be fine with whatever I want to get into... I think portability is a bigger issue than the size.

    take Linux - it's got SO many drivers that you could have a go at playing with. Or you can write your own "from scratch" drivers to replace existing ones.
    Definitely drivers are a huge advantage to Linux. Certainly I'd want to modify them - but not much - just enough so that I would REALLY understand what was going on.

    You don't HAVE to use the MMU if you don't want to
    That's good to know! I'm looking at the uClinux kernel. I'd like the architecture requirements to be as low as possible - and not having to completely switch out the kernel would be nice. Is there any other drawback to not using the MMU on a PC?

    I'm not sure what your actual goal is.
    Well, let me put it this way - and if this concept already exists under a different name, let me know! Let's call it a "protocol system" - kinda like C-3PO. I want it to be highly portable - that's going to be the tricky part, I think. And I'd like to include drivers to communicate with as many networks, file systems and devices as possible. Other than that, I want to take a minimalist approach. I'd like to be able to add other applications as I need them - maybe my last project would be adding a virtual machine or something. But the main thing I'm going for is to have a portable base OS, a good knowledge of how the drivers work, and the ability to make/install my own software.

    I don't intend to add a GUI - but I will want to do some pretty nifty things from the console - maybe do my own shell.


    edit: Your post was VERY helpful, matsp - I feel pretty confident that Linux is the right choice here. So I guess my main question remains - is there anything wrong with using run-time-loadable modules for everything (assuming I can get/make them)? And is there any other drawback to not using the MMU?

  6. #6
    Registered User
    Join Date
    Sep 2001
    Posts
    4,912
    Dangit! Every time I think I'm on to something new!

    http://expertsense.blogspot.com/2008...crokernel.html

    At least that means it's a plausible idea!

  7. #7
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Quote Originally Posted by sean
    Size wasn't my motive for going for a microkernel - it was the design. Monolithic systems actually seem really counterintuitive to me - it's not my style. A small OS would be nice - but LFS claims they got an Apache server running on a 5 MB Linux system. That's alright! And with all the ports to embedded systems and gaming consoles, I'll be fine with whatever I want to get into... I think portability is a bigger issue than the size.
    Having a single large kernel makes SOME things very easy. It solves a lot of chicken-and-egg problems (along the lines of: how do you load the disk driver from the hard disk when you haven't got a driver for the hard disk yet?).

    Well, let me put it this way - and if this concept already exists under a different name, let me know! Let's call it a "protocol system" - kinda like C-3PO. I want it to be highly portable - that's going to be the tricky part, I think. And I'd like to include drivers to communicate with as many networks, file systems and devices as possible. Other than that, I want to take a minimalist approach. I'd like to be able to add other applications as I need them - maybe my last project would be adding a virtual machine or something. But the main thing I'm going for is to have a portable base OS, a good knowledge of how the drivers work, and the ability to make/install my own software.
    Supporting LOTS of device drivers, lots of CPU/Machine architectures and lots of protocols means A LOT of work if you are starting from scratch - or even if you "just" change how the OS interfaces to certain driver types - there will be LOTS of drivers to modify and ensure they still work.

    Also, if you support lots of different architectures and models, you need HUGE amounts of hardware to check that everything works everywhere. To ensure that fixing one bug doesn't break some other functionality, you really need an automated build system that builds and tests the code on a (wide?) variety of your target systems. Otherwise you get "bitrot": you fix a lot of things in one scenario, then try something completely different and find that "everything" has broken. In a complex setup it is VERY easy to unintentionally break one thing when fixing another, and the damage is much harder to repair if 47 other fixes have gone in since you broke it - so early detection is a life-saver here.
    I don't intend to add a GUI - but I will want to do some pretty nifty things from the console - maybe do my own shell.


    edit: Your post was VERY helpful, matsp - I feel pretty confident that Linux is the right choice here. So I guess my main question remains - is there anything wrong with using run-time-loadable modules for everything (assuming I can get/make them)? And is there any other drawback to not using the MMU?
    I think Linux actually has a large number of runtime-loaded modules - it didn't use to be the case, but today's 2.6 kernel is by default all modules (except for the few that you must have to be able to boot the system).
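
    If you haven't written one yet, the entry cost is pleasantly low - a bare module skeleton for a 2.6 kernel is about this much code (build it against your kernel tree, load it with insmod, remove it with rmmod, and watch the messages with dmesg):

        /* Bare-bones 2.6-style loadable kernel module. */
        #include <linux/module.h>
        #include <linux/kernel.h>
        #include <linux/init.h>

        static int __init skeleton_init(void)
        {
            printk(KERN_INFO "skeleton: loaded\n");
            return 0;   /* nonzero makes insmod fail */
        }

        static void __exit skeleton_exit(void)
        {
            printk(KERN_INFO "skeleton: unloaded\n");
        }

        module_init(skeleton_init);
        module_exit(skeleton_exit);

        MODULE_LICENSE("GPL");
        MODULE_DESCRIPTION("Bare-bones example module");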

    The drawback of not having an MMU is that there is no protection between processes, and any process can access kernel space. I have had bugs in RTOSes without an MMU where a runaway stack fault destroyed the entire OS kernel - with an MMU it would have been possible to stop before the kernel got damaged, which also makes it much easier to identify that it was the stack that went horribly wrong, rather than just seeing some random other crash. If the processor has an MMU (and nearly all 32-bit processors nowadays do), then I would recommend having it "on".
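
    To make the stack case concrete, here's a contrived user-space illustration (not code from any real system, just the failure mode):

        /* Unbounded recursion chews through the stack. With an MMU,
         * the process dies with a clean page fault (SIGSEGV) the
         * moment it steps past the stack's guard page, and a debugger
         * points right at it. Without one, it silently tramples
         * whatever memory sits below the stack - in the RTOS case
         * above, the kernel itself.
         */
        static void runaway(unsigned depth)
        {
            volatile char frame[1024];   /* ~1 KB of stack per call */
            frame[0] = (char)depth;      /* touch it so it's "used" */
            runaway(depth + 1);          /* no base case            */
        }

        int main(void)
        {
            runaway(0);
            return 0;                    /* never reached */
        }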

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  8. #8
    Registered User
    Join Date
    Sep 2001
    Posts
    4,912
    Having a single large kernel makes SOME things very easy. It solves a lot of chicken-and-egg problems (along the lines of: how do you load the disk driver from the hard disk when you haven't got a driver for the hard disk yet?).
    Yeah - I was thinking about that this morning, actually. I decided I would set up a few built-ins. Ideally I want to have my OS run off a USB drive - so maybe I'll include that driver, the CD file system driver, and a standard HD driver in there.
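
    Something like these options built in (=y) rather than as modules - I'm guessing at the exact symbol names from what I've seen in menuconfig, so they may vary between kernel versions:

        # Boot-critical drivers compiled in instead of as modules.
        CONFIG_SCSI=y
        CONFIG_BLK_DEV_SD=y      # SCSI disk layer (USB sticks appear as sd*)
        CONFIG_USB_STORAGE=y     # USB mass-storage driver
        CONFIG_ISO9660_FS=y      # CD-ROM file system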

    Supporting LOTS of device drivers, lots of CPU/Machine architectures and lots of protocols means A LOT of work if you are starting from scratch - or even if you "just" change how the OS interfaces to certain driver types - there will be LOTS of drivers to modify and ensure they still work.
    Which is why I'm willing to spend several years on this before getting something really usable. In any case, the more I look at the best way to do this, the more I'm thinking of it as "learning how to very heavily customize another operating system" rather than "creating my own kernel". Linux and BSD certainly do a way better job of managing resources than I could hope to come close to for a very long time - since that's not what I'm interested in, why bother, you know?

    Also, if you support lots of different architectures and models, you need HUGE amounts of hardware to check that everything works everywhere. To ensure that fixing one bug doesn't break some other functionality, you really need an automated build system that builds and tests the code on a (wide?) variety of your target systems. Otherwise you get "bitrot": you fix a lot of things in one scenario, then try something completely different and find that "everything" has broken. In a complex setup it is VERY easy to unintentionally break one thing when fixing another, and the damage is much harder to repair if 47 other fixes have gone in since you broke it - so early detection is a life-saver here.
    Also true. I don't intend to have it ready-to-go on a bunch of hardware right away - but I want to make sure I'm designing with that in mind from the beginning, so that when I can start testing it on other hardware, the porting effort is minimal.

    I think Linux actually has a large number of runtime-loaded modules - it didn't use to be the case, but today's 2.6 kernel is by default all modules (except for the few that you must have to be able to boot the system).
    Grr... I keep thinking of stuff for my own design only to find out that the Linux kernel already implemented the idea long ago, and did it very well. Even MORE reason to just make my own distro and call it my dream system!

    Thanks for all the replies matsp - that's all helped A LOT!
