MINIX3: Scheduler Research I

Before I can edit the scheduling algorithm for my independent study, I need to understand the intricacies of how it works. That’s what I tasked myself with this past week, and as it turns out, there’s a lot to know.

MINIX is structured in four layers. From top to bottom, they are the User Processes layer, the System Server Process layer, the Device Drivers layer, and finally the Kernel. Originally, the scheduler was built into the Kernel (as you would find in a traditional operating system, I would assume). However, an independent project by Björn Patrick Swift moved the vast majority of the scheduling logic into user space, specifically into the System Server Process layer. This change allows for a much more flexible scheduler, one that can be modified without risking large (and potentially damaging) changes to the rest of the Kernel. On top of this, it allows multiple user mode scheduling policies to be in place at the same time. The current user mode implementation is an event-driven scheduler, which largely sits idle and waits for a request from the Kernel. In order for this to happen, though, the Kernel needs a scheduling implementation as well.
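
To make the event-driven idea concrete, here's a small self-contained sketch of what that main loop looks like in spirit. This is my own illustration in plain C, not the actual MINIX source: the message struct, type names, and fake_receive function are stand-ins so the program runs on its own (the real scheduler blocks on MINIX's IPC receive instead).

```c
#include <stdio.h>

/* Request types loosely modeled on MINIX's scheduling messages;
 * the names and values here are illustrative, not the real ones. */
#define SCHEDULING_NO_QUANTUM 1
#define SCHEDULING_START      2

typedef struct {
    int m_type;    /* which kind of request this message carries */
    int endpoint;  /* which process it concerns */
} message;

/* Stand-in for the blocking IPC receive. In MINIX the scheduler
 * would sit idle here until the Kernel delivers a request; for
 * this sketch we fake a single out-of-quantum event. */
static int fake_receive(message *m)
{
    static int delivered = 0;

    if (delivered)
        return -1;  /* no more events; a real scheduler never stops */
    m->m_type = SCHEDULING_NO_QUANTUM;
    m->endpoint = 42;
    delivered = 1;
    return 0;
}

int main(void)
{
    message m;

    /* The event-driven core: sit idle, wait for a request, dispatch. */
    while (fake_receive(&m) == 0) {
        switch (m.m_type) {
        case SCHEDULING_NO_QUANTUM:
            /* Policy goes here: pick a new priority/quantum and
             * hand the process back to the Kernel. */
            printf("process %d ran out of quantum; rescheduling\n",
                   m.endpoint);
            break;
        case SCHEDULING_START:
            printf("taking over scheduling for process %d\n",
                   m.endpoint);
            break;
        }
    }
    return 0;
}
```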

The Kernel scheduling implementation runs in a round-robin style, choosing the highest-priority process from a series of n priority queues. When a process runs out of quantum (the predefined amount of time it is allowed to execute), the Kernel raises the process's RTS_NO_QUANTUM flag. This dequeues the process and marks it so it may no longer be run. The two system library routines the Kernel exposes to user mode are sys_schedctl and sys_schedule. sys_schedctl is called by a user mode scheduler in order to take over the scheduling for a process. sys_schedule is then used by that scheduler to move the process to a different priority queue or grant it a new quantum, based on the scheduler's policy; it can also be used on currently running processes in the same way.
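
As a rough illustration of how this fits together, here's a sketch of the out-of-quantum path a user mode scheduler might take. The real prototypes for sys_schedctl and sys_schedule vary between MINIX versions, so I've stubbed sys_schedule with a simplified signature, and the queue count and quantum value are made up for the example.

```c
#include <stdio.h>

typedef int endpoint_t;        /* MINIX identifies processes by endpoint */

#define LOWEST_QUEUE     15    /* illustrative: larger number = lower priority */
#define DEFAULT_QUANTUM 200    /* illustrative fresh time slice */

/* Stand-in for the real sys_schedule Kernel call (actual prototype
 * differs by MINIX version): moves proc_ep to the given priority
 * queue and grants it a new quantum. */
static int sys_schedule_stub(endpoint_t proc_ep, int priority, int quantum)
{
    printf("sys_schedule(proc %d, queue %d, quantum %d)\n",
           proc_ep, priority, quantum);
    return 0;  /* OK */
}

/* What a user mode scheduler might do when the Kernel reports that
 * proc_ep triggered RTS_NO_QUANTUM: demote it one queue (penalizing
 * CPU-bound processes) and hand it a fresh quantum so the Kernel
 * may enqueue and run it again. */
static int on_out_of_quantum(endpoint_t proc_ep, int current_queue)
{
    int new_queue = current_queue;

    if (new_queue < LOWEST_QUEUE)
        new_queue++;

    return sys_schedule_stub(proc_ep, new_queue, DEFAULT_QUANTUM);
}

int main(void)
{
    /* Pretend process 42 in queue 7 just ran out of quantum. */
    return on_out_of_quantum(42, 7);
}
```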

Having a minimal Kernel scheduler is important because, without it, some processes would run out of quantum during system startup, before the user mode scheduler has even started. It also means that if the user mode scheduler crashes, the Kernel could take over scheduling until the user mode scheduler is restarted (though I'm unsure whether this is currently implemented). There are plenty of benefits to keeping the Kernel implementation simple, even if inefficient, while placing the majority of the scheduling work on user mode schedulers.

The report I linked previously, which details all of the workings of the scheduler, is fairly long (20 pages), so I'm going to continue reading it and taking notes. I don't want each of these blog posts to get too long and wordy, so I'll spread this topic across a few posts (all of them this coming week), each a continuation of the last.
