Incorporating I/O

Let's look at how I/O can be incorporated into scheduling policies.


First, we will relax assumption 4 — of course, all programs perform I/O. Imagine a program that didn’t take any input: it would produce the same output each time. Imagine one without output: it is the proverbial tree falling in the forest, with no one to see it; it doesn’t matter that it ran.

A scheduler clearly has a decision to make when a job initiates an I/O request, because the currently-running job won’t be using the CPU during the I/O; it is blocked waiting for I/O completion. If the I/O is sent to a hard disk drive, the process might be blocked for a few milliseconds or longer, depending on the current I/O load of the drive. Thus, the scheduler should probably schedule another job on the CPU at that time.
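To make that first decision concrete, here is a minimal sketch in C, assuming a toy process table with READY/RUNNING/BLOCKED states; the type `proc_t` and the functions `pick_ready()` and `on_io_request()` are hypothetical names for illustration, not part of any real kernel.

```c
#include <stdio.h>

/* Toy process states and process table (illustration only). */
typedef enum { READY, RUNNING, BLOCKED } state_t;

typedef struct {
    const char *name;
    state_t state;
} proc_t;

#define NPROC 2
proc_t table[NPROC] = { { "A", RUNNING }, { "B", READY } };

/* Pick any READY job to run next (simple scan of the table). */
proc_t *pick_ready(void) {
    for (int i = 0; i < NPROC; i++)
        if (table[i].state == READY)
            return &table[i];
    return NULL;  /* nothing runnable: the CPU would sit idle */
}

/* Called when the currently running job issues an I/O request:
 * it cannot use the CPU while waiting, so block it and dispatch
 * some other ready job. */
void on_io_request(proc_t *curr) {
    curr->state = BLOCKED;
    proc_t *next = pick_ready();
    if (next) {
        next->state = RUNNING;
        printf("%s blocked on I/O; dispatching %s\n", curr->name, next->name);
    } else {
        printf("%s blocked on I/O; CPU idle\n", curr->name);
    }
}

int main(void) {
    on_io_request(&table[0]);  /* A issues an I/O; B gets the CPU */
    return 0;
}
```

The important point is only the state transition: the issuing job goes from running to blocked, and the scheduler is then free to hand the CPU to another ready job for the milliseconds the disk request takes.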

The scheduler also has to make a decision when the I/O completes. When that occurs, an interrupt is raised, and the OS runs and moves the process that issued the I/O from blocked back to the ready state. Of course, it could even decide to run the job at that point. How should the OS treat each job? To understand this issue better, let us assume we have two jobs, A and B, which each need 50 ms of CPU time. However, there is one obvious difference: A runs for 10 ms and then issues an I/O request (assume here that I/Os each take 10 ms), whereas B simply uses the CPU for 50 ms and performs no I/O. The scheduler runs A first, then B after (see the figure below).
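The second decision point can be sketched in the same hedged style: when the device interrupt arrives, the OS moves the waiting process from blocked back to ready, and may or may not preempt the job that is currently running. Again, `proc_t` and `on_io_complete()` are made-up names for illustration.

```c
#include <stdio.h>

/* Same toy states as before (illustration only). */
typedef enum { READY, RUNNING, BLOCKED } state_t;

typedef struct {
    const char *name;
    state_t state;
} proc_t;

proc_t A = { "A", BLOCKED };  /* waiting for its 10 ms disk request */
proc_t B = { "B", RUNNING };  /* currently using the CPU            */

/* Invoked from the device's interrupt handler when the I/O finishes.
 * The required step is moving the process back to READY; whether to
 * preempt the running job right away is a policy choice. */
void on_io_complete(proc_t *p, proc_t *running) {
    p->state = READY;
    printf("%s: I/O done, now READY (scheduler may preempt %s or let it run)\n",
           p->name, running->name);
}

int main(void) {
    on_io_complete(&A, &B);
    return 0;
}
```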
