Round robin scheduling algorithm

I'm studying operating systems from this book and my prof's slides, and I've arrived at the "Process scheduling algorithms" chapter. Reading about the Round Robin (RR) algorithm, I found some inconsistencies. I understand that it is a preemptive version of the FCFS algorithm with a time slice (quantum). From now on, I will use the following notation:

#1 = prof's version
#2 = book's version
#3 = other version

Here's the inconsistency (suppose a quantum of 100ms):

#1 The RR uses two queues (Q1, Q2):

  • Q1: queue for processes that have not yet used up their quantum;
  • Q2: queue for processes that have used up their quantum;

  • The scheduler takes the process from the head of Q1;
  • If the process ends before the quantum expires, it releases the CPU voluntarily and the scheduler takes the next process from Q1;
  • If the process doesn't end before the quantum expires, it is preempted and placed at the end of Q2;
  • When a process becomes ready, it is placed at the end of Q1;
  • When Q1 is empty, Q1 and Q2 are swapped;
  • So when a process blocks on an I/O request (e.g. after 30 ms) and its quantum has not expired yet, it is placed at the end of Q1 (I guess), and when it is scheduled again it will use the CPU for its remaining time (70 ms in this case). See the sketch below for how I read this variant.
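
Here is how I read variant #1 in code. This is just a minimal sketch I wrote myself (not from the slides), assuming each process is a (name, remaining CPU time) pair and ignoring I/O and new arrivals:

```python
from collections import deque

QUANTUM = 100  # ms

def round_robin_two_queues(processes):
    q1 = deque(processes)  # Q1: processes that have not used up their quantum this turn
    q2 = deque()           # Q2: processes that were preempted this turn
    while q1 or q2:
        if not q1:                      # Q1 empty: swap Q1 and Q2
            q1, q2 = q2, q1
        name, remaining = q1.popleft()  # take the process at the head of Q1
        ran = min(QUANTUM, remaining)
        print(f"{name} runs for {ran} ms")
        if remaining > QUANTUM:
            q2.append((name, remaining - QUANTUM))  # quantum expired: tail of Q2
        # otherwise the process finished early and releases the CPU voluntarily

round_robin_two_queues([("A", 250), ("B", 80), ("C", 120)])
```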

#2 (The book did not talk about multiple queues, so I assume it uses just one queue):

  • The scheduler takes the process from the head of the ready queue;
  • If the process ends before the quantum expires, it releases the CPU voluntarily and the scheduler takes the next process from the ready queue;
  • If the process doesn't end before the quantum expires, it is preempted and placed at the end of the ready queue.
#3 Source:

  • The scheduler takes the first process in the ready queue;
  • If the process ends before the quantum expires, it releases the CPU voluntarily and the scheduler takes the next process from the ready queue;
  • If the process doesn't end before the quantum expires, it is preempted and placed at the end of the ready queue;
  • If the process is blocked by an I/O request, it is placed in a waiting queue; when it becomes ready again, it is placed back in the ready queue. See the sketch below for how I read this variant.
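
And here is how I read variant #3. Again a minimal sketch of my own, assuming each process is described by alternating CPU and I/O burst lengths in ms (always ending with a CPU burst):

```python
from collections import deque

QUANTUM = 100  # ms

def round_robin_with_waiting_queue(processes):
    ready = deque(processes)  # entries: (name, [cpu, io, cpu, ...]) bursts in ms
    waiting = []              # entries: (time the I/O completes, process)
    clock = 0
    while ready or waiting:
        # processes whose I/O has completed become ready again
        ready.extend(p for done_at, p in waiting if done_at <= clock)
        waiting = [(d, p) for d, p in waiting if d > clock]
        if not ready:  # CPU is idle until the next I/O completes
            clock = min(d for d, _ in waiting)
            continue
        name, bursts = ready.popleft()
        ran = min(QUANTUM, bursts[0])
        clock += ran
        print(f"t={clock:4} ms: {name} ran {ran} ms")
        if bursts[0] > QUANTUM:                  # preempted: back to the ready queue
            ready.append((name, [bursts[0] - QUANTUM] + bursts[1:]))
        elif len(bursts) > 1:                    # blocked: into the waiting queue
            waiting.append((clock + bursts[1], (name, bursts[2:])))
        # otherwise the process has terminated

round_robin_with_waiting_queue([("A", [30, 50, 70]), ("B", [250])])
```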
To me, these are 3 different implementations of the RR scheduling algorithm. I think the best one is #3, because #1 can cause starvation (if the process is placed in Q2 and new processes keep arriving in Q1, the process will never be scheduled again) and #2 wastes CPU time when a process is blocked on an I/O request. So, my question is: which one is the right one?


    Round robin in theory

    Round Robin scheduling can be visualized quite well by thinking of an analog clock: the hand turns at constant speed, so it is in the slice of a single digit for 1/12 of the time it takes for one complete revolution.

    Each digit thus gets some slice of the total available amount of resource. And, most importantly, there's a fixed order in which the digits get served: after the hand has passed some digit, that digit only gets visited again after the hand has passed all the other digits.

    Looking at the variants you presented, #2, the version from the book, matches this exactly: after a task has been served, it is put at the end of the (often so-called) ready queue, and thus only gets served again after all the other tasks have been served once.

    As a theoretical scheduling algorithm, round robin only considers the scheduling of multiple consumers (tasks) onto a single resource (the CPU).

    Variants

    Some common variations of basic round robin scheduling are to use different slice sizes for different tasks, to dynamically adjust a task's slice based on some metric, or even to give some tasks more than a single slice per turn.
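
    As an illustration of the first variation, here is a minimal sketch of per-task slice sizes; the task names and numbers are made up:

```python
from collections import deque

# round robin with a per-task time slice; entries: (name, slice in ms, remaining work in ms)
tasks = deque([("interactive", 20, 60), ("batch", 200, 500)])

while tasks:
    name, slice_ms, remaining = tasks.popleft()
    ran = min(slice_ms, remaining)
    print(f"{name} runs {ran} ms")
    if remaining > ran:
        tasks.append((name, slice_ms, remaining - ran))  # keeps its own slice size
```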

    In practice

    When you are scheduling tasks, the CPU is not the only resource you have to manage; there will be other resources that need to be managed, like IO devices.

    Very simple schedulers just ignore that fact, and leave tasks that are currently waiting for some other resource in the task queue for the CPU.

    So when such a task gets its time slice, all it does is find that it still needs to wait for that other resource and hand back the CPU, just to get put back into the task queue by the scheduler. Starting the task, checking that it still needs to wait for the other resource, and stopping it again takes time that could better be spent on a task that can actually use the CPU.

    To solve this, one usually has a task queue for each resource that is managed, i.e. one for the CPU, one for each IO device, and so on. When a task makes a blocking IO call, it is removed from the CPU queue and put into the queue of the device it is accessing. This way, tasks that are waiting for a resource other than the CPU don't sit in the CPU task queue (and thus waste no time getting started and immediately stopped again).
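
    A structural sketch of that bookkeeping, with hypothetical task objects and device names (it only shows the queue movement, not a complete scheduler):

```python
from collections import deque

cpu_queue = deque()                                # tasks ready to use the CPU
device_queues = {"disk": deque(), "net": deque()}  # one queue per managed device

def block_on_io(task, device):
    # the task leaves the CPU queue and waits at the device it is accessing,
    # so the CPU scheduler never wastes a slice on starting and stopping it
    device_queues[device].append(task)

def io_completed(device):
    # the device has finished: the task becomes runnable again
    cpu_queue.append(device_queues[device].popleft())
```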

    This is what #3 is talking about (look again and you'll see that they mention "waiting queues").

    Another kind of "waiting queue"

    Often there's also a "waiting queue" when you're talking about multi-level schedulers: in that case, the ready queue holds the tasks that are ready and get scheduled by the primary scheduler (using round robin, for example). If a task blocks because of an IO operation, it is, as described above, put into the queue of the corresponding resource. When that resource becomes available again, the task is first put into the waiting queue, from which a secondary scheduler (which is just a task for the primary scheduler) eventually takes it and puts it into the ready queue of the primary scheduler.

    What your prof's version is about

    Version #1 is probably easier to understand if you rename the queues to something like "queue of tasks that have not yet run this turn" and "queue of tasks that have already run this turn". This is basically just a workaround for not having circular lists. So it's round robin, too, just a bit obscured.

    [..] can cause starvation (if the process is placed in Q2 and new processes keep arriving in Q1, the process will never be scheduled again) [..]

    This is a very good observation. It could be solved by inserting new, ready tasks at the end of Q2 instead of Q1. If suitable, this is a very good starting point for a discussion when your prof asks for questions.
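
    In code, the fix is a one-line change to the enqueue rule, assuming the same Q1/Q2 structure as in the question:

```python
def on_becomes_ready(task, q1, q2):
    # new or newly ready tasks join Q2 ("already ran this turn"), not Q1,
    # so the current turn always finishes before any newcomer gets the CPU
    q2.append(task)
```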


    The first implementation can not only cause starvation, it can also cause deadlocks. If we just modify step 5 of the first implementation, the method can be made just fine.

    The second approach is round-robin in its purest form.

    The third approach is not round-robin but in fact a smarter version, one which understands that an I/O-bound process should not be given another chance too soon, as it will probably not be ready yet.

    If you continue reading that book, you will come to the next scheduling algorithm, called Multi-level Queue. That is actually the best of all the implementations of the different variations of round-robin. In a multi-level queue, we put all incoming processes in one queue and then, depending on whether they finished within their first time slice or not, move them to another queue. This second queue has a larger time slice and ends up holding the CPU-bound processes.

    Using multi-level queues (say 4-5 of them), the scheduler combs the incoming processes out into various classes and then picks the optimal number of processes from each queue, so that the CPU is neither over-subscribed nor under-subscribed.
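
    A minimal sketch of the demotion idea; the level count and quantum sizes are made-up illustration values, not the book's:

```python
from collections import deque

QUANTA = [50, 100, 200]  # ms per level; lower levels hold the CPU-bound processes

def multi_level_queue(processes):
    levels = [deque(processes), deque(), deque()]
    while any(levels):
        lvl = next(i for i, q in enumerate(levels) if q)  # highest non-empty level first
        name, remaining = levels[lvl].popleft()
        ran = min(QUANTA[lvl], remaining)
        print(f"level {lvl}: {name} runs {ran} ms")
        if remaining > ran:  # used its whole slice: demote to a longer-slice level
            levels[min(lvl + 1, len(levels) - 1)].append((name, remaining - ran))

multi_level_queue([("A", 40), ("B", 400), ("C", 120)])
```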


    Any process which arrives in the system is queued at the end of the ready queue. The process at the head of the ready queue is selected and allowed to execute on the CPU for a time quantum q. At the expiry of q, the process is queued at the tail of the ready queue. The next process scheduled is the one at the head of the ready queue. This is known as Round Robin Scheduling.

                                       OR
    

    In round robin scheduling, processes are dispatched FIFO but are given a limited amount of CPU time, called a time slice or a quantum. If a process does not complete before its CPU time expires, the CPU is preempted and given to the next waiting process; the preempted process is then placed at the back of the ready queue.
