What is Scheduling?
Scheduling is deciding which task should execute at any given moment.

As long as your application has only one task, you do not need any scheduling mechanism. But as soon as you have two or more tasks, you need some kind of mechanism that decides in which order the tasks should execute, and you must also decide whether the tasks should be able to preempt each other or not. A scheduling mechanism can range from something very simple, where you just tell each task when it is time for it to execute and no task can preempt another, up to much more sophisticated schemes.

Worst Case Execution Time (WCET)
You always need to find out the worst case execution time for each task, so that you can make sure that each task has a chance to meet its deadline. There are two ways to find out the WCET for a task: either calculate it from the code, which is time consuming and sometimes impossible, or measure it with a tool like an ICE (in-circuit emulator), a logic analyzer or a profiler. If you do not have such a tool, execute the task X number of times and measure the time with a clock.
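The execute-and-measure approach can be sketched like this (a minimal Python sketch; on a real target you would do the same in C with a hardware timer, and `toy_task` is an invented placeholder). Note that repeated measurement only gives the worst *observed* time, which is a lower bound on the true WCET:

```python
import time

def measure_wcet(task, runs=1000):
    """Run `task` repeatedly and record the longest observed
    execution time. This is only the worst *observed* time:
    the true WCET may be larger if the worst path never ran."""
    worst = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        task()
        elapsed = time.perf_counter() - start
        worst = max(worst, elapsed)
    return worst

# A toy stand-in for a real task.
def toy_task():
    total = 0
    for i in range(1000):
        total += i * i
    return total

print(f"worst observed: {measure_wcet(toy_task) * 1e6:.1f} us")
```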

Many CPUs have both instruction and data caches, which may make it much harder to find out the WCET for a task, as the execution time may vary depending on how many of the instructions and how much of the data happen to be in the caches at the moment. So you just have to measure with the caches both switched on and switched off.

And of course you have to have all the source code for the task and its subroutines, which may not be the case for subroutines that are part of the RTOS and libraries.

And maybe your own code is not that simple to measure either; there may be many execution paths through the task, see a simple (and maybe stupid) example in figure 1.
SCH_Figure1

So measuring WCET may be very complicated, so the least you can do is to focus on the tasks that use the CPU the most.
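The execution-path problem can be illustrated with a small sketch: the same function costs very different amounts of work depending on its input, so timing it with only some inputs would underestimate the WCET. Everything here (the function and its inputs) is invented for illustration:

```python
def process(value):
    """Toy task with several execution paths of very different cost.
    `steps` stands in for the work done on each path."""
    steps = 0
    if value < 0:
        steps = 1                 # cheap error path
    elif value % 2 == 0:
        for _ in range(10):       # medium path
            steps += 1
    else:
        for _ in range(1000):     # expensive path -- this one
            steps += 1            # determines the WCET
    return steps

# Exercising only some inputs misses the worst path:
print(max(process(v) for v in [-5, 2, 4]))   # 10 -- looks cheap
print(max(process(v) for v in [-5, 2, 3]))   # 1000 -- the real worst case
```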

Endless Loop
An application with only one task (maybe we should call it a main loop instead) will execute in just a single loop, see figure 2.
SCH_Figure2

This type of design works very well for ”small” applications, e.g. a washing machine (I have no experience of designing programs for washing machines, so if I am wrong please let me know). This single loop may do a number of things and have a number of deadlines. As long as it never takes longer (WCET) to execute the whole loop than the shortest deadline, this design works very well. But as soon as the WCET is larger than the shortest deadline, you must either speed up the execution or change your design.
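The single-loop design can be sketched like this (all function names and the washing-machine logic are invented for illustration; a real loop would run forever, so `iterations` bounds it here):

```python
def read_sensors():
    # Placeholder for real I/O.
    return {"door_closed": True, "water_level": 42}

def compute_state(inputs):
    # Decide what the machine should do next.
    return "fill" if inputs["water_level"] < 50 else "wash"

def drive_outputs(state):
    return f"actuators set for: {state}"

def main_loop(iterations):
    """One endless loop doing everything in sequence. The whole
    loop body (its WCET) must finish within the shortest deadline."""
    log = []
    for _ in range(iterations):
        inputs = read_sensors()
        state = compute_state(inputs)
        log.append(drive_outputs(state))
    return log

print(main_loop(3))
```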

Basic Cyclic Executive
If you have been using an endless loop design but your program is getting ”large”, you can split it up into several tasks, where each task, when it has finished its execution, triggers the next task. So the tasks are not able to preempt each other. In this design you may run one or more tasks several times during a complete cycle, if there are tasks with deadlines shorter than the execution time of the complete cycle.
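A minimal sketch of the idea (task names and the slot table are invented): the major cycle is a fixed table of minor slots, each task runs to completion, and a task with a short deadline simply appears in several slots:

```python
# Each minor slot lists the tasks to run, in order, without preemption.
# task_fast has a short deadline, so it appears in every slot;
# task_slow only needs to run once per major cycle.
def task_fast(log): log.append("fast")
def task_mid(log):  log.append("mid")
def task_slow(log): log.append("slow")

MAJOR_CYCLE = [
    [task_fast, task_mid],   # minor slot 0
    [task_fast, task_slow],  # minor slot 1
    [task_fast, task_mid],   # minor slot 2
    [task_fast],             # minor slot 3
]

def run_major_cycle():
    log = []
    for slot in MAJOR_CYCLE:
        for task in slot:
            task(log)        # runs to completion; no preemption
    return log

print(run_major_cycle())
```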

Priority Preemption Scheduling
The examples above work fine for relatively ”simple” applications. They are normally relatively easy to implement and to test. But as the application ”grows and gets more complicated” and the number of tasks and deadlines increases, you may need a more sophisticated scheduling mechanism. The most common one is Priority Preemption Scheduling, which has been used by many RTOSs and in many, many projects, so it is a scheduling mechanism that is well proven and well documented.

Priority Preemption Scheduling means that each task gets a priority, and the tasks are able to preempt each other, meaning that a task with higher priority (or equal priority, if round-robin is used) is able to stop another task while it is executing and take over the CPU. With Priority Preemption Scheduling, the task with the highest priority in the ”ready queue” is the task that is executing, see figure 3.
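The ready-queue behavior can be sketched with a toy simulation (everything is invented for illustration: time advances in unit steps, higher number means higher priority, and there is no context-switch overhead):

```python
import heapq

def priority_preemptive(events, tasks):
    """Simulate priority preemptive scheduling on one CPU.
    tasks:  {name: (priority, exec_time)}, higher number = higher priority.
    events: list of (release_time, name).
    Returns one (time, running_task) sample per unit time step."""
    events = sorted(events)
    ready = []            # max-heap via negated priority
    remaining = {name: exe for name, (_, exe) in tasks.items()}
    timeline, t, i = [], 0, 0
    while i < len(events) or ready:
        # Move newly released tasks into the ready queue.
        while i < len(events) and events[i][0] <= t:
            name = events[i][1]
            heapq.heappush(ready, (-tasks[name][0], name))
            i += 1
        if ready:
            _, name = ready[0]           # highest priority runs
            timeline.append((t, name))
            remaining[name] -= 1
            if remaining[name] == 0:
                heapq.heappop(ready)
        else:
            timeline.append((t, "idle"))
        t += 1
    return timeline

# A low-priority task starts at t=0; a higher-priority task released
# at t=2 preempts it, and the low-priority task resumes afterwards.
tl = priority_preemptive(
    events=[(0, "low"), (2, "high")],
    tasks={"low": (1, 4), "high": (2, 2)},
)
print(tl)
```

Here ”high” takes over the CPU at t=2 even though ”low” has not finished, which is exactly the preemption the text describes.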

SCH_Figure3

Rate Monotonic Scheduling (RMS)
Priority Preemption Scheduling does not use the deadlines, and meeting the deadlines is the most important thing for a real-time application. RMS is a theory, developed some decades ago, for finding out the highest total CPU utilization at which tasks with specific deadlines are guaranteed to meet all of them in all cases, see figure 4.

SCH_Figure4

In this example all tasks are supposed to be periodic, with no interrupts, no overhead from the RTOS, no shared resources etc. But you must be able to find out the WCET for each task. If you manage to do this, then the theory will tell you whether you will meet all the deadlines or not: guaranteed, if the utilization number is smaller than the bound in the table in figure 4. But if the utilization number is higher, does that necessarily mean that you will not meet your deadlines? No! Simple example: 2 tasks, period 10 ms, execution time 4.5 ms each, gives a utilization number of 90%, but you will always meet the deadlines.
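The RMS test and the 90% example above can be worked through directly. This is a sketch of the classic Liu & Layland bound, n * (2^(1/n) - 1), which I assume is the table referred to in figure 4:

```python
def rms_bound(n):
    """Liu & Layland utilization bound for n periodic tasks:
    if U <= n * (2**(1/n) - 1), RMS is guaranteed to meet all
    deadlines (under the ideal assumptions above)."""
    return n * (2 ** (1.0 / n) - 1)

def utilization(tasks):
    """tasks: list of (wcet, period) pairs; U = sum(Ci / Ti)."""
    return sum(c / t for c, t in tasks)

# The example from the text: two tasks, period 10 ms, WCET 4.5 ms each.
tasks = [(4.5, 10.0), (4.5, 10.0)]
u = utilization(tasks)          # 0.90
bound = rms_bound(len(tasks))   # ~0.828
print(f"U = {u:.2f}, bound = {bound:.3f}")
# U exceeds the bound, so the test is inconclusive -- yet with equal
# (harmonic) periods these two tasks meet every deadline, as the text says.
```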

The theory has been extended to handle the restrictions above, but it may still be very complicated to use in practice. It can at least give you a good hint about whether you have a chance to meet your deadlines or not. If you would like to learn more about RMS, there are a lot of articles that you can find on the Internet.

Deadline Scheduling
Would it not be nice if the scheduler used the deadlines instead of priorities to schedule the tasks, to make sure that each task meets its deadline?

In this case we can use a utilization formula, where Ci is the worst case execution time of task i and Pi is its period:
Sum (Ci / Pi)  <= 1
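This formula can be checked directly. A sketch, assuming periodic tasks whose deadlines equal their periods:

```python
def edf_schedulable(tasks):
    """Deadline-based schedulability test for periodic tasks with
    deadline == period: the set is schedulable iff sum(Ci / Pi) <= 1.
    tasks: list of (wcet, period) pairs."""
    return sum(c / p for c, p in tasks) <= 1.0

# The 90%-utilization pair that was inconclusive under the RMS
# bound passes this test outright:
print(edf_schedulable([(4.5, 10.0), (4.5, 10.0)]))  # True
print(edf_schedulable([(6.0, 10.0), (5.0, 10.0)]))  # False (U = 1.1)
```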
Three main theories exist:

  1. EDF – Earliest Deadline First
  2. LLF – Least Laxity First
  3. MUF – Maximum Urgency First

Basically, all these theories try to schedule the tasks so that they always meet their deadlines, in principle by letting the task that is closest to its deadline execute first. But what if two tasks are equally close to their deadlines, which one should execute first? So maybe you have to set priorities anyhow.
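The closest-to-its-deadline rule (EDF) can be sketched as a toy simulation (job names, parameters, and the unit-time model are invented for illustration):

```python
def edf_schedule(jobs):
    """Earliest Deadline First on a single CPU, in unit time steps.
    jobs: {name: (release_time, exec_time, absolute_deadline)}.
    At each step, run the released, unfinished job whose deadline is
    closest; ties broken by name here (in a real system you might
    break ties by a static priority, as the text suggests)."""
    remaining = {n: e for n, (_, e, _) in jobs.items()}
    t, timeline = 0, []
    while any(r > 0 for r in remaining.values()):
        ready = [n for n, (rel, _, _) in jobs.items()
                 if rel <= t and remaining[n] > 0]
        if ready:
            n = min(ready, key=lambda name: (jobs[name][2], name))
            timeline.append((t, n))
            remaining[n] -= 1
        else:
            timeline.append((t, "idle"))
        t += 1
    return timeline

# Job B has the earlier deadline, so it runs first even though
# both jobs are released at the same time.
tl = edf_schedule({"A": (0, 2, 10), "B": (0, 2, 4)})
print(tl)  # B at t=0..1, then A at t=2..3
```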

Positive things with these theories:

  • Scheduler ”Understands” the definition of hard real-time
  • Responsiveness of a preemptive scheduler
  • Utilize your CPU’s capacity to the fullest

Negative things with these theories:

  • Extra scheduling overhead, if built on top of commercial RTOS scheduler
  • If a deadline is missed, the failure modes can be horrific
  • Which deadline scheduling strategy to choose?
  • Better ones require much more input from the application

So again it turns out that things work much better in theory than in practice, at least so far, but I am sure that one day it will also work in practice. If you would like to learn more about Deadline Scheduling, you can find much more on the Internet.