Kernel Description (kernel-v1)


  • Create and manage threads
  • Cooperative (non-preemptive) scheduling
  • Strictly time-driven: conditions are detected based on the scheduler’s interval
  • One kernel per processor core, max. 32 threads per core/kernel
  • Threads are implemented with coroutines
  • Threads are allocated to a specific core/kernel
  • Possible thread triggers:
    • none: scheduling is based on inter-thread synchronisation, e.g. via signals
    • period: thread is scheduled on a fixed timing period
    • delay: thread will be scheduled after a (non-blocking) delay
    • device: thread will be scheduled when the flags of a peripheral device get set
  • Threads of control programs usually run in a loop
  • Threads can be allocated new code to execute
  • Use PSP (process stack pointer) for threads, MSP (main stack pointer) for exceptions
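The feature list above can be summarised in a small data model. The following Python sketch is purely illustrative (the kernel itself does not use these names or types); it just captures the trigger kinds and the per-core thread limit stated above.

```python
from dataclasses import dataclass
from enum import Enum, auto

MAX_THREADS = 32  # per core/kernel, as stated above

class Trigger(Enum):
    NONE = auto()    # scheduled via inter-thread synchronisation, e.g. signals
    PERIOD = auto()  # scheduled on a fixed timing period
    DELAY = auto()   # scheduled after a non-blocking delay
    DEVICE = auto()  # scheduled when a peripheral device flag gets set

@dataclass
class Thread:
    code: object          # coroutine running the thread's procedure
    trigger: Trigger
    value: int = 0        # period or delay, in scheduler ticks
    enabled: bool = True

threads: list = []        # at most MAX_THREADS entries per kernel
```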


The scheduler periodically evaluates the enabled threads and their trigger conditions, puts the ready threads on a run-queue, and then empties that queue by running each thread’s code, initially from the start, thereafter from the last yield point. A running thread cannot be pre-empted, and “possesses” its core until it transfers control back to the scheduler.
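This cycle can be sketched with Python generators standing in for the kernel’s coroutines; the names and the simplified trigger check are illustrative, not the kernel’s API. Note how the first resume of a generator runs from the start, and each later resume continues from the last yield point, exactly as described above.

```python
def worker(log, name):
    """Thread body: an endless loop, yielding once per run-cycle."""
    while True:
        log.append(name)  # one run-cycle's worth of work
        yield             # transfer control back to the scheduler

def scheduler_tick(threads):
    """One scheduler interval: collect the ready threads, then drain the queue."""
    run_queue = [t for t in threads if t["enabled"]]  # trigger evaluation, simplified
    for t in run_queue:       # a running thread cannot be pre-empted;
        next(t["coro"])       # resume from the start, then from the last yield

log = []
threads = [{"coro": worker(log, "A"), "enabled": True},
           {"coro": worker(log, "B"), "enabled": True}]
scheduler_tick(threads)
scheduler_tick(threads)
print(log)  # ['A', 'B', 'A', 'B']
```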

The responsiveness of the whole control program, composed of all its threads, is determined by:

  • the scheduler interval (which can be different on each core).
  • the design of the threads: period, priority, execution time without yielding, etc.

The ready threads on the run-queue must not, in total, take longer to execute than the scheduler interval, otherwise the timing falls behind. The scheduler is just another coroutine, and its timeliness depends on the threads’ cooperative behaviour.
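As a worked example of this budget, with illustrative numbers: on a core with a 1000 µs scheduler interval, the worst-case run times (between yields) of all threads that can be ready in the same interval must sum to less than that interval.

```python
SCHED_INTERVAL_US = 1000             # illustrative interval on this core
worst_case_run_us = [250, 300, 350]  # measured worst-case time per ready thread

total = sum(worst_case_run_us)
budget_ok = total <= SCHED_INTERVAL_US
print(total, budget_ok)  # 900 True: the work fits into one interval
```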

Cooperative scheduling allows for relatively easy reasoning about the whole program. It’s nonetheless quite powerful: all landings on, and returns from, the moon used programs based on a cooperative scheduler for navigation and attitude control of the spacecraft.1 It puts, however, a lot of responsibility into the hands of the programmer.

With the current kernel version, all threads on the run-queue are always executed before the scheduler re-evaluates the trigger conditions. That is, a higher-priority thread cannot starve lower-priority ones, which has pros and cons. The priority only determines the position in the run-queue.
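In other words, priority is an ordering, not a gate. A minimal sketch (names illustrative): every ready thread ends up in the queue; sorting by priority only decides who runs first within the interval.

```python
def build_run_queue(ready_threads):
    # Higher priority runs first, but every ready thread is executed,
    # so priority affects order only, never whether a thread runs.
    return sorted(ready_threads, key=lambda t: -t["prio"])

ready = [{"name": "logger", "prio": 0},
         {"name": "motor",  "prio": 2},
         {"name": "ui",     "prio": 1}]
queue = build_run_queue(ready)
print([t["name"] for t in queue])  # ['motor', 'ui', 'logger']
```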


Interrupts can be used, but only outside the kernel. That is, an interrupt handler must not, say, attempt to enable a thread: this would most likely corrupt the kernel’s data. A device can, however, be configured to trigger an interrupt which handles, for example, the reception of data; the corresponding thread can then wait for the corresponding hardware flag to be set, and process the received data. See example program NoBusyWaiting for how device flags can be used to schedule a thread.
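The shape of such a device-triggered thread can be sketched as follows. Everything here is simulated: the device flag is a plain dict entry, the flag check is done by the thread itself rather than by the scheduler, and the names are invented for illustration.

```python
def rx_thread(device, received):
    """Sketch: a thread that only does work once a device flag is set."""
    while True:
        while not device["rx_ready"]:    # hardware flag, simulated
            yield                        # not ready: give the core back
        received.append(device["data"])  # process the received data
        device["rx_ready"] = False       # clear the flag
        yield

device = {"rx_ready": False, "data": None}
received = []
t = rx_thread(device, received)
next(t)                    # scheduled once: flag not set, nothing happens
device["data"] = 0x2A      # an interrupt handler would deposit data ...
device["rx_ready"] = True  # ... and set the flag
next(t)                    # scheduled again: flag set, data is processed
print(received)  # [42]
```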

Thread Allocation and Use

All data structures for the threads and their coroutines are created when the kernel is installed. The control program then assigns parameterless procedures to the threads for scheduling and execution, also allocating the required stack space.
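A hypothetical allocation call could look like this in Python; the function name, the slot representation, and the parameters are all assumptions for illustration, not the kernel’s actual interface.

```python
def allocate(slot, proc, stack_size):
    """Assign a parameterless procedure to a pre-created thread slot.
    The stack size is defined at the first allocation only."""
    if slot["stack_size"] is None:
        slot["stack_size"] = stack_size
    slot["coro"] = proc()     # the thread will run this code from the start
    slot["enabled"] = True

def blink():
    while True:
        yield

def log_temp():
    while True:
        yield

slot = {"stack_size": None, "coro": None, "enabled": False}
allocate(slot, blink, 512)      # first allocation defines the stack size
allocate(slot, log_temp, 1024)  # re-use with new code; the size stays at 512
print(slot["stack_size"])  # 512
```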

Once allocated, threads “live” forever. They can be suspended, of course, and re-used by allocating a different piece of code to execute. This concept keeps the kernel and the scheduling algorithm simple. Suspended threads don’t use up execution time, and the scheduler skips over them when evaluating conditions.

The procedure with the thread’s code must not terminate: it either loops, or the thread must suspend itself right at the end. Such a “one-shot” thread, i.e. one without a loop, can be re-used with another code allocation. It cannot simply be re-enabled, however, since it would then terminate. Therefore, if a procedure is to be run “one-shot” again and again, it’s easier to program a loop, since the thread can then simply be re-enabled to run through again.
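The two shapes just described look roughly like this in generator form (illustrative only; the parked `yield` loop stands in for the kernel’s self-suspension):

```python
def looping_thread(log):
    """Looping shape: re-enabling the thread simply runs the body again."""
    while True:
        log.append("pass")
        yield  # done for this activation

def one_shot_thread(log):
    """One-shot shape: the body runs once; the procedure must not
    terminate, so the thread parks itself at the end."""
    log.append("done")
    while True:
        yield  # parked; re-use requires allocating new code to the thread

log = []
lt = looping_thread(log)
next(lt); next(lt)   # two activations, two passes
ot = one_shot_thread(log)
next(ot); next(ot)   # further activations do nothing
print(log)  # ['pass', 'pass', 'done']
```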

Each thread gets its own stack memory space. Its size can be defined at the first allocation, but cannot be changed later, for example when the thread is given another procedure to execute. Determining the required stack space can be tricky. The best practical approach is to start the thread’s design and implementation with a “reasonable” value, then measure the actual usage with the tools provided in module Memory, and adjust.
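One common way such measurement tools work is stack painting: fill the stack with a known pattern at allocation, then later count how many words were ever overwritten. The sketch below only illustrates that principle on a simulated stack; whether module Memory works exactly this way is not stated here.

```python
PAINT = 0xDEADBEEF  # fill pattern, unlikely to occur naturally

def paint_stack(words):
    """At allocation, fill the (simulated) stack with the paint pattern."""
    return [PAINT] * words

def high_water_mark(stack):
    """Index 0 is the deepest stack word, so the leading run of
    untouched paint words is the never-used reserve."""
    untouched = 0
    for w in stack:
        if w != PAINT:
            break
        untouched += 1
    return len(stack) - untouched

stack = paint_stack(8)
stack[5] = 1; stack[6] = 2; stack[7] = 3   # simulate use of the top 3 words
print(high_water_mark(stack))  # 3
```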

Since threads use the PSP, they don’t have to account for the stack needs of exception handlers, which run on the MSP. The thread stacks only have to provide space for the hardware exception stacking, i.e. the register frame pushed on exception entry.

Shared Resources, Synchronisation

Since the threads have full control of “their” processor core while executing, there’s no need for mutual lockout for shared data and resources between threads, provided that

  • the thread finishes its data mutations, or its use of a resource, within one run-cycle, and
  • the threads sharing the data or resources run on the same core.
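The following sketch illustrates why these two conditions suffice, using the same generator model as above (illustrative names only): the updater mutates both halves of a shared pair without yielding in between, so the checker, running on the same (simulated) core, can never observe a torn update.

```python
def updater(shared):
    """Mutates both fields within one run-cycle, i.e. without yielding
    in between, so no other thread can see a half-update."""
    while True:
        n = shared["lo"] + 1
        shared["lo"] = n
        shared["hi"] = n   # second half of the mutation, still before yield
        yield

def checker(shared, errors):
    while True:
        if shared["lo"] != shared["hi"]:   # would indicate a torn update
            errors.append(dict(shared))
        yield

shared = {"lo": 0, "hi": 0}
errors = []
u, c = updater(shared), checker(shared, errors)
for _ in range(3):       # cooperative interleaving on one core
    next(u); next(c)
print(errors)  # []
```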

In case we need mutual lockout and synchronisation:

See Also

Example Programs

  1. There were also programmable timers with interrupts to handle precision timing, e.g. TIME3/T3RUPT for the so-called waitlist, and TIME4/T4RUPT for periodic sensor readings. – The Apollo Guidance Computer: Architecture and Operation, by Frank O’Brien; worth a read to learn how to compute with close to no processing power, and still navigate deep space and land two humans on a celestial body nearly 400,000 km away. ↩︎