cyg_scheduler_start should only be called once, to mark the end of
system initialization. In typical configurations it is called
automatically by the system startup, but some applications may bypass
the standard startup, in which case cyg_scheduler_start will have to
be called explicitly. The call will enable system interrupts, allowing
I/O operations to commence. Then the scheduler will be invoked and
control will be transferred to the highest priority runnable thread.
The call will never return.
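For applications that do bypass the standard startup, the sequence
might look like the following minimal sketch. This assumes the
standard kernel C API header; the thread priority, stack size and
function names are illustrative assumptions, not part of any fixed
interface.

#include <cyg/kernel/kapi.h>

static unsigned char stack[4096];
static cyg_thread    thread_data;
static cyg_handle_t  thread_handle;

static void main_thread(cyg_addrword_t data)
{
    /* Application code runs here, with interrupts enabled. */
}

void custom_startup(void)
{
    /* Create and resume at least one thread before starting the
     * scheduler, otherwise there is nothing to run.
     */
    cyg_thread_create(10,                 /* priority    */
                      main_thread, 0,     /* entry, data */
                      "main",             /* name        */
                      stack, sizeof(stack),
                      &thread_handle, &thread_data);
    cyg_thread_resume(thread_handle);

    /* Enable interrupts, invoke the scheduler, and transfer control
     * to the highest priority runnable thread. This never returns.
     */
    cyg_scheduler_start();
}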
The various data structures inside the eCos kernel must be protected
against concurrent updates. Consider a call to cyg_semaphore_post
which causes a thread to be woken up: the semaphore data structure
must be updated to remove the thread from its queue; the scheduler
data structure must also be updated to mark the thread as runnable; it
is possible that the newly runnable thread has a higher priority than
the current one, in which case preemption is required. If an interrupt
occurred in the middle of the semaphore post call and the interrupt
handler tried to manipulate the same data structures, for example by
making another thread runnable, the structures would likely be left in
an inconsistent state and the system would fail.
To prevent such problems the kernel contains a special lock known as
the scheduler lock. A typical kernel function such as
cyg_semaphore_post will claim the scheduler lock, do all its
manipulation of kernel data structures, and then release the scheduler
lock. The current thread cannot be preempted while it holds the
scheduler lock. If an interrupt occurs and a DSR is supposed to run to
signal that some event has occurred, that DSR is postponed until the
scheduler unlock operation. This prevents concurrent updates of kernel
data structures.
The kernel exports three routines for manipulating the scheduler lock.
cyg_scheduler_lock can be called to claim the lock. On return it is
guaranteed that the current thread will not be preempted, and that no
other code is manipulating any kernel data structures.
cyg_scheduler_unlock can be used to release the lock, which may cause
the current thread to be preempted. cyg_scheduler_read_lock can be
used to query the current state of the scheduler lock. This function
should never be needed because well-written code should always know
whether or not the scheduler is currently locked, but it may prove
useful during debugging.
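The following sketch shows the intended usage. The shared counter is
an invented example, and the debugging check uses the standard
CYG_ASSERT macro from the eCos infrastructure package.

#include <cyg/kernel/kapi.h>
#include <cyg/infra/cyg_ass.h>

static volatile int shared_count;

void bump_shared_count(void)
{
    cyg_scheduler_lock();
    /* Neither preemption nor DSRs can occur until the unlock. */
    shared_count++;
    cyg_scheduler_unlock();
}

void check_unlocked(void)
{
    /* Should never be needed, but can help track down a missing
     * unlock during debugging.
     */
    CYG_ASSERT(cyg_scheduler_read_lock() == 0,
               "scheduler lock unexpectedly held");
}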
The implementation of the scheduler lock involves a simple counter.
Code can call cyg_scheduler_lock multiple times, causing the counter
to be incremented each time, as long as cyg_scheduler_unlock is called
the same number of times. This behaviour is different from mutexes,
where an attempt by a thread to lock a mutex multiple times will
result in deadlock or an assertion failure.
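The counting behaviour means that code like the following sketch is
safe, whereas the equivalent with a mutex would not be. The function
names are purely illustrative.

#include <cyg/kernel/kapi.h>

static void update_part(void)
{
    cyg_scheduler_lock();      /* nested claim: counter 1 -> 2 */
    /* ... manipulate some shared state ... */
    cyg_scheduler_unlock();    /* counter 2 -> 1, still locked */
}

void update_all(void)
{
    cyg_scheduler_lock();      /* counter 0 -> 1               */
    update_part();             /* safe despite the inner claim */
    cyg_scheduler_unlock();    /* counter 1 -> 0, lock freed   */
}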
Typical application code should not use the scheduler lock. Instead,
other synchronization primitives such as mutexes and semaphores should
be used. While the scheduler is locked the current thread cannot be
preempted, so any higher priority threads will not be able to run.
Also, no DSRs can run, so device drivers may not be able to service
I/O requests.
However, there is one situation where locking the scheduler is
appropriate: if some data structure needs to be shared between an
application thread and a DSR associated with some interrupt source,
the thread can use the scheduler lock to prevent concurrent
invocations of the DSR and then safely manipulate the structure. It is
desirable that the scheduler lock be held for only a short period of
time, typically some tens of instructions. In exceptional cases there
may also be some performance-critical code where it is more
appropriate to use the scheduler lock rather than a mutex, because the
former is more efficient.
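The sketch below illustrates that one legitimate use: a thread
draining a ring buffer that a DSR fills. The device, buffer layout and
names are assumptions made for this example; the DSR has the signature
expected when it is registered with cyg_interrupt_create, and reading
the actual device register is elided.

#include <cyg/kernel/kapi.h>

static struct {
    volatile unsigned int head, tail;
    unsigned char         data[64];
} rx_ring;

/* DSR: runs with the scheduler lock already claimed, so it may
 * update the shared structure directly.
 */
static void rx_dsr(cyg_vector_t vector, cyg_ucount32 count,
                   cyg_addrword_t data)
{
    rx_ring.data[rx_ring.head % sizeof(rx_ring.data)] = 0; /* device byte */
    rx_ring.head++;
}

/* Thread side: hold the lock only for the few instructions needed
 * to manipulate the structure, locking out the DSR briefly.
 */
int rx_get(unsigned char *out)
{
    int ok = 0;
    cyg_scheduler_lock();
    if (rx_ring.tail != rx_ring.head) {
        *out = rx_ring.data[rx_ring.tail % sizeof(rx_ring.data)];
        rx_ring.tail++;
        ok = 1;
    }
    cyg_scheduler_unlock();
    return ok;
}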
cyg_scheduler_start can only be called during system initialization,
since it marks the end of that phase. The remaining functions may be
called from thread or DSR context. Locking the scheduler from inside a
DSR has no practical effect because the lock is claimed automatically
by the interrupt subsystem before running DSRs, but it allows
functions to be shared between normal thread code and DSRs.
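For instance, a routine meant to be callable from both contexts can
simply claim and release the lock, as in this sketch. From a thread
the lock/unlock pair provides the necessary protection; from a DSR the
nested claim merely increments the counter. The names are
illustrative.

#include <cyg/kernel/kapi.h>

static volatile unsigned int event_count;

/* Safe to call from both thread and DSR context. */
void record_event(void)
{
    cyg_scheduler_lock();
    event_count++;
    cyg_scheduler_unlock();
}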