Linux Kernel 3.7.1

Functions
void rcu_init (void)
void rcu_note_context_switch (int cpu)
int rcu_needs_cpu (int cpu, unsigned long *delta_jiffies)
void rcu_cpu_stall_reset (void)
void synchronize_rcu_bh (void)
void synchronize_sched_expedited (void)
void synchronize_rcu_expedited (void)
void kfree_call_rcu (struct rcu_head *head, void (*func)(struct rcu_head *rcu))
void rcu_barrier (void)
void rcu_barrier_bh (void)
void rcu_barrier_sched (void)
long rcu_batches_completed (void)
long rcu_batches_completed_bh (void)
long rcu_batches_completed_sched (void)
void rcu_force_quiescent_state (void)
void rcu_bh_force_quiescent_state (void)
void rcu_sched_force_quiescent_state (void)
void rcu_scheduler_starting (void)
Variables

unsigned long rcutorture_testseq
unsigned long rcutorture_vernum
int rcu_scheduler_active __read_mostly
rcu_barrier_bh - Wait until all in-flight call_rcu_bh() callbacks complete.
rcu_barrier_sched - Wait for in-flight call_rcu_sched() callbacks.
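These barrier primitives matter chiefly at module-unload time: a module that has posted RCU callbacks must wait for all of them to run before its code and data disappear. The following is a minimal sketch of that pattern, assuming a hypothetical struct item and teardown hook (none of these names come from the kernel source):

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Hypothetical RCU-protected object. */
struct item {
	struct rcu_head rcu;
	int payload;
};

/* RCU callback: runs after an rcu_bh grace period has elapsed. */
static void item_free_rcu(struct rcu_head *head)
{
	kfree(container_of(head, struct item, rcu));
}

/* Retire an item: readers under rcu_read_lock_bh() may still be
 * referencing it, so defer the free with call_rcu_bh(). */
static void retire_item(struct item *it)
{
	call_rcu_bh(&it->rcu, item_free_rcu);
}

static void __exit example_exit(void)
{
	/* Wait until every in-flight call_rcu_bh() callback has
	 * completed; otherwise item_free_rcu() could run after the
	 * module text is gone. */
	rcu_barrier_bh();
}
module_exit(example_exit);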
synchronize_rcu_bh - wait until an rcu_bh grace period has elapsed.
Control will return to the caller some time after a full rcu_bh grace period has elapsed, in other words, after all currently executing rcu_bh read-side critical sections have completed. These read-side critical sections are delimited by rcu_read_lock_bh() and rcu_read_unlock_bh(), and may be nested.
Definition at line 1024 of file rcutree_plugin.h.
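As an illustration of the above, here is a minimal sketch of the reader/updater pairing (struct foo, global_foo, and both function names are hypothetical, not from the kernel source): the reader runs in softirq context under rcu_read_lock_bh(), and the updater publishes a replacement, waits for an rcu_bh grace period, and only then frees the old version.

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	int data;
};

static struct foo __rcu *global_foo;

/* Reader side: may run in softirq (BH) context. */
static int read_foo(void)
{
	struct foo *p;
	int val = -1;

	rcu_read_lock_bh();
	p = rcu_dereference_bh(global_foo);
	if (p)
		val = p->data;
	rcu_read_unlock_bh();
	return val;
}

/* Updater side: publish the new version, then wait until all
 * pre-existing rcu_bh readers are done before freeing the old one. */
static void update_foo(struct foo *newp)
{
	struct foo *oldp;

	oldp = rcu_dereference_protected(global_foo, 1);
	rcu_assign_pointer(global_foo, newp);
	synchronize_rcu_bh();
	kfree(oldp);
}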
synchronize_sched_expedited - Brute-force RCU-sched grace period
Wait for an RCU-sched grace period to elapse, but use a "big hammer" approach to force the grace period to end quickly. This consumes significant time on all CPUs and is unfriendly to real-time workloads, and is thus not recommended for any sort of common-case code. In fact, if you are using synchronize_sched_expedited() in a loop, please restructure your code to batch your updates, and then use a single synchronize_sched() instead, as sketched below.
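A sketch of that restructuring, with replace_entry() standing in for a hypothetical per-entry update:

#include <linux/rcupdate.h>

struct entry;						/* hypothetical */
void replace_entry(struct entry **table, int i);	/* hypothetical */

/* Discouraged: one expedited grace period per update. */
static void update_all_expedited(struct entry **table, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		replace_entry(table, i);
		synchronize_sched_expedited();
	}
}

/* Preferred: batch all the updates, then wait for a single normal
 * RCU-sched grace period. */
static void update_all_batched(struct entry **table, int n)
{
	int i;

	for (i = 0; i < n; i++)
		replace_entry(table, i);
	synchronize_sched();
}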
Note that it is illegal to call this function while holding any lock that is acquired by a CPU-hotplug notifier. And yes, it is also illegal to call this function from a CPU-hotplug notifier. Failing to observe these restrictions will result in deadlock.
This implementation can be thought of as an application of ticket locking to RCU, with sync_sched_expedited_started and sync_sched_expedited_done taking on the roles of the halves of the ticket-lock word. Each task atomically increments sync_sched_expedited_started upon entry, snapshotting the old value, then attempts to stop all the CPUs. If this succeeds, then each CPU will have executed a context switch, resulting in an RCU-sched grace period. We are then done, so we use atomic_cmpxchg() to update sync_sched_expedited_done to match our snapshot, but only if someone else has not already advanced past our snapshot.
On the other hand, if try_stop_cpus() fails, we check the value of sync_sched_expedited_done. If it has advanced past our initial snapshot, then someone else must have forced a grace period some time after we took our snapshot. In this case, our work is done for us, and we can simply return. Otherwise, we try again, but keep our initial snapshot for purposes of checking for someone doing our work for us.
If we fail too many times in a row, we fall back to synchronize_sched().
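Condensed into code, the ticket scheme reads roughly as follows. This is a loose sketch of the mechanism described above, not the kernel's actual function: the retry bound, the delays between retries, and the synchronize_sched() fallback are all omitted.

#include <linux/atomic.h>
#include <linux/cpumask.h>
#include <linux/smp.h>
#include <linux/stop_machine.h>

static atomic_t sync_sched_expedited_started = ATOMIC_INIT(0);
static atomic_t sync_sched_expedited_done = ATOMIC_INIT(0);

/* Stop-machine callback: every CPU passing through here has context
 * switched, which is exactly an RCU-sched quiescent state. */
static int synchronize_sched_expedited_cpu_stop(void *data)
{
	smp_mb();	/* order this CPU's prior accesses */
	return 0;
}

static void synchronize_sched_expedited_sketch(void)
{
	int firstsnap, snap, s;

	/* Take a ticket: bump "started" and snapshot the new value. */
	firstsnap = snap = atomic_inc_return(&sync_sched_expedited_started);

	while (try_stop_cpus(cpu_online_mask,
			     synchronize_sched_expedited_cpu_stop, NULL)) {
		/* try_stop_cpus() failed. If "done" has advanced past our
		 * first snapshot, someone else forced a grace period for
		 * us in the meantime, and we are finished. */
		s = atomic_read(&sync_sched_expedited_done);
		if (s - firstsnap > 0)
			return;

		/* Otherwise take a fresh ticket and retry. */
		snap = atomic_inc_return(&sync_sched_expedited_started);
	}

	/* Every CPU context switched: advance "done" to our snapshot,
	 * unless a concurrent caller has already moved it past us. */
	do {
		s = atomic_read(&sync_sched_expedited_done);
		if (s - snap > 0)
			break;
	} while (atomic_cmpxchg(&sync_sched_expedited_done, s, snap) != s);
}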