Linux Kernel  3.7.1
rcutree.h File Reference



void rcu_init (void)
void rcu_note_context_switch (int cpu)
int rcu_needs_cpu (int cpu, unsigned long *delta_jiffies)
void rcu_cpu_stall_reset (void)
void synchronize_rcu_bh (void)
void synchronize_sched_expedited (void)
void synchronize_rcu_expedited (void)
void kfree_call_rcu (struct rcu_head *head, void(*func)(struct rcu_head *rcu))
void rcu_barrier (void)
void rcu_barrier_bh (void)
void rcu_barrier_sched (void)
long rcu_batches_completed (void)
long rcu_batches_completed_bh (void)
long rcu_batches_completed_sched (void)
void rcu_force_quiescent_state (void)
void rcu_bh_force_quiescent_state (void)
void rcu_sched_force_quiescent_state (void)
void rcu_scheduler_starting (void)


unsigned long rcutorture_testseq
unsigned long rcutorture_vernum
int rcu_scheduler_active __read_mostly

Function Documentation

void kfree_call_rcu ( struct rcu_head *  head,
void(*)(struct rcu_head *rcu)  func 
)

Definition at line 1013 of file rcutree_plugin.h.
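
A minimal usage sketch follows. The struct, callback, and update path are illustrative, not part of rcutree.h; the point is that the rcu_head must be embedded in the object being freed, and that kfree_call_rcu() defers the kfree() until a grace period has elapsed:

```c
/*
 * Hypothetical sketch: queue an object for kfree() after a grace
 * period, from a context that cannot block. struct foo, foo_free_rcu
 * and foo_release are illustrative names.
 */
#include <linux/slab.h>
#include <linux/rcupdate.h>

struct foo {
	int data;
	struct rcu_head rcu;	/* rcu_head embedded in the object */
};

static void foo_free_rcu(struct rcu_head *head)
{
	kfree(container_of(head, struct foo, rcu));
}

static void foo_release(struct foo *fp)
{
	/* fp is freed only after all pre-existing readers finish. */
	kfree_call_rcu(&fp->rcu, foo_free_rcu);
}
```

In-tree code normally reaches this function through the kfree_rcu() convenience macro rather than calling it directly.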

void rcu_barrier ( void  )

rcu_barrier - Wait until all in-flight call_rcu() callbacks complete.

Definition at line 1048 of file rcutree_plugin.h.

void rcu_barrier_bh ( void  )

rcu_barrier_bh - Wait until all in-flight call_rcu_bh() callbacks complete.

Definition at line 2599 of file rcutree.c.

void rcu_barrier_sched ( void  )

rcu_barrier_sched - Wait for in-flight call_rcu_sched() callbacks.

Definition at line 2608 of file rcutree.c.
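
The classic caller of the barrier functions is a module-exit path: a module that posted RCU callbacks must wait for them to finish before its code is unloaded, or the callbacks would run on freed module text. A hypothetical sketch (the module and its teardown are illustrative):

```c
/*
 * Hypothetical module-exit sketch. Module-specific teardown must
 * first guarantee that no new callbacks can be queued; only then do
 * the barriers below wait for the callbacks already posted.
 */
#include <linux/module.h>
#include <linux/rcupdate.h>

static void __exit foo_exit(void)
{
	rcu_barrier();		/* for call_rcu() callbacks */
	rcu_barrier_bh();	/* if call_rcu_bh() was also used */
	rcu_barrier_sched();	/* if call_rcu_sched() was also used */
}
module_exit(foo_exit);
```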

long rcu_batches_completed ( void  )

Definition at line 909 of file rcutree_plugin.h.

long rcu_batches_completed_bh ( void  )

Definition at line 251 of file rcutree.c.

long rcu_batches_completed_sched ( void  )

Definition at line 242 of file rcutree.c.

void rcu_bh_force_quiescent_state ( void  )

Definition at line 260 of file rcutree.c.

void rcu_cpu_stall_reset ( void  )

rcu_cpu_stall_reset - prevent further stall warnings in current grace period

Set the stall-warning timeout way off into the future, thus preventing any RCU CPU stall-warning messages from appearing in the current set of RCU grace periods.

The caller must disable hard irqs.

Definition at line 1008 of file rcutree.c.
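
The hard-irq requirement can be met as in this hypothetical sketch; the surrounding context (e.g. a kernel debugger deliberately stopping the CPUs for a long time) is illustrative:

```c
/*
 * Hypothetical sketch: suppress stall warnings before a long,
 * deliberate stop of the CPUs, honoring the requirement that hard
 * irqs be disabled around the call.
 */
static void foo_suppress_stall_warnings(void)
{
	unsigned long flags;

	local_irq_save(flags);
	rcu_cpu_stall_reset();	/* push stall timeout far into the future */
	local_irq_restore(flags);
}
```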

void rcu_force_quiescent_state ( void  )

Definition at line 919 of file rcutree_plugin.h.

void rcu_init ( void  )

Definition at line 2962 of file rcutree.c.

int rcu_needs_cpu ( int  cpu,
unsigned long *  delta_jiffies 
)

Definition at line 1501 of file rcutree_plugin.h.

void rcu_note_context_switch ( int  cpu)

Definition at line 198 of file rcutree.c.

void rcu_sched_force_quiescent_state ( void  )

Definition at line 294 of file rcutree.c.

void rcu_scheduler_starting ( void  )

Definition at line 2789 of file rcutree.c.

void synchronize_rcu_bh ( void  )

synchronize_rcu_bh - wait until an rcu_bh grace period has elapsed.

Control will return to the caller some time after a full rcu_bh grace period has elapsed, in other words after all currently executing rcu_bh read-side critical sections have completed. RCU read-side critical sections are delimited by rcu_read_lock_bh() and rcu_read_unlock_bh(), and may be nested.

Definition at line 2241 of file rcutree.c.
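
A typical updater pairs this with list_del_rcu(): unlink the element under a lock, wait for all rcu_read_lock_bh() readers, then free it. A hypothetical sketch (foo, foo_lock and foo_list are illustrative names):

```c
/*
 * Hypothetical updater sketch: remove an element from an
 * rcu_bh-protected list, wait for bh readers, then free it.
 */
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/rcupdate.h>

static DEFINE_SPINLOCK(foo_lock);
static LIST_HEAD(foo_list);

struct foo {
	struct list_head list;
	int data;
};

static void foo_remove(struct foo *fp)
{
	spin_lock(&foo_lock);
	list_del_rcu(&fp->list);
	spin_unlock(&foo_lock);

	/* Readers traverse foo_list under rcu_read_lock_bh(). */
	synchronize_rcu_bh();
	kfree(fp);
}
```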

void synchronize_rcu_expedited ( void  )

Definition at line 1024 of file rcutree_plugin.h.

void synchronize_sched_expedited ( void  )

synchronize_sched_expedited - Brute-force RCU-sched grace period

Wait for an RCU-sched grace period to elapse, but use a "big hammer" approach to force the grace period to end quickly. This consumes significant time on all CPUs and is unfriendly to real-time workloads, so is thus not recommended for any sort of common-case code. In fact, if you are using synchronize_sched_expedited() in a loop, please restructure your code to batch your updates, and then use a single synchronize_sched() instead.

Note that it is illegal to call this function while holding any lock that is acquired by a CPU-hotplug notifier. And yes, it is also illegal to call this function from a CPU-hotplug notifier. Failing to observe these restrictions will result in deadlock.

This implementation can be thought of as an application of ticket locking to RCU, with sync_sched_expedited_started and sync_sched_expedited_done taking on the roles of the halves of the ticket-lock word. Each task atomically increments sync_sched_expedited_started upon entry, snapshotting the old value, then attempts to stop all the CPUs. If this succeeds, then each CPU will have executed a context switch, resulting in an RCU-sched grace period. We are then done, so we use atomic_cmpxchg() to update sync_sched_expedited_done to match our snapshot – but only if someone else has not already advanced past our snapshot.

On the other hand, if try_stop_cpus() fails, we check the value of sync_sched_expedited_done. If it has advanced past our initial snapshot, then someone else must have forced a grace period some time after we took our snapshot. In this case, our work is done for us, and we can simply return. Otherwise, we try again, but keep our initial snapshot for purposes of checking for someone doing our work for us.

If we fail too many times in a row, we fall back to synchronize_sched().

Definition at line 2310 of file rcutree.c.
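
The batching advice above can be sketched as follows; remove_element() and free_removed_elements() are hypothetical helpers standing in for the caller's update logic:

```c
/* Discouraged: one expedited grace period per update. */
static void slow_teardown(int n)
{
	int i;

	for (i = 0; i < n; i++) {
		remove_element(i);
		synchronize_sched_expedited();	/* big hammer, n times */
	}
}

/* Preferred: batch the removals, then a single normal grace period. */
static void batched_teardown(int n)
{
	int i;

	for (i = 0; i < n; i++)
		remove_element(i);
	synchronize_sched();		/* one grace period covers all n */
	free_removed_elements();
}
```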

Variable Documentation

int rcu_scheduler_active __read_mostly

Definition at line 81 of file setup.c.

unsigned long rcutorture_testseq

Definition at line 156 of file rcutree.c.

unsigned long rcutorture_vernum

Definition at line 157 of file rcutree.c.