## `std::sync::RwLock<T>`

- Like a `Mutex`, but has separate modes for reading and writing access.
- `T` must be `Send + Sync`.
- `RwLock` allows safe, concurrent access with better performance (fewer
  lockouts; more concurrency) in some situations than a `Mutex`, which locks
  everyone else out.
- `RwLock` only works on `Sync` types - otherwise, you really need mutual
  exclusion.
- Compare to `RefCell`, which also encodes the single-writer-or-multiple-reader
  idea, but panics instead of blocking on conflicts and is never thread-safe
  (never `Sync`).
- Like a `Mutex`, an `RwLock` can cause deadlock under certain conditions.
## `std::sync::Barrier`

- A synchronization point: threads `wait` on a copy of the `Barrier` to
  "report" their readiness.
- Blocks the first `n - 1` threads, and wakes all blocked threads when the
  `n`th thread reports.
## `std::sync::Condvar`

- A condition variable; usually paired with a `Mutex` wrapping some boolean
  predicate, which is the blocking predicate.
- Lets a thread sleep until the predicate becomes true, instead of spinning
  on `while !predicate { }`.
## `std::sync::Once`

- A synchronization primitive for running a one-time global initialization.
- If `once.call_once(function)` is called from multiple threads, only the
  first call will execute.

¹ Some section content borrowed from The Rustonomicon.
## Instruction Reordering

The compiler may reorder instructions as an optimization:

```
x = 1;            x = 3;
y = 2;   =====>   y = 2;
x = 3;
```

Another thread may be relying on `x` being equal to `1` at some point, and
this optimization may be bogus!

The hardware may also reorder instructions. Suppose two threads run the
following:

```
Initially: x = 0, y = 1;

Thread 1  | Thread 2
----------+-------------
y = 3;    | if x == 1 {
x = 1;    |     y *= 2;
          | }
```

Possible outcomes:

- `y = 6` (thread 1 writes to y before thread 2 reads y)
- `y = 3` (thread 1 writes to y after thread 2 reads y)
## `std::sync::atomic`

- `std::sync::atomic` provides atomic types: `AtomicUsize`, `AtomicIsize`,
  `AtomicBool`, and `AtomicPtr`.
- These are all safe to share between threads (they are `Sync`).
- The types in `std::sync` use these primitives (`Arc`, `Mutex`, etc.).
- You can mutate an `AtomicUsize` freely; it supports atomic versions of the
  usual operations of a `usize`.
- Every atomic operation takes an `Ordering`.¹
- An `Ordering` describes how the compiler & CPU may reorder instructions
  surrounding atomic operations.

¹ More on these in a bit.
## Rayon: Parallel Iterators

- Rayon provides parallel iterators, and `rayon::join`, which converts
  recursive divide-and-conquer computations to execute in parallel.
- The data involved must be thread-safe (`Send` and `Sync`).
- Use `par_iter()` or `par_iter_mut()` instead of the non-`par_` variants.
- The standard iterator adapters are available (`map`, `fold`, `filter`, etc.).

```rust
// Increment all values in a slice
use rayon::prelude::*;

fn increment_all(input: &mut [i32]) {
    input.par_iter_mut()
         .for_each(|p| *p += 1);
}
```
## `rayon::join`

- Rayon's real power lies in its `join` method; `par_iter()` abstracts over it.
- `join` takes two closures and potentially runs them in parallel.
- `increment_all()`, rewritten using `join()`:

```rust
// Increment all values in a slice.
fn increment_all(slice: &mut [i32]) {
    if slice.len() < 1000 {
        for p in slice {
            *p += 1;
        }
    } else {
        let mid_point = slice.len() / 2;
        let (left, right) = slice.split_at_mut(mid_point);
        rayon::join(|| increment_all(left),
                    || increment_all(right));
    }
}
```
- `join()` can also be used to implement things like parallel quicksort:

```rust
fn quick_sort<T: PartialOrd + Send>(v: &mut [T]) {
    if v.len() <= 1 {
        return;
    }
    let mid = partition(v); // Choose some partition
    let (lo, hi) = v.split_at_mut(mid);
    rayon::join(|| quick_sort(lo),
                || quick_sort(hi));
}
```
- `join` is not the same as just spawning two threads (one per closure).
- `join` is designed to have low overhead, but may have performance
  implications on small workloads.
- `join` is lower-level; the borrow checker still applies:

```rust
// This fails to compile, since both closures in `join`
// try to borrow `slice` mutably.
fn increment_all(slice: &mut [i32]) {
    rayon::join(|| process(slice),
                || process(slice));
}
```
When parallelizing sequential code, the usual conversions are:

- `Cell` -> `AtomicUsize`, `AtomicBool`, etc.
- `RefCell` -> `RwLock`
- `Rc` -> `Arc`

But beware:

- Using `(Ref)Cell`-like structures in parallel has some pitfalls due to code
  interleaving.
- An `Rc<Cell<usize>>` can't just be blindly converted to an
  `Arc<AtomicUsize>`.
- Suppose `ts` is an `Arc<AtomicUsize>`:

```
Thread 1                        | Thread 2
--------------------------------+--------------------------------
let value =                     |
  ts.load(Ordering::SeqCst);    |
// value = X                    | let value =
                                |   ts.load(Ordering::SeqCst);
                                | // value = X
ts.store(value + 1);            | ts.store(value + 1);
// ts = X + 1                   | // ts = X + 1
```
- `ts` only gets incremented by 1, but we expect it to get incremented twice.
- This interleaving is possible when using separate `load` and `store`
  operations.
- A single `fetch_add` is more appropriate (and correct!) in this case.
Under the hood, `join` uses a work queue:

- In `join(a, b)`, `a` is started immediately, and `b` gets put on a queue of
  pending work, where an idle thread may take it.
- When `a` completes, the thread that ran `a` checks to see if `b` was taken
  off the queue, and runs it itself if not.

*Your personal value of relativity may vary.
## Crossbeam: Scoped Threads

- `crossbeam::Scope::defer(function)` schedules some code to be executed at
  the end of its scope.
- `crossbeam::Scope::spawn` creates a standalone scoped thread, tied to a
  parent scope.
- A normal spawned thread requires its closure to have a `'static` lifetime.
- Since a `Scope` is tied to its parent thread's lifetime, the function the
  thread executes need only have some `'a` lifetime of its parent!
- No need for an `Arc` wrapper just to share data with local threads.
- `scoped_pool` provides pools with both scoped and unscoped threads;
  `scoped_threadpool` only has scoped pools.
- Why aren't scoped threads in `std`? You could leak a scoped thread's join
  guard with `mem::forget`, and have it end up accessing freed memory.
## Futures

- `Future`s and `Stream`s represent asynchronous computations.
- Like `Promise`s (in JavaScript).

> The term promise was proposed in 1976 by Daniel P. Friedman and David Wise,
> and Peter Hibbard called it eventual. A somewhat similar concept, future,
> was introduced in 1977 in a paper by Henry Baker and Carl Hewitt.
> —Wikipedia

- A `Future` is a proxy for a value which is being computed asynchronously.
- The computation may be run in another thread, or may be kicked off as a
  callback to some other operation.
- Think of a `Future` like a `Result` whose value is computed asynchronously.
- `Stream`s are similar to `Future`s, but represent a sequence of values
  instead of just one.

```rust
extern crate eventual;
use eventual::*;

let f1 = Future::spawn(|| { 1 });
let f2 = Future::spawn(|| { 2 });

let res = join((f1, f2))
    .and_then(|(v1, v2)| Ok(v1 + v2))
    .await().unwrap();

println!("{}", res); // 3
```