pub struct Guard {
pub(crate) local: *const Local,
}
A guard that keeps the current thread pinned.
§Pinning
The current thread is pinned by calling `pin`, which returns a new guard:
use crossbeam_epoch as epoch;
// It is often convenient to prefix a call to `pin` with a `&` in order to create a reference.
// This is not really necessary, but makes passing references to the guard a bit easier.
let guard = &epoch::pin();
When a guard gets dropped, the current thread is automatically unpinned.
§Pointers on the stack
Having a guard allows us to create pointers on the stack to heap-allocated objects. For example:
use crossbeam_epoch::{self as epoch, Atomic};
use std::sync::atomic::Ordering::SeqCst;
// Create a heap-allocated number.
let a = Atomic::new(777);
// Pin the current thread.
let guard = &epoch::pin();
// Load the heap-allocated object and create pointer `p` on the stack.
let p = a.load(SeqCst, guard);
// Dereference the pointer and print the value:
if let Some(num) = unsafe { p.as_ref() } {
    println!("The number is {}.", num);
}
§Multiple guards
Pinning is reentrant and it is perfectly legal to create multiple guards. In that case, the thread will actually be pinned only when the first guard is created and unpinned when the last one is dropped:
use crossbeam_epoch as epoch;
let guard1 = epoch::pin();
let guard2 = epoch::pin();
assert!(epoch::is_pinned());
drop(guard1);
assert!(epoch::is_pinned());
drop(guard2);
assert!(!epoch::is_pinned());
Fields§
§local: *const Local
Implementations§
impl Guard
pub fn defer<F, R>(&self, f: F)
where
    F: FnOnce() -> R,
    F: Send + 'static,
Stores a function so that it can be executed at some point after all currently pinned threads get unpinned.
This method first stores `f` into the thread-local (or handle-local) cache. If this cache becomes full, some functions are moved into the global cache. At the same time, some functions from both local and global caches may get executed in order to incrementally clean up the caches as they fill up.
There is no guarantee when exactly `f` will be executed. The only guarantee is that it won’t be executed until all currently pinned threads get unpinned. In theory, `f` might never run, but the epoch-based garbage collection will make an effort to execute it reasonably soon.
If this method is called from an `unprotected` guard, the function will simply be executed immediately.
pub unsafe fn defer_unchecked<F, R>(&self, f: F)
where
    F: FnOnce() -> R,
Stores a function so that it can be executed at some point after all currently pinned threads get unpinned.
This method first stores `f` into the thread-local (or handle-local) cache. If this cache becomes full, some functions are moved into the global cache. At the same time, some functions from both local and global caches may get executed in order to incrementally clean up the caches as they fill up.
There is no guarantee when exactly `f` will be executed. The only guarantee is that it won’t be executed until all currently pinned threads get unpinned. In theory, `f` might never run, but the epoch-based garbage collection will make an effort to execute it reasonably soon.
If this method is called from an `unprotected` guard, the function will simply be executed immediately.
§Safety
The given function must not hold any references to objects on the stack. It is highly recommended that the passed function is always marked with `move` in order to prevent accidental borrows.
use crossbeam_epoch as epoch;
let guard = &epoch::pin();
let message = "Hello!";
unsafe {
    // ALWAYS use `move` when sending a closure into `defer_unchecked`.
    guard.defer_unchecked(move || {
        println!("{}", message);
    });
}
Apart from that, keep in mind that another thread may execute `f`, so anything accessed by the closure must be `Send`.
We intentionally didn’t require `F: Send`, because Rust’s type system usually cannot prove `F: Send` for typical use cases. For example, consider the following code snippet, which exemplifies the typical use case of deferring the deallocation of a shared reference:
let shared = Owned::new(7i32).into_shared(guard);
guard.defer_unchecked(move || shared.into_owned()); // `Shared` is not `Send`!
While `Shared` is not `Send`, it’s safe for another thread to call the deferred function, because it’s called only after the grace period, when `shared` is no longer shared with other threads. But we don’t expect the type system to prove this.
§Examples
When a heap-allocated object in a data structure becomes unreachable, it has to be deallocated. However, the current thread and other threads may still be holding references on the stack to that same object. Therefore it cannot be deallocated before those references get dropped. This method can defer deallocation until all those threads get unpinned and consequently drop all their references on the stack.
use crossbeam_epoch::{self as epoch, Atomic, Owned};
use std::sync::atomic::Ordering::SeqCst;
let a = Atomic::new("foo");
// Now suppose that `a` is shared among multiple threads and concurrently
// accessed and modified...
// Pin the current thread.
let guard = &epoch::pin();
// Steal the object currently stored in `a` and swap it with another one.
let p = a.swap(Owned::new("bar").into_shared(guard), SeqCst, guard);
if !p.is_null() {
    // The object `p` is pointing to is now unreachable.
    // Defer its deallocation until all currently pinned threads get unpinned.
    unsafe {
        // ALWAYS use `move` when sending a closure into `defer_unchecked`.
        guard.defer_unchecked(move || {
            println!("{} is now being deallocated.", p.deref());
            // Now we have unique access to the object pointed to by `p` and can turn it
            // into an `Owned`. Dropping the `Owned` will deallocate the object.
            drop(p.into_owned());
        });
    }
}
pub unsafe fn defer_destroy<T>(&self, ptr: Shared<'_, T>)
Stores a destructor for an object so that it can be deallocated and dropped at some point after all currently pinned threads get unpinned.
This method first stores the destructor into the thread-local (or handle-local) cache. If this cache becomes full, some destructors are moved into the global cache. At the same time, some destructors from both local and global caches may get executed in order to incrementally clean up the caches as they fill up.
There is no guarantee when exactly the destructor will be executed. The only guarantee is that it won’t be executed until all currently pinned threads get unpinned. In theory, the destructor might never run, but the epoch-based garbage collection will make an effort to execute it reasonably soon.
If this method is called from an `unprotected` guard, the destructor will simply be executed immediately.
§Safety
The object must no longer be reachable by other threads; otherwise it might still be in use when the destructor runs.
Apart from that, keep in mind that another thread may execute the destructor, so the object must be sendable to other threads.
We intentionally didn’t require `T: Send`, because Rust’s type system usually cannot prove `T: Send` for typical use cases. For example, consider the following code snippet, which exemplifies the typical use case of deferring the deallocation of a shared reference:
let shared = Owned::new(7i32).into_shared(guard);
guard.defer_destroy(shared); // `Shared` is not `Send`!
While `Shared` is not `Send`, it’s safe for another thread to call the destructor, because it’s called only after the grace period, when `shared` is no longer shared with other threads. But we don’t expect the type system to prove this.
§Examples
When a heap-allocated object in a data structure becomes unreachable, it has to be deallocated. However, the current thread and other threads may still be holding references on the stack to that same object. Therefore it cannot be deallocated before those references get dropped. This method can defer deallocation until all those threads get unpinned and consequently drop all their references on the stack.
use crossbeam_epoch::{self as epoch, Atomic, Owned};
use std::sync::atomic::Ordering::SeqCst;
let a = Atomic::new("foo");
// Now suppose that `a` is shared among multiple threads and concurrently
// accessed and modified...
// Pin the current thread.
let guard = &epoch::pin();
// Steal the object currently stored in `a` and swap it with another one.
let p = a.swap(Owned::new("bar").into_shared(guard), SeqCst, guard);
if !p.is_null() {
    // The object `p` is pointing to is now unreachable.
    // Defer its deallocation until all currently pinned threads get unpinned.
    unsafe {
        guard.defer_destroy(p);
    }
}
pub fn flush(&self)
Clears up the thread-local cache of deferred functions by executing them or moving into the global cache.
Call this method after deferring execution of a function if you want to get it executed as soon as possible. Flushing will make sure the function is residing in the global cache, so that any thread has a chance of taking it and executing it.
If this method is called from an `unprotected` guard, it is a no-op (nothing happens).
§Examples
use crossbeam_epoch as epoch;
let guard = &epoch::pin();
guard.defer(move || {
    println!("This better be printed as soon as possible!");
});
guard.flush();
pub fn repin(&mut self)
Unpins and then immediately re-pins the thread.
This method is useful when you don’t want to delay the advancement of the global epoch by holding an old epoch. For safety, you should not maintain any guard-based reference across the call (the latter is enforced by `&mut self`). The thread will only be repinned if this is the only active guard for the current thread.
If this method is called from an `unprotected` guard, the call is a no-op.
§Examples
use crossbeam_epoch::{self as epoch, Atomic};
use std::sync::atomic::Ordering::SeqCst;
let a = Atomic::new(777);
let mut guard = epoch::pin();
{
    let p = a.load(SeqCst, &guard);
    assert_eq!(unsafe { p.as_ref() }, Some(&777));
}
guard.repin();
{
    let p = a.load(SeqCst, &guard);
    assert_eq!(unsafe { p.as_ref() }, Some(&777));
}
pub fn repin_after<F, R>(&mut self, f: F) -> R
where
    F: FnOnce() -> R,
Temporarily unpins the thread, executes the given function and then re-pins the thread.
This method is useful when you need to perform a long-running operation (e.g. sleeping) and don’t need to maintain any guard-based reference across the call (the latter is enforced by `&mut self`). The thread will only be unpinned if this is the only active guard for the current thread.
If this method is called from an `unprotected` guard, then the passed function is called directly without unpinning the thread.
§Examples
use crossbeam_epoch::{self as epoch, Atomic};
use std::sync::atomic::Ordering::SeqCst;
use std::thread;
use std::time::Duration;
let a = Atomic::new(777);
let mut guard = epoch::pin();
{
    let p = a.load(SeqCst, &guard);
    assert_eq!(unsafe { p.as_ref() }, Some(&777));
}
guard.repin_after(|| thread::sleep(Duration::from_millis(50)));
{
    let p = a.load(SeqCst, &guard);
    assert_eq!(unsafe { p.as_ref() }, Some(&777));
}
pub fn collector(&self) -> Option<&Collector>
Returns the `Collector` associated with this guard.
This method is useful when you need to ensure that all guards used with a data structure come from the same collector.
If this method is called from an `unprotected` guard, then `None` is returned.
§Examples
use crossbeam_epoch as epoch;
let guard1 = epoch::pin();
let guard2 = epoch::pin();
assert!(guard1.collector() == guard2.collector());
Trait Implementations§
Auto Trait Implementations§
impl Freeze for Guard
impl !RefUnwindSafe for Guard
impl !Send for Guard
impl !Sync for Guard
impl Unpin for Guard
impl !UnwindSafe for Guard
Blanket Implementations§
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> CheckedAs for T
fn checked_as<Dst>(self) -> Option<Dst>
where
    T: CheckedCast<Dst>,
impl<Src, Dst> CheckedCastFrom<Src> for Dst
where
    Src: CheckedCast<Dst>,
fn checked_cast_from(src: Src) -> Option<Dst>
impl<T> Downcast for T
where
    T: Any,
fn into_any(self: Box<T>) -> Box<dyn Any>
Converts `Box<dyn Trait>` (where `Trait: Downcast`) to `Box<dyn Any>`, which can then be further downcast into `Box<ConcreteType>` where `ConcreteType` implements `Trait`.
fn into_any_rc(self: Rc<T>) -> Rc<dyn Any>
Converts `Rc<Trait>` (where `Trait: Downcast`) to `Rc<Any>`, which can then be further downcast into `Rc<ConcreteType>` where `ConcreteType` implements `Trait`.
fn as_any(&self) -> &(dyn Any + 'static)
Converts `&Trait` (where `Trait: Downcast`) to `&Any`. This is needed since Rust cannot generate `&Any`’s vtable from `&Trait`’s.
fn as_any_mut(&mut self) -> &mut (dyn Any + 'static)
Converts `&mut Trait` (where `Trait: Downcast`) to `&mut Any`. This is needed since Rust cannot generate `&mut Any`’s vtable from `&mut Trait`’s.
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts `self` into a `Left` variant of `Either<Self, Self>` if `into_left` is `true`, and into a `Right` variant otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts `self` into a `Left` variant of `Either<Self, Self>` if `into_left(&self)` returns `true`, and into a `Right` variant otherwise.
impl<T> IntoRequest<T> for T
fn into_request(self) -> Request<T>
Wraps the input message `T` in a `tonic::Request`.