//! Callsites represent the source locations from which spans or events
//! originate.
//!
//! # What Are Callsites?
//!
//! Every span or event in `tracing` is associated with a [`Callsite`]. A
//! callsite is a small `static` value that is responsible for the following:
//!
//! * Storing the span or event's [`Metadata`],
//! * Uniquely [identifying](Identifier) the span or event definition,
//! * Caching the subscriber's [`Interest`][^1] in that span or event, to avoid
//!   re-evaluating filters.
//!
//! # Registering Callsites
//!
//! When a span or event is recorded for the first time, its callsite
//! [`register`]s itself with the global callsite registry. Registering a
//! callsite calls the [`Subscriber::register_callsite`][`register_callsite`]
//! method with that callsite's [`Metadata`] on every currently active
//! subscriber. This serves two primary purposes: informing subscribers of the
//! callsite's existence, and performing static filtering.
//!
//! ## Callsite Existence
//!
//! If a [`Subscriber`] implementation wishes to allocate storage for each
//! unique span/event location in the program, or pre-compute some value
//! that will be used to record that span or event in the future, it can
//! do so in its [`register_callsite`] method.
//!
//! ## Performing Static Filtering
//!
//! The [`register_callsite`] method returns an [`Interest`] value,
//! which indicates that the subscriber either [always] wishes to record
//! that span or event, [sometimes] wishes to record it based on a
//! dynamic filter evaluation, or [never] wishes to record it.
//!
//! When registering a new callsite, the [`Interest`]s returned by every
//! currently active subscriber are combined, and the result is stored at
//! each callsite. This way, when the span or event occurs in the
//! future, the cached [`Interest`] value can be checked efficiently
//! to determine if the span or event should be recorded, without
//! needing to perform expensive filtering (i.e. calling the
//! [`Subscriber::enabled`] method every time a span or event occurs).
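//!
//! For example, a [`Subscriber`] might use [`register_callsite`] both to note
//! that a callsite exists and to filter statically by verbosity level. The
//! following is a minimal sketch (the `LevelFiltered` type and its policy of
//! enabling only `INFO` and less-verbose callsites are illustrative, not part
//! of this crate):
//!
//! ```rust
//! use std::sync::Mutex;
//! use tracing_core::{
//!     dispatcher::Dispatch,
//!     event::Event,
//!     metadata::{Level, Metadata},
//!     span,
//!     subscriber::{Interest, Subscriber},
//! };
//!
//! // A subscriber that remembers every callsite it has seen and statically
//! // enables only those at the `INFO` level or less verbose.
//! struct LevelFiltered {
//!     callsites: Mutex<Vec<&'static Metadata<'static>>>,
//! }
//!
//! impl Subscriber for LevelFiltered {
//!     fn register_callsite(&self, meta: &'static Metadata<'static>) -> Interest {
//!         // Callsite existence: record that this span/event definition exists.
//!         self.callsites.lock().unwrap().push(meta);
//!
//!         // Static filtering: cache an `Interest` so that `enabled` rarely
//!         // needs to be called for this callsite.
//!         if meta.level() <= &Level::INFO {
//!             Interest::always()
//!         } else {
//!             Interest::never()
//!         }
//!     }
//!
//!     // Only consulted for callsites whose cached interest is `sometimes`.
//!     fn enabled(&self, _: &Metadata<'_>) -> bool {
//!         true
//!     }
//!
//!     // The remaining methods are stubbed out for brevity.
//!     fn new_span(&self, _: &span::Attributes<'_>) -> span::Id {
//!         span::Id::from_u64(1)
//!     }
//!     fn record(&self, _: &span::Id, _: &span::Record<'_>) {}
//!     fn record_follows_from(&self, _: &span::Id, _: &span::Id) {}
//!     fn event(&self, _: &Event<'_>) {}
//!     fn enter(&self, _: &span::Id) {}
//!     fn exit(&self, _: &span::Id) {}
//! }
//!
//! // Installing the subscriber (e.g. via `dispatcher::set_global_default`)
//! // causes `register_callsite` to be invoked the first time each callsite
//! // is used.
//! let _dispatch = Dispatch::new(LevelFiltered {
//!     callsites: Mutex::new(Vec::new()),
//! });
//! ```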
//!
//! ### Rebuilding Cached Interest
//!
//! When a new [`Dispatch`] is created (i.e. a new subscriber becomes
//! active), any previously cached [`Interest`] values are re-evaluated
//! for all callsites in the program. This way, if the new subscriber
//! will enable a callsite that was not previously enabled, the
//! [`Interest`] in that callsite is updated. Similarly, when a
//! subscriber is dropped, the interest cache is also re-evaluated, so
//! that any callsites enabled only by that subscriber are disabled.
//!
//! In addition, the [`rebuild_interest_cache`] function in this module can be
//! used to manually invalidate all cached interest and re-register those
//! callsites. This function is useful in situations where a subscriber's
//! interest can change, but it does so relatively infrequently. The subscriber
//! may wish for its interest to be cached most of the time, and return
//! [`Interest::always`][always] or [`Interest::never`][never] in its
//! [`register_callsite`] method, so that its [`Subscriber::enabled`] method
//! doesn't need to be evaluated every time a span or event is recorded.
//! However, when the configuration changes, the subscriber can call
//! [`rebuild_interest_cache`] to re-evaluate the entire interest cache with its
//! new configuration. This is a relatively costly operation, but if the
//! configuration changes infrequently, it may be more efficient than calling
//! [`Subscriber::enabled`] frequently.
//!
//! # Implementing Callsites
//!
//! In most cases, instrumenting code using `tracing` should *not* require
//! implementing the [`Callsite`] trait directly. When using the [`tracing`
//! crate's macros][macros] or the [`#[instrument]` attribute][instrument], a
//! `Callsite` is automatically generated.
//!
//! However, code which provides alternative forms of `tracing` instrumentation
//! may need to interact with the callsite system directly. If
//! instrumentation-side code needs to produce a `Callsite` to emit spans or
//! events, the [`DefaultCallsite`] struct provided in this module is a
//! ready-made `Callsite` implementation that is suitable for most uses. When
//! possible, the use of `DefaultCallsite` should be preferred over implementing
//! [`Callsite`] for user types, as `DefaultCallsite` may benefit from
//! additional performance optimizations.
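//!
//! For example, instrumentation code might define a callsite and its metadata
//! as statics, much like the `tracing` macros do. The following is a sketch;
//! the event name, target, and field name are placeholders:
//!
//! ```rust
//! use tracing_core::{
//!     callsite::DefaultCallsite,
//!     field::FieldSet,
//!     identify_callsite,
//!     metadata::{Kind, Level, Metadata},
//! };
//!
//! static CALLSITE: DefaultCallsite = DefaultCallsite::new(&METADATA);
//! static METADATA: Metadata<'static> = Metadata::new(
//!     "my_event",
//!     "my_target",
//!     Level::INFO,
//!     Some(file!()),
//!     Some(line!()),
//!     Some(module_path!()),
//!     FieldSet::new(&["message"], identify_callsite!(&CALLSITE)),
//!     Kind::EVENT,
//! );
//!
//! // The first call to `interest` registers the callsite with the global
//! // registry and caches the combined `Interest` of all active subscribers.
//! let interest = CALLSITE.interest();
//! if !interest.is_never() {
//!     // ... check `enabled` and record the event ...
//! }
//! ```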
//!
//! [^1]: Returned by the [`Subscriber::register_callsite`][`register_callsite`]
//!     method.
//!
//! [`Metadata`]: crate::metadata::Metadata
//! [`Interest`]: crate::subscriber::Interest
//! [`Subscriber`]: crate::subscriber::Subscriber
//! [`register_callsite`]: crate::subscriber::Subscriber::register_callsite
//! [`Subscriber::enabled`]: crate::subscriber::Subscriber::enabled
//! [always]: crate::subscriber::Interest::always
//! [sometimes]: crate::subscriber::Interest::sometimes
//! [never]: crate::subscriber::Interest::never
//! [`Dispatch`]: crate::dispatcher::Dispatch
//! [macros]: https://docs.rs/tracing/latest/tracing/#macros
//! [instrument]: https://docs.rs/tracing/latest/tracing/attr.instrument.html
use crate::stdlib::{
    any::TypeId,
    fmt,
    hash::{Hash, Hasher},
    ptr,
    sync::{
        atomic::{AtomicBool, AtomicPtr, AtomicU8, Ordering},
        Mutex,
    },
    vec::Vec,
};
use crate::{
    dispatcher::Dispatch,
    lazy::Lazy,
    metadata::{LevelFilter, Metadata},
    subscriber::Interest,
};

use self::dispatchers::Dispatchers;

/// Trait implemented by callsites.
///
/// These functions are only intended to be called by the callsite registry, which
/// correctly handles determining the common interest between all subscribers.
///
/// See the [module-level documentation](crate::callsite) for details on
/// callsites.
pub trait Callsite: Sync {
    /// Sets the [`Interest`] for this callsite.
    ///
    /// See the [documentation on callsite interest caching][cache-docs] for
    /// details.
    ///
    /// [`Interest`]: super::subscriber::Interest
    /// [cache-docs]: crate::callsite#performing-static-filtering
    fn set_interest(&self, interest: Interest);

    /// Returns the [metadata] associated with the callsite.
    ///
    /// <div class="example-wrap" style="display:inline-block">
    /// <pre class="ignore" style="white-space:normal;font:inherit;">
    ///
    /// **Note:** Implementations of this method should not produce [`Metadata`]
    /// that share the same callsite [`Identifier`] but otherwise differ in any
    /// way (e.g., have different `name`s).
    ///
    /// </pre></div>
    ///
    /// [metadata]: super::metadata::Metadata
    fn metadata(&self) -> &Metadata<'_>;

    /// This method is an *internal implementation detail* of `tracing-core`. It
    /// is *not* intended to be called or overridden from downstream code.
    ///
    /// The `Private` type can only be constructed from within `tracing-core`.
    /// Because this method takes a `Private` as an argument, it cannot be
    /// called from (safe) code external to `tracing-core`. Because it must
    /// *return* a `Private`, the only valid implementation possible outside of
    /// `tracing-core` would have to always unconditionally panic.
    ///
    /// THIS IS BY DESIGN. There is currently no valid reason for code outside
    /// of `tracing-core` to override this method.
    // TODO(eliza): this could be used to implement a public downcasting API
    // for `&dyn Callsite`s in the future.
    #[doc(hidden)]
    #[inline]
    fn private_type_id(&self, _: private::Private<()>) -> private::Private<TypeId>
    where
        Self: 'static,
    {
        private::Private(TypeId::of::<Self>())
    }
}

/// Uniquely identifies a [`Callsite`]
///
/// Two `Identifier`s are equal if they both refer to the same callsite.
///
/// [`Callsite`]: super::callsite::Callsite
#[derive(Clone)]
pub struct Identifier(
    /// **Warning**: The fields on this type are currently `pub` because it must
    /// be able to be constructed statically by macros. However, when `const
    /// fn`s are available on stable Rust, this will no longer be necessary.
    /// Thus, these fields are *not* considered stable public API, and they may
    /// change without warning. Do not rely on any fields on `Identifier`. When
    /// constructing new `Identifier`s, use the `identify_callsite!` macro
    /// instead.
    #[doc(hidden)]
    pub &'static dyn Callsite,
);

/// A default [`Callsite`] implementation.
#[derive(Debug)]
pub struct DefaultCallsite {
    interest: AtomicU8,
    registration: AtomicU8,
    meta: &'static Metadata<'static>,
    next: AtomicPtr<Self>,
}

/// Clear and reregister interest on every [`Callsite`]
///
/// This function is intended for runtime reconfiguration of filters on traces
/// when the filter recalculation is much less frequent than trace events are.
/// The alternative is to have the [`Subscriber`] that supports runtime
/// reconfiguration of filters always return [`Interest::sometimes()`] so that
/// [`enabled`] is evaluated for every event.
///
/// This function will also re-compute the global maximum level as determined by
/// the [`max_level_hint`] method. If a [`Subscriber`]
/// implementation changes the value returned by its `max_level_hint`
/// implementation at runtime, then it **must** call this function after that
/// value changes, in order for the change to be reflected.
///
/// See the [documentation on callsite interest caching][cache-docs] for
/// additional information on this function's usage.
///
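/// # Examples
///
/// A sketch of runtime reconfiguration (the `VERBOSE` flag, and the subscriber
/// that would consult it in its `register_callsite` implementation, are
/// hypothetical):
///
/// ```rust
/// use std::sync::atomic::{AtomicBool, Ordering};
///
/// // Hypothetical configuration flag read by a subscriber when deciding
/// // which callsites to enable.
/// static VERBOSE: AtomicBool = AtomicBool::new(false);
///
/// // When the configuration changes, update it, then invalidate every cached
/// // `Interest` so that `register_callsite` is called again for each callsite.
/// VERBOSE.store(true, Ordering::Relaxed);
/// tracing_core::callsite::rebuild_interest_cache();
/// ```
///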
/// [`max_level_hint`]: super::subscriber::Subscriber::max_level_hint
/// [`Callsite`]: super::callsite::Callsite
/// [`enabled`]: super::subscriber::Subscriber#tymethod.enabled
/// [`Interest::sometimes()`]: super::subscriber::Interest::sometimes
/// [`Subscriber`]: super::subscriber::Subscriber
/// [cache-docs]: crate::callsite#rebuilding-cached-interest
pub fn rebuild_interest_cache() {
    CALLSITES.rebuild_interest(DISPATCHERS.rebuilder());
}

/// Register a new [`Callsite`] with the global registry.
///
/// This should be called once per callsite after the callsite has been
/// constructed.
///
/// See the [documentation on callsite registration][reg-docs] for details
/// on the global callsite registry.
///
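/// # Examples
///
/// Registering a hand-written [`Callsite`] implementation (a minimal sketch;
/// the callsite type, metadata values, and field name below are illustrative
/// only):
///
/// ```rust
/// use tracing_core::{
///     callsite::{self, Callsite},
///     field::FieldSet,
///     identify_callsite,
///     metadata::{Kind, Level, Metadata},
///     subscriber::Interest,
/// };
///
/// struct MyCallsite;
///
/// static MY_CALLSITE: MyCallsite = MyCallsite;
/// static MY_METADATA: Metadata<'static> = Metadata::new(
///     "my_span",
///     "my_target",
///     Level::INFO,
///     Some(file!()),
///     Some(line!()),
///     Some(module_path!()),
///     FieldSet::new(&["my_field"], identify_callsite!(&MY_CALLSITE)),
///     Kind::SPAN,
/// );
///
/// impl Callsite for MyCallsite {
///     fn set_interest(&self, _interest: Interest) {
///         // A real implementation would cache `_interest` (for example, in
///         // an atomic) and consult it before recording the span.
///     }
///
///     fn metadata(&self) -> &Metadata<'_> {
///         &MY_METADATA
///     }
/// }
///
/// // Register the callsite once, after it has been constructed.
/// callsite::register(&MY_CALLSITE);
/// ```
///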
/// [`Callsite`]: crate::callsite::Callsite
/// [reg-docs]: crate::callsite#registering-callsites
pub fn register(callsite: &'static dyn Callsite) {
    rebuild_callsite_interest(callsite, &DISPATCHERS.rebuilder());

    // Is this a `DefaultCallsite`? If so, use the fancy linked list!
    if callsite.private_type_id(private::Private(())).0 == TypeId::of::<DefaultCallsite>() {
        let callsite = unsafe {
            // Safety: the pointer cast is safe because the type id of the
            // provided callsite matches that of the target type for the cast
            // (`DefaultCallsite`). Because user implementations of `Callsite`
            // cannot override `private_type_id`, we can trust that the callsite
            // is not lying about its type ID.
            &*(callsite as *const dyn Callsite as *const DefaultCallsite)
        };
        CALLSITES.push_default(callsite);
        return;
    }

    CALLSITES.push_dyn(callsite);
}

static CALLSITES: Callsites = Callsites {
    list_head: AtomicPtr::new(ptr::null_mut()),
    has_locked_callsites: AtomicBool::new(false),
};

static DISPATCHERS: Dispatchers = Dispatchers::new();

static LOCKED_CALLSITES: Lazy<Mutex<Vec<&'static dyn Callsite>>> = Lazy::new(Default::default);

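/// The global callsite registry.
///
/// `list_head` points to the head of an intrusive, lock-free linked list of
/// `DefaultCallsite`s, while other `Callsite` implementations are stored in
/// the `LOCKED_CALLSITES` vector; `has_locked_callsites` records whether that
/// vector has ever been used.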
struct Callsites {
    list_head: AtomicPtr<DefaultCallsite>,
    has_locked_callsites: AtomicBool,
}

// === impl DefaultCallsite ===

impl DefaultCallsite {
    const UNREGISTERED: u8 = 0;
    const REGISTERING: u8 = 1;
    const REGISTERED: u8 = 2;

    const INTEREST_NEVER: u8 = 0;
    const INTEREST_SOMETIMES: u8 = 1;
    const INTEREST_ALWAYS: u8 = 2;

    /// Returns a new `DefaultCallsite` with the specified `Metadata`.
    pub const fn new(meta: &'static Metadata<'static>) -> Self {
        Self {
            interest: AtomicU8::new(0xFF),
            meta,
            next: AtomicPtr::new(ptr::null_mut()),
            registration: AtomicU8::new(Self::UNREGISTERED),
        }
    }

    /// Registers this callsite with the global callsite registry.
    ///
    /// If the callsite is already registered, this does nothing. When using
    /// [`DefaultCallsite`], this method should be preferred over
    /// [`tracing_core::callsite::register`], as it ensures that the callsite is
    /// only registered a single time.
    ///
    /// Other callsite implementations will generally ensure that
    /// callsites are not re-registered through another mechanism.
    ///
    /// See the [documentation on callsite registration][reg-docs] for details
    /// on the global callsite registry.
    ///
    /// [`Callsite`]: crate::callsite::Callsite
    /// [reg-docs]: crate::callsite#registering-callsites
    #[inline(never)]
    // This only happens once (or if the cached interest value was corrupted).
    #[cold]
    pub fn register(&'static self) -> Interest {
        // Attempt to advance the registration state to `REGISTERING`...
        match self.registration.compare_exchange(
            Self::UNREGISTERED,
            Self::REGISTERING,
            Ordering::AcqRel,
            Ordering::Acquire,
        ) {
            Ok(_) => {
                // Okay, we advanced the state, try to register the callsite.
                rebuild_callsite_interest(self, &DISPATCHERS.rebuilder());
                CALLSITES.push_default(self);
                self.registration.store(Self::REGISTERED, Ordering::Release);
            }
            // Great, the callsite is already registered! Just load its
            // previous cached interest.
            Err(Self::REGISTERED) => {}
            // Someone else is registering...
            Err(_state) => {
                debug_assert_eq!(
                    _state,
                    Self::REGISTERING,
                    "weird callsite registration state"
                );
                // Just hit `enabled` this time.
                return Interest::sometimes();
            }
        }

        match self.interest.load(Ordering::Relaxed) {
            Self::INTEREST_NEVER => Interest::never(),
            Self::INTEREST_ALWAYS => Interest::always(),
            _ => Interest::sometimes(),
        }
    }

    /// Returns the callsite's cached `Interest`, or registers it for the
    /// first time if it has not yet been registered.
    #[inline]
    pub fn interest(&'static self) -> Interest {
        match self.interest.load(Ordering::Relaxed) {
            Self::INTEREST_NEVER => Interest::never(),
            Self::INTEREST_SOMETIMES => Interest::sometimes(),
            Self::INTEREST_ALWAYS => Interest::always(),
            _ => self.register(),
        }
    }
}

impl Callsite for DefaultCallsite {
    fn set_interest(&self, interest: Interest) {
        let interest = match () {
            _ if interest.is_never() => Self::INTEREST_NEVER,
            _ if interest.is_always() => Self::INTEREST_ALWAYS,
            _ => Self::INTEREST_SOMETIMES,
        };
        self.interest.store(interest, Ordering::SeqCst);
    }

    #[inline(always)]
    fn metadata(&self) -> &Metadata<'static> {
        self.meta
    }
}

// ===== impl Identifier =====

impl PartialEq for Identifier {
    fn eq(&self, other: &Identifier) -> bool {
        core::ptr::eq(
            self.0 as *const _ as *const (),
            other.0 as *const _ as *const (),
        )
    }
}

impl Eq for Identifier {}

impl fmt::Debug for Identifier {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "Identifier({:p})", self.0)
    }
}

impl Hash for Identifier {
    fn hash<H>(&self, state: &mut H)
    where
        H: Hasher,
    {
        (self.0 as *const dyn Callsite).hash(state)
    }
}

// === impl Callsites ===

impl Callsites {
    /// Rebuild `Interest`s for all callsites in the registry.
    ///
    /// This also re-computes the max level hint.
    fn rebuild_interest(&self, dispatchers: dispatchers::Rebuilder<'_>) {
        let mut max_level = LevelFilter::OFF;
        dispatchers.for_each(|dispatch| {
            // If the subscriber did not provide a max level hint, assume
            // that it may enable every level.
            let level_hint = dispatch.max_level_hint().unwrap_or(LevelFilter::TRACE);
            if level_hint > max_level {
                max_level = level_hint;
            }
        });

        self.for_each(|callsite| {
            rebuild_callsite_interest(callsite, &dispatchers);
        });
        LevelFilter::set_max(max_level);
    }

    /// Push a `dyn Callsite` trait object to the callsite registry.
    ///
    /// This will attempt to lock the callsites vector.
    fn push_dyn(&self, callsite: &'static dyn Callsite) {
        let mut lock = LOCKED_CALLSITES.lock().unwrap();
        self.has_locked_callsites.store(true, Ordering::Release);
        lock.push(callsite);
    }

    /// Push a `DefaultCallsite` to the callsite registry.
    ///
    /// If we know the callsite being pushed is a `DefaultCallsite`, we can push
    /// it to the linked list without having to acquire a lock.
    fn push_default(&self, callsite: &'static DefaultCallsite) {
        let mut head = self.list_head.load(Ordering::Acquire);

        loop {
            callsite.next.store(head, Ordering::Release);

            assert_ne!(
                callsite as *const _, head,
                "Attempted to register a `DefaultCallsite` that already exists! \
                This will cause an infinite loop when attempting to read from the \
                callsite cache. This is likely a bug! You should only need to call \
                `DefaultCallsite::register` once per `DefaultCallsite`."
            );

            match self.list_head.compare_exchange(
                head,
                callsite as *const _ as *mut _,
                Ordering::AcqRel,
                Ordering::Acquire,
            ) {
                Ok(_) => {
                    break;
                }
                Err(current) => head = current,
            }
        }
    }

    /// Invokes the provided closure `f` with each callsite in the registry.
    fn for_each(&self, mut f: impl FnMut(&'static dyn Callsite)) {
        let mut head = self.list_head.load(Ordering::Acquire);

        while let Some(cs) = unsafe { head.as_ref() } {
            f(cs);

            head = cs.next.load(Ordering::Acquire);
        }

        if self.has_locked_callsites.load(Ordering::Acquire) {
            let locked = LOCKED_CALLSITES.lock().unwrap();
            for &cs in locked.iter() {
                f(cs);
            }
        }
    }
}

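/// Registers a newly created `Dispatch` with the dispatcher registry, notifies
/// its subscriber, and rebuilds the cached `Interest` for every callsite so
/// that the new subscriber is taken into account.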
pub(crate) fn register_dispatch(dispatch: &Dispatch) {
    let dispatchers = DISPATCHERS.register_dispatch(dispatch);
    dispatch.subscriber().on_register_dispatch(dispatch);
    CALLSITES.rebuild_interest(dispatchers);
}

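/// Recomputes a single callsite's cached `Interest` by combining the
/// `Interest` returned by each active dispatcher's `register_callsite`,
/// falling back to `Interest::never` when no dispatchers are active.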
fn rebuild_callsite_interest(
    callsite: &'static dyn Callsite,
    dispatchers: &dispatchers::Rebuilder<'_>,
) {
    let meta = callsite.metadata();

    let mut interest = None;
    dispatchers.for_each(|dispatch| {
        let this_interest = dispatch.register_callsite(meta);
        interest = match interest.take() {
            None => Some(this_interest),
            Some(that_interest) => Some(that_interest.and(this_interest)),
        }
    });

    let interest = interest.unwrap_or_else(Interest::never);
    callsite.set_interest(interest)
}

mod private {
    /// Don't use this type, it's private.
    #[allow(missing_debug_implementations)]
    pub struct Private<T>(pub(crate) T);
}

#[cfg(feature = "std")]
mod dispatchers {
    use crate::{dispatcher, lazy::Lazy};
    use std::sync::{
        atomic::{AtomicBool, Ordering},
        RwLock, RwLockReadGuard, RwLockWriteGuard,
    };

    pub(super) struct Dispatchers {
        has_just_one: AtomicBool,
    }

    static LOCKED_DISPATCHERS: Lazy<RwLock<Vec<dispatcher::Registrar>>> =
        Lazy::new(Default::default);

    pub(super) enum Rebuilder<'a> {
        JustOne,
        Read(RwLockReadGuard<'a, Vec<dispatcher::Registrar>>),
        Write(RwLockWriteGuard<'a, Vec<dispatcher::Registrar>>),
    }

    impl Dispatchers {
        pub(super) const fn new() -> Self {
            Self {
                has_just_one: AtomicBool::new(true),
            }
        }

        pub(super) fn rebuilder(&self) -> Rebuilder<'_> {
            if self.has_just_one.load(Ordering::SeqCst) {
                return Rebuilder::JustOne;
            }
            Rebuilder::Read(LOCKED_DISPATCHERS.read().unwrap())
        }

        pub(super) fn register_dispatch(&self, dispatch: &dispatcher::Dispatch) -> Rebuilder<'_> {
            let mut dispatchers = LOCKED_DISPATCHERS.write().unwrap();
            dispatchers.retain(|d| d.upgrade().is_some());
            dispatchers.push(dispatch.registrar());
            self.has_just_one
                .store(dispatchers.len() <= 1, Ordering::SeqCst);
            Rebuilder::Write(dispatchers)
        }
    }

    impl Rebuilder<'_> {
        pub(super) fn for_each(&self, mut f: impl FnMut(&dispatcher::Dispatch)) {
            let iter = match self {
                Rebuilder::JustOne => {
                    dispatcher::get_default(f);
                    return;
                }
                Rebuilder::Read(vec) => vec.iter(),
                Rebuilder::Write(vec) => vec.iter(),
            };
            iter.filter_map(dispatcher::Registrar::upgrade)
                .for_each(|dispatch| f(&dispatch))
        }
    }
}

#[cfg(not(feature = "std"))]
mod dispatchers {
    use crate::dispatcher;

    pub(super) struct Dispatchers(());
    pub(super) struct Rebuilder<'a>(Option<&'a dispatcher::Dispatch>);

    impl Dispatchers {
        pub(super) const fn new() -> Self {
            Self(())
        }

        pub(super) fn rebuilder(&self) -> Rebuilder<'_> {
            Rebuilder(None)
        }

        pub(super) fn register_dispatch<'dispatch>(
            &self,
            dispatch: &'dispatch dispatcher::Dispatch,
        ) -> Rebuilder<'dispatch> {
            // nop; on no_std, there can only ever be one dispatcher
            Rebuilder(Some(dispatch))
        }
    }

    impl Rebuilder<'_> {
        #[inline]
        pub(super) fn for_each(&self, mut f: impl FnMut(&dispatcher::Dispatch)) {
            if let Some(dispatch) = self.0 {
                // we are rebuilding the interest cache because a new dispatcher
                // is about to be set. on `no_std`, this should only happen
                // once, because the new dispatcher will be the global default.
                f(dispatch)
            } else {
                // otherwise, we are rebuilding the cache because the subscriber
                // configuration changed, so use the global default.
                // on no_std, there can only ever be one dispatcher
                dispatcher::get_default(f)
            }
        }
    }
}