OXY-1514: only track sampled spans, and add a setting to track all spans #107
base: main
Conversation
We're seeing high lock contention when removing spans from the `LiveReferenceSet`. Currently, we track all spans that are created, even if those spans are unsampled. The purpose of this is to capture graceful restart timeout issues which sometimes occur in our services. By fetching the list of all spans in memory when a graceful restart hangs, we can figure out what is preventing the restart from occurring within its timeout.

In this commit we only track _sampled_ spans. Most services which use foundations sample at a 1% rate, meaning we effectively remove 99% of the lock acquisitions. This still allows light debuggability in all running services: at any time you can get a list of tracked spans, even though those spans should end up in jaeger/otlp at some point[^1]. To make sure we can still determine the root cause of graceful restart issues, we add a setting which forces all spans to be tracked (the previous behavior).

[^1]: To be fair, sometimes some traces will remain in memory forever and never be dropped, so even at 1% this can be useful.
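A rough sketch of the behavior described above; the `register_span` helper and the simplified `Span`/`LiveReferenceSet` stand-ins are illustrative only, not the actual foundations implementation:

```rust
use std::sync::{Arc, RwLock};

// Simplified stand-ins for the real foundations types.
struct Span;

struct LiveReferenceSet<T> {
    // The real set guards its contents with a lock; adding and removing
    // entries is the contention this PR avoids for unsampled spans.
    entries: Vec<Arc<T>>,
}

impl<T> LiveReferenceSet<T> {
    fn track(&mut self, value: Arc<T>) -> Arc<T> {
        self.entries.push(Arc::clone(&value));
        value
    }
}

struct TracingSettings {
    /// Force liveness tracking of every span, not just sampled ones.
    track_all_spans: bool,
}

/// Hypothetical registration helper: only sampled spans (or all spans, when
/// `track_all_spans` is set) go through the live reference set.
fn register_span(
    set: &mut LiveReferenceSet<RwLock<Span>>,
    settings: &TracingSettings,
    span: Arc<RwLock<Span>>,
    sampled: bool,
) -> Arc<RwLock<Span>> {
    if sampled || settings.track_all_spans {
        set.track(span)
    } else {
        // Unsampled spans skip the set entirely, avoiding the lock.
        span
    }
}
```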
Force-pushed from 73e5d9c to d0d0d02
@@ -26,6 +26,13 @@ pub struct TracingSettings {
    /// The strategy used to sample traces.
    pub sampling_strategy: SamplingStrategy,
    /// Enable liveness tracking of all generated spans. Even if the spans are
    /// unsampled. This can be useful for debugging potential hangs cause by
Suggested change:
-    /// unsampled. This can be useful for debugging potential hangs cause by
+    /// unsampled. This can be useful for debugging potential hangs caused by
@@ -26,6 +26,13 @@ pub struct TracingSettings {
    /// The strategy used to sample traces.
    pub sampling_strategy: SamplingStrategy,
    /// Enable liveness tracking of all generated spans. Even if the spans are
    /// unsampled. This can be useful for debugging potential hangs cause by
    /// some objects remaining in memory. The default value is false, meaning
Suggested change:
-    /// some objects remaining in memory. The default value is false, meaning
+    /// some objects remaining in memory. The default value is `false`, meaning
pub(crate) use live_reference_set::LiveReferenceHandle;

use crate::telemetry::tracing::internal::SharedSpanHandle;
// pub(crate) type SharedSpanHandle = Arc<SharedSpanHandle>;
Suggested change (remove the commented-out line):
-// pub(crate) type SharedSpanHandle = Arc<SharedSpanHandle>;
pub(crate) struct SharedSpanInner(SharedSpanHandle);
pub(crate) enum SharedSpanHandle {
    Tracked(Arc<LiveReferenceHandle<Arc<RwLock<Span>>>>),
    Unsampled(Arc<RwLock<Span>>),
bikeshed?
Suggested change:
-    Unsampled(Arc<RwLock<Span>>),
+    Untracked(Arc<RwLock<Span>>),
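A hedged sketch of how the two variants might be consumed uniformly; the `span()` accessor and the simplified `LiveReferenceHandle`/`Span` stand-ins are assumptions for illustration, not code from this PR:

```rust
use std::sync::{Arc, RwLock};

// Simplified stand-ins for the real types referenced in the diff.
struct Span;
struct LiveReferenceHandle<T>(T);

impl<T> std::ops::Deref for LiveReferenceHandle<T> {
    type Target = T;
    fn deref(&self) -> &T {
        &self.0
    }
}

pub(crate) enum SharedSpanHandle {
    // Sampled spans are registered with the live reference set.
    Tracked(Arc<LiveReferenceHandle<Arc<RwLock<Span>>>>),
    // Unsampled spans bypass the set and hold the span directly.
    Unsampled(Arc<RwLock<Span>>),
}

impl SharedSpanHandle {
    /// Hypothetical accessor: both variants ultimately wrap the same
    /// `Arc<RwLock<Span>>`, so callers should not need to care which
    /// path a span took.
    fn span(&self) -> Arc<RwLock<Span>> {
        match self {
            // Deref through the outer Arc and the handle to the inner span.
            SharedSpanHandle::Tracked(handle) => Arc::clone(&***handle),
            SharedSpanHandle::Unsampled(span) => Arc::clone(span),
        }
    }
}
```

Either variant name works; `Untracked` arguably describes the new behavior more precisely, since the span is still created but simply not registered with the live reference set.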
Force-pushed from d0d0d02 to 84099ac
    /// only sampled spans are tracked.
    ///
    /// To get a json dump of the currently active spans, query: `/debug/traces`
    pub track_all_spans: bool,
I think we need to make the whole liveness tracking functionality configurable, i.e. introduce a `LivenessTrackingSettings` struct with an `enabled` field and put this new field in there too.
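A rough sketch of what the suggested grouping could look like; `LivenessTrackingSettings` and `enabled` come from the comment above, while the nesting field name and the derives are assumptions rather than the real foundations settings machinery:

```rust
/// Illustrative shape only; the real structs would go through foundations'
/// usual settings derivation, defaults, and documentation conventions.
#[derive(Clone, Debug, Default)]
pub struct LivenessTrackingSettings {
    /// Whether span liveness tracking is enabled at all.
    pub enabled: bool,
    /// Track every generated span, even unsampled ones. Useful for
    /// debugging hangs caused by objects remaining in memory.
    pub track_all_spans: bool,
}

#[derive(Clone, Debug, Default)]
pub struct TracingSettings {
    // ...existing fields such as `sampling_strategy` elided...
    /// Nested liveness tracking configuration.
    pub liveness_tracking: LivenessTrackingSettings,
}
```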