Feature Request
Is your feature request related to a problem? Please describe
This issue follows up on this discussion. Currently, the span reports the execution time of a single command, which includes not only network and Redis server processing but also client-side processing (e.g., serialization/deserialization). As a result, the generated trace lacks granularity, making it unclear whether high latencies are caused by slow application processing, external factors, or both.
For example, when analyzing slow HTTP requests, I sometimes see MGET Redis commands taking up to 750 ms. However, it's unclear how much of this time is actually spent in Redis versus on client-side processing.
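To make the problem concrete, here is a minimal, hypothetical sketch (not the client's actual code; serialize_command, send_and_receive, and deserialize_reply are stand-ins for its internals) showing how a per-command span that wraps the whole call necessarily folds client-side CPU time into the reported duration:

```python
import time
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

# Hypothetical stand-ins for the client's internals.
def serialize_command(command, *args):
    return " ".join([command, *map(str, args)]).encode()

def send_and_receive(payload):
    time.sleep(0.001)  # stands in for the network round trip + Redis server time
    return b"+OK\r\n"

def deserialize_reply(raw_reply):
    return raw_reply.decode().strip()

def execute(command, *args):
    # The span wraps the whole call, so client-side (de)serialization time
    # is indistinguishable from network and Redis server time in the trace.
    with tracer.start_as_current_span(command):
        payload = serialize_command(command, *args)   # client-side CPU
        raw_reply = send_and_receive(payload)         # network + Redis server
        return deserialize_reply(raw_reply)           # client-side CPU
```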
Describe the solution you'd like
Client-side serialization/deserialization should either be excluded from the reported command duration, or a more granular breakdown should be added. Ideally, the command duration should reflect:
- Start time: the moment the command is issued from the client to the server.
- End time: when the response is read from the socket, but before deserialization occurs.
This would provide a clearer picture of where time is spent and help diagnose latency issues more effectively.
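One possible shape for this (again only a sketch, reusing the hypothetical helpers from the example above; the event names are illustrative) is to attach span events around the socket round trip, or equivalently to end the timed region before deserialization, so a trace backend can separate client-side work from network/server time:

```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def execute(command, *args):
    # serialize_command / send_and_receive / deserialize_reply are the same
    # hypothetical stand-ins used in the sketch above.
    with tracer.start_as_current_span(command) as span:
        payload = serialize_command(command, *args)
        # Events bracket the network round trip; the gap between them is the
        # time spent on the wire and in Redis, excluding (de)serialization.
        span.add_event("command.sent")
        raw_reply = send_and_receive(payload)
        span.add_event("response.received")
        return deserialize_reply(raw_reply)
```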
Describe alternatives you've considered
N/A
Teachability, Documentation, Adoption, Migration Strategy
N/A