fix: send error instead of extra #1430

Open. Wants to merge 2 commits into main. Showing changes from 1 commit.
10 changes: 5 additions & 5 deletions python/langsmith/client.py
@@ -1,4 +1,4 @@
"""Client for interacting with the LangSmith API.

Check notice on line 1 in python/langsmith/client.py (GitHub Actions / benchmark)

Benchmark results:

create_5_000_run_trees: Mean +- std dev: 684 ms +- 66 ms
create_10_000_run_trees: Mean +- std dev: 1.35 sec +- 0.09 sec
create_20_000_run_trees: Mean +- std dev: 2.68 sec +- 0.17 sec
dumps_class_nested_py_branch_and_leaf_200x400: Mean +- std dev: 708 us +- 11 us
dumps_class_nested_py_leaf_50x100: Mean +- std dev: 25.2 ms +- 0.4 ms
dumps_class_nested_py_leaf_100x200: Mean +- std dev: 105 ms +- 2 ms
dumps_dataclass_nested_50x100: Mean +- std dev: 25.3 ms +- 0.3 ms
dumps_pydantic_nested_50x100: Mean +- std dev: 73.3 ms +- 18.1 ms
dumps_pydanticv1_nested_50x100: Mean +- std dev: 197 ms +- 3 ms

WARNING: the dumps_pydantic_nested_50x100 result may be unstable: the standard deviation (18.1 ms) is 25% of the mean (73.3 ms). Try to rerun the benchmark with more runs, values and/or loops. Run 'python -m pyperf system tune' to reduce system jitter. Use pyperf stats, pyperf dump and pyperf hist to analyze results. Use --quiet to hide these warnings.

Check notice on line 1 in python/langsmith/client.py (GitHub Actions / benchmark)

Comparison against main:

+-----------------------------------------------+----------+------------------------+
| Benchmark                                     | main     | changes                |
+===============================================+==========+========================+
| dumps_pydanticv1_nested_50x100                | 220 ms   | 197 ms: 1.11x faster   |
+-----------------------------------------------+----------+------------------------+
| dumps_dataclass_nested_50x100                 | 25.5 ms  | 25.3 ms: 1.01x faster  |
+-----------------------------------------------+----------+------------------------+
| dumps_class_nested_py_leaf_50x100             | 25.2 ms  | 25.2 ms: 1.00x faster  |
+-----------------------------------------------+----------+------------------------+
| dumps_class_nested_py_leaf_100x200            | 105 ms   | 105 ms: 1.00x slower   |
+-----------------------------------------------+----------+------------------------+
| dumps_class_nested_py_branch_and_leaf_200x400 | 706 us   | 708 us: 1.00x slower   |
+-----------------------------------------------+----------+------------------------+
| create_20_000_run_trees                       | 2.66 sec | 2.68 sec: 1.01x slower |
+-----------------------------------------------+----------+------------------------+
| create_10_000_run_trees                       | 1.31 sec | 1.35 sec: 1.03x slower |
+-----------------------------------------------+----------+------------------------+
| create_5_000_run_trees                        | 658 ms   | 684 ms: 1.04x slower   |
+-----------------------------------------------+----------+------------------------+
| dumps_pydantic_nested_50x100                  | 68.3 ms  | 73.3 ms: 1.07x slower  |
+-----------------------------------------------+----------+------------------------+
| Geometric mean                                | (ref)    | 1.00x slower           |
+-----------------------------------------------+----------+------------------------+

Use the client to customize API keys / workspace connections, SSL certs,
etc. for tracing.
@@ -5047,8 +5047,8 @@
),
feedback_source_type=ls_schemas.FeedbackSourceType.MODEL,
project_id=project_id,
-extra=res.extra,
trace_id=run.trace_id if run else None,
+error=res.error,
)
return results

@@ -5116,7 +5116,7 @@
project_id: Optional[ID_TYPE] = None,
comparative_experiment_id: Optional[ID_TYPE] = None,
feedback_group_id: Optional[ID_TYPE] = None,
-extra: Optional[Dict] = None,
+error: Optional[bool] = None,
trace_id: Optional[ID_TYPE] = None,
**kwargs: Any,
) -> ls_schemas.Feedback:
@@ -5162,8 +5162,8 @@
feedback_group_id (Optional[Union[UUID, str]]):
When logging preferences, ranking runs, or other comparative feedback,
this is used to group feedback together.
-extra (Optional[Dict]):
-Metadata for the feedback.
+error (Optional[bool]):
+Whether the evaluator run errored.
trace_id (Optional[Union[UUID, str]]):
The trace ID of the run to provide feedback for. Enables batch ingestion.
**kwargs (Any):
@@ -5234,7 +5234,7 @@
comparative_experiment_id, accept_null=True
),
feedback_group_id=_ensure_uuid(feedback_group_id, accept_null=True),
-extra=extra,
+error=error,
)

use_multipart = (self.info.batch_ingest_config or {}).get(
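
For illustration only (not part of this diff): a minimal sketch of how a caller might pass the renamed parameter through Client.create_feedback once this change lands. The run ID, key, score, and comment values below are invented placeholders.

    from langsmith import Client

    client = Client()  # assumes LANGSMITH_API_KEY is configured in the environment
    run_id = "00000000-0000-0000-0000-000000000000"  # placeholder run to attach feedback to

    # Evaluator failure is now flagged via `error` rather than being tucked
    # into the free-form `extra` dict.
    client.create_feedback(
        run_id,
        key="correctness",
        score=0,
        comment="ValueError: evaluator crashed",
        error=True,
    )
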
3 changes: 1 addition & 2 deletions python/langsmith/evaluation/_arunner.py
@@ -817,9 +817,8 @@ async def _arun_evaluators(
results=[
EvaluationResult(
key=key,
-source_run_id=run.id,
comment=repr(e),
-extra={"error": True},
+error=True,
)
for key in feedback_keys
]
3 changes: 1 addition & 2 deletions python/langsmith/evaluation/_runner.py
@@ -1591,9 +1591,8 @@ def _run_evaluators(
results=[
EvaluationResult(
key=key,
-source_run_id=run.id,
Contributor (Author) commented:
This was just setting the evaluator trace to be the run itself (the one the feedback is associated with). Is there a way to get the evaluator trace here, @hinthornw?

Collaborator commented:
lol

comment=repr(e),
extra={"error": True},
error=True,
)
for key in feedback_keys
]
4 changes: 2 additions & 2 deletions python/langsmith/evaluation/evaluator.py
@@ -94,8 +94,8 @@ class EvaluationResult(BaseModel):

If none provided, the evaluation feedback is applied to the
root trace being."""
-extra: Optional[Dict] = None
-"""Metadata for the evaluator run."""
+error: Optional[bool] = None
+"""If the evaluator run errored."""

class Config:
"""Pydantic model configuration."""
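
Again for illustration only: a rough sketch of how the new EvaluationResult field can be used, mirroring the exception-handling pattern in _runner.py / _arunner.py above. The wrapper function and the key name are invented for this example.

    from langsmith.evaluation import EvaluationResult

    def run_evaluator_safely(evaluator, run, example):
        # If the evaluator itself raises, report that as a first-class
        # `error=True` result instead of metadata buried in `extra`.
        try:
            return evaluator(run, example)
        except Exception as e:
            return EvaluationResult(key="exactness", comment=repr(e), error=True)
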
4 changes: 2 additions & 2 deletions python/langsmith/schemas.py
@@ -583,8 +583,8 @@ class FeedbackBase(BaseModel):
"""For preference scoring, this group ID is shared across feedbacks for each

run in the group that was being compared."""
-extra: Optional[Dict] = None
-"""The metadata of the feedback."""
+error: Optional[bool] = None
+"""Whether the evaluator run errored."""

class Config:
"""Configuration class for the schema."""