# Fix AttributeError: Update OpenAI error imports (Closes #1564) #1577

Open: **SaiKrishna-KK** wants to merge 1 commit into `main`.

# Thank you for contributing an eval! ♥️

🚨 **Please make sure your PR follows these guidelines. Failure to follow the guidelines below will result in the PR being closed automatically.**
Also note that even if the criteria are met, that does not guarantee the PR will be merged nor GPT-4 access be granted. 🚨

**PLEASE READ THIS**:

- In order for a PR to be merged, it must fail on GPT-4. We understand you currently do not have GPT-4 access, so you cannot directly test it. Please run your eval with GPT-3.5-Turbo; we will test GPT-4 performance internally. If GPT-4 scores higher than ~90% on your eval, we may not merge (since GPT-4 already does well).
- Starting April 10, the minimum eval size is 15 samples.
- We use **Git LFS** for JSON files, so ensure large JSON data is in LFS ([instructions here](https://git-lfs.com)).
- We may expand contributor access to GPT-4 based on accepted PRs, but acceptance is **not** guaranteed.

---

## Eval details 📑

### Eval name

Fix AttributeError: Update OpenAI error imports for v1.0+ (Closes #1564)

### Eval description

A bug fix that removes references to the now-removed `openai.error` module and replaces them with the top-level exceptions (`APIError`, `APIConnectionError`, etc.) introduced in OpenAI Python client 1.0+. This PR resolves the `AttributeError: module 'openai' has no attribute 'error'` that appears when running `oaieval --help` or importing `evals` with a newer OpenAI library.

### What makes this a useful eval?

While this PR does not introduce a new eval dataset, it fixes a breaking issue that prevents any user on a newer OpenAI Python client from running the existing evals. It keeps the repository compatible going forward and lets the community continue building and testing evals without version conflicts.
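For illustration, the import swap can also be written so it tolerates either client generation. This is a minimal, hypothetical sketch, not the exact patch in this PR — the fallback branches and the final generic tuple are assumptions:

```python
# Version-tolerant import of the retryable OpenAI exception classes.
# Illustrative sketch only; the PR itself targets openai>=1.0 directly.
try:
    # openai>=1.0 exposes its exceptions at the top level
    from openai import APIConnectionError, APIError, APITimeoutError, RateLimitError

    RETRY_ERRORS = (APIError, APIConnectionError, APITimeoutError, RateLimitError)
except ImportError:
    try:
        # openai<1.0 kept them in the since-removed openai.error module
        from openai.error import (
            APIConnectionError,
            APIError,
            RateLimitError,
            ServiceUnavailableError,
        )

        RETRY_ERRORS = (APIError, APIConnectionError, RateLimitError, ServiceUnavailableError)
    except ImportError:
        # openai not installed at all; retry only on generic network errors
        RETRY_ERRORS = (ConnectionError, TimeoutError)
```

Whichever branch is taken, `RETRY_ERRORS` ends up as a tuple of exception classes suitable for use in an `except RETRY_ERRORS:` clause.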


---

## Criteria for a good eval ✅

- [ ] **Thematically consistent**: Not applicable here—we’re not adding new prompts, just fixing existing code.
- [ ] **Contains failures a human can solve, but GPT-3.5 or GPT-4 could not**: Again, not a new eval. This is a code fix.
- [ ] **Includes good signal around correct behavior**: N/A for this PR; we’re strictly addressing import errors.
- [ ] **At least 15 high-quality examples**: N/A for this PR. No new eval data is added.

### Unique eval value

> **Not a new eval**—this is a bug fix. The unique value is ensuring that current and future versions of the `openai` library remain compatible with the Evals repository.


---

## Eval structure 🏗️

- [ ] **Data in `evals/registry/data/{name}`**: No new data or YAML—this PR only fixes Python imports.
- [ ] **YAML in `evals/registry/evals/{name}.yaml`**: No new YAML needed.
- [ ] **Usage rights**: N/A. We wrote and own the fix code.

---

## Final checklist 👀

### Submission agreement

- [x] I agree that my submission will be made available under an MIT license and complies with OpenAI's usage policies.

### Email address validation

- [x] I acknowledge that GPT-4 access, if granted, will be linked to the email address used in my commits.

### Limited availability acknowledgment

- [x] I understand that opening this PR, even if it meets all guidelines, does not guarantee a merge or GPT-4 access.

### Submit eval

- [x] I have filled out all required fields of this form.
- [x] I have **not** added large JSON files, so there is no need to add them to Git LFS.
- [x] I have run `pip install pre-commit; pre-commit install` and verified that `mypy`, `black`, `isort`, `autoflake`, and `ruff` run on commit.

---

### Eval JSON data

<details>
<summary>View evals in JSON</summary>

### Eval

No new eval data is introduced by this PR.

</details>


---

## Code changes

File: `evals/utils/api_utils.py` (and wherever else `openai.error` was referenced)

```diff
- import openai.error
- from openai.error import ServiceUnavailableError, ...
+ from openai import APIError, APIConnectionError, APITimeoutError, RateLimitError

  # Old references replaced with new ones:
- RETRY_ERRORS = (
-     openai.error.ServiceUnavailableError,
-     ...
- )
+ RETRY_ERRORS = (
+     APIError,
+     APIConnectionError,
+     APITimeoutError,
+     RateLimitError,
+     requests.exceptions.ConnectionError,
+     requests.exceptions.Timeout,
+ )
```

### Testing

1. `pip install --upgrade openai` (>=1.0).
2. `oaieval --help` and `python -c "import evals"` now run without the `AttributeError`.

Thank you for reviewing this PR!
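The `RETRY_ERRORS` tuple is the piece consumed by the retry logic. Here is a stdlib-only sketch of that pattern, using a stand-in exception class so it runs without the `openai` package installed — `with_retries` and `FakeAPIConnectionError` are hypothetical names for illustration, not part of this PR:

```python
import time


def with_retries(fn, retry_errors, max_attempts=3, backoff=0.0):
    """Call fn(), retrying when it raises one of the types in retry_errors."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except retry_errors:
            if attempt == max_attempts:
                raise  # out of attempts; propagate the last error
            time.sleep(backoff * attempt)


# Stand-in for an OpenAI connection error, so the sketch is self-contained.
class FakeAPIConnectionError(Exception):
    pass


calls = {"n": 0}


def flaky():
    # Fails twice, then succeeds -- mimics a transient API outage.
    calls["n"] += 1
    if calls["n"] < 3:
        raise FakeAPIConnectionError("transient")
    return "ok"


print(with_retries(flaky, (FakeAPIConnectionError,)))  # prints "ok"
```

Because `retry_errors` is a plain tuple of exception classes, swapping in the real `RETRY_ERRORS` from the patched module requires no other changes.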
