
Remove hard-coded dtype from best_f buffers #2725

Closed
saitcakmak wants to merge 1 commit

Conversation

saitcakmak
Contributor

Summary:
Specifying dtype=float (equivalent to torch.float64) causes issues if the user wants to use single precision. See #2724

Removing the dtype argument from torch.as_tensor will lead to using the existing dtype if the input is a tensor, and using torch.get_default_dtype() if the input is a python float or a list of floats. If the input is a numpy array, the corresponding torch dtype is used (e.g. torch.float64 for np.float64).
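For reference (not part of the diff), a quick sketch of the inference behavior described above:

```python
import numpy as np
import torch

# Existing tensors keep their dtype, so single-precision inputs stay single precision.
print(torch.as_tensor(torch.zeros(2, dtype=torch.float32)).dtype)  # torch.float32

# Python floats and lists of floats fall back to the global default dtype.
print(torch.as_tensor(0.5).dtype)            # torch.float32 unless the default was changed
torch.set_default_dtype(torch.float64)
print(torch.as_tensor([0.5, 1.0]).dtype)     # torch.float64 after changing the default

# NumPy arrays map to the corresponding torch dtype.
print(torch.as_tensor(np.zeros(2, dtype=np.float64)).dtype)  # torch.float64
```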

Differential Revision: D69121327

@facebook-github-bot added the CLA Signed label Feb 4, 2025
@Balandat
Contributor

Balandat commented Feb 4, 2025

One thing to note here is that if the input is a python list with ints then this will register an int tensor which will cause breakages downstream. This shouldn't happen a lot but I think we should auto-convert the dtype in that case.
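For illustration (not from the PR), the case in question:

```python
import torch

best_f = torch.as_tensor([0, 1])  # Python list of ints, no explicit dtype
print(best_f.dtype)               # torch.int64

# An integer buffer then breaks float-only paths downstream, e.g.:
best_f.requires_grad_(True)       # RuntimeError: only Tensors of floating point and complex dtype can require gradients
```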

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D69121327


codecov bot commented Feb 4, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 99.99%. Comparing base (3faf489) to head (fca8bd5).
Report is 2 commits behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #2725   +/-   ##
=======================================
  Coverage   99.99%   99.99%           
=======================================
  Files         203      203           
  Lines       18671    18671           
=======================================
  Hits        18670    18670           
  Misses          1        1           


@saitcakmak
Contributor Author

> One thing to note here is that if the input is a python list with ints then this will register an int tensor which will cause breakages downstream. This shouldn't happen a lot but I think we should auto-convert the dtype in that case.

Yeah, that's a fair point. I can't find a clean way to convert to the default float type without explicitly checking for integers and casting afterwards, which I don't like. The type hints specify float | Tensor. I think it's ok to expect the user to respect that and pass appropriate inputs. I'm sure various other parts of the code will also break if you pass in integer tensors.
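For context, the explicit check-and-cast alternative being ruled out might look roughly like this hypothetical sketch (`_as_float_tensor` is not an actual helper in the codebase):

```python
import torch

def _as_float_tensor(best_f):
    # Hypothetical helper: preserve floating-point dtypes, but cast anything else
    # (e.g. an int tensor inferred from a list of Python ints) to the default float dtype.
    t = torch.as_tensor(best_f)
    if not t.is_floating_point():
        t = t.to(torch.get_default_dtype())
    return t

print(_as_float_tensor([0, 1]).dtype)                                # default float dtype
print(_as_float_tensor(torch.zeros(2, dtype=torch.float32)).dtype)   # torch.float32 preserved
```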

@saitcakmak mentioned this pull request Feb 4, 2025
Summary:

Specifying `dtype=float` (equivalent to `torch.float64`) causes issues if the user wants to use single precision. See pytorch#2724

Removing the `dtype` argument from `torch.as_tensor` will lead to using the existing dtype if the input is a tensor, and using `torch.get_default_dtype()` if the input is a python float or a list of floats. If the input is a numpy array, the corresponding torch dtype is used (e.g. `torch.float64` for `np.float64`).

Reviewed By: esantorella

Differential Revision: D69121327
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D69121327

@facebook-github-bot
Contributor

This pull request has been merged in c0db823.

Labels: CLA Signed, fb-exported, Merged