Remove hard-coded dtype from best_f buffers #2725
Conversation
One thing to note here is that if the input is a python list of ints, then this will register an int tensor, which will cause breakages downstream. This shouldn't happen often, but I think we should auto-convert the dtype in that case.
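For context, a quick illustration of the inference behavior behind this concern, using plain `torch.as_tensor` (no BoTorch specifics assumed):

```python
import torch

# With no explicit dtype, torch.as_tensor infers the dtype from the input:
print(torch.as_tensor([1, 2, 3]).dtype)        # torch.int64 -- an integer buffer
print(torch.as_tensor([1.0, 2.0, 3.0]).dtype)  # torch.get_default_dtype(), e.g. torch.float32
```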
This pull request was exported from Phabricator. Differential Revision: D69121327
Codecov Report: All modified and coverable lines are covered by tests ✅

@@           Coverage Diff           @@
##             main    #2725   +/-   ##
=======================================
  Coverage   99.99%   99.99%
=======================================
  Files         203      203
  Lines       18671    18671
=======================================
  Hits        18670    18670
  Misses          1        1

☔ View full report in Codecov by Sentry.
Yeah, that's a fair point. I can't find a clean way to convert to the default float type without explicitly checking for integers and casting afterward, which I don't like. The type hints specify …
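For illustration, a minimal sketch of the explicit check-and-cast approach mentioned above; `as_float_tensor` is a hypothetical helper for this discussion, not code from the PR:

```python
import torch

def as_float_tensor(value):
    # Hypothetical helper: convert as torch.as_tensor does, then cast
    # integer results (e.g. from a python list of ints) to the default
    # float dtype so downstream floating-point ops don't break.
    t = torch.as_tensor(value)
    if not t.is_floating_point():
        t = t.to(torch.get_default_dtype())
    return t
```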
Summary: Specifying `dtype=float` (equivalent to `torch.float64`) causes issues if the user wants to use single precision. See pytorch#2724. Removing the `dtype` argument from `torch.as_tensor` will lead to using the existing dtype if the input is a tensor, and using `torch.get_default_dtype()` if the input is a python float or a list of floats. If the input is a numpy array, the corresponding torch dtype is used (e.g. `torch.float64` for `np.float64`).

Reviewed By: esantorella

Differential Revision: D69121327
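A small sketch of the `torch.as_tensor` inference rules described in the summary, assuming stock PyTorch and NumPy behavior (nothing PR-specific):

```python
import numpy as np
import torch

# torch.as_tensor with no dtype argument:
print(torch.as_tensor(torch.tensor(0.5, dtype=torch.float32)).dtype)  # torch.float32 (existing dtype kept)
print(torch.as_tensor(0.5).dtype)                                     # torch.get_default_dtype()
print(torch.as_tensor(np.float64(0.5)).dtype)                         # torch.float64 (numpy dtype mapped)
```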
Force-pushed from bf901ba to fca8bd5.
This pull request was exported from Phabricator. Differential Revision: D69121327
This pull request has been merged in c0db823.
Summary:
Specifying `dtype=float` (equivalent to `torch.float64`) causes issues if the user wants to use single precision. See #2724.

Removing the `dtype` argument from `torch.as_tensor` will lead to using the existing dtype if the input is a tensor, and using `torch.get_default_dtype()` if the input is a python float or a list of floats. If the input is a numpy array, the corresponding torch dtype is used (e.g. `torch.float64` for `np.float64`).

Differential Revision: D69121327
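As a usage-level sketch of what the change enables (assuming a standard BoTorch `SingleTaskGP` / `ExpectedImprovement` setup; the internal buffer registration is not shown here), `best_f` should now follow the precision of the value it is given rather than being forced to double:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition import ExpectedImprovement

train_X = torch.rand(10, 2, dtype=torch.float32)
train_Y = torch.rand(10, 1, dtype=torch.float32)
model = SingleTaskGP(train_X, train_Y)

# best_f is a float32 tensor, so with the hard-coded dtype removed the
# registered buffer stays in single precision instead of being upcast.
acqf = ExpectedImprovement(model=model, best_f=train_Y.max())
print(acqf.best_f.dtype)  # expected: torch.float32
```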