Remove hard-coded dtype from best_f buffers (#2725)
Summary:
Pull Request resolved: #2725

Specifying `dtype=float` (equivalent to `torch.float64`) causes issues if the user wants to use single precision. See #2724.

Removing the `dtype` argument from `torch.as_tensor` leads to using the existing dtype if the input is a tensor, and `torch.get_default_dtype()` if the input is a Python float or a list of floats. If the input is a numpy array, the corresponding torch dtype is used (e.g. `torch.float64` for `np.float64`).

Reviewed By: esantorella

Differential Revision: D69121327

fbshipit-source-id: 4a5639e0a541022333a8ca76d2715d4868134b24
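The dtype-inference behavior described above can be sketched as follows (an illustrative example of `torch.as_tensor` semantics, not code from this commit):

```python
import numpy as np
import torch

# With no explicit dtype, torch.as_tensor infers the dtype from the input:
# - an existing tensor keeps its dtype,
# - a Python float uses torch.get_default_dtype() (torch.float32 by default),
# - a numpy array maps to the corresponding torch dtype.
t32 = torch.tensor([0.5], dtype=torch.float32)
print(torch.as_tensor(t32).dtype)              # torch.float32 (unchanged)
print(torch.as_tensor(0.5).dtype)              # default dtype
print(torch.as_tensor(np.float64(0.5)).dtype)  # torch.float64
```

So dropping the hard-coded `dtype=float` lets `best_f` follow the precision of whatever the user passes in, instead of silently promoting to double.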
1 parent a972ae1 · commit c0db823 · 4 changed files with 9 additions and 11 deletions.