Optimisers v0.4.0
Merged pull requests:
- make docstrings consistent (#187) (@CarloLucibello)
- Add the option `couple` to AdamW and set the default to match PyTorch (#188) (@CarloLucibello) (see the sketch after this list)
- fix epsilon for Float16 (#190) (@CarloLucibello)
- docs for `nothing` behavior and for walking a tree with keypath (#191) (@CarloLucibello)
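A minimal sketch of the `couple` option from #188, shown in the usual setup/update loop. This is an illustration only: it assumes AdamW accepts keyword arguments for its hyperparameters (keyword support is the subject of #74, closed in this release) and that the flag is spelled `couple`; check the AdamW docstring for the exact keyword names and default.

```julia
using Optimisers

# Sketch of the new AdamW coupling flag from #188. The keyword names used
# here (eta, lambda, couple) are assumptions based on the PR title; consult
# the AdamW docstring for the confirmed API.
model = (W = rand(Float32, 3, 3), b = zeros(Float32, 3))

rule  = AdamW(eta = 1f-3, lambda = 1f-2)  # default `couple` is chosen to match PyTorch
# rule = AdamW(eta = 1f-3, lambda = 1f-2, couple = false)  # presumably the pre-0.4 behaviour

state = Optimisers.setup(rule, model)

# One update step with a gradient mirroring the model's structure:
grad = (W = ones(Float32, 3, 3), b = ones(Float32, 3))
state, model = Optimisers.update(state, model, grad)
```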
Closed issues:
- Stable docs will 404 until a new version is tagged (#25)
- Allow keyword arguments for optimisers (#74)
- doc improvement: working with custom model types (#84)
- Rename or outsource `iswriteable` (#99)
- Split out `rules.jl` as a sub-package (or a separate package)? (#108)
- Wrong model update for BatchNorm for some specific syntax (#123)
- Use `OptChain` as an alias for `OptimiserChain`? (#138)
- `nothing` does not correspond to updating the state with a zero gradient (#140)
- Utility for walking a tree (e.g. gradients) w.r.t. a model (#143) (see the sketch after this list)
- Adam optimizer can produce NaNs with Float16 due to small epsilon (#167)
- mark as public any non-exported but documented interface (#189)
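For the tree-walking utility requested in #143 and documented in #191, the sketch below assumes the keypath tools in question are Functors.jl's `KeyPath`, `fmap_with_path`, and `getkeypath` (Optimisers.jl builds on Functors.jl); this is an assumption based on the issue and PR titles, not a statement of the documented API.

```julia
using Functors

# Sketch of walking a nested model together with the KeyPath of each leaf,
# assuming the utilities meant in #143/#191 are Functors.jl's
# fmap_with_path / KeyPath / getkeypath.
model = (layers = ((W = rand(2, 2), b = zeros(2)),
                   (W = rand(2, 2), b = zeros(2))),)

# Visit every array leaf along with the path from the root down to it:
fmap_with_path(model) do kp, leaf
    println(kp, " => ", summary(leaf))
    leaf                      # return the leaf unchanged
end

# A KeyPath can also index back into the tree:
kp = KeyPath(:layers, 1, :W)
getkeypath(model, kp)         # same as model.layers[1].W
```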