updates, rm some Optimisers detail
mcabbott committed Nov 20, 2022
1 parent cf2e7a9 commit f2a8883
Showing 4 changed files with 29 additions and 83 deletions.
83 changes: 8 additions & 75 deletions docs/src/training/optimisers.md
@@ -4,53 +4,24 @@ CurrentModule = Flux

# [Optimisers](@id man-optimisers)

Consider a [simple linear regression](../models/basics.md). We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters `W` and `b`.

Flux builds in many optimisation rules for use with [`train!`](@ref Flux.Optimise.train!) and
other training functions.

```julia
using Flux

W = rand(2, 5)
b = rand(2)

predict(x) = (W * x) .+ b
loss(x, y) = sum((predict(x) .- y).^2)

x, y = rand(5), rand(2) # Dummy data
l = loss(x, y) # ~ 3

θ = Flux.params(W, b)
grads = gradient(() -> loss(x, y), θ)
```

The mechanism by which these work is gradually being replaced as part of the change
from "implicit" dictionary-based to "explicit" tree-like structures.
At present, the same struct (such as `Adam`) can be used with either form,
and will be automatically translated.

For full details of how the new "explicit" interface works, see the [Optimisers.jl documentation](https://fluxml.ai/Optimisers.jl/dev/).
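For orientation, here is a minimal sketch of the explicit style (the `Dense` model, loss, and learning rate are illustrative choices, assuming a Flux version new enough to provide `Flux.setup` and the explicit-style `train!`):

```julia
using Flux

model = Dense(5 => 2)                          # any Flux model with trainable arrays
opt_state = Flux.setup(Adam(0.01), model)      # translate the rule & build per-leaf state

data = [(rand(Float32, 5), rand(Float32, 2))]  # an iterable of (input, target) pairs
loss(m, x, y) = sum(abs2, m(x) .- y)           # explicit style: the loss takes the model

Flux.train!(loss, model, data, opt_state)      # one epoch over `data`, updating in place
```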

We want to update each parameter, using the gradient, in order to improve (reduce) the loss. Here's one way to do that:

```julia
η = 0.1 # Learning Rate
for p in (W, b)
  p .-= η * grads[p]
end
```
For full details on how the "implicit" interface worked, see the [Flux 0.13.6 manual](https://fluxml.ai/Flux.jl/v0.13.6/training/optimisers/#Optimiser-Interface).

Running this will alter the parameters `W` and `b` and our loss should go down. Flux provides a more general way to do optimiser updates like this.

```julia
using Flux: update!

opt = Descent(0.1) # Gradient descent with learning rate 0.1

for p in (W, b)
  update!(opt, p, grads[p])
end
```

An optimiser `update!` accepts a parameter and a gradient, and updates the parameter according to the chosen rule. We can also pass `opt` to our [training loop](training.md), which will update all parameters of the model in a loop. However, we can now easily replace `Descent` with a more advanced optimiser such as `Adam`.
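For example, keeping `W`, `b` and `grads` from above, swapping in `Adam` needs only a different rule (the step size here is just an illustrative value):

```julia
opt = Adam(0.001)  # adaptive rule; keeps per-parameter state inside the struct

for p in (W, b)
  update!(opt, p, grads[p])
end
```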

## Optimiser Reference

All optimisers return an object that, when passed to `train!`, will update the parameters passed to it.

```@docs
Flux.Optimise.update!
Descent
Momentum
Nesterov
@@ -67,44 +38,6 @@ OAdam
AdaBelief
```

## Optimiser Interface

Flux's optimisers are built around a `struct` that holds all the optimiser parameters along with a definition of how to apply the update rule associated with it. We do this via the `apply!` function which takes the optimiser as the first argument followed by the parameter and its corresponding gradient.

In this manner Flux also allows one to create custom optimisers to be used seamlessly. Let's work on this with a simple example.

```julia
mutable struct Momentum
  eta
  rho
  velocity
end

Momentum(eta::Real, rho::Real) = Momentum(eta, rho, IdDict())
```

The `Momentum` type will act as our optimiser in this case. Notice that we store the hyperparameters as fields, along with `velocity`, which serves as our state dictionary: each parameter in our model gets an entry there. We can now define the rule applied when this optimiser is invoked.

```julia
function Flux.Optimise.apply!(o::Momentum, x, Δ)
  η, ρ = o.eta, o.rho
  v = get!(o.velocity, x, zero(x))::typeof(x)
  @. v = ρ * v - η * Δ
  @. Δ = -v
end
```

This is the basic definition of a Momentum update rule given by:

```math
v = ρ * v - η * Δ
w = w - v
```

`apply!` defines the update rule for an optimiser `opt`, given a parameter `x` and its gradient `Δ`. It returns the modified gradient, i.e. the step that will be subtracted from the parameter. Here, the velocity for each parameter `x` is fetched from (or initialised in) the optimiser's state dictionary and then updated in place.

Flux calls this function internally via `update!`, which shares the API of `apply!` but also handles collections of parameters gracefully.
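Roughly, that relationship looks like the following sketch (simplified; `naive_update!` is an illustrative name, not Flux's actual source):

```julia
# Simplified sketch of how update! drives apply!; not the exact Flux implementation.
naive_update!(opt, x::AbstractArray, x̄) = (x .-= Flux.Optimise.apply!(opt, x, x̄))

function naive_update!(opt, xs::Flux.Params, gs)
  for x in xs
    isnothing(gs[x]) && continue  # skip parameters with no gradient
    naive_update!(opt, x, gs[x])
  end
end
```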

## Composing Optimisers

Flux defines a special kind of optimiser simply called `Optimiser` which takes in arbitrary optimisers as input. Its behaviour is similar to the usual optimisers, but differs in that it acts by calling the optimisers listed in it sequentially. Each optimiser produces a modified gradient
19 changes: 12 additions & 7 deletions docs/src/training/train_api.md
@@ -1,10 +1,16 @@
# Training API


```@docs
Flux.Train.setup
Flux.Train.update!
Flux.Train.train!
Flux.Optimise.train!(loss, model, data, opt; cb)
```

The new version of Flux's training code was written as an independent package, called Optimisers.jl.
However, at present all Flux models contain parameter arrays (such as `Array`s and `CuArray`s)
which can be updated in-place. Thus objects returned by `update!` can be ignored.
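For instance (a sketch; the model, loss, and array shapes are illustrative, and this assumes Optimisers.jl is also present in the active environment):

```julia
using Flux
import Optimisers

model = Dense(3 => 1)
opt_state = Flux.setup(Adam(0.01), model)

x, y = rand(Float32, 3, 8), rand(Float32, 1, 8)
grads = Flux.gradient(m -> Flux.Losses.mse(m(x), y), model)

# Both the state and the model's arrays are mutated in place,
# so the returned (state, model) pair need not be captured:
Optimisers.update!(opt_state, model, grads[1])
```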

```@docs
Optimisers.update!
```

## Implicit style
@@ -15,14 +21,12 @@ Flux 0.13 is the transitional version which supports both.

For full details on how to use the implicit style, see [Flux 0.13.6 manual](https://fluxml.ai/Flux.jl/v0.13.6/training/training/).


```@docs
Flux.params
Flux.Optimise.update!
Flux.Optimise.train!
Optimisers.update!(opt::Flux.Optimise.AbstractOptimiser, xs::Params, gs)
Flux.Optimise.train!(loss, ps::Flux.Params, data, opt::Flux.Optimise.AbstractOptimiser; cb)
```


Note that, by default, `train!` only loops over the data once (a single "epoch").
A convenient way to run multiple epochs from the REPL is provided by `@epochs`.
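For instance (assuming `loss`, `ps`, `data` and `opt` are defined as in the implicit-style examples):

```julia
using Flux: @epochs

@epochs 2 Flux.train!(loss, ps, data, opt)  # runs train! twice, logging "Epoch 1", "Epoch 2"
```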

@@ -69,3 +73,4 @@ cb = function ()
  accuracy() > 0.9 && Flux.stop()
end
```
4 changes: 3 additions & 1 deletion src/Flux.jl
@@ -34,9 +34,11 @@ export Descent, Adam, Momentum, Nesterov, RMSProp,
AdamW, RAdam, AdaBelief, InvDecay, ExpDecay,
WeightDecay, ClipValue, ClipNorm

export ClipGrad, OptimiserChain # these are const defined in deprecations, for ClipValue, Optimiser

include("train.jl")
using .Train
# using .Train: setup, @train_autodiff
using .Train: setup

using CUDA
const use_cuda = Ref{Union{Nothing,Bool}}(nothing)
6 changes: 6 additions & 0 deletions src/optimise/optimisers.jl
@@ -564,6 +564,9 @@ end
Combine several optimisers into one; each optimiser produces a modified gradient
that will be fed into the next, and this is finally applied to the parameter as
usual.

!!! note
    This will be replaced by `Optimisers.OptimiserChain` in Flux 0.14.
"""
mutable struct Optimiser <: AbstractOptimiser
  os::Vector{Any}
@@ -699,6 +702,9 @@ end
ClipValue(thresh)
Clip gradients when their absolute value exceeds `thresh`.

!!! note
    This will be replaced by `Optimisers.ClipGrad` in Flux 0.14.
"""
mutable struct ClipValue{T} <: AbstractOptimiser
  thresh::T
Expand Down
