Optimizers (recommendation.optimizers)¶
The classes presented in this section are optimizers that modify the SGD updates during the training of a model. Their update functions control the learning rate during the SGD optimization.
SGD | Stochastic Gradient Descent (SGD) updates
Momentum | Stochastic Gradient Descent (SGD) updates with momentum
NesterovMomentum | Stochastic Gradient Descent (SGD) updates with Nesterov momentum
AdaGrad | AdaGrad updates
RMSProp | Scale learning rates by dividing with the moving average of the root mean squared (RMS) gradients.
AdaDelta | Scale learning rates by the ratio of accumulated step sizes to accumulated gradients, see [4] and notes for further description.
Adam | Adam updates implemented as in [5].
Adamax | Adamax updates implemented as in [6].
Stochastic Gradient Descent¶
This is the default optimizer in all models. It applies the plain update rule param := param - learning_rate * gradient.
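For illustration, here is a minimal NumPy sketch of one vanilla SGD step; the function name and values are hypothetical and this is not the class's internal code:

    import numpy as np

    def sgd_step(param, grad, learning_rate=1.0):
        # Vanilla SGD: param := param - learning_rate * gradient
        return param - learning_rate * grad

    # Example: one update on a small parameter vector.
    param = np.array([0.5, -0.3, 0.8])
    grad = np.array([0.1, -0.2, 0.05])
    param = sgd_step(param, grad, learning_rate=0.1)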
Momentum¶
class orangecontrib.recommendation.optimizers.Momentum(learning_rate=1.0, momentum=0.9)[source]¶
Stochastic Gradient Descent (SGD) updates with momentum.
Generates update expressions of the form:
velocity := momentum * velocity - learning_rate * gradient
param := param + velocity
Args:
  learning_rate: float
    The learning rate controlling the size of update steps.
  momentum: float, optional
    The amount of momentum to apply. Higher momentum results in smoothing over more update steps. Defaults to 0.9.
Notes:
  Higher momentum also results in larger update steps. To counter that, you can optionally scale your learning rate by 1 - momentum.
See Also:
  apply_momentum: Generic function applying momentum to updates
  nesterov_momentum: Nesterov’s variant of SGD with momentum
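The momentum rule above can be sketched in a few lines of NumPy (illustrative only; the helper name is hypothetical, not this class's API):

    import numpy as np

    def momentum_step(param, velocity, grad, learning_rate=1.0, momentum=0.9):
        # velocity := momentum * velocity - learning_rate * gradient
        # param    := param + velocity
        velocity = momentum * velocity - learning_rate * grad
        param = param + velocity
        return param, velocity

    # The velocity is carried between steps, smoothing successive updates.
    param, velocity = np.array([0.5, -0.3]), np.zeros(2)
    for grad in (np.array([0.1, -0.2]), np.array([0.05, -0.1])):
        param, velocity = momentum_step(param, velocity, grad, learning_rate=0.1)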
Nesterov’s Accelerated Gradient¶
class orangecontrib.recommendation.optimizers.NesterovMomentum(learning_rate=1.0, momentum=0.9)[source]¶
Stochastic Gradient Descent (SGD) updates with Nesterov momentum.
Generates update expressions of the form:
param_ahead := param + momentum * velocity
velocity := momentum * velocity - learning_rate * gradient_ahead
param := param + velocity
To express the update in a form similar to vanilla SGD, it can be rewritten as:
v_prev := velocity
velocity := momentum * velocity - learning_rate * gradient
param := param - momentum * v_prev + (1 + momentum) * velocity
Args:
  learning_rate: float
    The learning rate controlling the size of update steps.
  momentum: float, optional
    The amount of momentum to apply. Higher momentum results in smoothing over more update steps. Defaults to 0.9.
Notes:
  Higher momentum also results in larger update steps. To counter that, you can optionally scale your learning rate by 1 - momentum.
  The classic formulation of Nesterov momentum (or Nesterov accelerated gradient) requires the gradient to be evaluated at the predicted next position in parameter space. Here, we use the formulation described at https://github.com/lisa-lab/pylearn2/pull/136#issuecomment-10381617, which allows the gradient to be evaluated at the current parameters.
See Also:
  apply_nesterov_momentum: Function applying momentum to updates
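A minimal NumPy sketch of the vanilla-SGD-like reformulation above (illustrative only; the helper name is hypothetical, not this class's internal code):

    import numpy as np

    def nesterov_step(param, velocity, grad, learning_rate=1.0, momentum=0.9):
        # v_prev   := velocity
        # velocity := momentum * velocity - learning_rate * gradient
        # param    := param - momentum * v_prev + (1 + momentum) * velocity
        v_prev = velocity
        velocity = momentum * velocity - learning_rate * grad
        param = param - momentum * v_prev + (1 + momentum) * velocity
        return param, velocity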
AdaGradient¶
class orangecontrib.recommendation.optimizers.AdaGrad(learning_rate=1.0, epsilon=1e-06)[source]¶
AdaGrad updates.
Scale learning rates by dividing with the square root of accumulated squared gradients. See [1] for further description.
Generates update expressions of the form:
accum := accum + gradient ** 2
param := param - learning_rate * gradient / sqrt(accum + epsilon)
Args:
  learning_rate: float or symbolic scalar
    The learning rate controlling the size of update steps.
  epsilon: float or symbolic scalar
    Small value added for numerical stability.
Notes:
  Using step size \(\eta\), AdaGrad calculates the learning rate for feature i at time step t as:
  \[\eta_{t,i} = \frac{\eta}{\sqrt{\sum^t_{t^\prime} g^2_{t^\prime,i} + \epsilon}} g_{t,i}\]
  As such, the learning rate is monotonically decreasing.
  Epsilon is not included in the typical formula, see [2].
References:
  [1] Duchi, J., Hazan, E., & Singer, Y. (2011): Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 12:2121-2159.
  [2] Chris Dyer: Notes on AdaGrad. http://www.ark.cs.cmu.edu/cdyer/adagrad.pdf
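A minimal NumPy sketch of the AdaGrad rule described in the notes (illustrative only; the explicit accumulator argument is an assumption about state kept between calls, not this class's internal code):

    import numpy as np

    def adagrad_step(param, accum, grad, learning_rate=1.0, epsilon=1e-6):
        # Accumulate squared gradients, then scale each feature's step
        # by 1 / sqrt(accumulated squared gradients + epsilon).
        accum = accum + grad ** 2
        param = param - learning_rate * grad / np.sqrt(accum + epsilon)
        return param, accum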
RMSProp¶
class orangecontrib.recommendation.optimizers.RMSProp(learning_rate=1.0, rho=0.9, epsilon=1e-06)[source]¶
Scale learning rates by dividing with the moving average of the root mean squared (RMS) gradients. See [3] for further description.
Args:
  learning_rate: float
    The learning rate controlling the size of update steps.
  rho: float
    Gradient moving average decay factor.
  epsilon: float
    Small value added for numerical stability.
Notes:
  rho should be between 0 and 1. A value of rho close to 1 will decay the moving average slowly, and a value close to 0 will decay the moving average fast.
  Using the step size \(\eta\) and a decay factor \(\rho\), the learning rate \(\eta_t\) is calculated as:
  \[\begin{split}r_t &= \rho r_{t-1} + (1-\rho)*g^2\\ \eta_t &= \frac{\eta}{\sqrt{r_t + \epsilon}}\end{split}\]
References:
  [3] Tieleman, T. and Hinton, G. (2012): Neural Networks for Machine Learning, Lecture 6.5 - rmsprop. Coursera. http://www.youtube.com/watch?v=O3sxAc4hxZU (formula @5:20)
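The moving-average rule above, sketched in NumPy (illustrative only; not this class's internal code):

    import numpy as np

    def rmsprop_step(param, accum, grad, learning_rate=1.0, rho=0.9, epsilon=1e-6):
        # r_t = rho * r_{t-1} + (1 - rho) * g^2; step scaled by 1 / sqrt(r_t + epsilon)
        accum = rho * accum + (1 - rho) * grad ** 2
        param = param - learning_rate * grad / np.sqrt(accum + epsilon)
        return param, accum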
AdaDelta¶
class orangecontrib.recommendation.optimizers.AdaDelta(learning_rate=1.0, rho=0.95, epsilon=1e-06)[source]¶
Scale learning rates by the ratio of accumulated step sizes to accumulated gradients, see [4] and notes for further description.
Args:
  learning_rate: float
    The learning rate controlling the size of update steps.
  rho: float
    Squared gradient moving average decay factor.
  epsilon: float
    Small value added for numerical stability.
Notes:
  rho should be between 0 and 1. A value of rho close to 1 will decay the moving average slowly, and a value close to 0 will decay the moving average fast.
  rho = 0.95 and epsilon = 1e-6 are suggested in the paper and reported to work for multiple datasets (MNIST, speech).
  In the paper, no learning rate is considered (so learning_rate=1.0). It is probably best to keep it at this value. epsilon is important for the very first update (so the numerator does not become 0).
  Using the step size \(\eta\) and a decay factor \(\rho\), the learning rate \(\eta_t\) is calculated as:
  \[\begin{split}r_t &= \rho r_{t-1} + (1-\rho)*g^2\\ \eta_t &= \eta \frac{\sqrt{s_{t-1} + \epsilon}}{\sqrt{r_t + \epsilon}}\\ s_t &= \rho s_{t-1} + (1-\rho)*(\eta_t*g)^2\end{split}\]
References:
  [4] Zeiler, M. D. (2012): ADADELTA: An Adaptive Learning Rate Method. arXiv preprint arXiv:1212.5701.
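A NumPy sketch of the AdaDelta rule in the notes, keeping both accumulators explicit (illustrative only; not this class's internal code):

    import numpy as np

    def adadelta_step(param, accum_grad, accum_step, grad,
                      learning_rate=1.0, rho=0.95, epsilon=1e-6):
        # r_t = rho * r_{t-1} + (1 - rho) * g^2
        accum_grad = rho * accum_grad + (1 - rho) * grad ** 2
        # eta_t = learning_rate * sqrt(s_{t-1} + epsilon) / sqrt(r_t + epsilon)
        step = (learning_rate * np.sqrt(accum_step + epsilon)
                / np.sqrt(accum_grad + epsilon)) * grad
        param = param - step
        # s_t = rho * s_{t-1} + (1 - rho) * (eta_t * g)^2
        accum_step = rho * accum_step + (1 - rho) * step ** 2
        return param, accum_grad, accum_step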
Adam¶
class orangecontrib.recommendation.optimizers.Adam(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]¶
Adam updates implemented as in [5].
Args:
  learning_rate: float
    The learning rate controlling the size of update steps.
  beta1: float
    Exponential decay rate for the first moment estimates.
  beta2: float
    Exponential decay rate for the second moment estimates.
  epsilon: float
    Constant for numerical stability.
Notes:
  The paper [5] includes an additional hyperparameter lambda. This is only needed to prove convergence of the algorithm and has no practical use; it is therefore omitted here.
References:
  [5] Kingma, Diederik, and Jimmy Ba (2014): Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980.
update(grads, params, indices=None)[source]¶
Adam updates.
Args:
  grads: array
    List of gradient expressions.
  params: array
    The variables to generate update expressions for.
  indices: array, optional
    Indices of parameters (‘params’) to update. If None (default), all parameters will be updated.
Returns:
  updates: list of float
    Variables updated with the gradients.
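As a reference for the rule in [5], here is a hedged NumPy sketch of one Adam step with bias correction; it illustrates the standard algorithm, not necessarily this class's exact implementation:

    import numpy as np

    def adam_step(param, m, v, grad, t,
                  learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8):
        # t is the 1-based step count used for bias correction.
        m = beta1 * m + (1 - beta1) * grad          # first moment estimate
        v = beta2 * v + (1 - beta2) * grad ** 2     # second moment estimate
        m_hat = m / (1 - beta1 ** t)                # bias-corrected moments
        v_hat = v / (1 - beta2 ** t)
        param = param - learning_rate * m_hat / (np.sqrt(v_hat) + epsilon)
        return param, m, v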
Adamax¶
class orangecontrib.recommendation.optimizers.Adamax(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]¶
Adamax updates implemented as in [6]. This is a variant of the Adam algorithm based on the infinity norm.
Args:
  learning_rate: float
    The learning rate controlling the size of update steps.
  beta1: float
    Exponential decay rate for the first moment estimates.
  beta2: float
    Exponential decay rate for the second moment estimates.
  epsilon: float
    Constant for numerical stability.
References:
  [6] Kingma, Diederik, and Jimmy Ba (2014): Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980.
update(grads, params, indices=None)[source]¶
Adamax updates.
Args:
  grads: array
    List of gradient expressions.
  params: array
    The variables to generate update expressions for.
  indices: array, optional
    Indices of parameters (‘params’) to update. If None (default), all parameters will be updated.
Returns:
  updates: list of float
    Variables updated with the gradients.
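For comparison with Adam, a hedged NumPy sketch of one Adamax step (the infinity-norm variant from [6]); again illustrative, not necessarily this class's exact implementation:

    import numpy as np

    def adamax_step(param, m, u, grad, t,
                    learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8):
        # t is the 1-based step count used for bias correction.
        m = beta1 * m + (1 - beta1) * grad          # first moment estimate
        u = np.maximum(beta2 * u, np.abs(grad))     # infinity-norm second moment
        param = param - (learning_rate / (1 - beta1 ** t)) * m / (u + epsilon)
        return param, m, u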