flax.optim.LARS

class flax.optim.LARS(learning_rate=None, beta=0.9, weight_decay=0, trust_coefficient=0.001, eps=0, nesterov=False)[source]

Layerwise adaptive rate scaling (LARS) optimizer.

See https://arxiv.org/abs/1708.03888
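
LARS computes a layerwise "trust ratio" from the norms of each parameter tensor and its gradient, and scales the learning rate per layer by that ratio before applying a heavy-ball (optionally Nesterov) momentum update. The following is a minimal sketch of the update for a single parameter tensor, written with jax.numpy and following the paper's formulation; the function name lars_update is illustrative only, and details such as the handling of zero norms may differ from flax's actual implementation.

    import jax.numpy as jnp

    def lars_update(param, grad, momentum, *, learning_rate, beta=0.9,
                    weight_decay=0., trust_coefficient=0.001, eps=0.,
                    nesterov=False):
        """Illustrative single-tensor LARS step (not flax's implementation)."""
        param_norm = jnp.linalg.norm(param)
        grad_norm = jnp.linalg.norm(grad)
        # Layerwise trust ratio from the paper: the global learning rate is
        # scaled by ||w|| / (||g|| + weight_decay * ||w|| + eps).
        # eps guards the denominator when the gradient norm is zero.
        trust_ratio = trust_coefficient * param_norm / (
            grad_norm + weight_decay * param_norm + eps)
        scaled_lr = learning_rate * trust_ratio
        # Weight decay is folded into the gradient before the momentum update.
        if weight_decay != 0.:
            grad = grad + weight_decay * param
        new_momentum = beta * momentum + scaled_lr * grad
        if nesterov:
            step = scaled_lr * grad + beta * new_momentum
        else:
            step = new_momentum
        return param - step, new_momentum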

__init__(learning_rate=None, beta=0.9, weight_decay=0, trust_coefficient=0.001, eps=0, nesterov=False)[source]

Constructor for the LARS optimizer.

Parameters
  • learning_rate – the step size used to update the parameters.

  • beta – the coefficient used for the moving average of the gradient (default: 0.9).

  • weight_decay – weight decay coefficient to apply (default: 0).

  • trust_coefficient – coefficient for trust ratio computation (default: 0.001).

  • eps – epsilon added to the trust ratio denominator for numerical stability (default: 0).

  • nesterov – whether to use Nesterov momentum (default: False).
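
A minimal usage sketch, assuming the usual flax.optim pattern in which create() binds the optimizer definition to a target and apply_gradient() returns an updated optimizer; the loss function and parameter pytree below are placeholders for illustration:

    import jax
    import jax.numpy as jnp
    import flax

    def loss_fn(params):
        # Placeholder quadratic loss over all parameter leaves.
        return sum(jnp.sum(p ** 2) for p in jax.tree_util.tree_leaves(params))

    params = {'kernel': jnp.ones((3, 3))}  # placeholder parameter pytree

    optimizer_def = flax.optim.LARS(learning_rate=0.1, weight_decay=1e-4)
    optimizer = optimizer_def.create(params)

    grads = jax.grad(loss_fn)(optimizer.target)
    optimizer = optimizer.apply_gradient(grads)  # returns a new Optimizer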

Methods

__init__([learning_rate, beta, ...])

Constructor for the LARS optimizer.

apply_gradient(hyper_params, params, state, ...)

Applies a gradient for a set of parameters.

apply_param_gradient(step, hyper_params, ...)

Applies a gradient for a single parameter.

create(target[, focus])

Creates a new optimizer for the given target.

init_param_state(param)

Initializes the state for a parameter.

init_state(params)

Initializes the optimizer state for the given parameters.

restore_state(opt_target, opt_state, state_dict)

Restores the optimizer target and state from the state dict.

state_dict(target, state)

Returns the optimizer target and state as a state dict.

update_hyper_params(**hyper_param_overrides)

Updates the hyper parameters with a set of overrides.
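
For example, a learning-rate schedule can be driven by passing an override through apply_gradient, which forwards keyword overrides to update_hyper_params. Continuing the usage sketch above (the step counter and decay schedule here are placeholders):

    step = 100                          # placeholder step counter
    step_lr = 0.1 * (0.99 ** step)      # placeholder decay schedule
    optimizer = optimizer.apply_gradient(grads, learning_rate=step_lr)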