flax.optim.Adagrad
- class flax.optim.Adagrad(learning_rate=None, eps=1e-08)
Adagrad optimizer.
- Parameters
learning_rate (Optional[float]) – the step size used to update the parameters.
- __init__(learning_rate=None, eps=1e-08)
Constructor for the Adagrad optimizer.
- Parameters
learning_rate (Optional[float]) – the step size used to update the parameters.
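The sketch below shows the typical flax.optim usage pattern with this optimizer definition: create an Optimizer from a pytree of parameters, then repeatedly apply gradients to it. The parameter pytree and loss function here are hypothetical and only for illustration.

```python
import jax
import jax.numpy as jnp
from flax import optim

# Hypothetical parameter pytree and loss function for illustration only.
params = {'w': jnp.ones((3,))}

def loss_fn(params):
    return jnp.sum(params['w'] ** 2)

# Define the hyperparameters, then wrap the parameters in an Optimizer.
optimizer_def = optim.Adagrad(learning_rate=0.1, eps=1e-8)
optimizer = optimizer_def.create(params)

# One training step: compute gradients w.r.t. the current target and apply them.
grads = jax.grad(loss_fn)(optimizer.target)
optimizer = optimizer.apply_gradient(grads)
```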
Methods
- __init__([learning_rate, eps]) – Constructor for the Adagrad optimizer.
- apply_gradient(hyper_params, params, state, ...) – Applies a gradient for a set of parameters.
- apply_param_gradient(step, hyper_params, ...) – Apply per-parameter gradients.
- create(target[, focus]) – Creates a new optimizer for the given target.
- init_param_state(param) – Initialize parameter state.
- init_state(params)
- restore_state(opt_target, opt_state, state_dict) – Restore the optimizer target and state from the state dict.
- state_dict(target, state)
- update_hyper_params(**hyper_param_overrides) – Updates the hyper parameters with a set of overrides.
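init_param_state and apply_param_gradient together implement the per-parameter Adagrad rule: a running sum of squared gradients scales down the step for parameters that have already received large updates. The following is a minimal sketch of that standard rule, not the exact flax implementation (in particular, where eps enters the denominator may differ).

```python
import jax.numpy as jnp

def adagrad_step(param, grad, grad_sq_sum, learning_rate, eps=1e-8):
    # Accumulate the squared gradient (the per-parameter state Adagrad keeps).
    new_grad_sq_sum = grad_sq_sum + jnp.square(grad)
    # Scale the update by the inverse root of the accumulated squared gradients.
    new_param = param - learning_rate * grad / (jnp.sqrt(new_grad_sq_sum) + eps)
    return new_param, new_grad_sq_sum
```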