def build_lr(net, param_init_net, base_learning_rate, learning_rate_blob="lr", policy="fixed", iter_val=0, **kwargs)

def dedup(net, sparse_dedup_aggregator, grad)
Definition at line 14 of file optimizer.py.
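Concrete optimizers use build_lr to set up their learning-rate schedule. The following is a minimal sketch, not the canonical Caffe2 training flow: it assumes the caffe2.python core and optimizer modules, that build_lr returns the learning-rate blob and the iteration blob, and that policy-specific settings such as stepsize and gamma are forwarded to the LearningRate operator through **kwargs.

```python
# Minimal sketch (assumptions noted above): build an SGD optimizer and ask it
# for a stepwise-decaying learning-rate blob plus the iteration counter blob.
from caffe2.python import core, optimizer

net = core.Net("train")
param_init_net = core.Net("train_init")

opt = optimizer.SgdOptimizer(base_learning_rate=0.1)
lr, iteration = opt.build_lr(
    net,
    param_init_net,
    base_learning_rate=0.1,
    policy="step",   # "fixed" (the default) keeps the rate constant
    stepsize=1000,   # policy-specific kwarg: decay every 1000 iterations
    gamma=0.9,       # policy-specific kwarg: multiplicative decay factor
)
```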
◆ get_auxiliary_parameters()
def optimizer.Optimizer.get_auxiliary_parameters(self)
Returns a list of auxiliary parameters.

Returns:
aux_params: A namedtuple, AuxParams.

aux_params.local stores a list of blobs. Each blob is a local
auxiliary parameter, i.e. a parameter kept in parallel with a single
learning rate parameter. Take adagrad as an example: the local
auxiliary parameter is the squared-sum blob, because every learning
rate parameter has its own squared sum.

aux_params.shared also stores a list of blobs. Each blob is a shared
auxiliary parameter, i.e. a parameter shared across all the learning
rate parameters. Take adam as an example: the iteration counter is a
shared parameter, because all the learning rates share the same
iteration counter.
Definition at line 59 of file optimizer.py.
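As a concrete illustration of the AuxParams fields described above, the sketch below assumes the Caffe2 AdagradOptimizer and that the optimizer has already been applied to a model's parameters (for example via optimizer.build_adagrad), so its auxiliary blobs exist before they are inspected.

```python
# Minimal sketch of inspecting AuxParams; assumes the adagrad optimizer has
# already been run against a model so its auxiliary blobs have been created.
from caffe2.python import optimizer

adagrad = optimizer.AdagradOptimizer(alpha=0.01, epsilon=1e-4)
# ... apply the optimizer to the model's parameters here ...

aux_params = adagrad.get_auxiliary_parameters()

# Local auxiliary parameters: one squared-sum blob per optimized parameter.
for blob in aux_params.local:
    print("local:", blob)

# Shared auxiliary parameters: blobs shared by every parameter, such as
# adam's iteration counter; adagrad typically has none here.
for blob in aux_params.shared:
    print("shared:", blob)
```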
The documentation for this class was generated from the following file: optimizer.py