Optimizer

class maze.train.trainers.es.optimizers.base_optimizer.Optimizer

Abstract base class of an optimizer to be used with ES.

setup(policy: maze.core.agent.torch_policy.TorchPolicy) → None

Two-stage construction, enabling instantiation from config files.

Parameters

policy – ES policy network to optimize

update(global_gradient: numpy.ndarray) → float

Execute one update step.

Parameters

global_gradient – A flat gradient vector

Returns

update ratio = norm(optimizer step) / norm(theta)
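The interface above can be sketched with a minimal concrete subclass. The following is an illustrative standalone example, not part of the maze library: it implements plain SGD, stands in a flat NumPy parameter vector for the TorchPolicy, and computes the update ratio against the pre-update parameters (one reasonable reading of the formula above; the library may use a different convention).

```python
import numpy as np


class SGDOptimizer:
    """Illustrative SGD optimizer following the Optimizer interface:
    construct, then setup(), then repeated update() calls."""

    def __init__(self, step_size: float = 0.01):
        self.step_size = step_size
        self.theta = None

    def setup(self, policy) -> None:
        # In maze, `policy` would be a TorchPolicy; for this sketch we
        # only need a flat parameter vector to update.
        self.theta = np.asarray(policy, dtype=np.float64)

    def update(self, global_gradient: np.ndarray) -> float:
        # One gradient-ascent step on the flat parameter vector.
        step = -self.step_size * np.asarray(global_gradient, dtype=np.float64)
        # Update ratio as documented: norm(optimizer step) / norm(theta),
        # here measured against theta *before* the step is applied.
        ratio = float(np.linalg.norm(step) / np.linalg.norm(self.theta))
        self.theta = self.theta + step
        return ratio


opt = SGDOptimizer(step_size=0.1)
opt.setup(np.array([1.0, 0.0]))
ratio = opt.update(np.array([1.0, 0.0]))
print(opt.theta)  # parameters after one step
print(ratio)      # norm(step) / norm(theta)
```

A small update ratio indicates the step barely moves the parameters relative to their magnitude; ES trainers typically log this value to monitor whether the effective learning rate is reasonable.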