DummyDistributedWorkersWithBuffer

class maze.train.parallelization.distributed_actors.dummy_distributed_workers_with_buffer.DummyDistributedWorkersWithBuffer(env_factory: Callable[[], Union[maze.core.env.structured_env.StructuredEnv, maze.core.env.structured_env_spaces_mixin.StructuredEnvSpacesMixin, maze.core.log_stats.log_stats_env.LogStatsEnv]], worker_policy: maze.core.agent.torch_policy.TorchPolicy, n_rollout_steps: int, n_workers: int, batch_size: int, rollouts_per_iteration: int, initial_sampling_policy: Union[omegaconf.DictConfig, maze.core.agent.policy.Policy], replay_buffer_size: int, initial_buffer_size: int, split_rollouts_into_transitions: bool, env_instance_seeds: List[int], replay_buffer_seed: int)

Dummy implementation of distributed workers with a buffer: the workers are simply kept in a list. When outputs are to be collected, it rolls the workers out sequentially in a loop until it has gathered enough rollouts to add to the buffer.
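The sequential collection pattern described above can be sketched in plain Python. This is a hedged, self-contained illustration, not the Maze implementation: `DummyWorker`, its fake transitions, and the round-robin loop are all hypothetical stand-ins for the actual `env_factory`/`worker_policy` machinery.

```python
import random
from collections import deque


class DummyWorker:
    """Hypothetical stand-in worker: produces a fixed-length rollout of fake transitions."""

    def __init__(self, seed: int, n_rollout_steps: int):
        self.rng = random.Random(seed)
        self.n_rollout_steps = n_rollout_steps

    def rollout(self):
        # A real worker would step its env with the worker policy;
        # here a rollout is just a list of random floats.
        return [self.rng.random() for _ in range(self.n_rollout_steps)]


class DummyWorkersWithBuffer:
    """Workers kept in a plain list; rollouts gathered sequentially in a loop."""

    def __init__(self, n_workers: int, n_rollout_steps: int,
                 rollouts_per_iteration: int, replay_buffer_size: int):
        self.workers = [DummyWorker(seed, n_rollout_steps) for seed in range(n_workers)]
        self.rollouts_per_iteration = rollouts_per_iteration
        self.buffer = deque(maxlen=replay_buffer_size)

    def collect_rollouts(self):
        # Loop over the workers round-robin until enough rollouts were
        # produced, then push every transition into the replay buffer.
        collected = 0
        while collected < self.rollouts_per_iteration:
            worker = self.workers[collected % len(self.workers)]
            self.buffer.extend(worker.rollout())
            collected += 1


workers = DummyWorkersWithBuffer(n_workers=2, n_rollout_steps=5,
                                 rollouts_per_iteration=4, replay_buffer_size=100)
workers.collect_rollouts()
print(len(workers.buffer))  # 4 rollouts x 5 steps = 20 transitions
```

Because everything runs in a single process, no queues or synchronization are needed; that is what makes this variant a "dummy" distribution useful for debugging.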

broadcast_updated_policy(state_dict: Dict) → None

(overrides BaseDistributedWorkersWithBuffer)

Store the newest policy in the shared network object
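Since the dummy workers live in one process, "broadcasting" can reduce to loading the state dict into a single shared policy object. The sketch below is an assumption about how such sharing could look; `SharedPolicy` is a hypothetical plain-Python stand-in for the actual `TorchPolicy`.

```python
class SharedPolicy:
    """Hypothetical minimal stand-in for a policy object shared by all dummy workers."""

    def __init__(self):
        self.state = {}

    def load_state_dict(self, state_dict):
        # Overwrite the current parameters with the broadcast ones.
        self.state = dict(state_dict)


# All dummy workers hold a reference to the same policy object, so a single
# load_state_dict call makes the new parameters visible to every worker.
shared_policy = SharedPolicy()
workers = [shared_policy for _ in range(3)]


def broadcast_updated_policy(state_dict):
    shared_policy.load_state_dict(state_dict)


broadcast_updated_policy({"layer.weight": [0.1, 0.2]})
```

In a genuinely distributed setup the same call would instead have to serialize the state dict and ship it to each worker process.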

collect_rollouts() → Tuple[float, float, float]

(overrides BaseDistributedWorkersWithBuffer)

Implementation of the BaseDistributedWorkersWithBuffer interface

start() → None

(overrides BaseDistributedWorkersWithBuffer)

Nothing to do in dummy implementation

stop() → None

(overrides BaseDistributedWorkersWithBuffer)

Nothing to do in dummy implementation