class maze.train.parallelization.vector_env.sequential_vector_env.SequentialVectorEnv(env_factories: List[Callable[[], maze.core.env.maze_env.MazeEnv]], logging_prefix: Optional[str] = None)

Creates a simple wrapper around multiple environments, calling each environment in sequence on the current Python process. This is useful for computationally simple environments such as CartPole-v1, where the overhead of multiprocessing or multi-threading outweighs the environment computation time. It can also be used for RL methods that require a vectorized environment, but where you want to train with a single environment.


env_factories – A list of zero-argument factory functions, each creating one environment.
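Each factory must be callable with no arguments. When building several factories in a loop, bind the varying argument at definition time (e.g. via functools.partial) to avoid the late-binding pitfall of plain lambdas. A generic sketch with a hypothetical ToyEnv, not the Maze API:

```python
from functools import partial

class ToyEnv:
    """Hypothetical stand-in for a MazeEnv, used only to illustrate factories."""
    def __init__(self, seed: int = 0):
        self.seed = seed

# Zero-argument factories: partial binds `seed` at definition time,
# unlike `lambda: ToyEnv(seed=i)`, which would capture `i` by reference.
env_factories = [partial(ToyEnv, seed=i) for i in range(3)]

# The wrapper would call each factory once to instantiate its env:
envs = [factory() for factory in env_factories]
print([e.seed for e in envs])  # [0, 1, 2]
```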


VectorEnv implementation

get_actor_rewards() → Optional[numpy.ndarray]

(overrides StructuredVectorEnv)

Stack actor rewards from encapsulated environments.

reset() → Dict[str, numpy.ndarray]

VectorEnv implementation

seed(seeds: List[Any]) → None

(overrides VectorEnv)

VectorEnv implementation

step(actions: Dict[str, Union[int, numpy.ndarray]]) → Tuple[Dict[str, numpy.ndarray], numpy.ndarray, numpy.ndarray, Iterable[Dict[Any, Any]]]

Step the environments with the given actions.


actions – the actions for the respective envs, in env-aggregated form.


observations, rewards, dones, and info-dicts, all in env-aggregated form.
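Env-aggregated form means each observation key and each per-env scalar is stacked across the wrapped environments. A self-contained sketch of that aggregation over toy envs (illustrative only, not the Maze implementation):

```python
import numpy as np

class ToyEnv:
    """Hypothetical env with dict observations, used only to show aggregation."""
    def step(self, action):
        obs = {"features": np.full(4, float(action))}
        return obs, float(action), False, {}

def aggregate_step(envs, actions):
    # Call each env in sequence on the current process, then stack results
    # so that axis 0 of every array indexes the envs.
    results = [env.step(a) for env, a in zip(envs, actions)]
    obs, rewards, dones, infos = zip(*results)
    stacked_obs = {k: np.stack([o[k] for o in obs]) for k in obs[0]}
    return stacked_obs, np.asarray(rewards), np.asarray(dones), list(infos)

obs, rewards, dones, infos = aggregate_step([ToyEnv() for _ in range(3)], [0, 1, 2])
print(obs["features"].shape, rewards.shape)  # (3, 4) (3,)
```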