Maze: Applied Reinforcement Learning with Python
To install Maze, just follow the installation instructions.
To see Maze in action check out a first example.
Clone this project template repo to start your own Maze project.
Below we list some of Maze’s key features. The list is far from exhaustive, but it is nonetheless a nice starting point for diving into the framework.
- Get things rolling by training your environment and rolling out your policy in just a few lines of code with Maze’s high-level API.
- Configure your applications and experiments with the Hydra config system.
- Design and visualize your policy and value networks with the Perception Module.
- Stick to your favourite tools and trainers by combining Maze with other RL frameworks.
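As a rough sketch of what Hydra-based configuration can look like, the fragment below shows a hypothetical experiment config file. The config group and key names (`env`, `algorithm`, `gym_env`, `ppo`, `n_epochs`, `lr`) are illustrative assumptions, not Maze’s actual defaults — see the "Configuration with Hydra" pages for the real configuration groups.

```yaml
# conf/experiment/cartpole_ppo.yaml
# Hypothetical experiment config; group/key names are illustrative only.
defaults:
  - override /env: gym_env      # assumed env config group
  - override /algorithm: ppo    # assumed algorithm config group

env:
  name: CartPole-v1             # which Gym environment to wrap

algorithm:
  n_epochs: 5                   # assumed trainer parameter names
  lr: 0.0003
```

With Hydra, any of these values can also be overridden on the command line at launch time (e.g. appending `algorithm.lr=0.001` to the run command), which is what makes experiment sweeps convenient.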
This is a preliminary, non-stable release of Maze. It is not yet complete and not all of our interfaces have settled yet. Hence, there might be some breaking changes on our way towards the first stable release.
Below you find an overview of the general Maze framework documentation, which complements the API documentation. The listed pages motivate and explain the underlying concepts but, most importantly, also provide code snippets and minimal working examples to get you started quickly.
- Collecting and Visualizing Rollouts
- Imitation Learning and Fine-Tuning
- Experiment Configuration
- Introducing the Perception Module
- Action Spaces and Distributions
- Working with Template Models
- Working with Custom Models
- Maze Trainers
- Maze RLlib Runner
- Policies, Critics and Agents
- Maze Environment Hierarchy
- Maze Event System
- Configuration with Hydra
- Hydra: Overview
- Hydra: Your Own Configuration Files
- Hydra: Advanced Concepts
- Environment Rendering
- Structured Environments
- Flat Environments
- Multi-Agent RL
- Hierarchical RL
- Beyond Flat Environments with Actors
- Where to Go Next
- High-level API: RunContext
- Customizing Core and Maze Envs
- Customizing / Shaping Rewards
- Environment Wrappers
- Observation Pre-Processing
- Observation Normalization
- List of Features
- Example 1: Normalization with Estimated Statistics
- Example 2: Normalization with Manual Statistics
- Example 3: Custom Normalization and Excluding Observations
- Example 4: Using Custom Normalization Strategies
- Example 5: Plain Python Configuration
- Built-in Normalization Strategies
- The Bigger Picture
- Where to Go Next
- Tricks of the Trade
- Cheat Sheet
- Integrating an Existing Gym Environment
- Structured Environments and Action Masking
- Combining Maze with other RL Frameworks
- Plain Python Training Example (high-level)
- Plain Python Training Example (low-level)
- Tensorboard and Command Line Logging
- Event and KPI Logging
- Action Distribution Visualization
- Observation Logging