Perception Module¶
This page contains the reference documentation for the Maze perception module.
Overview
maze.perception.blocks¶
These are basic neural network building blocks and interfaces:
- Interface for all perception blocks.
- Perception block normalizing the input and de-normalizing the output tensor dimensions.
- An inference block combining multiple perception blocks into one prediction module.
- Models a perception module inference graph.
Feed Forward: these are built-in feed forward building blocks:
- A block containing multiple subsequent dense layers.
- A block containing multiple subsequent VGG-style convolutions.
- A block containing multiple subsequent strided convolution layers.
- A block containing multiple subsequent graph convolution stacks.
- A block containing multiple subsequent graph (multi-head) attention stacks.
- Implementation of a torch MultiHeadAttention block.
- PointNet block that embeds a variable-sized set of point observations into a fixed-size feature vector via the PointNet mechanics.
Recurrent: these are built-in recurrent building blocks:
- A block containing multiple subsequent LSTM layers followed by a final time-distributed dense layer with explicit non-linearity.
General: these are built-in general-purpose building blocks:
- A flattening block.
- A feature correlation block.
- A feature concatenation block.
- A block applying a custom callable.
- A global average pooling block.
- A block applying global pooling with optional masking.
- A multi-index-slicing block.
- A repeat-to-match block.
- Implementation of the self-attention block described in https://arxiv.org/abs/1805.08318.
- Implementation of the self-attention block described in https://arxiv.org/abs/1706.03762.
- A slicing block.
- An action masking block.
- A block transforming a common nn.Module into a shape-normalized Maze perception block.
Joint: these are built-in joint building blocks combining multiple perception blocks:
- A block containing a flattening stage followed by a dense layer block.
- A block containing multiple subsequent VGG-style convolution stacks followed by flattening and a dense layer block.
- A block containing multiple subsequent VGG-style convolution stacks followed by global average pooling.
- A block containing multiple subsequent strided convolutions followed by flattening and a dense layer block.
- A block containing an LSTM perception block followed by a slicing block keeping only the output of the final time step.
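The blocks above all follow the same pattern: a small torch module with a declared output size that can be stacked with others. A minimal illustrative sketch of such a dense stack in plain PyTorch (the class and argument names here are made up for illustration, not the actual Maze classes):

```python
import torch
from torch import nn


class DenseStack(nn.Module):
    """Illustrative stand-in for a dense perception block:
    multiple subsequent dense layers with a fixed non-linearity."""

    def __init__(self, in_features: int, hidden_units: list):
        super().__init__()
        layers = []
        for units in hidden_units:
            layers.append(nn.Linear(in_features, units))
            layers.append(nn.ReLU())
            in_features = units
        self.net = nn.Sequential(*layers)
        # downstream blocks can read the output size when stacking
        self.out_features = in_features

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


block = DenseStack(in_features=8, hidden_units=[32, 16])
out = block(torch.randn(4, 8))
# out has shape (4, 16): batch of 4, final hidden size 16
```

Exposing the output size on the block is what allows joint blocks (e.g. convolutions followed by flattening and a dense stage) to be chained without manual shape bookkeeping.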
maze.perception.builders¶
These are template model builders:
- Base class for perception default model builders.
- A model builder that first processes individual observations, concatenates the resulting latent spaces and then processes this concatenated output to action and value outputs.
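The concatenation builder's idea (per-observation encoders, concatenated latents, shared heads) can be sketched as follows. This is a hand-rolled illustration under assumed names (`ConcatModel`, `obs_dims`), not the Maze builder API:

```python
import torch
from torch import nn


class ConcatModel(nn.Module):
    """Encodes each observation separately, concatenates the latent
    vectors, and maps the joint latent to action logits and a value."""

    def __init__(self, obs_dims: dict, latent_dim: int, num_actions: int):
        super().__init__()
        self.encoders = nn.ModuleDict({
            key: nn.Sequential(nn.Linear(dim, latent_dim), nn.ReLU())
            for key, dim in obs_dims.items()
        })
        joint_dim = latent_dim * len(obs_dims)
        self.action_head = nn.Linear(joint_dim, num_actions)
        self.value_head = nn.Linear(joint_dim, 1)

    def forward(self, obs: dict):
        # sort keys so the concatenation order is deterministic
        latents = [self.encoders[key](obs[key]) for key in sorted(obs)]
        joint = torch.cat(latents, dim=-1)
        return self.action_head(joint), self.value_head(joint)


model = ConcatModel({"pos": 4, "features": 6}, latent_dim=8, num_actions=3)
logits, value = model({"pos": torch.randn(2, 4), "features": torch.randn(2, 6)})
# logits has shape (2, 3), value has shape (2, 1)
```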
maze.perception.models¶
These are model composers and components:
- Abstract base class and interface definitions for model composers.
- Composes template models from configs.
- Composes models from explicit model definitions.
- Represents the configuration of environment spaces (action & observation) used for the model config.
These are the maze.perception.models.policies composers:
- Interface for policy (actor) network composers.
- Composes networks for probabilistic policies.
These are the maze.perception.models.critics composers:
- Interface for critic (value function) network composers.
- Interface for critic (value function) network composers.
- One critic is shared across all sub-steps or actors (the default for standard gym-style environments).
- Each sub-step or actor gets its own individual critic.
- The first sub-step gets a regular critic; subsequent sub-steps predict a delta w.r.t. the previous sub-step's value estimate.
- alias of
- Interface for state-action (Q) critic network composers.
- One critic is shared across all sub-steps or actors (the default for standard gym-style environments).
- Each sub-step or actor gets its own individual critic.
- alias of
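The delta critic idea above (a regular value estimate for the first sub-step, with each later sub-step predicting an offset on top of the previous value) can be written out in a few lines. This is a conceptual sketch with a made-up function name, not the Maze implementation:

```python
def compose_delta_values(first_value: float, deltas: list) -> list:
    """First sub-step gets a regular value estimate; each subsequent
    sub-step contributes a delta on top of the previous value."""
    values = [first_value]
    for delta in deltas:
        values.append(values[-1] + delta)
    return values


# first sub-step predicts 1.0; later sub-steps predict deltas +0.5 and +0.25
print(compose_delta_values(1.0, [0.5, 0.25]))  # [1.0, 1.5, 1.75]
```

Predicting deltas can ease learning when consecutive sub-step values are strongly correlated, since each later head only has to model the change.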
These are the maze.perception.models.built_in models:
- Base flatten and concatenation model for policies and critics.
- Flatten and concatenation policy model.
- Flatten and concatenation state value model.
maze.perception.perception_utils¶
These are some helper functions when working with the perception module:
- Converts an observation space to the input shapes for the neural networks.
- Merges an iterable of dictionary spaces (usually observations or actions from subsequent sub-steps) into a single dictionary containing all the items.
- Merges an iterable of dictionary spaces (usually observations or actions from subsequent sub-steps) into a single dictionary containing all the items.
- Converts any struct to torch.Tensors.
- Converts torch.Tensors to numpy arrays.
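As a rough illustration of what these helpers do (hand-rolled stand-ins, not the actual Maze functions), deriving network input shapes from a dictionary observation and merging sub-step dictionaries might look like:

```python
import numpy as np


def observation_shapes(observation: dict) -> dict:
    """Map each observation key to the shape a network input would need."""
    return {key: np.asarray(value).shape for key, value in observation.items()}


def merge_dicts(*dicts: dict) -> dict:
    """Merge sub-step dictionaries into a single flat dictionary."""
    merged = {}
    for d in dicts:
        merged.update(d)
    return merged


obs = {"image": np.zeros((3, 64, 64)), "features": np.zeros(10)}
print(observation_shapes(obs))          # {'image': (3, 64, 64), 'features': (10,)}
print(merge_dicts({"a": 1}, {"b": 2}))  # {'a': 1, 'b': 2}
```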
maze.perception.weight_init¶
These are some helper functions for initializing model weights:
- Compiles a normc weight initialization function, initializing module weights with normc_initializer and biases with zeros.
- Computes the bias value for a sigmoid activation function, such as in multi-binary action spaces (Bernoulli distributions).
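The two schemes above can be sketched in numpy (illustrative stand-ins, not the Maze helpers): normc initialization draws Gaussian weights and rescales each column to a fixed L2 norm, and the sigmoid bias is the logit of the desired initial activation probability, since sigmoid(log(p / (1 - p))) = p:

```python
import numpy as np


def normc_init(shape, std: float = 1.0, rng=None) -> np.ndarray:
    """Normalized-column initialization: sample Gaussian weights and
    rescale each column so its L2 norm equals `std`."""
    if rng is None:
        rng = np.random.default_rng(0)
    w = rng.standard_normal(shape)
    return w * std / np.sqrt(np.square(w).sum(axis=0, keepdims=True))


def sigmoid_bias(target_prob: float) -> float:
    """Bias b with sigmoid(b) == target_prob, i.e. the logit of the target."""
    return float(np.log(target_prob / (1.0 - target_prob)))


w = normc_init((8, 4), std=1.0)
print(np.linalg.norm(w, axis=0))  # each of the 4 columns has norm ~1.0
print(sigmoid_bias(0.5))          # 0.0
```

Setting the sigmoid bias this way makes a multi-binary head output the desired Bernoulli probability at initialization instead of an arbitrary one.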