class maze.perception.blocks.feed_forward.graph_conv.GraphConvBlock(*args: Any, **kwargs: Any)

A block containing multiple subsequent graph convolution stacks.

One convolution stack consists of one graph convolution followed by an activation layer. The block expects the input tensors to have the form:

  • Feature matrix (first in_key): (batch-dim, num-of-nodes, feature-dim)

  • Adjacency matrix (second in_key): (batch-dim, num-of-nodes, num-of-nodes), expected to be symmetric

and returns a tensor of shape (batch-dim, num-of-nodes, feature-out-dim).

  • in_keys – Two keys identifying the feature matrix and adjacency matrix respectively.

  • out_keys – One key identifying the output tensor.

  • in_shapes – List of input shapes.

  • hidden_features – List containing the number of hidden features for hidden layers.

  • bias – Specify if a bias should be used at each layer (either a single value for all layers, or a list with one value per layer).

  • non_lins – The non-linearity/ies to apply after each layer (either a single one shared by all layers, or a list with one per layer).

  • node_self_importance – Scalar specifying how important a given node is to itself (defaults to 1).

  • trainable_node_self_importance – Specify whether node_self_importance should be a constant or a trainable parameter initialized to node_self_importance.

  • preprocess_adj – Specify whether to preprocess the adjacency matrix, that is, compute adj^ := D_bar^(-1/2) A_bar D_bar^(-1/2) in every forward pass for the whole batch, where A_bar := A + I_n * node_self_importance and D_bar_ii := sum_j A_bar_ij. If this is set to false, the already normalized adj^ is expected as input.
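The preprocessing step above is the symmetric adjacency normalization familiar from graph convolutional networks. A minimal NumPy sketch for a single (unbatched) graph is shown below; the function name normalize_adjacency is illustrative and not part of the Maze API:

```python
import numpy as np

def normalize_adjacency(adj: np.ndarray, node_self_importance: float = 1.0) -> np.ndarray:
    """Compute adj^ = D_bar^(-1/2) @ A_bar @ D_bar^(-1/2) for one graph.

    A_bar = A + I_n * node_self_importance adds (weighted) self-loops,
    and D_bar is the diagonal degree matrix of A_bar.
    """
    n = adj.shape[0]
    a_bar = adj + np.eye(n) * node_self_importance
    degrees = a_bar.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(degrees)
    # Equivalent to D_bar^(-1/2) @ a_bar @ D_bar^(-1/2), without
    # materializing the diagonal matrices.
    return a_bar * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

# Example: a symmetric adjacency matrix of a 3-node path graph.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
adj_hat = normalize_adjacency(adj)
```

For a symmetric input adjacency, the normalized adj^ is symmetric as well; with preprocess_adj set to false, a matrix computed this way is what the block expects as its second input.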


Compiles a block-specific dictionary of network layers.

This can be overridden by derived classes (e.g. to obtain a ‘BatchNormalizedConvolutionBlock’).


Returns an ordered dictionary of torch modules [str, nn.Module].

normalized_forward(block_input: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor]

(overrides ShapeNormalizationBlock)

Implementation of the ShapeNormalizationBlock interface.