SelfAttentionConvBlock¶
class maze.perception.blocks.general.self_attention_conv.SelfAttentionConvBlock(*args: Any, **kwargs: Any)¶

Implementation of the self-attention block described in https://arxiv.org/abs/1805.08318.
This block can be applied to 2D data (images) to compute self-attention. If two out_keys are given, the forward pass additionally returns the attention map under the second out_key; otherwise only the computed self-attention output is returned.
- Parameters
in_keys – Keys identifying the input tensors. First key is the input tensor; second optional key is an attention mask.
out_keys – Keys identifying the output tensors. First key is self-attention output, second optional key is attention map.
in_shapes – List of input shapes.
embed_dim – The embedding dimensionality, which should evenly divide the number of input channels.
add_input_to_output – Specifies whether the computed self-attention is added to the input before being returned.
bias – Specifies whether to use a bias in the projections.
-
forward(block_input: Dict[str, torch.Tensor]) → Dict[str, torch.Tensor]¶ (overrides PerceptionBlock)

Implementation of the PerceptionBlock interface.
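The SAGAN-style self-attention referenced above (https://arxiv.org/abs/1805.08318) can be sketched in plain PyTorch as follows. This is a hypothetical minimal sketch: the class name `SelfAttention2d` and its signature are illustrative and do not reflect the actual maze implementation, which operates on key-indexed tensor dictionaries instead.

```python
import torch
from torch import nn


class SelfAttention2d(nn.Module):
    """Minimal SAGAN-style self-attention over 2D feature maps (illustrative sketch)."""

    def __init__(self, in_channels: int, embed_dim: int,
                 add_input_to_output: bool = True, bias: bool = True):
        super().__init__()
        # 1x1 convolutions project the input into query/key/value spaces.
        self.query = nn.Conv2d(in_channels, embed_dim, kernel_size=1, bias=bias)
        self.key = nn.Conv2d(in_channels, embed_dim, kernel_size=1, bias=bias)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1, bias=bias)
        # Learnable gate, initialized to zero as in the SAGAN paper, so the
        # block initially acts as an identity when the input is added back.
        self.gamma = nn.Parameter(torch.zeros(1))
        self.add_input_to_output = add_input_to_output

    def forward(self, x: torch.Tensor, mask: torch.Tensor = None):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, h*w, embed_dim)
        k = self.key(x).flatten(2)                    # (b, embed_dim, h*w)
        v = self.value(x).flatten(2)                  # (b, c, h*w)
        logits = torch.bmm(q, k)                      # (b, h*w, h*w)
        if mask is not None:
            logits = logits.masked_fill(mask == 0, float("-inf"))
        attention = torch.softmax(logits, dim=-1)
        out = torch.bmm(v, attention.transpose(1, 2)).view(b, c, h, w)
        out = self.gamma * out
        if self.add_input_to_output:
            out = out + x
        # Returning the attention map mirrors the optional second out_key.
        return out, attention


# Usage sketch: the output keeps the input's shape, and the attention map
# relates every spatial position to every other position.
x = torch.randn(2, 8, 4, 4)
block = SelfAttention2d(in_channels=8, embed_dim=4)
out, attn = block(x)
```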