fusilli.fusionmodels.tabularfusion.attention_and_activation

Using activation functions to fuse tabular data, with self-attention on the second tabular modality.

Classes

AttentionAndSelfActivation(prediction_task, ...)

Applies an attention mechanism to the second tabular modality's features and fuses the two modalities' feature maps with an element-wise product followed by tanh and sigmoid activation functions.

ChannelAttentionModule(num_features[, ...])

Channel attention module.

class AttentionAndSelfActivation(prediction_task, data_dims, multiclass_dimensions)[source]

Bases: ParentFusionModel, Module

Applies an attention mechanism to the second tabular modality's features and fuses the feature maps of the two tabular modalities with an element-wise product followed by tanh and sigmoid activation functions. Afterwards, the first tabular modality's feature map is concatenated with the fused feature map.
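Example (an illustrative sketch of the fusion operation described above, in plain PyTorch; the feature sizes and the stand-in attention block are assumptions, not the library's internals):

import torch
import torch.nn as nn

# Illustrative branch outputs: both modalities reduced to 64 features (assumed sizes).
tab1_map = torch.randn(8, 64)   # first tabular modality feature map
tab2_map = torch.randn(8, 64)   # second tabular modality feature map

# Stand-in channel attention over the second modality (see ChannelAttentionModule below).
attention = nn.Sequential(nn.Linear(64, 4), nn.ReLU(), nn.Linear(4, 64), nn.Sigmoid())
tab2_map = tab2_map * attention(tab2_map)

# Element-wise product of the two feature maps, then tanh and sigmoid activations.
fused = torch.sigmoid(torch.tanh(tab1_map * tab2_map))

# Concatenate the first modality's feature map with the fused feature map.
out = torch.cat([tab1_map, fused], dim=1)   # shape (8, 128), matching fused_dim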

prediction_task

Type of prediction to be performed.

Type:

str

mod1_layers

Dictionary containing the layers of the first modality. Calculated in the set_mod1_layers() method.

Type:

nn.ModuleDict

mod2_layers

Dictionary containing the layers of the second modality. Calculated in the set_mod2_layers() method.

Type:

nn.ModuleDict

fused_dim

Number of features of the fused layers. In this model, it is the size of the first tabular modality's layer output plus the size of the second tabular modality's layer output.

Type:

int

fused_layers

Sequential layer containing the fused layers. Calculated in the calc_fused_layers() method.

Type:

nn.Sequential

final_prediction

Sequential layer containing the final prediction layers. The final prediction layers take in the number of features of the fused layers as input. Calculated in the calc_fused_layers() method.

Type:

nn.Sequential

attention_reduction_ratio

Reduction ratio of the channel attention module.

Type:

int

__init__(prediction_task, data_dims, multiclass_dimensions)[source]
Parameters:
  • prediction_task (str) – Type of prediction to be performed.

  • data_dims (list) – List containing the dimensions of the data.

  • multiclass_dimensions (int) – Number of classes in the multiclass classification task.
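Example (a hedged construction sketch; the argument values are assumptions for illustration: a binary task, tabular modalities with 10 and 15 features, and data_dims assumed to be a [tabular-1 features, tabular-2 features, image dimensions] list with None for the absent imaging modality):

from fusilli.fusionmodels.tabularfusion.attention_and_activation import (
    AttentionAndSelfActivation,
)

model = AttentionAndSelfActivation(
    prediction_task="binary",       # assumed task name for illustration
    data_dims=[10, 15, None],       # assumed data_dims convention
    multiclass_dimensions=None,     # assumed unused for a non-multiclass task
)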

calc_fused_layers()[source]

Calculate the fused layers.

Return type:

None

forward(x)[source]

Forward pass of the model.

Parameters:

x (tuple) – Tuple containing the input data.

Returns:

List containing the output of the model.

Return type:

list
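Example (continuing the construction sketch above; the forward pass takes a tuple of the two tabular tensors and returns a list, and the batch size of 8 is an arbitrary assumption):

import torch

x_tab1 = torch.randn(8, 10)   # matches the assumed first tabular dimension
x_tab2 = torch.randn(8, 15)   # matches the assumed second tabular dimension

outputs = model((x_tab1, x_tab2))   # list containing the model's prediction tensor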

fusion_type = 'operation'

Type of fusion.

Type:

str

get_fused_dim()[source]

Get the number of features of the fused layers. This assumes that mod1_layers and mod2_layers output the same dimension. For example, if both branches output 64 features, fused_dim is 128.

method_name = 'Activation function and tabular self-attention'

Name of the method.

Type:

str

modality_type = 'tabular_tabular'

Type of modality.

Type:

str

class ChannelAttentionModule(num_features, reduction_ratio=16)[source]

Bases: Module

Channel attention module.

fc1

First fully connected layer.

Type:

nn.Linear

relu

ReLU activation function.

Type:

nn.ReLU

fc2

Second fully connected layer.

Type:

nn.Linear

sigmoid

Sigmoid activation function.

Type:

nn.Sigmoid

__init__(num_features, reduction_ratio=16)[source]
Parameters:
  • num_features (int) – Number of features of the input tensor.

  • reduction_ratio (int) – Reduction ratio of the channel attention module.

forward(x)[source]

Forward pass of the channel attention module.

Parameters:

x (torch.Tensor) – Input tensor.

Returns:

Output tensor after applying the channel attention module.

Return type:

torch.Tensor
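Example (a minimal sketch of a squeeze-and-excitation style channel attention module consistent with the attributes documented above (fc1, relu, fc2, sigmoid); this is an illustrative re-implementation under assumptions, not the library's exact code):

import torch
import torch.nn as nn

class ChannelAttentionSketch(nn.Module):
    """Illustrative channel attention: squeeze features, re-expand, gate the input."""

    def __init__(self, num_features, reduction_ratio=16):
        super().__init__()
        self.fc1 = nn.Linear(num_features, num_features // reduction_ratio)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(num_features // reduction_ratio, num_features)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        weights = self.sigmoid(self.fc2(self.relu(self.fc1(x))))
        # Gating the input by the learned per-feature weights is an assumption about
        # how the module is applied; the documentation only states that the output
        # is the tensor after applying the channel attention module.
        return x * weights


# Usage: 64 input features with the default reduction ratio of 16 (assumed sizes).
attn = ChannelAttentionSketch(num_features=64)
out = attn(torch.randn(8, 64))   # same shape as the input, re-weighted per feature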