fusilli.fusionmodels.tabularfusion.crossmodal_att

Crossmodal multi-head attention for tabular data.

Classes

TabularCrossmodalMultiheadAttention(...)

Tabular Crossmodal multi-head attention model.

class TabularCrossmodalMultiheadAttention(prediction_task, data_dims, multiclass_dimensions)[source]

Bases: ParentFusionModel, Module

Tabular Crossmodal multi-head attention model.

This class implements a model that fuses the two types of tabular data using a cross-modal multi-head attention approach.

Inspired by the work of Golovanevsky et al. (2022) [1]: here we use two types of tabular data as the multi-modal data, instead of the three modalities used in the paper.

References

Golovanevsky, M., Eickhoff, C., & Singh, R. (2022). Multimodal attention-based deep learning for Alzheimer's disease diagnosis. Journal of the American Medical Informatics Association, 29(12), 2014–2022. https://doi.org/10.1093/jamia/ocac168

Accompanying code to Golovanevsky et al. [1]: https://github.com/rsinghlab/MADDi/blob/main/training/train_all_modalities.py
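
To illustrate the cross-modal attention idea (each modality's embedding attends to the other through a multi-head attention layer), here is a minimal PyTorch sketch. It is not the internals of this class; the embedding size, head count, and tensor shapes are illustrative assumptions.

import torch
import torch.nn as nn

embed_dim, num_heads, batch_size = 50, 2, 8
attention = nn.MultiheadAttention(embed_dim=embed_dim, num_heads=num_heads, batch_first=True)

# Each tabular modality is assumed to have been projected to embed_dim features already.
tab1 = torch.randn(batch_size, 1, embed_dim)
tab2 = torch.randn(batch_size, 1, embed_dim)

# Cross-modal attention: modality 1 queries modality 2, and vice versa.
tab1_attends_tab2, _ = attention(query=tab1, key=tab2, value=tab2)
tab2_attends_tab1, _ = attention(query=tab2, key=tab1, value=tab1)

# The two attended representations can then be concatenated and passed to
# prediction layers, in the spirit of the final_prediction attribute below.
fused = torch.cat([tab1_attends_tab2, tab2_attends_tab1], dim=-1).flatten(start_dim=1)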

prediction_task

Type of prediction to be performed.

Type:

str

attention_embed_dim

Number of features of the multihead attention layer.

Type:

int

mod1_layers

Dictionary containing the layers of the first modality.

Type:

nn.ModuleDict

mod2_layers

Dictionary containing the layers of the second modality.

Type:

nn.ModuleDict

fused_dim

Number of features of the fused layers. This is the flattened output size of the first tabular modality's layers.

Type:

int

attention

Multihead attention layer. Takes in attention_embed_dim features as input.

Type:

nn.MultiheadAttention

tab1_to_embed_dim

Linear layer. Takes in fused_dim features as input. This is the input of the multihead attention layer.

Type:

nn.Linear

tab2_to_embed_dim

Linear layer. Takes in fused_dim features as input. This is the input of the multihead attention layer.

Type:

nn.Linear

relu

ReLU activation function.

Type:

nn.ReLU

final_prediction

Sequential layer containing the final prediction layers.

Type:

nn.Sequential

__init__(prediction_task, data_dims, multiclass_dimensions)[source]
Parameters:
  • prediction_task (str) – Type of prediction to be performed.

  • data_dims (list) – List containing the dimensions of the data.

  • multiclass_dimensions (int) – Number of classes in the multiclass classification task.
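
A hedged instantiation sketch follows. The argument values and the data_dims layout (here assumed to be a list with the feature counts of the two tabular modalities) are illustrative assumptions, not a prescribed configuration.

from fusilli.fusionmodels.tabularfusion.crossmodal_att import (
    TabularCrossmodalMultiheadAttention,
)

# Assumed data_dims layout: feature counts of the first and second tabular modality.
model = TabularCrossmodalMultiheadAttention(
    prediction_task="multiclass",   # type of prediction to be performed
    data_dims=[10, 15],             # dimensions of the data (assumed layout)
    multiclass_dimensions=3,        # number of classes in the multiclass task
)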

calc_fused_layers()[source]

Calculate the fused layers.

Return type:

None

Raises:
  • ValueError – If the number of layers in the two modalities is not the same.

  • ValueError – If the layers are not contained in an nn.ModuleDict.

forward(x)[source]

Forward pass of the model.

Parameters:

x (tuple) – Tuple containing the input data.

Returns:

List containing the output of the model.

Return type:

list
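
For illustration, a forward-pass sketch using the model instantiated above; the tuple layout (one tensor per tabular modality) and the tensor shapes are assumptions consistent with the assumed data_dims.

import torch

# Assumed input: a batch of 8 samples for each tabular modality.
x = (
    torch.randn(8, 10),  # first tabular modality
    torch.randn(8, 15),  # second tabular modality
)

out = model(x)  # returns a list containing the model's output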

fusion_type = 'attention'

Type of fusion.

Type:

str

method_name = 'Tabular Crossmodal multi-head attention'

Name of the method.

Type:

str

modality_type = 'tabular_tabular'

Type of modality.

Type:

str