
Affinity Workshop: WiML Workshop 1

Transformer-based Self-Supervised Learning for Medical Images

Mariia Dobko · Mariia Kokshaikyna


Medical tasks often lack large amounts of labeled data, so a self-supervised learning approach can be very helpful in extracting useful information without supervision. Current best-performing self-supervised methods use vision transformers, which allow them to build meaningful global-scale connections between embeddings and activation maps for different classes. Inspired by the DINO approach, we tested its performance on two medical problems: pneumothorax detection and tissue semantic segmentation. The method uses self-distillation with no labels: a student and a teacher network share the same architecture but maintain different parameters, with the teacher updated as an exponential moving average of the student rather than by backpropagation.
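The self-distillation setup described above can be sketched in a few lines. This is a minimal, hypothetical simplification (a single linear layer stands in for the vision transformer backbone; class and function names are illustrative, not from the DINO codebase): the student and teacher start from identical weights, the student is trained to match the teacher's sharpened output distribution on a different augmented view, and the teacher tracks the student via an exponential moving average.

```python
import numpy as np

def softmax(x, temp):
    """Temperature-scaled softmax; lower temp sharpens the distribution."""
    z = (x - x.max()) / temp
    e = np.exp(z)
    return e / e.sum()

class SelfDistillation:
    """Sketch of DINO-style self-distillation with no labels.

    Student and teacher share one architecture (here a single linear
    projection) but keep separate parameters; the teacher is never
    trained by gradients, only updated as an EMA of the student.
    """

    def __init__(self, dim, out_dim, momentum=0.996, seed=0):
        rng = np.random.default_rng(seed)
        self.student = rng.normal(scale=0.1, size=(dim, out_dim))
        self.teacher = self.student.copy()  # identical initialization
        self.momentum = momentum

    def loss(self, teacher_view, student_view,
             t_student=0.1, t_teacher=0.04):
        # Teacher sees one augmented view, student another; cross-entropy
        # pulls the student's distribution toward the teacher's sharper one.
        p_t = softmax(teacher_view @ self.teacher, t_teacher)
        p_s = softmax(student_view @ self.student, t_student)
        return -np.sum(p_t * np.log(p_s + 1e-12))

    def update_teacher(self):
        # EMA update: teacher parameters slowly track the student.
        m = self.momentum
        self.teacher = m * self.teacher + (1 - m) * self.student
```

In the full method the two views are random crops/augmentations of the same image, and additional tricks (centering of teacher outputs, multi-crop) stabilize training; those are omitted here for brevity.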
