Self-Supervised Learning Methods & Vision Transformer Architecture for Extracting Global and Meaningful Representations
Tom Rahav & Noam Moshe
Supervised by Tal Neoran & Roy Velich
In our project, we implemented and deployed a self-supervised learning (SSL) architecture called DINO, a self-distillation, non-contrastive learning approach. The architecture is model-agnostic but has been shown empirically to work best with the Vision Transformer (ViT) architecture.
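To make the approach concrete, the following is a minimal PyTorch sketch of the core DINO training step: a student network is trained to match the centered, sharpened outputs of a teacher network on differently augmented views, and the teacher's weights are updated as an exponential moving average (EMA) of the student's. The toy MLP backbone, dimensions, temperatures, and momentum values are illustrative placeholders, not the project's actual configuration.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class DinoLoss(nn.Module):
    """Cross-entropy between sharpened teacher and student outputs,
    with a running center on the teacher outputs to avoid collapse."""
    def __init__(self, out_dim, student_temp=0.1, teacher_temp=0.04,
                 center_momentum=0.9):
        super().__init__()
        self.student_temp = student_temp
        self.teacher_temp = teacher_temp
        self.center_momentum = center_momentum
        self.register_buffer("center", torch.zeros(1, out_dim))

    def forward(self, student_out, teacher_out):
        # Student: softmax at a higher temperature (softer distribution).
        student_log_probs = F.log_softmax(student_out / self.student_temp, dim=-1)
        # Teacher: centered, then sharpened; no gradient flows through it.
        teacher_probs = F.softmax(
            (teacher_out - self.center) / self.teacher_temp, dim=-1
        ).detach()
        loss = -(teacher_probs * student_log_probs).sum(dim=-1).mean()
        self.update_center(teacher_out)
        return loss

    @torch.no_grad()
    def update_center(self, teacher_out):
        # Running mean of teacher outputs, used for centering.
        self.center = (self.center * self.center_momentum
                       + teacher_out.mean(dim=0, keepdim=True)
                       * (1 - self.center_momentum))

@torch.no_grad()
def ema_update(teacher, student, momentum=0.996):
    # Teacher weights are an exponential moving average of the student's.
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1 - momentum)

# Toy MLP backbones standing in for the ViT used in the project.
student = nn.Sequential(nn.Linear(384, 256), nn.GELU(), nn.Linear(256, 64))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)

criterion = DinoLoss(out_dim=64)
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)

# Two augmented views of the same batch (random tensors as placeholders).
view_a, view_b = torch.randn(8, 384), torch.randn(8, 384)
loss = criterion(student(view_a), teacher(view_b))
optimizer.zero_grad()
loss.backward()
optimizer.step()
ema_update(teacher, student)
```

Because the teacher receives no gradients and only the centering and EMA keep it moving, this setup avoids the negative pairs that contrastive methods rely on, which is what makes DINO non-contrastive.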
Training models with this SSL architecture allowed us to assess whether self-distillation does indeed produce high-quality features, uncover underlying structure in the data, and capture global relations.
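One standard way to probe feature quality, used in the original DINO paper, is a k-nearest-neighbour classifier on embeddings from the frozen backbone. The sketch below uses scikit-learn with randomly generated placeholder features and labels standing in for real extracted embeddings; it only illustrates the shape of such an evaluation, not the project's actual results.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Placeholder arrays: in practice these would be embeddings produced by a
# forward pass of the frozen SSL backbone over train and test images.
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(500, 384))
train_labels = rng.integers(0, 5, size=500)
test_feats = rng.normal(size=(100, 384))
test_labels = rng.integers(0, 5, size=100)

# Cosine distance suits the normalized-embedding setting typical of SSL.
knn = KNeighborsClassifier(n_neighbors=20, metric="cosine")
knn.fit(train_feats, train_labels)
accuracy = knn.score(test_feats, test_labels)
print(f"k-NN probe accuracy on frozen features: {accuracy:.3f}")
```

A high k-NN accuracy without any fine-tuning indicates that the learned representations separate the classes on their own, which is exactly the property the project set out to verify.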
In the report, we present an overview of prior research in computational histopathology, detail the methods we used, examine the experiments we conducted along with their outcomes, and conclude by identifying limitations of our proposed solutions and suggesting directions for future work.
Please see the project report.
Please see the final presentation.