
Self-Supervised Learning Methods & Vision Transformer Architecture for Extracting Global and Meaningful Representations

Tom Rahav & Noam Moshe

Supervised by Tal Neoran & Roy Velich

Abstract

In our project, we implemented and deployed an SSL architecture called DINO, a self-distillation, non-contrastive learning approach. The architecture is model-agnostic but has been shown empirically to work best with the Vision Transformer (ViT) architecture.
The models trained with this SSL approach allowed us to assess whether self-distillation does indeed produce high-quality features that capture the underlying structure of the data and its global relations.
In this report, we present an overview of previous research in computational histopathology, detail the methods we used, examine the experiments we conducted along with their outcomes, and conclude by identifying potential limitations of our proposed solutions and recommending avenues for improvement and future research.
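To make the self-distillation idea concrete, the following Python sketch illustrates a single simplified DINO-style update (two augmented views, no multi-crop). It assumes PyTorch and hypothetical placeholders not taken from the report: `student` and `teacher` networks with identical architectures (e.g. ViT heads producing K-dimensional outputs), an `optimizer` over the student, two views `view1`/`view2`, and a running `center` vector; the temperature and momentum values are roughly the defaults reported in the DINO paper. This is a minimal sketch of the technique, not the implementation deployed in the project.

import torch
import torch.nn.functional as F

def dino_step(student, teacher, optimizer, view1, view2, center,
              tau_s=0.1, tau_t=0.04, ema=0.996, center_momentum=0.9):
    """One simplified DINO update: student matches centered, sharpened
    teacher outputs on crossed views; teacher follows the student by EMA."""
    with torch.no_grad():
        # Teacher targets: center (to avoid collapse) and sharpen with a
        # low temperature tau_t. No gradients flow through the teacher.
        t_out1, t_out2 = teacher(view1), teacher(view2)
        t1 = F.softmax((t_out1 - center) / tau_t, dim=-1)
        t2 = F.softmax((t_out2 - center) / tau_t, dim=-1)

    s1 = F.log_softmax(student(view1) / tau_s, dim=-1)
    s2 = F.log_softmax(student(view2) / tau_s, dim=-1)

    # Cross-entropy between teacher and student on *different* views.
    loss = -0.5 * ((t1 * s2).sum(-1).mean() + (t2 * s1).sum(-1).mean())

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    with torch.no_grad():
        # Teacher weights: exponential moving average of the student.
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(ema).add_((1.0 - ema) * p_s)
        # Center: running mean of teacher outputs over the batch.
        batch_center = torch.cat([t_out1, t_out2]).mean(0)
        center = center * center_momentum + batch_center * (1.0 - center_momentum)

    return loss.item(), center

Centering and sharpening act in opposition: centering alone would push the teacher toward a uniform output, sharpening alone toward a one-hot output, and together they discourage the degenerate solutions that make non-contrastive SSL prone to collapse. The full method additionally uses multi-crop augmentation and schedules for the EMA momentum and teacher temperature.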

Project Report

Please see the project report.

Final Presentation

Please see the final presentation.
