
Last Projects

2024
Project Title:

Visually Guided Object Insertion Into Image

Picture of Visually Guided Object Insertion Into Image
Students:

Adi Tsach and Aya Spira

Supervisors:

Noam Rotstein and Roee Ganz

Description:
The field of image editing with diffusion models has advanced significantly in recent years. Traditionally, many techniques required manual annotation to designate the insertion point of an object within an image. Our newly proposed method, however, relies solely on textual descriptions to integrate objects into images. In our project, we aim to substitute the textual descriptions of desired objects with their corresponding images.
Dataset: ADT1999/project_from_PIPE_extended
Code: https://github.com/AdiTsachGit/ML-Geometric-Image-Processing-Project.git
Project Title:

Coronary angiography video segmentation method for assisting cardiovascular disease interventional treatment

Picture of Coronary angiography video segmentation method for assisting cardiovascular disease interventional treatment
Students:

Natalie Mendelson and Daniel Katz

Supervisors:

Noam Rotstein

Description:
Percutaneous coronary intervention (PCI) is a procedure to diagnose and treat coronary artery disease but carries risks like artery dissection and perforation. The CathAlert project aims to improve an AI-based alert system to detect and warn against mispositioning of coronary catheters and wires in real-time, enhancing patient safety.
Methods: The system was developed using a large dataset of hazard-annotated frames and segmentation masks. Various training strategies and architectural modifications were implemented to improve model performance, including residual connections and temporal data processing.
Results: The enhanced system showed significant improvements in detecting key structures and potential risks, with success rates of 70-80% in segmentation tasks and ROC AUC scores of 0.71 to 0.88 for hazard alerts. Data augmentations increased robustness, though temporal data did not significantly enhance performance.
Conclusions: The research shows potential for accurate hazard identification during PCI. Future work includes further integration of temporal data and training specialized decoders to improve performance, aiming to enhance patient safety and procedural outcomes in the Cath-Lab.
Project Title:

VR Presentation Simulator Project

Picture of VR Presentation Simulator Project
Students:

Maaroof Ashkar

Supervisors:

Yaron Honen and Boaz Sterenfeld

Description:
This Virtual Reality project is designed to provide a realistic and interactive training platform for presenters. By simulating real-world environments, it enables users to practice and improve their public speaking skills while interacting with a dynamic virtual audience. This tool aims to reduce presentation anxiety and enhance communication skills by offering customizable and immersive scenarios.
Project Title:

Personalized GAN-Based Editing

Picture of Personalized GAN-Based Editing
Students:

Elias Habib and Adi Hanna

Supervisors:

Sari Hleihil

Description:
A significant challenge in GAN-based human face manipulation is maintaining a consistent identity, as existing methods often cause changes in facial characteristics. This work presents a solution that integrates personalized generative capabilities with precise image manipulation techniques to address this issue. Our method uses a personalized deep generative prior: we fine-tune a pre-trained face generator on a small set of portrait images (~100) of an individual, creating a local, low-dimensional manifold in the latent space. This enables semantic editing faithful to the individual’s key facial characteristics through a user-friendly GUI with a point-to-point dragging method. We additionally enhance the accuracy and stability of image manipulation by employing classical methods, such as normalizing latent vectors and expanding our manifold using known editing-direction vectors, all while maintaining comparable runtime efficiency.
Project Title:

Instant Splats

Picture of Instant Splats
Students:

Mais Khoury and Christeen Shaheen

Supervisors:

Sari Hleihil

Description:
This project introduces a novel method for monocular 3D reconstruction, utilizing a deep learning model that leverages Gaussian Splatting in a unique way. Instead of traditional rendering, we use Gaussian Splatting as an efficient 3D latent representation for features within an autoencoder architecture. This approach prioritizes speed and minimal input data, enabling real-time applications on local devices. Our pipeline defines a 3D latent representation that allows for memory and time-efficient rendering. By learning Gaussian parameters during training, we streamline the reconstruction process, bypassing the need for extensive input data and real-time computation. Extensive evaluations and ablations demonstrate that our model achieves robust and accurate 3D reconstructions from a single image input. This innovation addresses key challenges in the field, significantly enhancing reconstruction efficiency and speed without compromising accuracy.
Project Title:

Prediction of B/T Subtype and ETV6-RUNX1 Translocation in Pediatric Acute Lymphoblastic Leukemia By Analysis of Giemsa-Stained Whole Slide Images

Picture of Prediction of B/T Subtype and ETV6-RUNX1 Translocation in Pediatric Acute Lymphoblastic Leukemia By Analysis of Giemsa-Stained Whole Slide Images
Students:

Arkadi Piven

Supervisors:

Gil Shamai and Roy Velich

Description:
In this project, we analyze digitized Giemsa-stained bone marrow samples using deep learning techniques to predict B/T subtype and ETV6-RUNX1 translocation in Acute Lymphoblastic Leukemia (ALL). We developed a solution to predict patient-specific medical properties from digitized slides in a statistically significant manner, employing a convolutional neural network (CNN) trained through supervised learning on labeled medical data provided by multiple institutions. Moreover, we experimented with attention-based methods to further improve our results. Our results show that it is possible to predict these medical characteristics using a CNN-based method, but we need more data to further exploit the attention-based method.
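As an illustration of the attention-based direction mentioned above, here is a minimal sketch of attention pooling over tile embeddings, in the spirit of attention-based multiple-instance learning (PyTorch; the dimensions and names are illustrative, not the project's actual code):

import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Aggregate N tile embeddings of one slide into a single slide vector."""
    def __init__(self, dim: int = 512, hidden: int = 128):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )

    def forward(self, tiles: torch.Tensor) -> torch.Tensor:
        # tiles: (N, dim) CNN embeddings of one slide's tiles
        weights = torch.softmax(self.score(tiles), dim=0)  # (N, 1) attention weights
        return (weights * tiles).sum(dim=0)                # weighted mean -> (dim,)

pool = AttentionPool()
slide_vec = pool(torch.randn(1000, 512))  # 1000 tiles -> one 512-d slide embedding

A slide-level classifier can then operate on the pooled vector, which is what makes the approach suitable for labels that exist only per patient rather than per tile.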
2023
Project Title:

Cannabis Detection by Microscope Inspection

Picture of Cannabis Detection by Microscope Inspection
Students:

Emma Attal and Arthur Soussan

Supervisors:

Alon Zvirin and Yaron Honen

Description:
Synthetic cannabinoids contain chemicals that may be much more dangerous than regular cannabis, and their production has increased recently. Therefore, the police are interested in identifying them and differentiating between real and synthetic cannabis. The forensic department of the Israel Police collaborated with the Geometric Image Processing (GIP) laboratory at the Technion to create improved identification using innovative technology tools. In our project we developed computer-vision-based models to identify real cannabis from microscopic pictures of cannabis.
Project Title:

Computational oncology by analysis and alignment of histopathology slide images

Picture of Computational oncology by analysis and alignment of histopathology slide images
Students:

Neta Becker and Shelly Francis

Supervisors:

Roy Velich and Tal Neoran and Gil Shamai

Description:
In modern medicine, biopsy is a common method for diagnosing breast cancer. The tissue sample is sliced into ultra-thin sections, each of which is stained using techniques such as IHC or H&E to highlight different features. A pathologist then examines the stained sections under a microscope to determine whether cancerous cells are present. While IHC staining provides valuable information, it is expensive and time-consuming. H&E staining, on the other hand, is quicker and more cost-effective. Our goal was to reduce the reliance on IHC staining by calculating the alignment between IHC- and H&E-stained slides. This would allow us to train neural networks using both types of slides, transfer pathological information from one slide to another, and potentially rely solely on H&E-stained slides. To achieve this goal, we developed a GUI that allows matching points between pairs of slides. After obtaining a set of matched points for each pair, we explored various methods for calculating the alignment, including automatic alignment, manual alignment, and triangulation. We examined different models based on various articles and proposed methods for comparing the results achieved through each method, in order to develop an algorithm that meets our goal.
Project Title:

Movement Analysis In Heart Tissue

Picture of Movement Analysis In Heart Tissue
Students:

David Berestetsky, Elad Tsur and Ofir Eldar

Supervisors:

Noam Rotstein and Idan Haim and Dr. Oren Caspi

Description:
Stem cells are being cultivated in Dr. Oren Caspi's medical research laboratory at the Faculty of Medicine. These stem cells are specifically categorized as cardiomyocytes. Using these cardiomyocytes, 3D tissues are constructed around two silicone rectangles. The movement of the rectangles is driven by the pulses generated by the tissues, and there is an established direct correlation between the displacement of the columns and the level of force exerted by the tissue. Our project optimizes and automates movement recognition on pre-captured lab videos: the program analyzes the active force of the captured tissue by recognizing the movement of the rectangular silicone implants inside it.
Project Title:

Domain Adaptation

Picture of Domain Adaptation
Students:

Lior Grauer and Eden Konopnicki

Supervisors:

Tal Neoran and Roy Velich

Description:
In our project we implemented a domain adaptation model. Domain adaptation refers to the process of adapting a machine learning model trained on a source domain so that it performs effectively on a target domain. The ultimate goal of domain adaptation is to enable models to generalize well across different domains, allowing them to perform accurately and reliably in real-world settings.
Project Title:

Nested Diffusion Processes for Anytime Image Generation

Picture of Nested Diffusion Processes for Anytime Image Generation
Students:

Noam Elata

Supervisors:

Bahjat Kawar & Michael Elad

Description:
In this work, we propose an anytime diffusion-based method that can generate viable images when stopped at arbitrary times before completion. Using existing pretrained diffusion models, we show that the generation scheme can be recomposed as two nested diffusion processes, enabling fast iterative refinement of a generated image. We use this Nested Diffusion approach to peek into the generation process and enable flexible scheduling based on the instantaneous preference of the user. In experiments on ImageNet and Stable Diffusion-based text-to-image generation, we show, both qualitatively and quantitatively, that our method’s intermediate generation quality greatly exceeds that of the original diffusion model, while the final slow generation result remains comparable.
Project Title:

Self-Supervised Learning Methods & Vision Transformer Architecture for extracting global and meaningful representations

Picture of Self-Supervised Learning Methods & Vision Transformer Architecture for extracting global and meaningful representations
Students:

Tom Rahav & Noam Moshe

Supervisors:

Tal Neoran & Roy Velich

Description:
In our project, we implemented and deployed an SSL architecture called DINO, a self-distillation, non-contrastive learning approach. The architecture is model-agnostic but has empirically been shown to work best with the Vision Transformer (ViT) architecture. The models trained using this SSL architecture enabled us to assess whether this kind of SSL indeed produces high-quality features and learns to find underlying structures in the data and identify global relations. In this report, we present an overview of previous research in computational histopathology, detail the methods we utilized, examine the experiments we conducted along with their outcomes, and conclude by identifying potential constraints in our proposed solutions and recommending avenues for further improvement or future research.
Project Title:

Deep Learning Approach for Tissue Segmentation in Whole Slide Images

Picture of Deep Learning Approach for Tissue Segmentation in Whole Slide Images
Students:

Haneen Naaran, Amir Bishara

Supervisors:

Tal Neoran & Roy Velich

Description:
We investigate the use of deep learning models for the semantic segmentation of Hematoxylin and Eosin whole slide images (H&E WSIs). H&E images are commonly used in pathology for diagnosis. Automated segmentation can aid in the efficient identification of different structures within these WSIs. This segmentation problem can be solved by classic computer vision algorithms, like Otsu’s thresholding method. However, that method does not generalize well across different datasets and is not robust to artifacts in the scans. Hence, we studied the use of deep learning models for this segmentation task, aiming for better generalization ability and robustness to artifacts. We trained and tested several deep learning models, including U-Net and FusionNet, in a supervised manner on datasets of H&E images that include foreground tissue segmentations produced by Otsu’s thresholding method. We evaluated their performance using various metrics, including visual examination. Our results show that these models can achieve high accuracy in semantic segmentation and generalize well across different datasets.
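For reference, the Otsu baseline that supplied the training masks can be approximated in a few lines; here is a hedged sketch using scikit-image (the small-object cleanup step is our own addition, not necessarily part of the original pipeline):

import numpy as np
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects

def tissue_mask(rgb: np.ndarray) -> np.ndarray:
    """Foreground tissue mask for a downsampled H&E slide image."""
    gray = rgb2gray(rgb)                  # tissue is darker than the glass background
    mask = gray < threshold_otsu(gray)    # global Otsu threshold
    return remove_small_objects(mask, min_size=500)  # drop small scan artifacts

The global threshold is exactly what makes the baseline fragile: pen marks, dust, and staining variation shift the histogram, which is the robustness gap the learned models aim to close.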
Project Title:

Procedurally generated Space

Picture of Procedurally generated Space
Students:

Saadi Saadi

Supervisors:

Yaron Honen & Boaz Sterenfeld

Description:
The objective of the project was to create planets using simplex noise, with a focus on developing a level-of-detail (LOD) system that allows rendering small details while maintaining optimal performance. Next, we created a system that generates endless space, with the planets we made before serving as the building blocks. Afterward, we added a planet-naming system and a feature for locating lost planets within the vast expanse of space.
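The project itself runs in a game engine, but the core noise-sampling idea can be sketched in Python; this is a hedged illustration using the third-party `noise` package, with octave settings that are purely illustrative:

import numpy as np
from noise import snoise2  # simplex-noise functions from the `noise` package

def heightmap(size: int = 256, scale: float = 64.0) -> np.ndarray:
    """Layered (fractal) simplex noise, usable as a terrain height field."""
    h = np.zeros((size, size))
    for y in range(size):
        for x in range(size):
            h[y, x] = snoise2(x / scale, y / scale,
                              octaves=5, persistence=0.5, lacunarity=2.0)
    return h

Summing several octaves of noise at increasing frequency and decreasing amplitude is what produces both large landmasses and fine surface detail, and the LOD system decides how many of those details are actually evaluated per frame.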
Project Title:

VR - High Dynamic Avatars Scene

Picture of VR - High Dynamic Avatars Scene
Students:

Ran Marashi, Michael Radzeiovsky, Eitan Levi

Supervisors:

Boaz Sternfeld and Yaron Honen

Description:
A realistic, up-close life experience in a Japanese village using VR, featuring famous characters. https://ranmarashi7.wixsite.com/vr-project
Project Title:

AgriVision

Picture of AgriVision
Students:

Ron Benchetrit and Mor Lavon

Supervisors:

Alon Zvirin and Yaron Honen

Description:
We extend the MSCG-Net with agricultural indices and evaluate the results. We were able to improve the model's results with the addition of only a few parameters. However, we later found that the improvement may stem from the extra parameters of the network rather than from the index transforms themselves.
2022
Project Title:

High Perceptual Quality Single Image Super Resolution

Picture of High Perceptual Quality Single Image Super Resolution
Students:

David-Elone Zana and Odelia Bellaiche Bensegnor

Supervisors:

Theo Adrai

Description:
Nowadays, the metric used to calculate the statistical distance between different pictures is the FID (Fréchet Inception Distance): it uses a pretrained Inception network and the W2 (Fréchet) distance to approximate the distance between the feature distributions. We assume that the latent representation of each dataset has a Gaussian distribution. We also assume that the Gaussian distribution is not degenerate, i.e. that the covariance matrix is positive definite: none of the principal components are zero. On top of these assumptions, the method suffers from numerical instability and cannot score a single image. These limitations motivated us to look for a new metric. Can we find a new solution that does not need those assumptions, and with which we can score a single image while maintaining numerical stability?
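For reference, the Fréchet (Wasserstein-2) distance between the two Gaussians that FID fits to the Inception features is

d^2\big(\mathcal{N}(\mu_1,\Sigma_1),\,\mathcal{N}(\mu_2,\Sigma_2)\big) = \lVert \mu_1-\mu_2 \rVert_2^2 + \operatorname{Tr}\!\big(\Sigma_1+\Sigma_2-2(\Sigma_1\Sigma_2)^{1/2}\big),

which is exactly where the assumptions above enter: the matrix square root is only well behaved when the covariance matrices are positive definite, and estimating them at all requires a whole set of images rather than a single one.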
Project Title:

VR Squash

Picture of VR Squash
Students:

Noofar Ophir and Dor Brekhman

Supervisors:

Yaron Honen & Boaz Sterenfeld

Description:
The goal of the project is to simulate a practice environment for squash players. The scene is a squash court in which the player can grab the ball and hit it with a racket. During the game, the player can practice squash skills and counter-strikes against a random cannon serve.
Project Title:

Brick Smash VR

Picture of Brick Smash VR
Students:

Daniel Cohen and Meir Friedmann

Supervisors:

Yaron Honen & Boaz Sterenfeld

Description:
Brick Smash VR is a VR game inspired by the widely popular classic video games Breakout and Arkanoid. Our goal was to take these classics and transform them into an immersive VR experience. In Brick Smash VR, the player plays the part of the paddle below the bricks in the original games (as if looking up at the bricks to be broken). The player has to break all the bricks before time runs out or they run out of balls. They start by throwing the ball; the same ball can then be caught with either of the in-game hands and rethrown, or hit with a racket held in either hand. Furthermore, the player tries to collect points along the way by catching the fragments of bricks they destroy. The game also features several powerups that manipulate the game in various ways (moving the brick walls, gaining extra balls, point multipliers, etc.).
Project Title:

PHOTO WAKEUP

Picture of PHOTO WAKEUP
Students:

Yuval Reshef, Tal Haklay and Asaf-Yosef Buchnick

Supervisors:

Yaron Honen & Boaz Sterenfeld

Description:
We implement the method proposed in “Photo Wake-Up: 3D Character Animation from a Single Photo” (Weng et al.). Our pipeline takes a single photo of a human character as input and outputs an animated version of it, using a specific animation we decided to focus on. Our project's main contribution is using new and improved libraries to implement the pipeline described in the paper.
Website: https://talhaklay535.wixsite.com/photo-wakeup
Code: https://github.com/abuchnick/PhotoWakeUp
Project Title:

AR matrix effect

Picture of AR matrix effect
Students:

David Koplevatzkii and Artem Shtefan

Supervisors:

Yaron Honen & Boaz Sterenfeld

Description:
The goal of the project was to reconstruct the famous scene from "The Matrix", where the main character sees everything as digital rain. Whereas previous work applied a "digital rain" shader to a pre-defined 3D scene, our goal was to apply the shader in real time. We achieved this using a triplanar shader mapped onto the mesh surface generated by the LiDAR sensor.
Project Title:

Chicken Invaders AR

Picture of Chicken Invaders AR
Students:

Gal Katz, Tamir Daniel and Ari Amitai

Supervisors:

Yaron Honen & Boaz Sterenfeld

Description:
The project is inspired by our childhood game, Chicken Invaders. We used AR technology to bring Chicken Invaders to life. Using Microsoft HoloLens and Unity AR technology, we succeeded in our mission. With HoloLens AR, the game is played by scanning the environment and opening portals on walls around the player. Chickens emerge from the portals and shoot eggs at the player. The player can kill the chickens by shooting a laser at them; once a chicken has been destroyed, it turns into a baked chicken, just like in the original game. The goal is to get as many chickens as possible. Our engine identifies the environment and hand gestures, and creates portals with noticeable depth. We also developed the game for the Android platform using Unity AR Foundation.
Project Title:

Emotion Detector Bot

Picture of Emotion Detector Bot
Students:

Anat Veikherman and Tal Amir

Supervisors:

Ran Breuer

Description:
In this project, we build an application that uses a web camera to record the participants' faces and detect their emotions in real-time. The application can detect several faces at once and recognize their emotions. We show the detected face and the emotion on the screen while the face is changing in real-time.
Project Title:

VR Hand Interaction Multiplayer

Picture of VR Hand Interaction Multiplayer
Students:

Leonid Shleifer and Yotam Portal

Supervisors:

Yaron Honen & Boaz Sterenfeld

Description:
In this project we built a Virtual Reality application that enables two users to enter the same shared VR space and a mutual physical room, and then observe an object together, move it around, and scale it. In the application, the users can grab the objects and move them along all axes, spin and tilt them in every direction, and enlarge and shrink them, all while the other user sees the object's motion in real time, along with a representation of the first user's hands as two spheres. The application is built as a VR game in Unity and runs on the (Oculus) Meta Quest 2 VR headset; it is not cross-platform, since it requires the Meta Quest 2 hand-tracking capabilities. A good use for this type of application could be a design team working on a certain product: entering the application, seeing the design in 3D, and discussing it as if it were in front of them in real life.
Project Title:

Find Closest Defibrillator

Picture of Find Closest Defibrillator
Students:

Shadi Abu-Saleh and Sameer Hamada

Supervisors:

Yaron Honen & Boaz Sterenfeld

Description:
We introduce a new mobile app that navigates the user to the closest defibrillator on the Technion campus. Using our app, the user gets navigation assistance to the closest defibrillator based on their current location. The app overlays graphical directions on the user's camera view, along with their current location, and shows a 2D map with their location and the path to the chosen defibrillator. In addition to the 2D and 3D assistance, voice assistance is available, announcing the next direction until the user reaches the defibrillator.
Project Title:

Further Steps in Precise Shape Completion

Picture of Further Steps in Precise Shape Completion
Students:

Hadas Romov and Yiftach Edelstain

Supervisors:

Ido Imanuel

Description:
In recent times, people have been looking for new ways to connect with each other. Virtual reality can be the new meeting ground for people all around the world. However, capturing a person for viewing in VR is limited by equipment and capture location. To enable these new applications and usages, we want to achieve a simple setup that lets everyone participate in this new frontier. In our project, we extended the paper "Towards Precise Completion of Deformable Shapes", which uses a prior-based approach for shape reconstruction of human partial scans and considers the irregularity of point cloud data structures. We improved the network performance in the spatial domain and managed to create better completions using loss-function terms and modifications, architectural changes, and post-processing on the animations. We used elaborate benchmarking and validation tools to achieve improvements in both quality and quantity.
Project Title:

AR Sinus

Picture of AR Sinus
Students:

Nir Mualem and Oren Shavit

Supervisors:

Amit Bracha and Ron Slosberg

Description:
During sinus surgery, the surgeon needs to see the patient's CT scan. To make this more convenient, the CT scan is displayed as a 3D STL file aligned with the patient's face using AR on Microsoft HoloLens 2. The solution must include both engineering parts and algorithmic parts for the registration problem. We developed an app on the Unity platform that solves this and can help the surgeon during the operation.
Code: https://github.com/nir6760/HL2proj
Project Title:

Stereo Camera

Picture of Stereo Camera
Students:

Yaniv Wolf

Supervisors:

Amit Bracha and Noam Rotstein

Description:
While traditional cameras are constrained to two-dimensional images, it is possible to utilize several cameras to create systems capable of capturing three-dimensional images. Such systems are called stereo camera systems, and they capture the depth of every point in the scene using a stereo matching algorithm and the distance between the cameras. In this project, I created a stereo camera from scratch, which included wiring the cameras for synchronized hardware triggering and building an interface capable of streaming and capturing depth images, generating and viewing point clouds, and interactively performing internal and stereo calibration of the camera array.
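The depth computation rests on the standard rectified-stereo relation: with focal length f (in pixels), baseline B (the distance between the cameras), and per-pixel disparity d from the matching algorithm, the depth is

Z = \frac{f\,B}{d},

so errors in the calibrated f and B translate directly into depth errors, which is why the interactive calibration step matters.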
Project Title:

Weakly Supervised Learning Methods for Analysis of Histological Whole Slide Images

Picture of Weakly Supervised Learning Methods for Analysis of Histological Whole Slide Images
Students:

Tal Neoran

Supervisors:

Gil Shamai

Description:
In the field of histopathology, slices of tissue samples are typically stained and examined under a microscope for the diagnosis of cancer and other diseases, and for analysis of different properties of the diseases. In recent years, the use of whole slide imaging technology for digital scanning of glass slides has rapidly increased, allowing the employment of artificial intelligence (AI) methods to improve analysis and prediction abilities in computational histopathology. Scanning a glass slide results in a whole slide image (WSI), a gigapixel image of a tissue sample constructed from multiple magnification resolutions. Performing analysis of WSIs using deep learning algorithms poses multiple challenges, for example due to the large size of the images and the lack of local annotations. In this project, we implemented and assessed the potential of weakly supervised and self-supervised deep learning methods to perform analysis and predict clinical outcomes from hematoxylin and eosin (H&E) stained WSIs of patients with breast cancer. We proposed and evaluated ways to deal with the challenges that arise when working with WSIs and applied our methods to a WSI dataset for classification of hormone receptor status.
Project Title:

Precise Completion of Deformable 3d Shapes Using SIREN

Picture of Precise Completion of Deformable 3d Shapes Using SIREN
Students:

Sahar Proter

Supervisors:

Oshri Halimi

Description:
With VR headsets’ popularity growing and their application to VR chats raising interest, the need arises for a fast, cheap, and quality way to transfer the user’s shape and movements to the VR space in real time. In this paper I propose a network that reconstructs a single-angle partial scan from a depth sensor into a full, precise 3D shape. I discuss the representation of a 3D shape as an SDF and its possible advantages for shape completion, along with the SIREN network, which uses an easy-to-compute loss function and shows how periodic activation functions such as sine can improve both the speed and the quality of fitting a point cloud to an SDF. Finally, I show how I used the SIREN network and the SDF representation to implement a new network for precise completion of deformable 3D shapes.
Project Title:

Robotic vision

Picture of Robotic vision
Students:

Snir Green and Eitan Baron

Supervisors:

Amit Bracha

Description:
The Robotics Excellence Program in the Mechanical Engineering department is building a robotic arm that picks tin cans (as in the picture) from a clutter in a box and organizes them for shipment. Above the box there is a static RGBD LIDAR camera. Our project was to create the "vision" part of the robot: each time the robot wants to take the next can from the clutter, it sends a request to our program, which is connected to the static camera and returns the position of the topmost can.
Project Title:

Hologuard

Picture of Hologuard
Students:

Yaniv Holder and Ofir Florentz

Supervisors:

Yaron Honen & Boaz Sterenfeld

Description:
Security software that can recognize faces from a given database in a video broadcast.
Project Title:

AR real-world-object interactions

Picture of AR real-world-object interactions
Students:

Mousa Arraf and Patrisious Haddad

Supervisors:

Yaron Honen & Boaz Sterenfeld

Description:
We created an augmented reality game with real-world-object interactions. Two players can connect over the network and play with each other on a playing field that they see through AR. Each player has a physical cube that they can move to fend off incoming fireballs.
Project Title:

Drum Legends - A VR Game

Picture of Drum Legends - A VR Game
Students:

Noa Pariente and Barak Biber

Supervisors:

Yaron Honen & Boaz Sterenfeld

Description:
Drum Legends is a virtual reality drummer game. In this game you drum on a virtual drum set which even includes a bass drum! To play the bass we constructed a DIY bass pedal, which anyone can build at home using basic materials and a Bluetooth mouse as the actual controller. Everything is affordable and easy to acquire. In the game you can play your favorite songs in front of your fans and have lots of fun.
Project Title:

Utilizing Prior Knowledge for Non-Rigid Shape Completion

Picture of Utilizing Prior Knowledge for Non-Rigid Shape Completion
Students:

Omer Ben Hayun

Supervisors:

Ido Imanuel

Description:
In recent years, researchers have shown increased interest in 3D human pose and shape estimation. Most studies in the field rely solely on completion from the partial shape without additional information, resulting in limited models that cannot always reconstruct the partial shape precisely. This study utilizes a prior-based approach for shape reconstruction of human partial scans that significantly improves the performance of existing methods. Additionally, in this study we developed and applied a new technique for sampling from large datasets, resulting in a solid performance increase across all tested learning models. The sampling methodology presented here has profound implications for future studies of machine-learning models that rely on learning from large datasets. Finally, we designed new visualization tools to explore the shape and pose manifolds of parametric body models and datasets.
Project Title:

Classification and segmentation of medical imaging

Picture of Classification and segmentation of medical imaging
Students:

Nir Shopen and Omer Kawaz

Supervisors:

Alona Z

Description:
An implementation of a DML (Distance metric learning) classifier, based on "RepMet: Representative-based metric learning for classification and few-shot object detection"
Project Title:

Learning Unique Invariant Signatures of Non-Rigid Point Clouds

Picture of Learning Unique Invariant Signatures of Non-Rigid Point Clouds
Students:

Sari Hleihil and Idan Shienfeld

Supervisors:

Ido Imanuel

Description:
We propose a metric learning framework for the construction of invariant signatures of non-rigid 3D point clouds under the isometry transformation group. We leverage the representational power of convolutional neural networks to compute these signatures and show that, in comparison with classical methods, we achieve superior results: higher classification accuracy using the invariant signature and lower pose dependency, with the additional advantage of much lower complexity, allowing the calculation of invariant signatures for larger point clouds in orders of magnitude less time. This is achieved without the use of edge information, which is commonly used for such applications. Furthermore, our proposed training scheme achieves superior classification accuracy compared to end-to-end trained classifiers using the same architecture.
Project Title:

AR Surgery Assist

Picture of AR Surgery Assist
Students:

Zahi Cohen, Noy Gini, Adriana Dolgin and Silvan Marti

Supervisors:

Amit Bracha and Ron Slosberg

Description:
The goal of the project is to use the augmented reality capabilities of the HoloLens headset to improve the medical surgery process. The main task is to adjust a three-dimensional (pre-prepared) CT model on a patient's lower back, finding the location and angle at which the model should be placed so that it fits exactly to the patient's body and the medical procedure. In this way, it is possible to insert a needle into the right place in the patient's body in a more accurate and simple way than before. The project extensively uses voice commands, spatial mapping construction, and receiving and processing data from the sensors on the HoloLens glasses, as well as an ICP algorithm for fitting the model onto the patient.
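The final fitting step mentioned above is classical point-to-point ICP; here is a hedged sketch of that step using Open3D (the file names, rough pre-alignment, and correspondence threshold are placeholders; the registration call itself is Open3D's API):

import numpy as np
import open3d as o3d

ct_model = o3d.io.read_point_cloud("ct_model.ply")          # hypothetical paths
body_scan = o3d.io.read_point_cloud("spatial_mapping.ply")

result = o3d.pipelines.registration.registration_icp(
    ct_model, body_scan,
    max_correspondence_distance=0.02,   # 2 cm, illustrative
    init=np.eye(4),                     # assumes a rough initial alignment
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
print(result.transformation)  # 4x4 rigid transform placing the model on the patient

ICP only converges to the correct pose from a reasonable initial guess, which is why the voice-command-driven coarse placement precedes the automatic refinement.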
Project Title:

Bike Fit

Picture of Bike Fit
Students:

Rotem Elias and Emmanuel Ferdman

Supervisors:

Haitham Fadila

Description:
Bicycle fitting is the process of adjusting a bicycle for a cyclist. A good bicycle fit is the key to improved cycling. Every cyclist should fit the bicycle to their measurements in order to improve performance, prevent long-term injuries, and optimize the cycling experience. The bicycle fit creates the connection between the rider's current physical state and what they desire to achieve. The goal of the project was to create an application that allows uploading a short video of the cyclist riding the bicycle and ranks the bicycle configuration. Such a ranking should give a good initial indication of the current fitting state. The application uses digital image processing algorithms to detect the rider's body, find the joint coordinates, and calculate the angles. In addition to the rider's height, this information is used by the software to identify and rate the rider's seating configuration.
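The angle computation at the heart of such a ranking is a small piece of vector geometry; here is a minimal sketch (the keypoint triplets, e.g. hip-knee-ankle, are illustrative):

import numpy as np

def joint_angle(a, b, c) -> float:
    """Angle in degrees at keypoint b, formed by segments b->a and b->c."""
    a, b, c = map(np.asarray, (a, b, c))
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

print(joint_angle((0, 0), (1, 0), (1, 1)))  # prints 90.0

Applied frame by frame to detected keypoints, this yields the angle ranges (for example, knee extension at the bottom of the pedal stroke) against which a seating configuration can be rated.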
2021
Project Title:

VR Blackjack

Picture of VR Blackjack
Students:

Guy Lecker, Eduardo Abramoff and Ofek Gutman

Supervisors:

Yaron Honen and Boaz Sterenfeld

Description:
In short, BlackjackVR is a PC game written in C# and implemented in Unity, our chosen graphics engine. It uses an Oculus Rift headset for the virtual reality aspect of the game and a Leap Motion Controller to elevate the gameplay experience with hand-movement recognition.
Project Title:

Object Glorification

Picture of Object Glorification
Students:

Shani Bar-Gera and Shani Bigdary

Supervisors:

Yaron Honen and Boaz Sternfeld

Description:
The goal of the project was to create an application that glorifies a chosen object within a given image. After the glorification, the user would have some sort of ability to “play” with the object by performing various manipulations on the object. We decided to achieve this goal by creating an Android application that allows users to select an image, or to take one, mark an object in that given image and perform the glorification process. In this case, glorifying means blurring the background and magnifying the object. After the object is glorified the user can perform manipulations on the object such as scaling and rotating. We also decided to add a bonus option - allowing the user to select two objects at once.
Project Title:

Forest Card Wars - A WebGL VR Multiplayer Card Game

Picture of Forest Card Wars -  A WebGL VR Multiplayer Card Game
Students:

Sagi Taizi and Shachak Gil

Supervisors:

Yaron Honen and Boaz Sterenfeld

Description:
Our game is a multiplayer VR game built for direct browser play, with no downloads or installs required. We developed an action packed, interactive and competitive two player card game that can be played with any VR headset, by simply opening a browser and typing in the address. The game draws inspiration from popular card games like Hearthstone, where the main goal is to damage your opponent and deplete his life points before he depletes yours, but with a fun VR twist that involves physical actions the player needs to do.
Project Title:

Aircraft Wing Shape Analysis by On-board Cameras and Deep Learning

Picture of Aircraft Wing Shape Analysis by On-board Cameras and Deep Learning
Students:

Alexander Portiankin, Ido Plat

Supervisors:

Ido Imanuel

Description:
The main drivers in today's aircraft design are performance improvements and the reduction of fuel consumption and harmful emissions. These goals can be straightforwardly achieved with lightweight, large-wingspan designs. However, such configurations are inherently more flexible and susceptible to adverse aeroelastic phenomena, including reduced control authority, increased maneuver loads, excessive response to atmospheric turbulence, and flutter instability. The immediate remedy for aeroelastic problems is stiffening the structure.
In this project, we make an essential step: we implement and test a methodology using synthetic, computational data. We simulate a large dataset of 2D wing images and their corresponding deformation parameters and train a neural network, validating it on many unseen examples. We show promising results in preparation for the experimental phase, where this methodology will be tested empirically under lab conditions.
Project Title:

Tennis Tracker

Picture of Tennis Tracker
Students:

Aviv Caspi

Supervisors:

Yaron Honen and Alon Zvirin

Description:
Analyzing sports games is a very important task for trainers and players who wish to improve their skills. In my project, I present an end-to-end model that can process and analyze official tennis matches. The model can detect and track the players, the ball, and the court; it can also predict the bottom player's stroke type, calculate all kinds of gameplay statistics, and create a top-view model of the game. This entire processing can be done on a standard computer in reasonable time.
Project Title:

ClimbVR

Picture of ClimbVR
Students:

Guy Fishman

Supervisors:

Yaron Honen and Boaz Sterenfeld

Description:
Climb your way to victory! A VR climbing simulator with a realistic feeling of climbing and fear of falling. A Unity-based game that features two main quests: a Snow Rescue Mission, and Jack and the Beanstalk. The game has realistic body-to-hand mechanics and motion for a true climbing feeling, including haptic feedback to the hands.
Project Title:

ZoomEmotion

Picture of ZoomEmotion
Students:

Efrat Israel and Akiva Zonenfeld

Supervisors:

Amit Bracha

Description:
Our project is the product of a collaboration between the Geometric Image Processing Lab (GIP) at the Computer Science department and the Educational Neuroimaging Center (ENIC) at the Technion. The goal of the project is to create a desktop application that analyzes children’s emotions during virtual classes and flags, in real time, the children who express negative emotions. In addition, we provide a log with the emotions of each child during the class, including times when a face was not recognized, and statistics of the class emotions as a group. The log is used for research at ENIC. We track the children along a video using face recognition and image processing tools, and output their emotion using a classifier. We deploy RonNet, a state-of-the-art model for facial expression recognition in children developed by the GIP lab. Although the name of the project suggests that the video conference must be held in the Zoom application, our desktop application can be used with any kind of video conference application, as well as with video recordings. In order to track each child as an individual, we use text recognition tools and match the reappearing name with the relevant adjacent face and emotion.
Project Title:

Augmented Treasure Hunting

Picture of Augmented Treasure Hunting
Students:

Ilya Freidkin and Michael Cudryavtsev

Supervisors:

Roman Rabinovich and Ibrahim Jubran and Yaron Honen

Description:
This is a game which uses cutting-edge AR technologies. Who will find the treasure first? The treasure is a virtual object planted in a previously created 3D point cloud of the scene. Using indoor drones, a couple of players compete to find the treasure. In real time, players can see the live feed from the drone, along with the virtual objects that are hidden in some parts of the map and rendered into the camera stream. How to play? First we make a preparation flight: in this flight we build and save a point cloud map and decide where to locate our "treasure". Once that is done, each player loads the map with the treasure and starts searching for it using a drone controlled from the keyboard. The game can be played as a multiplayer game, with each player searching for the treasure from their own machine where the game is installed.
Project Title:

Ray-Tracing shader

Picture of Ray-Tracing shader
Students:

Or Keren and Itay Levi

Supervisors:

Guy Koplovitz and Boaz Sternfeld and Yaron Honen

Description:
A ray tracing shader which implements the Cook-Torrance and black-body models. The user can control the diffuse and specular ratio of each object, as well as the temperature of the black-body objects.
Project Title:

High Perceptual Quality Image Denoising with a Posterior Sampling CGAN

Picture of High Perceptual Quality Image Denoising with a Posterior Sampling CGAN
Students:

Guy Ohayon and Theo Adrai

Supervisors:

Gregory Vaksman and Michael Elad

Description:
The vast work in Deep Learning (DL) has led to a leap in image denoising research. Most DL solutions for this task have chosen to put their efforts on the denoiser's architecture while maximizing distortion performance. However, distortion driven solutions lead to blurry results with sub-optimal perceptual quality, especially in immoderate noise levels. In this paper we propose a different perspective, aiming to produce sharp and visually pleasing denoised images that are still faithful to their clean sources. Formally, our goal is to achieve high perceptual quality with acceptable distortion. This is attained by a stochastic denoiser that samples from the posterior distribution, trained as a generator in the framework of conditional generative adversarial networks (CGAN). Contrary to distortion-based regularization terms that conflict with perceptual quality, we introduce to the CGAN objective a theoretically founded penalty term that does not force a distortion requirement on individual samples, but rather on their mean. We showcase our proposed method with a novel denoiser architecture that achieves the reformed denoising goal and produces vivid and diverse outcomes in immoderate noise levels.
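Schematically (our paraphrase, not the paper's exact notation), the generator objective takes the form

\mathcal{L}_G = \mathcal{L}_{\text{adv}} + \lambda \,\Big\lVert \tfrac{1}{K}\sum_{k=1}^{K} \hat{x}_k - x \Big\rVert_2^2,

where the \hat{x}_k are K posterior samples drawn for the same noisy input and x is the clean source: the MSE penalty binds only the sample mean, leaving each individual sample free to be sharp and diverse.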
Project Title:

Reinforcement Learning Approach for Formula Driverless Car

Picture of Reinforcement Learning Approach for Formula Driverless Car
Students:

Elior Kanfi

Supervisors:

Yaron Honen and Roman Rabinovitch

Description:
Formula Student Driverless is a competition between universities all over the world to build an autonomous race car that drives successfully on a racetrack, with events and competitions happening throughout the year in many cities around the world. In this project I implemented a reinforcement learning algorithm called SAC (Soft Actor-Critic) on the simulated Technion autonomous Formula car, with the simulation done in Unreal Engine 4 with the AirSim plugin, and the input being only an image from a camera mounted on the car; the car has no knowledge or assumptions about the shape of the track. I used a VAE (Variational Auto-Encoder) to reduce the image size and thus the observation space, and the output of the algorithm is a steering angle, keeping a constant speed of 7.5 m/s (27 km/h). My work in the GIP Lab at the Technion - Israel Institute of Technology strives to contribute to the amazing Formula Student Technion Driverless team in their efforts to compete in future events.
My code on GitHub: https://github.com/eliork/Reinforcement-Learning-on-Autonomous-Race-Car
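For readers unfamiliar with the setup, such a training loop reduces to a few calls when using an off-the-shelf SAC implementation; here is a hedged sketch with stable-baselines3, where the environment name and hyperparameters are placeholders rather than the project's actual AirSim wrapper:

import gym
from stable_baselines3 import SAC

# "AirSimTrackEnv-v0" is a hypothetical stand-in for the project's
# AirSim/Unreal Engine wrapper (VAE-encoded camera image -> steering angle).
env = gym.make("AirSimTrackEnv-v0")
model = SAC("MlpPolicy", env, learning_rate=3e-4, buffer_size=100_000)
model.learn(total_timesteps=200_000)  # illustrative training budget
model.save("sac_formula_car")

Encoding the camera image with a VAE first keeps the observation a small latent vector, which is what makes a simple MLP policy like the one above feasible.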
Project Title:

Deep Breath

Picture of Deep Breath
Students:

Nili Furman

Supervisors:

Alon Zvirin and Yaron Honen

Description:
Chest motion and respiratory volume abnormalities or a sporadic breathing rate are often associated with thorax diseases hidden under the radar of the human eye. These can range from light and passing conditions to fatal and lifestyle-affecting illnesses, which in many cases remain unnoticed or are falsely diagnosed. Current medical methods for detecting chest motion abnormalities often rely on observation by a medic's eye, which may be less accurate and comprehensive than computer vision, and often lead to mistreatment of patients who are unable to be physically present at the examination location for various reasons: handicap, privacy, or, mostly, availability. Our work in the GIP Lab at the Technion strives to provide an innovative, precise, and accessible-to-all tool to detect chest motion and breathing abnormalities, as sought by medical doctors, and to provide the findings as fast as possible using a 3D camera.
Project Title:

Corn Leaf Segmentor - Phenomics

Picture of Corn Leaf Segmentor - Phenomics
Students:

Adva Cohen and Shani Idgar

Supervisors:

Yaron Honen and Alon Zvirin

Description:
Analysis of maize leaves is a widely studied problem, important for assessing plant growth. In our project, our goals were to improve segmentation of maize leaves and to classify maize plants into two categories, untreated and fungi-infected, using our segmentation to create the dataset. Our methods for improving segmentation included a two-step inference process and improving the training by creating synthetic images. Our methods for classification included creating a CIFAR-10-based CNN architecture model, trained from scratch. We demonstrate that creating a larger dataset using data augmentation and training the networks from scratch improves both segmentation and classification.
2020
Project Title:

Semantic Segmentation of Cloud Images Using Weakly-Labelled data

Picture of Semantic Segmentation of Cloud Images Using Weakly-Labelled data
Students:

Roi Tzur-Hilleli and Aviv Caspi

Supervisors:

Reut Yehonatan and Yaron Honen

Description:
Semantic segmentation is the task of labeling each pixel in an image with the class the pixel belongs to. Training a deep convolutional network to accomplish this task requires a lot of hand-drawn segmentation maps and therefore a lot of resources. We wanted to check whether it is possible to save resources on the manual labeling of cloud segmentation maps and still achieve competitive results. To find out, we trained a deep convolutional network for semantic segmentation of cloud images on both fully-labeled data (full segmentation maps) and weakly-labeled data (scribbles) and compared the results achieved by both methods.
Project Title:

YOGA MASTER

Picture of YOGA MASTER
Students:

Barr and Noa

Supervisors:

Ron Slosberg

Description:
YOGA MASTER is a platform for practicing yoga anywhere you want. With the Jetson Nano, you can take the yoga teacher to the park, to the beach, or just stay at home. Although the teacher is not near, you can still get the feedback you need to improve your poses.
Project Title:

AR Super Mario Go

Picture of AR Super Mario Go
Students:

Shlomi Ovadia and Hila Akerman

Supervisors:

Yaron Honen

Description:
Super Mario Go is a mobile game harnessing Unity's AR Foundation plane detection capabilities to create an 'on surface' playground. It provides two different game modes: 1. Classic mode - the player can play a classic pre-designed course and place it in the 'physical' environment. 2. Survival mode - the player can walk around freely in the 'physical' environment, fight and avoid enemies, and try to get a high score.
Project Title:

You Got Me Dancing

Picture of You Got Me Dancing
Students:

Zohar Rimon and Adi Arbel

Supervisors:

Elad Richardson

Description:
Motion Transfer, the task of reenacting the image of a person according to the movement of another, is an active research field in computer vision. Recent methods achieve realistic looking results in controlled scenarios. Yet it is difficult to obtain similar results for complex, crowded, in-the-wild scenes while integrating the reenacted person into the target scene. In this work, a novel workflow is proposed to tackle this challenging scenario, which we name Scene-Aware Motion Transfer (SMT). Our workflow harnesses a set of models, each attaining state-of-the-art results in its respective field, and is divided into two major stages. First, a novel person tracking pipeline is used to separate each unique identity from the crowd. Then the tracking results are utilized for a targeted single-person motion transfer, resulting in a fully automatic workflow that can handle complex videos. An extensive evaluation is presented to show the quality and robustness of the results in different scenarios.
Project Title:

The Secret Of Magic

Picture of The Secret Of Magic
Students:

Michal Guttmann and Amit Shuster

Supervisors:

Yaron Honen and Boaz Sterenfeld

Description:
We developed a VR game that uses a convolutional neural network (CNN) to identify commands from the user, drawn using the controller. The game was inspired by Harry Potter and the Philosopher’s Stone (2001). The main goal of our project was to involve a Deep Learning mechanism in the VR development environment. After a lot of research, we decided to build a CNN by ourselves and use it in our game.
Project Title:

Efraim, the Autonomous Farmer

Picture of Efraim, the Autonomous Farmer
Students:

Annael Abehssera & Yonatan Gershon

Supervisors:

Roman Rabinovich

Description:
Efraim is a 3D-printed robotic arm that uses a real-time image segmentation algorithm named YOLACT, an Intel RealSense 3D camera, and an Arduino board to autonomously identify and locate an object and pick it up with its robotic arm.
Project Title:

Realtime Breathing using realsense

Picture of Realtime Breathing using realsense
Students:

Nili Stein and Maayan Ehrenberg

Supervisors:

Alon Zvirin and Yaron Honen

Description:
Chest motion abnormalities and a sporadic breathing rate are often associated with thorax diseases hidden under the radar of the human eye. These can range from light and passing conditions to fatal and lifestyle-affecting illnesses, which in many cases remain unnoticed or are falsely diagnosed. Current medical methods for detecting chest motion abnormalities often rely on observation by a medic's eye, which may be less accurate and comprehensive than computer vision, and also often lead to mistreatment of patients who are unable to be physically present at the examination location for various reasons: handicap, privacy, or, mostly, availability. Our work in the GIP Lab at the Technion strives to provide an innovative, precise, and accessible-to-all tool to detect chest motion and breathing abnormalities, as sought by medical doctors, and to provide the findings as fast as possible using a 3D camera.
Project Title:

Abiotic Stress Detection In Banana Plants

Picture of Abiotic Stress Detection In Banana Plants
Students:

Itamar Gozlan

Supervisors:

Alon Zvirin and Yaron Honen

Description:
Banana plants are very important and are a big part of nutrition in many parts of the world. More than 100 billion bananas are eaten every year, making them the most popular agricultural product. Four different treatment qualities were applied and documented with pictures over a sequence of 17 days. In this project, we try to distinguish between the four kinds of banana treatments by observing the pictures alone. We use augmentations and introduce a novel data augmentation targeted at our dataset. We try to give our classifier an expert's capability, since the differences are subtle and might not be observable to the common viewer.
Project Title:

Low Complexity Data-Efficient Generation in Disentangled Setting

Picture of Low Complexity Data-Efficient Generation in Disentangled Setting
Students:

Shavit Borisov and Jacob Sela

Supervisors:

Elad Richardson

Description:
Recent advancements in the field of generation promise better results, with more control over generation. Unfortunately, these results are achieved with massive datasets used to train highly complex models by the leading experts of machine learning, making them inaccessible to "the average Joe". In this paper, we propose that the disentangled latent spaces created as a by-product of these tools can be repurposed for generation with a specific factor of variation in mind, using simple tools and little data. We demonstrate these claims by generating aging videos using NVIDIA StyleGAN's latent space from a single source image.
Project Title:

Drone Wars

Picture of Drone Wars
Students:

Michael Koichu

Supervisors:

Ibrahim Jubran, Roman Rabinovich and Yaron Honen

Description:
This is an augmented reality first-person shooter game for Android. You play as a drone armed with laser guns. You need to pass through a course of virtual obstacles and defeat the enemy drone while striving to get the highest score in the process. You can create your own game course by configuring images and spreading them across your house. These images are used to render virtual objects for you to interact with in the game.
Project Title:

Augmented Reality Laser2Target

Picture of Augmented Reality Laser2Target
Students:

Lev Tunik and Maya Szabo

Supervisors:

Yaron Honen

Description:
In this project, we used Unity combined with different packages (Photon, Vuforia) to create an easy solution for people who want to navigate toward a human destination.

We provide a simple and clean user interface that includes an arrow and distance to the target, and an augmented-reality overlay for faster recognition of the target.
Project Title:

Augmented Reality Indoor Navigation

Picture of Augmented Reality Indoor Navigation
Students:

Arkadi Gurevich and Shahar Zivan

Supervisors:

Yaron Honen

Description:
Navigation inside a large indoor facility such as an office building or shopping mall is a problem that isn't solved by modern-day navigation solutions, due to the imprecise results of GPS-based coordinates. In this project we offer a solution based on augmented reality and image recognition. Our solution uses image recognition to find landmarks in the environment, such as posters or billboards, and learns the user's position relative to them, thus obtaining a precise position to base our navigation on. Our results show this method can give very precise results and allow navigation in narrow corridors and small offices without mistakes, as long as a landmark is always in view. We conclude that this is a viable solution to the problem, although some position-correcting method is required when no landmark can be seen.
Project Title:

Live Matrix

Picture of Live Matrix
Students:

Roni Ben Dom and Ophir Havivi

Supervisors:

Yaron Honen and Boaz Sterenfeld

Description:
In this project we created a Matrix-movie-like effect over a pre-constructed scene. Starting from a normal room, at the push of a button the scene changes: it becomes the Matrix world.
Project Title:

Corn Plant Segmentation

Picture of Corn Plant Segmentation
Students:

Snir Homsky and Iliya Rubinchik

Supervisors:

Alon Zvirin and Yaron Honen

Description:
Mask R-CNN is a conceptually simple, flexible, and general framework for object instance segmentation which efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. This project tackles the task of corn plant segmentation given a partially annotated small dataset using Mask R-CNN. We present a workflow for producing a large dataset from a small given dataset, which includes image augmentation, generation of artificial plant images, and generation of artificial images simulating real greenhouse scenes. Finally, we present results on leaf segmentation as well as whole-plant segmentation, and discuss these results.
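As an illustration of the augmentation step, here is a hedged sketch using the albumentations library, which applies the same spatial transform to an image and its segmentation mask (the specific transforms are illustrative, not the project's exact recipe):

import numpy as np
import albumentations as A

image = np.zeros((512, 512, 3), dtype=np.uint8)  # placeholder plant image
mask = np.zeros((512, 512), dtype=np.uint8)      # placeholder segmentation mask

augment = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.RandomRotate90(p=0.5),
    A.RandomBrightnessContrast(p=0.3),  # photometric; leaves the mask untouched
])
out = augment(image=image, mask=mask)
aug_image, aug_mask = out["image"], out["mask"]

Keeping the image and mask in one call guarantees the geometric transforms stay in sync, which is what makes augmentation safe for instance-segmentation labels.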
Project Title:

B-Glove 3D

Picture of B-Glove 3D
Students:

Tal Leibovitz, Aviel Simchi

Supervisors:

Yaron Honen and Boris Van-Sosin

Description:
Although the development and usage of VR and AR technologies have increased in recent years, there is not yet a cheap and precise solution to the finger-tracking problem. This follow-up project was designed to continue the work we did as members of the B-Glove hardware and software teams. In our main project, we worked on a solution to the finger-tracking problem using IMUs (inertial measurement units) located on the fingers and the back of the hand. With the help of the IMUs, we were able to determine the orientation of each finger and of the entire hand. These orientations are sent to the computer, which displays a 3D hand in a Unity scene.
Project Title:

Children facial expressions detection with EEG from video

Picture of Children facial expressions detection with EEG from video
Students:

Dana Goghberg, Moran Hait and Sapphire Elimelech

Supervisors:

Gary Mataev and Ron Kimmel and Tzipi Horowitz Kraus and Michal Zivan

Description:
Our project is the product of a collaboration between the Geometric Image Processing Lab (GIP) at the Computer Science department and the Educational Neuroimaging Center (ENIC) at the Technion. The goal of the project is to create an emotion recognition system based on a facial expression recognition (FER) algorithm, which will be used by the ENIC lab for analyzing children's responses to various tasks while being monitored by EEG/fMRI. We track the children along a video using face recognition and image processing tools, and output their emotion using a classifier. We used a convolutional neural network to implement a new facial expression recognition system designed specifically for children. Although well studied on adults, only a few facial expression recognition studies involve children, and consequently only a handful of small relevant datasets exist. This difficulty is compounded by the fact that some children's emotions have similar representations and are therefore difficult to differentiate. We tested our chosen model using cross-validation and real-time emotion detection from recorded video. The results exceeded our expected goals: we introduce a novel machine-learning system that detects facial expressions in children with 91% accuracy. We are not aware of approaches yielding similar results in the literature.
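The description above does not fix an architecture; below is a minimal PyTorch sketch of the kind of compact CNN classifier such a system might use. The input size (48x48 grayscale face crops), channel counts and the six emotion classes are assumptions, not the project's exact configuration.

    import torch
    import torch.nn as nn

    class ChildFERNet(nn.Module):
        """Small CNN for facial expression classification of face crops."""
        def __init__(self, n_classes=6):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),    # 48 -> 24
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 24 -> 12
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 12 -> 6
            )
            self.classifier = nn.Sequential(
                nn.Flatten(), nn.Dropout(0.5),
                nn.Linear(128 * 6 * 6, 256), nn.ReLU(),
                nn.Linear(256, n_classes),
            )
        def forward(self, x):
            return self.classifier(self.features(x))

    # logits = ChildFERNet()(torch.randn(8, 1, 48, 48))  # a batch of face crops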
Project Title:

CHANGE DETECTION IN REMOTE SENSING SATELLITE IMAGES USING DEEP LEARNING

Picture of CHANGE DETECTION IN REMOTE SENSING SATELLITE IMAGES USING DEEP LEARNING
Students:

Ayelet Alon, Inbal Tziperman Lotan and Tal Amir

Supervisors:

Alex Golts

Description:
Change detection in remote sensing images is an important part of many applications, such as tracking urban changes for military purposes, tracking deforestation for climate change research, agricultural monitoring and more. We pose the change detection task as identifying differences between two photos taken at the same geographical location at two different times. The detected changes should result from the appearance of new objects or the disappearance of existing objects in a scene. The images may contain other differences caused by many factors, such as seasonal changes (snow, trees with or without leaves), changes in brightness, and shifted images due to slight changes in the capture angle. Tracking changes manually is tedious and highly time-consuming, and often results in human errors; automating the task is therefore highly desirable. In this project, we present an automation of the change detection task using both fully and weakly supervised semantic-segmentation neural networks. We compared the performance of our model to a previously published GAN-based model, and showed that although our model is easier to train, it achieves similar results.
Project Title:

Procedural Map Generator

Picture of Procedural Map Generator
Students:

Ilana Ben Avraham

Supervisors:

Yaron Honen

Description:
Procedural generation is a method of creating data algorithmically instead of manually. Procedural map generation is a derivative of this method for creating unending worlds. Imagine yourself as a character in a video game exploring and traveling through a vast virtual world, when suddenly you reach the end of this world: a cliff, and if you take one more step you will fall forever, never reaching the ground. Procedural map generation solves this issue and allows you to explore the world without it ever ending around you. My project allows the player to explore a building which is built procedurally around the player, at runtime, as they move. The building is assembled from basic building blocks that were modeled in a specialized program and are connected according to a set of rules written beforehand.
Project Title:

People Counting System

Picture of People Counting System
Students:

Ido Galil and Or Farfara

Supervisors:

Yaron Honen

Description:
This project presents the development and usage details of a people-counting system, able to count the people entering and exiting a specific zone in real time from a live video stream obtained from IP cameras. This goal is achieved by combining two components: a detector component using a convolutional neural network (YOLO) to detect people in a frame, and a tracking component utilizing a tracking algorithm (CSRT) which updates those people's positions in the following frames. The system was built for the Technion's libraries to monitor the number of people in the libraries at any given time, but is highly configurable and can fit different types of buildings and entrances.
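A condensed Python/OpenCV sketch of the detect-then-track counting loop follows; the detect_people helper is a hypothetical YOLO wrapper, and the line position and re-detection interval are assumptions.

    import cv2

    def person_count(video_path, detect_people, line_y=240, redetect_every=30):
        """Count people crossing a horizontal line. detect_people(frame) is a
        hypothetical YOLO wrapper returning boxes [(x, y, w, h), ...]."""
        cap = cv2.VideoCapture(video_path)
        trackers, last_y, entered, exited, frame_i = [], [], 0, 0, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if frame_i % redetect_every == 0:        # refresh trackers with the detector
                trackers, last_y = [], []
                for box in detect_people(frame):
                    t = cv2.TrackerCSRT_create()     # cv2.legacy.TrackerCSRT_create in newer OpenCV
                    t.init(frame, box)
                    trackers.append(t)
                    last_y.append(box[1] + box[3] / 2)
            for i, t in enumerate(trackers):         # CSRT tracking between detections
                ok, (x, y, w, h) = t.update(frame)
                if not ok:
                    continue
                cy = y + h / 2
                if last_y[i] < line_y <= cy:
                    entered += 1                     # crossed the line downward
                elif last_y[i] >= line_y > cy:
                    exited += 1                      # crossed the line upward
                last_y[i] = cy
            frame_i += 1
        return entered, exited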
Project Title:

Augmented reality in road navigation

Picture of Augmented reality in road navigation
Students:

Jo Muller

Supervisors:

Ron Slosberg and Gill Shamai

Description:
We have created an AR road navigation guidance system to assist during driving. Our system uses deep learning to bypass the need for a GPS signal and translates Google Maps instructions into a visual arrow. This augmented arrow points toward the designated path on the screen of a smartphone's camera.
Project Title:

Physical Cloud Generation Simulation

Picture of Physical Cloud Generation Simulation
Students:

Lev Tunik and Eric Kiel

Supervisors:

Revital Konch and Yaron Honen

Description:
We generate real-time volumetric clouds in Unity using ray marching. The simulation has many parameters that influence the look of the final result and can be controlled by the user. The final simulation can be imported as an easy-to-use Unity package.
Project Title:

Mindfulness in Virtual Reality

Picture of Mindfulness in Virtual Reality
Students:

Ola Awisat and Hala Awisat

Supervisors:

Michal Zivan and Yaron Honen and Boaz Sterenfeld

Description:
An application that allows the user to practice mindfulness in a virtual reality environment. It uses the Muse headband to measure real-time EEG data of the user, and triggers changes in the environment based on this data, creating a neurofeedback loop. The main purpose of the application is to help the user become more mindful in the long term.
Project Title:

Neuro feedback based Virtual reality for practicing mindfulness principles

Picture of Neuro feedback based Virtual reality for practicing mindfulness principles
Supervisors:
Description:
In this project we would like to extend the previous project by adding more channels of physiological input from the user, together with more sensory feedback to the user. In addition, new VR scenes will be designed.
The project’s main objectives are:
1. Getting to know the current system.
2. Integrating new physiological inputs of heart rate and breathing (additional outputs of the Muse 2) into the system software.
3. Integrating tactile sensory feedback to the user.
4. Creating new VR scenes.
Project Title:

Smart Reality

Picture of Smart Reality
Students:

Sagi Barazani and Adi Reznik

Supervisors:

Yaron Honen and Boaz Sterenfeld

Description:
In this project, we demonstrate a proof of concept for replacing everyday physical actions with actions performed via gestures in augmented reality. The demo was done with LIFX, a smart lamp, and Spotify, a music player. With these capabilities, children, people with disabilities, and adults can easily perform activities that previously required physical effort. The next step will be to connect more smart devices in our physical world so that we can control them through the virtual world.
Project Title:

RespiTrack - Respiratory patterns tracking Android App

Picture of RespiTrack - Respiratory patterns tracking Android App
Students:

Guy Berger and May Schwartz

Supervisors:

Alon Zvirin and Yaron Honen

Description:
A person’s breathing-pattern abnormalities play a major role in detecting many human diseases, such as asthma, acute respiratory failure, heart failure and more. Yet the detection and analysis of such pattern abnormalities is not as advanced as other areas of medicine, and in most cases relies on analysis by a human, which involves two major drawbacks that we aim to solve: Physical examination - the patient must arrive at a clinic and be examined physically by a medical staff member. Human error - the diagnosis is made by a medical staff member and is thus prone to human mistakes. The work being done by the GIP (Geometric Image Processing) Lab at the Technion in Haifa is the development of an innovative automated system and method to detect normal and abnormal breathing patterns by analyzing breathing-pattern pathologies and abnormalities in chest movement. In this project we develop an Android app that follows this methodology and aims to solve the drawbacks described above.
Project Title:

AR Shadow Device

Picture of AR Shadow Device
Students:

Almog Brand, Dani Ginsberg and Lior Wandel

Supervisors:

Yaron Honen and Boaz Sternfeld and Boris Van Sosin

Description:
Augmented reality shadows are unrealistic compared to the shadows of “real world” objects created by the “real world” light model. In order to build a light model that we can deploy to the AR system, we first needed a way to measure the room’s light.
Project Title:

Facial Expression Generation using GANs

Picture of Facial Expression Generation using GANs
Students:

Dima Birenbaum

Supervisors:

Yaron Honen and Gary Mataev

Description:
The main goal is to generate synthetic data for machine-learning projects that deal with facial emotion classification. Classifying images with multiple class labels using only a small number of labeled examples is a difficult task, especially when the label (class) distribution is imbalanced. In face emotion classification the label distribution is imbalanced because some classes of emotions are relatively rare compared to others; for example, disgust is rarer than happy or sad. In this work, we propose a data augmentation method using generative adversarial networks (GANs). It can complement and complete the data manifold and find better margins between neighboring classes. Specifically, for this task, we use classifiers based on Convolutional Neural Networks (CNNs) and variations of cycle-consistent adversarial networks: CycleGAN, Improved CycleGAN, and the Wasserstein CycleGAN. The CycleGAN is a direct implementation of the paper "Emotion Classification with Data Augmentation Using Generative Adversarial Networks". In order to improve the results and avoid problems we faced, we employed different variations of CycleGANs. We show that our empirical results obtain a ~5% increase in classification accuracy after employing the GAN-based data augmentation techniques.
Project Title:

VR Game

Picture of VR Game
Students:

Nitzan Winkler and Bar Neuman

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
We developed a workout VR game. The game includes a tutorial stage meant to teach the game, a training option with no timer for practice, and finally a stage where you play for one minute, trying to achieve the highest score you can.
2019
Project Title:

Bringing meditation to virtual reality

Picture of Bringing meditation to virtual reality
Students:

Ola Awisat and Hala Awisat

Supervisors:

Michal Zivan and Yaron Honen and Boaz Sterenfeld

Description:
This project is part of the collaboration between the Geometric Image Processing Lab (GIP) and the Center for Graphics and Geometric Computing (CGGC) at the Faculty of Computer Science at the Technion, and the Educational Neuroimaging Center (ENIC) at the Faculty of Education in Science and Technology at the Technion. The project aims to create an electroencephalography (EEG) based brain-computer interface (BCI) that provides an environment for practicing mindfulness in virtual reality. The system we developed employs a realistic virtual sea environment. It acquires EEG data in real time and determines the mental state of the user; it then triggers positive feedback to reinforce a mental state of relaxation, and negative feedback that brings the attention back to relaxing. EEG data acquisition was done using Muse, an EEG device that measures brain waves in real time and transmits them over Bluetooth. Muse uses signal processing methods to transmit spectral power in the alpha, beta, gamma, delta and theta frequency ranges along with the raw EEG data.
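A minimal Python sketch of such a neurofeedback loop, assuming the Muse is streamed over LSL (e.g. via the muse-lsl tool) and that relaxation is scored by an alpha/beta power ratio; the threshold, channel choice and window length are assumptions, not the project's parameters.

    import numpy as np
    from pylsl import StreamInlet, resolve_byprop

    def band_power(x, fs, lo, hi):
        """Average spectral power of signal x in the [lo, hi) Hz band."""
        f = np.fft.rfftfreq(len(x), 1.0 / fs)
        p = np.abs(np.fft.rfft(x - x.mean())) ** 2
        return p[(f >= lo) & (f < hi)].mean()

    streams = resolve_byprop('type', 'EEG', timeout=10)   # assumes muse-lsl is streaming
    inlet = StreamInlet(streams[0])
    fs, win = 256, []                                     # Muse samples EEG at ~256 Hz

    while True:
        sample, _ = inlet.pull_sample()
        win.append(sample[0])                             # one electrode channel
        if len(win) >= fs * 2:                            # 2-second analysis window
            x = np.array(win)
            relax = band_power(x, fs, 8, 12) / band_power(x, fs, 13, 30)  # alpha/beta
            print("positive feedback" if relax > 1.0 else "refocus on relaxing")
            win = win[fs // 2:]                           # slide the window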
Project Title:

Dancing Carpet

Picture of Dancing Carpet
Students:

Orel Haim, Shai Yaakovi and Lital Wexler

Supervisors:

Yaron Honen and Boaz Sterenfeld

Description:
As part of our B.Sc. in Computer Science, we developed a VR game based on the nostalgic Dancing Carpet game. Our application uses two trackers attached to the user's ankles and simulates a dancing carpet on the floor; the user plays and chooses preferences in a specially designed menu only by moving their feet. For an extraordinary user experience, the game contains a couple of different environments, difficulty levels and songs.
Project Title:

SignLens

Picture of SignLens
Students:

Leen Sruji and Aseel Sakas

Supervisors:

Yaron Honen and Boaz Sterenfeld

Description:
An AR application which translates sign language: it captures the signs from the gloves, translates them, and projects the result live in a Microsoft HoloLens headset. Project website: https://aseelsakas.wixsite.com/website
Project Title:

VR TETRIS

Picture of VR TETRIS
Students:

Naama Glicksberg, Guy Lazar

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
We are Guy and Naama, students in our last semester at the Computer Science faculty at the Technion. When we heard about the project in the lab, which involves building a virtual reality game, we were very excited and decided to choose it. When we thought of an idea for a game, Tetris came up - a simple, fun game that has been familiar to everyone for years. We thought that turning the game into 3D, so that the game cubes fly around the room in front of the player, could be very interesting and fun!
Project Title:

VR PACMAN

Picture of VR PACMAN
Students:

Batel Carmona, Mor Eliyahu and Lev Pechersky

Supervisors:

Yaron Honen and Boaz Sterenfeld

Description:
Pacman of Thrones is a VR remake of the classic Pacman game, using virtual reality and Game of Thrones motifs. In our game the user plays as Pacman, who controls the game and movement like in the classic Pacman game, but of course we added a few features of our own: now the user can jump, kill the ghosts, and move freely around the maze. We created a remake of the famous game Pacman in a 3D environment for the HTC VIVE. While developing the game, we tried to add multiple game mechanics in order to give the user a full experience of a true reality. We created this experience by adding animations of 3D objects and many player gestures which are close to reality, such as movement, jumping and attacking.
Project Title:

Real world physics VR experience

Picture of Real world physics VR experience
Students:

Gil Beresi, Gil Blinstein

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
Our main goal in this project was to simulate a real-life physics experience, using the HTC VIVE headset and trackers. We wanted to explore the virtual world by performing various actions using hand movements only, i.e., no triggers involved. We successfully met our goals by developing a reliable and efficient throwing algorithm that imitates a real-life throw in virtual reality.
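One simple way to estimate the release velocity of a throw from tracked positions is a per-axis least-squares line fit over the last few samples. The sketch below illustrates this idea; it is not the project's algorithm.

    import numpy as np

    def release_velocity(times, positions):
        """Estimate hand velocity at release by fitting a line (least squares)
        to the last few tracked positions; positions is an (N, 3) array of
        tracker samples and times their timestamps in seconds."""
        t = np.asarray(times) - times[0]
        P = np.asarray(positions)
        # the slope of the best-fit line per axis is that velocity component
        return np.array([np.polyfit(t, P[:, k], 1)[0] for k in range(3)])

    # e.g. v = release_velocity([0.00, 0.02, 0.04, 0.06],
    #                           [[0, 1.00, 0.0], [0.01, 1.05, 0.1],
    #                            [0.02, 1.12, 0.2], [0.03, 1.20, 0.3]])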
Project Title:

Shadow Games

Picture of Shadow Games
Students:

Or Eli Pilosof, Rafi Cohen, Yael Tsafrir

Supervisors:

Yaron Honen and Boaz Sterenfeld

Description:
As part of our BSc in Computer Science at the Technion, we heard about the AR experience with HoloLens and wished to try it ourselves. When we began exploring the possibilities of the HoloLens, we recalled one of our favorite childhood TV series: Yu-Gi-Oh!, in which the characters duel each other with monsters summoned using special cards. We created our own version of the Yu-Gi-Oh! game, where players use cards from the original card game to summon monsters in AR and duel against a computerized opponent. To make the AR experience more alive and real, we added shadows to the monsters in the game, and gave the player control over the direction of the light in the scene.
Project Title:

PianoAR

Picture of PianoAR
Students:

Gal Shalom, Ariel Iny, Alex Bondar

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
As B.Sc. students in Computer Science, we aspire to use computers and software to make complex tasks simpler and easier. We decided to develop an augmented reality application that teaches people to play the piano, using the HoloLens as our AR development platform. Our application uses the HoloLens' webcam to recognize the piano keyboard, and image processing to detect the press of a key. While learning to play a song, the application colors the next key to press and indicates with a color change whether the correct key was pressed. We hope this application will be an example of the possibilities the AR world has to offer.
Project Title:

Leap Motion Matrix - Multiple Devices

Picture of Leap Motion Matrix - Multiple Devices
Students:

Shay Dadon, Daniel Mines

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
While browsing Leap Motion desktop applications we noticed that they all have a limited field of view. When we saw the new multiple-device beta API release, we decided to try to improve on this. As part of the project we synchronized the devices, built a wooden matrix for the cameras, built a 3D model as a case for the cameras, and dealt with numerous engineering issues with the matrix and the cameras, as well as some beta-version-related programming issues. In the end, we succeeded in expanding the field of view linearly with the number of camera devices. https://danielmines111.wixsite.com/matrix
Project Title:

Facial Expression Classification

Picture of Facial Expression Classification
Students:

Daniel Ohayon

Supervisors:

Elad Richardson and Yaron Honen

Description:
In this project we implement a facial expression classifier that matches the user's facial expression to the corresponding emoji. The project, called Keymoji++, is meant to expand upon the original Keymoji project, in which we created an Android keyboard that uses the phone's front-facing camera to detect whether the user's expression is happy, sad, surprised or angry, and recommends the corresponding emojis above the keyboard. In this project we expanded upon that idea and added support for many more kinds of emojis.
Project Title:

Jungle Escape VR

Picture of Jungle Escape VR
Students:

Lorella Matathia, Tal Albo, Dan Simah

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
As part of our B.Sc. in Computer Science, we were curious about the virtual reality world. When we tried the Oculus technology for the first time, we immediately thought about the possibilities for a VR RPG game and the amazing games that could come of it. We created an "escape room"-like game with an emphasis on creating generic building blocks for future VR applications. We truly hope that this game will inspire the next generation of RPG gaming and help with the development of VR applications. Furthermore, we would like to thank the GIP & CGGC staff, and especially Boaz Sterenfeld and Yaron Honen.
Project Title:

Cucumber Detection and Segmentation

Picture of Cucumber Detection and Segmentation
Students:

Asher Yartsev, Or Shemesh and Simon Lousky

Supervisors:

Alon Zvirin

Description:
This project tackles the task of segmentation and indexing of cucumber plant parts. The approach is to implement an end-to-end workflow for partially annotated and relatively small datasets. Mask R-CNN, from Facebook Research, has proven very efficient at this task even on small datasets, so the focus is on maximizing the effectiveness of a problematic dataset on different levels. This report discusses partially annotated datasets and inconsistent annotation styles. The conclusion tends towards a trade-off between investing in dataset quality and using augmentation acrobatics to quickly use the existing data. There is no doubt that consistent, fully annotated real pictures will get the most out of the network.
Project Title:

BGlove

Picture of BGlove
Students:

Aviel Simchi and Amit Solomon

Supervisors:

Boris van Sosin and Yaron Honen and Boaz Sternfeld

Description:
Although the development and use of VR and AR technologies has grown in recent years, there is not yet a cheap and precise solution to the finger-tracking problem. In our project, we sought to create a solution to this problem using IMUs (Inertial Measurement Units) located on the fingertips and the back of the hand. With each IMU we were able to create a three-dimensional axis system and thus determine the relative orientation of the hand and of each finger with respect to the ground and to each other. As the hardware team, we provided a glove with the abilities described above and a C# adapter that can be used in a Unity environment.
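The core orientation arithmetic can be sketched in a few lines: given absolute unit quaternions for the back-of-hand IMU and a fingertip IMU, the finger's orientation relative to the hand is q_hand^-1 * q_finger. A small numpy illustration (not the project's C# adapter):

    import numpy as np

    def q_conj(q):
        """Conjugate of a quaternion q = [w, x, y, z]."""
        return np.array([q[0], -q[1], -q[2], -q[3]])

    def q_mul(a, b):
        """Hamilton product of two quaternions [w, x, y, z]."""
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])

    def finger_relative_to_hand(q_hand, q_finger):
        """Orientation of a fingertip IMU expressed in the frame of the
        back-of-hand IMU (both inputs are unit quaternions)."""
        return q_mul(q_conj(q_hand), q_finger)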
Project Title:

Bounce It

Picture of Bounce It
Students:

Michal Guttmann and Amit Shuster

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
We developed a VR game inspired by several games, including BBTAN by 111% (a mobile game), Hurl VR (a VR game) and Let's Bounce (an arcade game). Our application simulates several environments with diverse difficulty levels. In each level, colored cubes are organized in a different configuration, and the player's main goal is to throw balls and hit all the colored cubes; when a cube is hit, it turns black. The more balls the player uses, the lower the score, so the player strives to minimize the number of balls used. Scores range from 1 to 3 stars, with 3 stars being the best. The player can enjoy both the game and the VR experience using VR goggles, trackers and gloves. The application gives the player an extraordinary experience that engages several senses in pursuit of a high score.
Project Title:

Facial Reconstruction from Video

Picture of Facial Reconstruction from Video
Students:

Maxim Aslyansky and Volodymyr Polosukhin

Supervisors:

Ron Slossberg and Elad Richardson

Description:
The goal of the project is to develop an algorithm for face reconstruction from monocular RGB video. Recently it has been shown that neural networks can achieve accurate results for single-image facial reconstruction, outside the scope of limited linear models. In this project we will explore the world of 3D Face Reconstruction from video by using the following approaches: 1. Extending the 2.5D output of current single-image methods with state-of-the-art geometric algorithms for dynamic 3D reconstruction to incrementally build a complete high-quality facial model. 2. Experimenting with novel deep learning architectures to directly regress 3D models from images and also using recurrent neural networks to make use of temporal information.
Project Title:

Translating in Style

Picture of Translating in Style
Students:

Nir Chachamovitz and Ilan Doron

Supervisors:

Elad Richardson

Description:
In this project we introduce an implementation of an open-source system for automatically recognizing and translating text in pictures, posters, road signs, etc. Unlike existing projects, we focused on preserving the original text style in order to create visually pleasing results. In addition, effort was made to maintain and preserve the original background of text-containing regions within the images. The project explores deep models for text detection and recognition - we used already-trained text detection and recognition networks, and a few more deep models in several steps of the pipeline - as well as style transfer and image blending techniques, including a perspective transformation technique and other image blending tools.
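The compositing step can be illustrated with a short Python/OpenCV sketch, assuming the rendered translation patch and the four corners of the original text region are already known (both are assumptions here):

    import cv2
    import numpy as np

    def paste_translation(scene, patch, quad):
        """Warp a rendered translation patch onto the text region quad
        (4 corner points in the scene, clockwise from top-left) and blend it
        so the surrounding background is preserved."""
        h, w = patch.shape[:2]
        src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        M = cv2.getPerspectiveTransform(src, np.float32(quad))
        warped = cv2.warpPerspective(patch, M, (scene.shape[1], scene.shape[0]))
        mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), M,
                                   (scene.shape[1], scene.shape[0]))
        cx, cy = np.mean(quad, axis=0)
        # Poisson (seamless) blending anchored at the region center
        return cv2.seamlessClone(warped, scene, mask, (int(cx), int(cy)),
                                 cv2.NORMAL_CLONE)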
Project Title:

Dronet

Picture of Dronet
Students:

Netanel Lev

Supervisors:

Ron Slossberg and Roman Rabinovitz

Description:
An autonomous drone that pops balloons.
Project Title:

Efficient Restoration By Compression

Picture of Efficient Restoration By Compression
Students:

Yuval Shildan, Nevo Agmon, Danny Priymak

Supervisors:

Yehuda Dar

Description:
When dealing with signal compression, most compression algorithms optimize the reconstructed output with respect to the acquired input signal. A more general approach would be optimizing the compression algorithm using the signal prior to the acquisition phase as input, such that the final decompressed output is optimal. Ideally, this approach could utilize knowledge about specific acquisition devices to further optimize the compression. For example, given a known degradation model of a specific camera as an acquisition device (which can be easily modeled by the manufacturer), the presented approach could yield a highly optimized compressed result, which can be used either to transmit a signal of higher quality over the same infrastructure or to deliver the same quality using fewer resources. Dar et al. [1] proposed an algorithm for joint restoration and compression of images, on which we rely in this work. This algorithm was implemented in MATLAB, which, due to MATLAB's overhead, opened up the possibility of significant efficiency improvements. For this reason, we chose to focus our efforts on optimizing the implementation's runtime demands, while considering software engineering and object-oriented design as top priorities. [1] Y. Dar, M. Elad, and A. M. Bruckstein, “Restoration by Compression,” IEEE Transactions on Signal Processing, vol. 66, no. 22, pp. 5833–5847, 2018.
Project Title:

Virtual Keyboard

Picture of Virtual Keyboard
Students:

Tarik Sirhan, Sabah Saloty

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
A VR keyboard application for typing text, using ManusVR gloves that provide realistic interaction for the user. The keyboard contains the English alphabet, numbers, emojis and special keys such as Enter, Caps Lock, Backspace, etc. The application may be added to VR games to serve as a chat interface between players, or be used for getting input from the user.
Project Title:

SpidermanVR

Picture of SpidermanVR
Students:

Adam Elgressy & Dmitry Vlasenko

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
Our main goal was to simulate a first-person Spiderman experience using the HTC Vive Pro kit. We also set a goal for ourselves: to learn about the new and evolving world of VR and gain hands-on experience with the newest technology, alongside getting to know the Unity environment. We successfully made a VR application which simulates a first-person experience of one of the most famous superheroes of all time, Spiderman. We made it possible to walk, climb, and swing from a web shot from the in-game hand, all while staying stationary in one place yet feeling like you can explore the whole world. We simulated a physics system which makes the experience feel authentic, from free-fall acceleration to friction with different surfaces, and more. For future work we would suggest adding different scenarios, a leaderboard, and a tutorial room.
Project Title:

Carnet

Picture of Carnet
Students:

Netanel Lev & Eitan Kosman

Supervisors:

Ron Slossberg

Description:
We developed an end-to-end autonomous car system that operates in a driving simulator.
2018
Project Title:

Keymoji

Picture of Keymoji
Students:

Daniel Ohayon, Oren Afek

Supervisors:

Elad Richardson

Description:
Humans have always communicated nonverbally using their body movements and facial expressions; those gestures tend to deliver the speaker's message and feelings much more effectively than the words spoken. Nowadays, with instant messaging becoming the common way of communication, those traditional gestures are represented by tiny faces called emojis, each representing the sender's emotion or state of mind. Those emojis are manually selected by the sender. We offer a new, automated way of using emojis: a fully functioning Android keyboard that captures a shot of the sender's face at a given time, analyzes their expression and state of mind, and offers an emoji that represents their feeling.
Project Title:

Rubik’s Cube Trainer VR

Picture of Rubik’s Cube Trainer VR
Students:

Denys Svyshchov, Alexander Gurevich

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
Rubik’s Cube Trainer is a VR application whose main purpose is step-by-step learning of how to solve a Rubik’s Cube, with the opportunity to solve the cube against the clock and compete with friends over who is the best cube solver. The game uses VR gloves, allowing the player realistic interaction and a better understanding of the solving process.
Project Title:

PicaSsoVR

Picture of PicaSsoVR
Students:

Eden Ben Oz & Ori Shem-Tov

Supervisors:

Yaron Honen and Boaz Sterenfeld

Description:
We developed a VR environment for drawing in 3D. The player can draw with both hands using controllers, and can edit their drawings by resizing, moving or deleting them. The player gets a unique drawing experience, such that they can draw “on air”, walk through their drawings, catch and move them, and so on.
Project Title:

Kinetic Painting

Picture of Kinetic Painting
Students:

Yonatan Sherwin, Roee Peleg

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
Kinetic Painting is a unique VR application for drawing using hand gestures. With no physical controllers at all, you are able to choose between two main painting methods. The user can start a painting, erase it, change the level, change the paint colors, and even create special effects. Using the Leap Motion controller, you can control and activate the application with a variety of hand gestures.
Project Title:

Advanced whack a mole

Picture of Advanced whack a mole
Students:

Or Gitli, Mirit Alush, Tal Pilo

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
Advanced Whack A Mole is a VR game based on the common arcade game “Whack A Mole”. The game includes two levels. The first is a classic version of the game, well known from 90’s arcade rooms; the player hits with a hammer that appears when they make a fist with their right hand. The second is an advanced version of “Whack A Mole” with a horizontal view of about 120 degrees; the player stands in front of seven pillars, ghosts come through them, and the goal is to hit the ghosts with the fists.
Project Title:

VR Duck Hunt

Picture of VR Duck Hunt
Students:

Snir Zango, Rotem Ohana, Or Harchol

Supervisors:

Yaron Honen and Boaz Sterenfeld

Description:
A VR game based on the nostalgic game "Duck Hunt". The player's main goal is to kill as many ducks as possible and earn points; the speed of hitting the ducks is also a factor. We added another environment to the game (that wasn't in the original), a zombie mode, and adjusted both to the VR experience.
Project Title:

OLM - object location memory

Picture of OLM - object location memory
Students:

Bar Neuman, Nitzan Winkler, Jonathan Weizman

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
A VR platform for research, built following a request by a psychology researcher from the Emek Izrael institution. The experiment focuses on the subject's ability to memorize the objects in the environment and identify changes in their positions. The environment simulates a savanna with bushes meant to be memorized and foxes as a distraction.
Project Title:

Matrix

Picture of Matrix
Students:

Elinor Feller

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
The Matrix rain effect was introduced in the film The Matrix, released in 1999, as one of the film's many novel visual effects. In this project, I recreated the Matrix rain code effect by creating a shader in Unity.
Project Title:

Tomato Classification using DL

Picture of Tomato Classification using DL
Students:

Shak Morag, Eitan K

Supervisors:

Alon Zvirin

Description:
Diagnosis of crop phenotypes is a widespread problem that affects the lives of millions around the world. In our project, our goal was to create a system for classifying different parts of images in order to achieve classification and segmentation of significant plant parts. Our methods included applying neural networks: we tested a few types of networks, mainly classifiers and encoder-decoders. Using the classifier, we achieved very high accuracy (97%-98%) when classifying parts of the image. Later, using the encoder-decoder architecture, we shortened the time required to segment an image from minutes to a few seconds.
Project Title:

360 Video Editing

Picture of 360 Video Editing
Students:

Daniel Weisberg Mitelman, Lorella Matathia

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
360 Video Editing is a unique VR tool that lets you edit a video without losing the special 360 experience. With the controllers, you can interact with the editor: delete frames, show specific frames and export the edited video. The main functions the editor provides: • Displaying a video on a sphere • Dividing the video into sub-frames • Displaying sub-frames • Splitting video • Showing effects • Exporting the final video
Project Title:

Bit Saber

Picture of Bit Saber
Students:

Sapir Mordoch, Bar Uliel, Moran Nisan

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
Beat Saber is a unique VR game where your goal is to slash the beats (represented by small cubes) as they come at you. The game includes several songs, each with a different difficulty. The player uses VR motion controllers to wield a pair of light sabers and slash the blocks. Each block is colored red or blue to indicate whether the red or blue saber should be used to slash it. When a block is slashed with the matching saber, the block is cut and the player gets a point; if the player slashes a block with the opposite saber, the score is reset.
Project Title:

Unity Multiplayer Game

Picture of Unity Multiplayer Game
Students:

Michael Gont, Kiril Gont

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
A multiplayer space sim game. Server-Client communication implemented from scratch on top of Unity's low-level networking API (Transport Layer API). Movement is smoothed with linear interpolation.
Project Title:

3D VR Paint

Picture of 3D VR Paint
Students:

Shlomit Sibony, Ran Mansoor, Yuval Shildan

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
Create your art with our 3D VR paint, using the latest technologies including the Manus-VR gloves (wearables) and the HTC Vive. We offer an intuitive and powerful platform with which you can paint with both hands, control the painting transform and choose the colors you wish to paint with.
Project Title:

Quad-copter GIP Vision Control Framework

Picture of Quad-copter GIP Vision Control Framework
Students:

Nathan Sala and Ofir Zelig

Supervisors:

David Dovrat and Ohad Menashe and Roman Rabinovich

Description:
Drones can be used for various tasks and can replace humans in many aspects of our lives. Our goal was to create a system that enables controlling a drone using computer vision, so that it can fly autonomously. Eventually, we created a framework that wraps all the parts controlling the drone's flight functionality, so that future projects using it can focus only on computer vision.
Project Title:

VR Floor Planner

Picture of VR Floor Planner
Students:

Netanel Lev & Dolev Ben Ami

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
Design, explore and share floor plans in VR.
Project Title:

Child facial expression detection

Picture of Child facial expression detection
Students:

Eden Benhamou and Deborah Wolhandler

Supervisors:

Alon Zvirin & Michal Zivan

Description:
We present an examination of facial expression detection of children in two different study environments: joint storytelling and yoga. We analyzed videos of preschool children from the ENIC lab filmed over six months. Our data analysis combines face detection algorithms, artificial neural networks designed for emotion recognition, a face recognition algorithm, and image processing tools for tracking. We present results of child facial expressions during the recorded video sessions. This project was made in collaboration between the ENIC and GIP labs.
Project Title:

HoloLens Face Emotion Detection

Picture of HoloLens Face Emotion Detection
Students:

Ilya Smirnov, Dani Ansheles, Denis Turov

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
The project presents a face emotion detection algorithm that runs on the Microsoft HoloLens device. Its purpose is to provide the user with the detected emotion in real time, as a smiley attached near each visible face. The project could be used to help people who have difficulty identifying and describing emotions by themselves; such people will be able to understand other people's emotions through the attached smiley.
Project Title:

An Extrapolated Dynamic String Averaging Method

Picture of An Extrapolated Dynamic String Averaging Method
Students:

Victor Kolobov

Supervisors:

Simeon Reich and Rafał Zalas

Description:
The project deals with projection methods which have their origin in computerized tomography and image processing. These methods solve an optimization problem which seeks a point in the intersection of convex (constraint) sets. They do so by an iterative application of operators to the approximation point. We present a certain acceleration technique called "extrapolation". This technique allows the algorithms to take bigger steps and accelerates their convergence. We show theoretical results for the convergence of methods considered in the string averaging framework. In addition, we present a library that was developed for MATLAB which may allow researchers in the field to devise and experiment with different projection methods.
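To make the idea concrete, here is a small numpy illustration of string averaging with a simple fixed-factor extrapolation on halfspace constraints; the project's extrapolation technique is more refined, so treat this as a toy example only.

    import numpy as np

    def proj_halfspace(x, a, b):
        """Project x onto the halfspace {y : <a, y> <= b}."""
        viol = a @ x - b
        return x if viol <= 0 else x - (viol / (a @ a)) * a

    def string_averaging_step(x, strings, weights):
        """One iteration: apply each 'string' (a sequence of projections)
        to x, then average the string endpoints with the given weights."""
        ends = []
        for string in strings:
            y = x
            for (a, b) in string:
                y = proj_halfspace(y, a, b)
            ends.append(y)
        return sum(w * e for w, e in zip(weights, ends))

    def extrapolated_step(x, strings, weights, lam=1.8):
        """Extrapolation: move beyond the averaged point by a factor
        lam >= 1, taking a bigger step to accelerate convergence."""
        t = string_averaging_step(x, strings, weights)
        return x + lam * (t - x)

    # Example: seek a point satisfying x1 + x2 <= 1 and x1 - x2 <= 0 in R^2.
    x = np.array([5.0, -3.0])
    S = [[(np.array([1.0, 1.0]), 1.0)], [(np.array([1.0, -1.0]), 0.0)]]
    for _ in range(50):
        x = extrapolated_step(x, S, [0.5, 0.5])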
Project Title:

Symbolic Autoencoder

Picture of Symbolic Autoencoder
Students:

Oryan Barta

Supervisors:

Alon Zvirin and Yaron Honen

Description:
Developing effective unsupervised learning techniques is an essential stepping stone towards next generation machine learning models. Such models would no longer be bottlenecked by their dependence on massive labeled datasets which are often difficult or impossible to obtain. We propose a novel architecture for deep feature extraction from unlabeled data and intelligent labeling of data using an implicitly defined and learned symbolic language. The model can then be used in a semi-supervised context to reduce the amount of labeled data necessary for training.
Project Title:

Augmented Emergency Response

Picture of Augmented Emergency Response
Students:

Avi Kotlicky, Arik Rinberg

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
We developed a 3D augmented reality multi-user emergency response application. Our application allows all users to simultaneously see the same scene, while a command center on a PC sees the users and their movements displayed on a real-time map. The command center can communicate with all the users in the scene, display hologram warnings in specific areas and manage all the users. The users may send warnings and request help from the command center and other users, and a user-location system allows them to keep track of their team.
Project Title:

Architecture VR

Picture of Architecture VR
Students:

Sami Abdu and Liza Rakhlina

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
For many architects, the biggest challenge is often giving the client a clear understanding of how the design will look in real life. The idea behind the project is to visualize the Interior and the Exterior design of an architecture model in VR, so that clients could gain an understanding of how a design will look to scale in a fully 3D environment.
Project Title:

Holomessages

Picture of Holomessages
Students:

Eran Tzabar

Supervisors:

Yaron Honen and Boaz Sternfeld

Description:
HoloMessages is an augmented reality system for sending and receiving office messages: using the HoloLens, by scanning a QR code at the entrance to an office you can load the occupant's information and leave a message. New technologies enable users to live in a new and interactive mixed reality world, and the goal is to take a simple and necessary office operation - sending and receiving messages - and integrate it into that world. This provides a much more comfortable workplace which extends your working environment beyond your computer screen to the whole room, similar to the way sticky notes on your desk extend your working environment.
Project Title:

Next Generation Image Style Transfer via CNN

Picture of Next Generation Image Style Transfer via CNN
Students:

Adar Elad & Mark Erlich

Supervisors:

Alona Golts

Description:
Image style transfer refers to an artistic process in which a content image is modified to include the style taken from a second image. Early signs of this idea appeared in the early 2000's, but the real breakthrough came with the impressive work of Leon Gatys and his co-authors. Their method relies on a pre-trained neural network that was trained for image recognition. Our project focuses on expanding and improving on the prime weaknesses of Gatys' algorithm: (i) preservation of edges; (ii) enabling treatment of large images; and (iii) speeding up the whole transfer process.
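The heart of Gatys-style transfer is matching Gram matrices of deep features. A minimal PyTorch sketch of that style loss follows; the feature maps are assumed to come from selected layers of a pre-trained recognition network such as VGG.

    import torch

    def gram(features):
        """Gram matrix of a (B, C, H, W) feature map: channel-to-channel
        correlations, which capture the style statistics of Gatys et al."""
        b, c, h, w = features.shape
        f = features.reshape(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    def style_loss(gen_feats, style_feats):
        """Sum of squared Gram-matrix differences over the chosen layers.
        Both arguments are lists of feature maps (generated image vs.
        style image) taken from the same pre-trained network layers."""
        return sum(torch.mean((gram(g) - gram(s)) ** 2)
                   for g, s in zip(gen_feats, style_feats))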
Project Title:

RGB camera based heart-rate estimation using Eulerian Video Magnification (EVM)

Picture of RGB camera based heart-rate estimation using Eulerian Video Magnification (EVM)
Students:

Yonatan Amir, Doron Armon, Neriya Mazzuz

Supervisors:

Ron Slossberg and Gil Shamai

Description:
The purpose of our project is to extract information and relevant medical indicators of a potential patient using an RGB camera. We implemented a few techniques to extract pulse in real time using a standard web camera, based on a number of published works; the main one is "Eulerian Video Magnification for Revealing Subtle Changes in the World", published at MIT. In addition, we created a visualization of the subtle changes in the patient's facial blood flow.
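The pulse-extraction step can be sketched as band-pass filtering the mean green-channel signal of the face region and reading off the dominant frequency. The band limits below are common choices for this kind of analysis, not necessarily the project's parameters.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def estimate_bpm(green_means, fps):
        """Estimate pulse (beats per minute) from a 1D signal of per-frame
        mean green-channel values of the face region, sampled at fps Hz."""
        x = np.asarray(green_means, dtype=float)
        x -= x.mean()
        # band-pass to the plausible heart-rate band, 0.7-4 Hz (42-240 bpm);
        # assumes fps is well above 8 (e.g. a 30 fps webcam)
        b, a = butter(3, [0.7 / (fps / 2), 4.0 / (fps / 2)], btype="band")
        x = filtfilt(b, a, x)
        # dominant frequency of the filtered signal
        freqs = np.fft.rfftfreq(len(x), 1.0 / fps)
        spectrum = np.abs(np.fft.rfft(x))
        return 60.0 * freqs[np.argmax(spectrum)]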
Project Title:

Real-Time 3D Face Reconstruction

Picture of Real-Time 3D Face Reconstruction
Students:

Shadi Endrawis

Supervisors:

Matan Sela

Description:
Fast and robust three-dimensional reconstruction of facial geometric structure from a single image is a challenging task with numerous applications. In this project, I implemented and experimented with iterative CNN models for extracting geometric structure from a single image. The next step was to apply the model to videos, and from there to optimize the system for real-time 3D face reconstruction using a webcam.
Project Title:

Deep Learning of Compressed Sensing Operators with Structural Similarity Loss

Picture of Deep Learning of Compressed Sensing Operators with Structural Similarity Loss
Students:

Yochai Zur

Supervisors:

Amir Adler

Description:
Compressed sensing (CS) is a signal processing framework for efficiently reconstructing a signal from a small number of measurements, obtained by linear projections of the signal. In this paper we present an end-to-end deep learning approach for CS, in which a fully-connected network performs both the linear sensing and non-linear reconstruction stages. During the training phase, the sensing matrix and the non-linear reconstruction operator are jointly optimized using the Structural Similarity Index (SSIM) as the loss, rather than the standard Mean Squared Error (MSE) loss. We compare the proposed approach with the state of the art in terms of reconstruction quality under both losses, i.e. SSIM score and MSE score. Source code available here: https://github.com/yochaiz/SSIM
Project Title:

VR Fruit Ninja

Picture of VR Fruit Ninja
Students:

Hadar Ben Efraim, Dror Levy

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
We developed a VR game based on the popular game "Fruit Ninja". Our application simulates several environments. The player's main goal is to win as many points as possible by cutting fruits and avoiding different obstacles. We added a few more features so the player can enjoy both the game and the VR experience. The application gives the player an extraordinary experience that engages several senses in pursuit of a high score.
Project Title:

FindIt - Object Detection and Tracking on HoloLens

Picture of FindIt - Object Detection and Tracking on HoloLens
Students:

Sefi Albo, Bar Albo

Supervisors:

Yaron Honen and Boaz Sternfeld

Description:
FindIt is a HoloLens application for finding objects in a home environment. The idea is to help us find missing things at home by tracking the objects around us and remembering their locations; the app also tracks objects as they change location in the room. The app has 3 modes: - Scan – scan the room to find objects. This mode initializes the app's knowledge about the objects the user wants to track; the app takes photos which are later processed by an object detection model to detect the objects in the room. - View – view the scan results. In this mode, the user can see the detected objects and their locations. - Find – find the objects and track changes. This is the main usage of the app: when the user says "Find -Object Name-", a box appears at the exact location of the real object, and the navigation system guides the user to it.
Project Title:

VR wizard hunting

Picture of VR wizard hunting
Students:

Alex Salevich, Edi Frankel

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
Our project's goal is to present a new way of interacting with an application via recognition of motion and shape instead of menus and buttons. Our solution links the controller's motion in 3D space to a certain behavior in the application. We implemented the idea as a game in which the user plays a wizard who can draw spells as shapes; for each recognized shape, a predefined spell is activated.
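Shape recognition of drawn spells can be illustrated with a small template matcher in the spirit of the $1 recognizer (resample, normalize, nearest template); this is a hypothetical sketch, not the project's implementation.

    import numpy as np

    def normalize_stroke(points, n=32):
        """Resample a drawn stroke to n points, then remove translation and
        scale, so strokes can be compared point-to-point."""
        p = np.asarray(points, dtype=float)
        # cumulative arc length, then resample at equal intervals along it
        d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(p, axis=0), axis=1))]
        t = np.linspace(0, d[-1], n)
        res = np.column_stack([np.interp(t, d, p[:, k]) for k in range(p.shape[1])])
        res -= res.mean(axis=0)                          # remove translation
        scale = np.linalg.norm(res, axis=1).max()
        return res / (scale if scale > 0 else 1.0)       # remove scale

    def recognize(stroke, templates):
        """Return the name of the closest spell template (mean point distance)."""
        s = normalize_stroke(stroke)
        dists = {name: np.linalg.norm(s - normalize_stroke(t), axis=1).mean()
                 for name, t in templates.items()}
        return min(dists, key=dists.get)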
Project Title:

Hologram Interface

Picture of Hologram Interface
Students:

Alex Salevich

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
The old version of this project was a stand-alone application with fixed models and no control over the presentation of the holograms. The goal of this project was to extend the application in two ways: 1) the app now has a user interface for controlling the presentation on the computer it runs on; 2) the app acts as a server that clients can connect to in order to control the main features of the application. To make this work, a client-side application was also developed as part of this project.
Project Title:

ExpressionMime

Picture of ExpressionMime
Students:

Dor Granat, Ran Yehezkel

Supervisors:

Matan Sela

Description:
The purpose of our project was to create an application that can detect faces and estimate their pose and expressions. In real time, we can identify faces and their poses in a video, and render different 3D face models with various expressions over the original faces.
Project Title:

Faster and Lighter Online Sparse Dictionary Learning

Picture of Faster and Lighter Online Sparse Dictionary Learning
Students:

Shay Ben-Assayag & Omer Dahary

Supervisors:

Jeremias Sulam

Description:
Sparse representation has been shown to be a very powerful model for real-world signals, and has enabled the development of applications with notable performance. Combined with the ability to learn a dictionary from signal examples, sparsity-inspired algorithms often achieve state-of-the-art results in a wide variety of tasks. However, most existing methods are restricted to small dimensions, mainly due to the computational constraints that the dictionary learning problem entails. In the context of image processing, this implies handling small image patches instead of the entire image. A novel work which has circumvented this problem is the recently proposed Trainlets framework, where the authors proposed the Online Sparse Dictionary Learning (OSDL) algorithm, which is able to efficiently handle bigger dimensions. This approach is based on a double sparsity model which uses a new cropped Wavelet decomposition as the base dictionary, and an adaptive dictionary learned from examples by employing Stochastic Gradient Descent (SGD) ideas. In continuation of this work, which has shown that dictionary learning can be scaled up to tackle a new level of signal dimensions, our project is focused on studying and improving OSDL. In this report, we present several modifications to the algorithm aimed at dealing with its limitations, results of experiments conducted on high-dimensional large datasets, our conclusions and suggestions for future work.
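For readers who want a runnable taste of the setting (though not OSDL itself), scikit-learn ships an online, mini-batch dictionary learning estimator that can be run on image patches; the patch size, dictionary size and sparsity level below are arbitrary choices.

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import extract_patches_2d

    # illustrative stand-in for an image; in practice load a real grayscale image
    image = np.random.rand(128, 128)

    patches = extract_patches_2d(image, (8, 8), max_patches=2000)
    X = patches.reshape(len(patches), -1)
    X -= X.mean(axis=1, keepdims=True)          # remove the per-patch DC component

    # online (mini-batch) dictionary learning with OMP sparse coding
    dico = MiniBatchDictionaryLearning(n_components=100, batch_size=64,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=5)
    codes = dico.fit(X).transform(X)            # sparse codes of each patch
    recon = codes @ dico.components_            # reconstructed patches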
Project Title:

War Room

Picture of War Room
Students:

Natan Yellin, Ay Kay, Dima Trushin

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
We developed a 3D Augmented Reality Multi-User application. Our application allows all users to simultaneously see the same scene at different angles. The Users are able to cooperatively manipulate objects and plan events in the scene.
2017
Project Title:

AR Museum

Picture of AR Museum
Students:

Roi Glink, Shay Michali, Elad Alon

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
We developed an Oculus Android GearVR application with both entertainment and educational purposes. Our application mainly offers AR gameplay while still integrating some VR scenes, thus creating a mixed reality experience. The user can stroll at any location with the headset and, with minimal preparation of a specific set of images and key points, visit an "AR Museum". The application replaces these images with art paintings, offering the user various options to experience in one room. Additionally, the application offers both VR and AR interactions: the user can control the navigation of scenes and tailor the content of the museum to their points of interest. For example, the user can choose to see Van Gogh paintings and the museum will be all Van Gogh; afterwards, with only one click, the user can switch to Escher paintings, and so on.
Project Title:

Image Inpainting Using Pre-trained Classication-CNN

Picture of Image Inpainting Using Pre-trained Classication-CNN
Students:

Adar Elad, Yaniv Kerzhner

Supervisors:

Yaniv Romano

Description:
Image inpainting is an extensively studied problem in image processing, and various tools have been brought to serve it over the years. Recently, effective solutions to this problem based on deep learning have been added to this impressive list. This work offers a novel and unconventional solution to the image inpainting problem, still in the context of deep learning. As opposed to directly training a CNN to fill in missing parts of images, this work promotes a solution based on a pre-trained classification-oriented CNN. The proposed algorithm is based on the assumption that such CNNs have memorized the visual information they operate upon, and this can be leveraged for our inpainting task. The main theme in the proposed solution is the formulation of the problem as an energy-minimization task in which the missing pixels in the input image are the unknowns. This minimization aims to reduce the distance between the true image label and the one resulting from the network operating on the completed image. A critical observation in our work is the fact that for better inpainting performance, the pre-training of the CNN should be applied on small portions of images (patches), rather than the complete images. This ensures that the network assimilates small details in the data, which are crucial for the inpainting needs. We demonstrate the success of this algorithm on two datasets, MNIST digits and face images (Extended Yale B), showing in both cases that the method operates very well.
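A minimal PyTorch sketch of the energy-minimization idea described above: the classifier is frozen and only the missing pixels are optimized so that the completed image receives its true label. The initialization, step count and learning rate are assumptions, not the paper's settings.

    import torch
    import torch.nn.functional as F

    def inpaint_by_classification(model, image, mask, label, steps=300, lr=0.05):
        """Fill missing pixels (mask == 1) of image so that a frozen,
        pre-trained classifier assigns the known label.
        image: (1, C, H, W) tensor; mask: same shape, 1 where pixels are missing."""
        model.eval()
        for p in model.parameters():
            p.requires_grad_(False)                 # only the pixels are unknowns
        missing = torch.rand_like(image) * mask     # random initialization of the hole
        missing.requires_grad_(True)
        opt = torch.optim.Adam([missing], lr=lr)
        target = torch.tensor([label])
        for _ in range(steps):
            opt.zero_grad()
            completed = image * (1 - mask) + missing * mask
            # energy: distance between the true label and the network's output
            loss = F.cross_entropy(model(completed), target)
            loss.backward()
            opt.step()
            with torch.no_grad():
                missing.clamp_(0, 1)                # keep pixel values valid
        return (image * (1 - mask) + missing * mask).detach()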
Project Title:

Prism Hologram

Picture of Prism Hologram
Students:

Marina Minkin

Supervisors:

Boaz Sterenfeld and Yaron Honen

Description:
As part of the project I created holograms: I built a prism to redirect rays of light, and wrote code to transform inputs into the output format that can be reflected by the prism. The input can be either a 3D model or a webcam feed.
Project Title:

Quad-copter Remote Control

Picture of Quad-copter Remote Control
Students:

Sapir Cohen, Nir Daitch

Supervisors:

David Dovrat

Description:
In this project, our goal is to give the user a simple control scheme to fly a drone. The possible commands are take-off, hover and land. As we will demonstrate, we were able to control our UAV using a small, affordable and easy-to-operate remote controller. Also, using the Raspberry Pi as our computing module, we have created a portable solution that can be mounted on the drone itself.
Project Title:

Multi AR - Real-Time Augmented Reality Multiplayer

Picture of Multi AR - Real-Time Augmented Reality Multiplayer
Students:

Michael Pekel, Ofir Elmakias

Supervisors:

Matan Sela and Boaz Sterenfeld & Yaron Honen

Description:
An augmented reality real-time multiplayer game where the player is literally part of the game: they can fight creatures or other real players, do quests together, and interact indoors and outdoors. The project was based on the Google Tango platform, acquiring data about the environment (walls, planes) in real time along with a precise (less than 1 meter) relative position. The game mechanics include: ‣ Full multiplayer ecosystem - client\server UDP communication. ‣ Physics - wall/plane interaction. ‣ Shoot-hit mechanism - the real players' collider positions are calculated in real time, along with the bullet trajectory. ‣ Single-player quests + collaborative puzzle quests. ‣ Integration with the Google Daydream controller. ‣ Single & stereo ("dual screen VR") modes. Many thanks to our supervisors & supporters: Yaron Honen, Boaz Sternfeld, Matan Sela & Alexander Porotskiy. The demo clip can be found at: https://www.youtube.com/watch?v=EHyYArZsYlA
Project Title:

Identification of Swarm Members

Picture of Identification of Swarm Members
Students:

Nir Daitch, Sapir Cohen

Supervisors:

David Dovrat

Description:
This project handles the identification of a specific object instance in a swarm of similar objects. We created an image processing component that recognizes the existence of peers in a given frame, with the main goal of identifying peer drones.
Project Title:

Hybrid Video Coding at High Bit-Rates

Picture of Hybrid Video Coding at High Bit-Rates
Students:

Ron Gatenio, Roy Shchory

Supervisors:

Yehuda Dar

Description:
In this project we explore the prevalent hybrid video-coding concept that joins transform-coding and motion-compensation. Specifically, we study the necessity of transforming the motion-compensation prediction residuals for their coding at high bit-rates. Our research relies on empirical statistics from a simplified motion-compensation procedure implemented in Matlab, and also from the reference software of the state-of-the-art HEVC standard. Our results show that the correlation among the motion-compensation residuals gets lower as the bit-rate increases, supporting the marginal use of transform coding at high bit-rates (i.e., the residuals are directly quantized). We also developed a research tool that provides data from intermediate stages of the HEVC. The data mainly include motion-compensation residuals, motion vector data, block and frame types, and bit-budget of components. It is formatted in a structure suitable for easy usage in Matlab for future research projects.
Project Title:

Perlin City - Procedural 3D City Generation Project

Picture of Perlin City - Procedural 3D City Generation Project
Students:

Alex Nicola, Nati Goel

Supervisors:

Yaron Honen and Boaz Sternfeld

Description:
Building on a known method for creating a single random city block using Perlin noise, we created a method for procedurally generating an infinite city without the intervention of a human designer. Perlin City is deterministic, realistic and beautiful. To achieve the best performance possible, we used object pooling, spread the creation of building objects over multiple frames using coroutines, detected when to create and destroy blocks, and added detailed objects that are displayed or hidden based on distance. Download Demo: https://drive.google.com/open?id=0B8C-GqYShcqub1pINW0zNk5obHc
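As a minimal sketch of the determinism idea (hypothetical Python, not the project's Unity code, with hash-based value noise standing in for Perlin noise): every block's content is derived purely from its integer grid coordinates, so any block can be regenerated on demand without storing a city map.

    import hashlib

    def block_heights(bx, by, buildings_per_block=4):
        # Same (bx, by) always yields the same buildings -- no stored map needed.
        heights = []
        for i in range(buildings_per_block):
            digest = hashlib.md5(f"{bx},{by},{i}".encode()).digest()
            heights.append(10 + digest[0] % 90)  # building height, 10..99 units
        return heights

    assert block_heights(3, -7) == block_heights(3, -7)  # deterministic
    print(block_heights(3, -7))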
Project Title:

Looming VR

Picture of Looming VR
Students:

Alla Khier and Inbar Donag

Supervisors:

Daniel Raviv and Yaron Honen and Boaz Sternfeld

Description:
This is a virtual reality simulation of a vehicle with a camera that uses the visual looming cue to navigate and avoid obstacles. We visualized the looming in different ways and compared two methods for computing it: one uses the ranges between the camera and the obstacles, and the other uses the temporal change of the texture density of the obstacle.
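For reference, the range-based looming cue is commonly defined as the negative relative rate of change of range, L = -(dR/dt)/R; a tiny illustrative sketch (our own simplification, not the simulation's code):

    def looming(range_prev, range_curr, dt):
        # L = -(dR/dt) / R; positive values mean the obstacle is approaching.
        dR_dt = (range_curr - range_prev) / dt
        return -dR_dt / range_curr

    # An obstacle closing from 10 m to 9.5 m over 0.1 s:
    print(looming(10.0, 9.5, 0.1))  # ~0.53 1/s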
Project Title:

Rate-Distortion Optimized Tree Structures for Image Compression

Picture of Rate-Distortion Optimized Tree Structures for Image Compression
Students:

Moshiko Elisof, Sefi Fuchs

Supervisors:

Yehuda Dar

Description:
Compression of 1D and 2D signals using improved tree coding that exploits similarities of dyadic blocks by merging them into a single tree leaf, allowing for an adaptive tree structure. The studied algorithms compensate for an inherent issue in standard tree-based signal coding: square blocks that limit the ability to reduce the representation bit-cost. Based on: [1] Rate-Distortion Optimized Tree-Structured Compression Algorithms for Piecewise Polynomial Images, by Rahul Shukla, Pier Luigi Dragotti, Minh N. Do and Martin Vetterli. [2] Image Compression via Improved Quadtree Decomposition Algorithms, by E. Shusterman and M. Feder.
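A minimal sketch of the standard quadtree baseline that the studied algorithms improve upon (hypothetical code; the leaf-merging contribution itself is not shown): recursively split a square block while its variance exceeds a threshold, and code each leaf by its mean.

    import numpy as np

    def quadtree(img, x, y, size, thr, leaves):
        block = img[y:y + size, x:x + size]
        if size == 1 or block.var() <= thr:
            leaves.append((x, y, size, float(block.mean())))  # leaf: position + mean
            return
        h = size // 2
        for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):  # four dyadic children
            quadtree(img, x + dx, y + dy, h, thr, leaves)

    rng = np.random.default_rng(1)
    img = rng.integers(0, 256, size=(64, 64)).astype(float)
    img[:32, :32] = 128.0            # one smooth region -> one large leaf
    leaves = []
    quadtree(img, 0, 0, 64, thr=10.0, leaves=leaves)
    print(len(leaves), "leaves")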
Project Title:

CutOutRL - Visualizing Neural Networks with Scribbles

Picture of CutOutRL - Visualizing Neural Networks with Scribbles
Students:

Yonatan Zarecki and Ziv Izhar

Supervisors:

Elad Richardson

Description:
Deep neural networks (DNNs) have been very successful in recent years, achieving state-of-the-art results in a wide range of domains, such as voice recognition, image segmentation, face recognition and more. In addition, reinforcement-learning (RL) training methods combined with DNN models (deep RL) have been able to solve a wide variety of games, from PONG to Mario, purely by looking at the pixel values of the screen. Various "games" have been proposed for challenging neural networks, testing their capacity to learn complex tasks. Some tasks are designed to give us human insight into the way the model operates. In this project, we challenged a deep RL model with the task of segmenting an image using scribbles, forcing it to achieve good segmentations with scribble-based strokes much as a human would. We hope to gain insight into the way the network performs segmentation by examining the scribbles it generates.
Project Title:

Drone package delivery

Picture of Drone package delivery
Students:

Gal Peretz and Gal Malka

Supervisors:

Ohad Menashe

Description:
This is an open-source project intended to help people create autonomous drone missions operated through a Pixhawk controller. The project is written in C++ and Python to enable fast image processing and real-time operation of the drone, and includes built-in missions. Our goal was to fly to a specific GPS location, scan the area for a bullseye target and land on the center of the target. You can use the framework to create your own missions; it includes an API that helps stream live video over WiFi and/or record the video to a file. We created this project in the Geometric Image Processing lab at the Technion, aiming for a simple framework that manages simple and complex missions represented as a state machine. Read more on our GitHub page or see the project report and final presentation.
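As a hedged sketch of the "scan for a bullseye" step (illustrative only, not the project's code): concentric circles can be detected with OpenCV's Hough transform, and their common center handed to the landing controller.

    import cv2
    import numpy as np

    def find_bullseye(frame_bgr):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        gray = cv2.medianBlur(gray, 5)
        circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                                   param1=100, param2=40, minRadius=5, maxRadius=200)
        if circles is None:
            return None
        # A bullseye appears as several circles sharing a center; average them.
        centers = circles[0, :, :2]
        return centers.mean(axis=0)  # target center (x, y) in pixels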
Project Title:

VR Newsroom

Picture of VR Newsroom
Students:

Natan Yelin

Supervisors:

Omri Azencot and Boaz Sterenfeld and Yaron Honen

Description:
VR Newsroom is an experiment in browsing online news in virtual reality. It explores ways that VR can be used to facilitate discovery and exploration of large amounts of content. Online news was chosen for the content of the experiment because of its dynamic nature. Live APIs were chosen so that every time VR Newsroom is loaded different content is displayed. Custom RSS/ATOM feeds are also supported.
Project Title:

Virtual Reality interior designer

Picture of Virtual Reality interior designer
Students:

Frenkel Eduard and Salevich Alexander

Supervisors:

Matan Sela and Yaron Honen

Description:
Our project's goal is to allow a user to experience a sense of presence in a room while designing it.
We built a virtual-reality design tool that lets the user design a room's interior while standing inside the virtual room,
experiencing the work product as it takes shape.
This lets the designer show the customer the final design and gather immediate feedback, producing an environment better suited to the customer.
Project Title:

DeepFlowers - Online Flower Recognition using Deep Neural Networks

Picture of DeepFlowers - Online Flower Recognition using Deep Neural Networks
Students:

Yonatan Zarecki and Ziv Izhar

Supervisors:

Elad Richardson

Description:
These days it seems everyone has a smartphone, and an internet connection is available everywhere, even in the most distant corners of nature reserves. Flower recognition is a challenging task for nature lovers: even with big, heavy flower guide books it is hard to identify each flower species exactly, and for amateurs finding anything in these guides can be a monumental task in itself. Differences between flower species can be very subtle, and are not easy to detect even for an expert's eye. Another challenge a flower classifier has to face is the sheer number of flower species in the world, or even in a specific country.

In this project we harness the power of deep convolutional neural networks (CNNs), which have proven successful in similar recognition tasks, and, using data given to us by Prof. Avi Shmida of the Hebrew University, build a flower recognizer with an open online API for all to use.
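One plausible shape for such a recognizer, sketched with PyTorch/torchvision (the framework, model and class count are our assumptions, not necessarily the project's choices):

    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_SPECIES = 100  # hypothetical number of flower species

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, NUM_SPECIES)  # new classifier head

    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    def train_step(images, labels):
        # One fine-tuning step on a batch of flower photos and species labels.
        model.train()
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()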
Project Title:

Brain 3D Anatomy

Picture of Brain 3D Anatomy
Students:

Tom Palny, Shani Levi and Nurit Devir

Supervisors:

Yaron Honen and Boaz Sternfeld and Omri Azencot and Hagai Tzafrir

Description:
Our system consists of three main parts. The first receives a 3-dimensional matrix representing an MRI scan of the brain; we used Matlab to create 2-dimensional images from the given matrix, each saved as a PNG file representing a specific slice of the brain. The second part loads the images into Unity and creates a 3-dimensional object from them; to build this object we used the ray-marching algorithm. The last part presents the object in virtual reality using the HTC Vive, with features that give the user the feeling of a 3-dimensional object. Our project supports the following features: • Rotating - rotate the brain using the handheld controller. • Cutting - cut the brain along the X, Y and Z axes. • Zoom - zoom in and out. • Reset button - return to the initial model. • Masking - emphasize different parts of the brain according to the user's choice: the user can color the chosen parts or remove the unwanted parts to see only the wanted ones, and can then select the specific part of interest.
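The ray-marching core can be sketched in a few lines (hypothetical numpy code; the project does this in a Unity shader, and axis-aligned rays are used here for brevity): each view ray steps through the volume, accumulating intensity front to back.

    import numpy as np

    def ray_march(volume, step=1):
        # volume: (Z, Y, X) intensities in [0, 1]; rays travel along +Z.
        image = np.zeros(volume.shape[1:])
        transparency = np.ones_like(image)
        for z in range(0, volume.shape[0], step):
            sample = volume[z]
            alpha = sample * 0.1                    # simple opacity transfer function
            image += transparency * alpha * sample  # front-to-back compositing
            transparency *= (1.0 - alpha)
        return image

    vol = np.random.default_rng(2).random((64, 64, 64))
    print(ray_march(vol).shape)  # (64, 64) rendered image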
Project Title:

TermiNet

Picture of TermiNet
Students:

Itai Caspi

Supervisors:

Aaron Wetzler

Description:
In recent years, the introduction of deep reinforcement learning has allowed rapid progress in the pursuit of general AI. One of the long-standing challenges holding back further progress is designing an agent that operates in a hierarchical manner, with temporal abstractions over its actions. We present a system that disassembles the learning into multiple sub-skills without external assistance. The system consists of a deep recurrent network that learns to generate action sequences from raw pixels alone, and implicitly learns structure over those sequences. We test the model on a complex 3D first-person shooter game environment to demonstrate its effectiveness.
Project Title:

Anatomy VR

Picture of Anatomy VR
Students:

Ksenia Kaganer, Dima Trushin and Adi Mesika

Supervisors:

Boaz Sternfeld and Yaron Honen

Description:
We developed a 3D anatomy learning application. Our application assists the learning process by creating a realistic virtual-reality environment. You can explore all the human body parts at a very detailed level, navigate between different body layers (e.g. skin, muscles, bones, internal organs) and see the terminology for each body part. In addition, you can walk around the body naturally, look at it from any angle you want, and hold a VR plane that slices the body to produce different anatomic cuts.
Project Title:

Villicity

Picture of Villicity
Students:

Ksenia Kaganer and Eran Tzabar

Supervisors:

Yaron Honen and Avi Parush and Maayan Efrat

Description:
It is widely recognized that in celiac disease, learning and adhering to the gluten-free diet is essential for ensuring a good quality of life.
It is important that the education process adopt strategies that motivate and make the learning effective, particularly for children and adolescents.
In this context, new technologies can help make the learning process more engaging.
The main idea is to use a game approach to make the learning and training process more engaging and intuitive.
2016
Project Title:

Puppify - Automatic Generation of Planar Marionettes from Frontal Images

Picture of Puppify - Automatic Generation of Planar Marionettes from Frontal Images
Students:

Elad Richardson and Gil Ben-Shachar

Supervisors:

Anastasia Dubrovina and Aaron Wetzler

Description:
In this project, we propose a method for fully automating the body segmentation process, thus enabling a wide variety of consumer and security applications and removing the friction caused by manual input. The process starts with a deep convolutional network, used to localize body joints, which are refined and stabilized using Reverse Ensembling and skin tone cues. The skeletal pose model is then exploited to create "auto-scribbles": automatically generated foreground/background scribble masks that can be used as inputs for a wide range of segmentation algorithms to directly extract the subject's body from the background. Simple segmentation aware cropping produces individual body part crops which can be used to generate a planar marionette for repositioning and animation.
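To illustrate the auto-scribbles idea (a hedged sketch, not the project's implementation), joint locations can be turned into sure-foreground strokes along the limbs and fed as a mask to any scribble-driven segmenter; OpenCV's GrabCut serves as one example:

    import cv2
    import numpy as np

    def segment_with_auto_scribbles(image_bgr, joints, limb_pairs):
        # joints: list of (x, y) pixel tuples; limb_pairs: index pairs to connect.
        mask = np.full(image_bgr.shape[:2], cv2.GC_PR_BGD, np.uint8)
        for a, b in limb_pairs:  # draw sure-foreground strokes along limbs
            cv2.line(mask, tuple(joints[a]), tuple(joints[b]), cv2.GC_FGD, 9)
        bgd = np.zeros((1, 65), np.float64)
        fgd = np.zeros((1, 65), np.float64)
        cv2.grabCut(image_bgr, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
        return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)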
Project Title:

Snake 3D

Picture of Snake 3D
Students:

Sapir Eltanani and Simona Gluzman

Supervisors:
Description:

A 3D game which recreates the experience of playing the well-known classic game 'Snake', this time in a three-dimensional VR world. The game was developed for the Leap Motion and Oculus Rift devices.

The aim remains the same: the player has to eat as much of the flying food as possible to earn points. Every eaten item makes the snake grow longer, but the player loses once the snake touches the objects in the world around it or its own body.

Project Title:

Image Segmentation Using Multi-Region Active Contours

Picture of Image Segmentation Using Multi-Region Active Contours
Students:

Chen Shapira and Tamir Segev

Supervisors:
Description:
Our project focused on the Multi-Region Active Contours with a Single Level-Set Function method. This method allows quick and accurate segmentation of 2D and 3D images by dividing the image into multiple regions via a single nonnegative distance function, which is easily extracted using the Voronoi Implicit Interface Method.
Project Title:

Augmented Reality in Road Navigation

Picture of Augmented Reality in Road Navigation
Students:

Doron Halevy

Supervisors:
Description:
In this work, we propose a vision-based solution for globally localizing vehicles on the road using a single on-board camera, exploiting the availability of previously geo-tagged street-view images of the surrounding environment together with their associated local point clouds. Our approach focuses on integrating image-based localization into a tracking and mapping system in order to provide accurate and globally registered 6DoF tracking of the vehicle's position at all times.
Project Title:

QTCopter

Picture of QTCopter
Students:

Noam Yogev, Roee Mazor, Efi Shtain, Vasily Vitchevsky, Sergey Buh, Alex Bogachenko and Daniel Joseph

Supervisors:
Description:

We built a platform for creating drones capable of autonomous indoor flight, and implemented a computer-vision-based system that utilizes and demonstrates this platform by performing four tasks described by a national contest sponsored by the Pearls of Wisdom voluntary association. Such platforms are currently being researched both commercially and academically, but no finished products have been released.

The computer vision part of the project is responsible for communicating with the quadcopter’s navigation module, in order to guide the quadcopter to the next target, identify key objects and trigger the execution of various required auxiliary actions. The software will run on an ARM-based companion computer running Linux, mounted on the drone. It will receive images from one or multiple high speed cameras on the quadcopter and ROS messages from the navigation module running on the same computer. The OpenCV library will be used to implement the required functionalities.

Project Title:

Mobile Point Fusion

Picture of Mobile Point Fusion
Supervisors:

Aaron Wetzler

Description:
The need for real-time 3D reconstruction is becoming more and more apparent in today's world. Depth sensors are being marketed today in consumer laptops and tablets. In the near future we expect an increase in the availability of mobile devices with depth sensors, and therefore also a need for highly efficient real-time 3D reconstruction methods. Our project's goal is to enable these devices to perform 3D reconstruction in real time. Our solution uses the input from a moving depth sensor to estimate the camera position and build a 3D model. The implementation harnesses the GPU to achieve real-time performance while taking into account the limitations of mobile devices and putting a strong emphasis on optimizations throughout the pipeline.
Project Title:

Automatic 3D Face Printing

Picture of Automatic 3D Face Printing
Students:

Hila Ben-Moshe and David Gelbendorf

Supervisors:

Alon Zvirin

Description:

We developed an automatic process for building a 3D face model from a GIP facial video file. We first use the Viola-Jones face detection algorithm to decide which frame to choose from the video; the detection of face features and the movement of the face are taken into consideration when choosing the best frame automatically. We then process the selected frame, including automatic selection of a bounding box and fixing of missing eyebrows. Finally, we write the desired model in STL, a standard format for 3D printing.
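The frame-selection step can be sketched as follows (illustrative code with simplified criteria; the project also weighs feature detection and face movement):

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def frame_score(frame_bgr):
        # Score a frame by the area of the largest detected frontal face.
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return 0.0
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        return float(w * h)

    # best_frame = max(video_frames, key=frame_score)  # video_frames is hypothetical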

2015
Project Title:

Image Segmentation and Matting in Realtime on a Mobile Device

Picture of Image Segmentation and Matting in Realtime on a Mobile Device
Students:

Elad Richardson

Supervisors:
Description:
In our project we implemented a scribble-based algorithm for extracting objects from natural photos and pasting them seamlessly into a different background, under the constraints of a mobile device's computational power. The algorithm was first developed and tested on a personal computer with the help of OpenCV's C++ libraries and was then ported to Android using the Native Development Kit. We used the Android Software Development Kit to wrap the algorithm in a user-friendly interface, creating an application that anybody can use.
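The "paste seamlessly" step can be illustrated with Poisson blending, e.g. OpenCV's seamlessClone (a sketch of one way to do it, not necessarily the ported implementation; the mask would come from the scribble-based extraction stage):

    import cv2

    def paste_object(src_bgr, mask_u8, dst_bgr, center_xy):
        # mask_u8: 255 on the extracted object, 0 elsewhere;
        # center_xy: where the object's center should land in the destination.
        return cv2.seamlessClone(src_bgr, dst_bgr, mask_u8, center_xy,
                                 cv2.NORMAL_CLONE)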
Project Title:

Rigid ICP Registration with Kinect

Picture of Rigid ICP Registration with Kinect
Students:

Yoni Choukroun and Elie Semmel

Supervisors:
Description:
The main goal of the first part of the project was to perform Iterative Closest Point registration on two depth maps obtained using the Kinect depth sensor, in C++ on the Windows platform. Another purpose of this first part was to learn how to independently integrate large libraries (dynamic or otherwise) into the project, and to handle the difficulties of implementing an algorithm across library classes that do not necessarily match one another. The second part of the project was to register, pair by pair using the preceding algorithm, different scan frames acquired by the Kinect with the help of its motor, to obtain a whole-body depth image.
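The core of point-to-point ICP can be sketched in numpy (a simplified illustration, not the project's C++ implementation): match each source point to its nearest target point, solve for the best rigid motion with the SVD (Kabsch) method, and iterate.

    import numpy as np

    def icp_step(src, dst):
        # src, dst: (N, 3) and (M, 3) point clouds; returns R, t mapping src->dst.
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]      # nearest neighbours (brute force)
        cs, cm = src.mean(0), matched.mean(0)
        H = (src - cs).T @ (matched - cm)     # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                    # best rotation (Kabsch)
        t = cm - R @ cs
        return R, t

    def icp(src, dst, iters=20):
        for _ in range(iters):
            R, t = icp_step(src, dst)
            src = src @ R.T + t
        return src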
Project Title:

Mad Panim

Picture of Mad Panim
Students:

Nadav Toledo

Supervisors:

Alon Zvirin and Yaron Honen

Description:
Building a toolbox for physicians for initial analysis of a 3D face model/surface in clinical trials. Among other things, the toolbox lets physicians display an optimal face model of a subject after a video scan of the face, find points of interest on the model/surface, and measure geodesic distances between the points at the physician's discretion. The application receives a 3D video file from a GIP camera and automatically selects the best depth frame from the video. In the depth image, three points at the center of the face are chosen automatically using Viola-Jones; with ASM, 68 points are then found automatically across the displayed face surface. An algorithm developed by the student then selects, fully automatically, 12 key points out of these (5 on the mouth, 4 on the eyes, 3 on the nose). Using the interface that was developed, one can view the paths on the face between the selected points and immediately obtain the geodesic distance between them (fast marching). In addition, all the data (points, distances, frame number, notes, etc.) can be saved to an Excel file and loaded at a later time.
Project Title:

Printed circuit boards detection and image analysis

Picture of Printed circuit boards detection and image analysis
Students:

Giorgio Tabarani and Roi Divon

Supervisors:

Dr. Amir Adler

Description:
In recent years, following events in the country, a demand arose to identify printed circuit boards collected by the Israeli Police at crime scenes, in order to connect cases and identify the source of these boards, in the hope of preventing similar unpleasant incidents in the future.

The police take snapshots of printed circuit boards from every crime scene, mostly of circuits distorted by burns or fractures, and try to identify their origin by manual inspection and guesswork.

In this project we were asked to develop a basic system which performs the aforementioned identification automatically, given a picture of a shred and a database of pictures of circuit boards that usually appear in crime scenes.

Project Title:

Generating 3D Colored Face Model Using a Kinect Camera

Picture of Generating 3D Colored Face Model Using a Kinect Camera
Students:

Rotem Mordoch, Nadine Toledano and Ori Ziskind

Supervisors:

Matan Sela and Yaron Honen

Description:
The constant development of cheap depth cameras, together with their ongoing integration into mobile devices, offers the potential for many new and exciting applications across a variety of fields, including personal everyday use, commercial objectives and medical solutions. In our project we propose a user-friendly system which allows users to easily create a colored 3D facial model of their own. The solution combines open-source techniques for face detection in an image with a 3D reconstruction algorithm, integrated into a single pipeline. The system we built uses a depth-camera stream to capture the subject's face in each frame and uses this information to generate a high-quality colored 3D facial model. We demonstrate our results and optimizations, and suggest possible directions for future work.
Project Title:

Solving Simultaneous Linear Equations Using GPU

Picture of Solving Simultaneous Linear Equations Using GPU
Students:

Oriel Rosen and Haviv Cohen

Supervisors:

Yaron Honen

Description:
Image processing tends to demand highly complicated computations. Though some programming languages (such as Matlab) are very comfortable for mathematical usage, they are less than ideal in terms of performance, leading to programs that run for far too long. The solution is to use "stronger" tools at the choke-points of the computation: by programming in low-level languages and using parallelism, we can drastically improve a program's performance.
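As an illustration of why such computations map well to parallel hardware (a hedged sketch; numpy stands in for a GPU kernel here): in Jacobi iteration for Ax = b, every unknown is updated independently within a sweep, so each update can become one GPU thread.

    import numpy as np

    def jacobi(A, b, iters=200):
        D = np.diag(A)              # diagonal of A
        R = A - np.diagflat(D)      # off-diagonal remainder
        x = np.zeros_like(b)
        for _ in range(iters):
            x = (b - R @ x) / D     # all entries updated independently
        return x

    A = np.array([[4.0, 1.0], [2.0, 5.0]])  # diagonally dominant -> converges
    b = np.array([1.0, 2.0])
    print(jacobi(A, b), np.linalg.solve(A, b))  # should agree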
Project Title:

Freehand Voxel Carving Scanning on a Mobile Device

Picture of Freehand Voxel Carving Scanning on a Mobile Device
Students:

Alex Fish

Supervisors:

Aaron Wetzler

Description:

3D scanners are growing in popularity as many new applications and products become commodities. These applications are normally tethered to a computer and/or require expensive, specialized hardware. Our goal is to provide a 3D scanner which uses only a mobile phone with a camera. We consider the problem of computing the 3D shape of an unknown, arbitrarily shaped scene from multiple color photographs taken at known but arbitrarily distributed viewpoints using a mobile device. The camera orientation and position in 3D space, estimated with publicly available SLAM libraries, permit a 3D reconstruction of the observed objects. We demonstrate that a good 3D reconstruction is achievable on a mobile device.
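The carving rule at the heart of the method can be sketched as follows (hypothetical numpy code; camera matrices would come from the SLAM poses mentioned above): a voxel survives only if it projects inside the object's silhouette in every view.

    import numpy as np

    def carve(voxels_xyz, silhouettes, projections):
        # voxels_xyz: (N, 3); silhouettes: list of HxW 0/1 masks;
        # projections: list of 3x4 camera matrices, one per photograph.
        keep = np.ones(len(voxels_xyz), dtype=bool)
        homog = np.hstack([voxels_xyz, np.ones((len(voxels_xyz), 1))])
        for P, sil in zip(projections, silhouettes):
            uvw = homog @ P.T
            u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
            v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
            inside = (0 <= u) & (u < sil.shape[1]) & (0 <= v) & (v < sil.shape[0])
            on_object = np.zeros(len(voxels_xyz), dtype=bool)
            on_object[inside] = sil[v[inside], u[inside]] > 0
            keep &= on_object             # outside any silhouette -> carved away
        return voxels_xyz[keep]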

2014
Project Title:

3D Image Fusion using ICP

Picture of 3D Image Fusion using ICP
Students:

Itay Naor

Supervisors:

Alon Zvirin and Guy Rosman and Yaron Honen

Description:

This project deals with the ICP algorithm and uses it to create a complete three-dimensional model of a rigid object. First, a wrapper was written for an ICP algorithm included in the PCL library, which fuses 3D images taken by the GIP Technion laboratory camera, and its running parameters were optimized. Afterwards, various improvements were implemented for an ICP algorithm using a point-to-surface distance function, parameters were examined, and a fusion algorithm was written. Finally, the program was integrated into the GUI of the GIP Technion laboratory.

Project Title:

Puzzle Bingo

Picture of Puzzle Bingo
Students:

Shahar Sagiv, Omri Panizel, Loui Diab and Waseem Ghraye

Supervisors:
Description:
We created a new type of game, which combines the competitive aspect of Bingo with the fun of solving a puzzle. The game is played simultaneously by four players (on four different devices) who compete in solving the given puzzle. Every player can see the other players' real-time progress on a map showing their boards. Every puzzle has a title, and a player may stop solving the puzzle upon recognizing the image and try to guess its title. This gives the game educational value, as players learn to recognize places, animals and celebrities.
Project Title:

Clinical Check-up System

Picture of Clinical Check-up System
Students:

Anna Ufliand and Sergey Yusufov

Supervisors:
Description:
The main goal of the project was to develop a system capable of interfacing with a medical check-up device developed in the Faculty of Biomedical Engineering, reading the data from that device and displaying it conveniently and efficiently, so that the data can be accessed from different devices. One of our important criteria was to develop a system allowing maximal flexibility: the biomedical device can have many different configurations, being essentially a platform onto which different sensors can be mounted for different tests. It was important to us to account for this and to develop a system that does not limit the type or number of sensors used for the tests.
Project Title:

Make it stand

Picture of Make it stand
Students:

Dvora Nagler

Supervisors:
Description:

Artists, designers and architects use imbalance to their advantage to produce surprising and elegant designs.
The balancing process is challenging when manipulating geometry in 3D modeling software,
since volumes are represented only by their boundaries.
Our goal is to modify the volume's shape such that, once printed, the model stands.
To do so, we manipulate the inner voids; when that is not enough, we also consider deforming the model.
These two manipulations change the mass distribution and thus the position of the center of mass.
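The underlying balance test can be sketched on a voxelized model (a rough illustration using a bounding-box support test instead of the proper support polygon): hollowing voxels shifts the center of mass, and the model stands when the center of mass projects inside its support region.

    import numpy as np

    def stands(solid):
        # solid: (Z, Y, X) boolean occupancy grid with z = 0 touching the ground.
        zs, ys, xs = np.nonzero(solid)
        com_x, com_y = xs.mean(), ys.mean()      # center of mass, uniform density
        base_ys, base_xs = np.nonzero(solid[0])  # voxels in contact with the ground
        return (base_xs.min() <= com_x <= base_xs.max()
                and base_ys.min() <= com_y <= base_ys.max())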

Project Title:

Solving Classification Problem on Hyperspectral Images

Picture of Solving Classification Problem on Hyperspectral Images
Students:

Talor Abramovich and Oz Gavrielov

Supervisors:
Description:
Hyperspectral imaging is a spectral imaging method which includes bands from visible light as well as infrared. Unlike 2D color images, which only use red, green and blue, a hyperspectral image includes a third dimension of spectrum. This information can be used to classify the objects in the image and to distinguish, for example, between asphalt, plants and water; it can even show the difference between a real leaf and a plastic one. In our project we used several classification algorithms, including KNN, PCA and K-SVD, to classify four hyperspectral images. We compared the results and found which algorithm gives the best classification and which is the most efficient.
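One of the compared pipelines can be sketched like this (illustrative scikit-learn code with the train/test split omitted for brevity; not the project's exact implementation): reduce each pixel's spectrum with PCA, then classify with k-nearest neighbours.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier

    def classify_pixels(cube, labels, n_components=10):
        # cube: (H, W, B) hyperspectral image; labels: (H, W) ground-truth classes.
        X = cube.reshape(-1, cube.shape[-1])
        y = labels.ravel()
        Xp = PCA(n_components=n_components).fit_transform(X)  # spectral reduction
        knn = KNeighborsClassifier(n_neighbors=5).fit(Xp, y)
        return knn.predict(Xp).reshape(labels.shape)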
2013
Project Title:

Image Duel

Picture of Image Duel
Students:

Evgeny Moroshko and Marom Sabag

Supervisors:

Javier Turek

Description:
An interactive game between players, during which each player can send another player an image; the second player must guess what is in the image after an image-processing algorithm that makes identification harder has been applied. Players earn points depending on how quickly they guess. The game is implemented for Android devices, with a server that stores all the data and through which all games are played.
Project Title:

Descriptors-Based Stereo Reconstruction

Picture of Descriptors-Based Stereo Reconstruction
Students:

Anan Abu-Yousef and Anna Ginzburg

Supervisors:

Guy Rosman

Description:
In our project we built a stereo reconstruction pipeline. We compared several descriptor algorithms and checked how they work for stereo reconstruction. We used various linear projections [1,2] to investigate the effective dimensionality of descriptors coming from stereo reconstruction.
Project Title:

Structured Light Based 3D Reconstruction With Priors

Picture of Structured Light Based 3D Reconstruction With Priors
Students:

O. Hamud and B. Matok

Supervisors:

Guy Rosman

Project Title:

3D Camera for Mobile Device

Picture of 3D Camera for Mobile Device
Students:

Deborah Cohen & Dani Voitsechov

Supervisors:

Raja Giryes

Project Title:

Touch tic-tac-toe with depth camera

Picture of Touch tic-tac-toe with depth camera
Students:

Hussein Othman and Abdel Masih Mbariki

Supervisors:

Raja Giryes

Description:
The goal of the project is to implement a touch tic-tac-toe game using a projector and a depth camera. Tic-tac-toe is a classic two-player game played on a three-by-three board, in which one player marks X and the other marks O in alternating turns; each player's goal is to form a sequence, and the first to do so wins. This project implements a version of the game on a board without a touch screen: a projector projects the game onto a whiteboard, the participants mark the position where they want to place their mark, and the motion and marks are detected with the depth camera.
Project Title:

Quaternion K-SVD for Color Image Denoising

Picture of Quaternion K-SVD for Color Image Denoising
Students:

Amit Carmeli

Supervisors:
Description:
In this work, we introduce the use of quaternions within the field of sparse and redundant representations. The quaternion space is an extension of the complex space, where each element is composed of four parts: a real part and three imaginary parts. The major difference between the quaternion space H and the complex space C is that quaternion multiplication is non-commutative. We design and implement quaternion variants of the state-of-the-art algorithms OMP and K-SVD, establish various results previously known only for the real or complex spaces, and use them to devise the Quaternion K-SVD algorithm, nicknamed QK-SVD.
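The non-commutativity the work builds on is easy to demonstrate (a small sketch of the Hamilton product, with quaternions stored as (w, x, y, z)):

    import numpy as np

    def qmul(q, p):
        # Hamilton product of two quaternions.
        w1, x1, y1, z1 = q
        w2, x2, y2, z2 = p
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    i = np.array([0.0, 1.0, 0.0, 0.0])
    j = np.array([0.0, 0.0, 1.0, 0.0])
    print(qmul(i, j), qmul(j, i))  # i*j = k, but j*i = -k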
Project Title:

Structured Light Based 3D Reconstruction with Priors

Picture of Structured Light Based 3D Reconstruction with Priors
Students:

Itamar Talmi and Ofir Haviv

Supervisors:
Description:
This project examines the use of the PCA algorithm for unlocking an Android device by face recognition. It demonstrates how, by learning a given person's face under different illuminations, a PCA basis can be built for that person, and how this basis can then be used to verify someone attempting to authenticate when unlocking an Android device.
Project Title:

CamPong – Smartphone PONG using Camera and Built-in Projector

Picture of CamPong – Smartphone PONG using Camera and Built-in Projector
Students:

Nofar Carmeli and Rom Herskovitz

Supervisors:
Description:
In this project, we introduce the use of a mobile phone equipped with a camera and a projector to allow real-time hand detection. We present a demo of an interactive Pong game that is controlled by the players' natural hand movements during the game. To the best of our knowledge, this is the first use of a commodity cellular phone with a built-in projector to perform real-time structured-light projections coupled with real-time image processing.
2012
Project Title:

Open Fusion

Picture of Open Fusion
Students:

Nurit Schwabsky and Vered Cohen

Supervisors:
Description:
OpenFusion is an implementation of Microsoft’s KinectFusion system. This system enables real-time tracking and reconstruction of a 3D scene using a depth sensor. A stream of depth images is received from the camera and compared to the model built so far using the Iterative Closest Point (ICP) algorithm to track the 6DOF camera position. The camera position is then used to integrate the new depth images into the growing volumetric model, resulting in an accurate and robust reconstruction. The reconstructed model is adapted according to dynamic changes in the scene without losing accuracy.
Project Title:

3D Stereo Reconstruction Using iPhone Devices

Picture of 3D Stereo Reconstruction Using iPhone Devices
Students:

Ron Slossberg and Omer Shaked

Supervisors:
Description:
Stereo Reconstruction is a common method for obtaining depth information about a given scene using 2D images of the scene taken simultaneously by two cameras from different views. This process is done by finding corresponding objects which appear in both images and examining their relative positions in the images, based on previous knowledge of the internal parameters of each camera and the relative positions of both cameras. This method relies on the same basic principle that enables our eyes to perceive depth.
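The core triangulation step reduces to a simple formula: for rectified cameras with focal length f (in pixels) and baseline B (in meters), a correspondence with disparity d (in pixels) lies at depth Z = f*B/d. A tiny illustrative sketch (the numbers are hypothetical):

    def depth_from_disparity(f_px, baseline_m, disparity_px):
        # Z = f * B / d for a rectified stereo pair.
        return f_px * baseline_m / disparity_px

    # e.g. f = 800 px, B = 0.1 m, d = 20 px:
    print(depth_from_disparity(800.0, 0.1, 20.0))  # 4.0 meters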

 


Copyright © 2016 by Geometric Image Processing Lab. All rights reserved.