Osher Azulay
I’m a Fulbright postdoctoral researcher at the University of Michigan, Ann Arbor, working with Prof. Stella Yu. My research focuses broadly on embodied intelligence, at the intersection of robotics, computer vision, and machine learning, with the goal of enabling reliable behavior under real-world variability.
Previously, I earned my Ph.D. from Tel Aviv University in 2024 under the supervision of Dr. Avishai Sintov. My work focused on robot learning for manipulation, with an emphasis on leveraging multimodal signals for more adaptive interaction.
Email / CV / LinkedIn / GitHub
Last updated: July 2025
News
- July 2025 — Started my postdoc at the University of Michigan.
- Winter 2025 — Visiting Scholar at UC Berkeley’s AUTOLab.
- April 2025 — Gave a talk at Columbia University’s ROAM Lab.
- Dec 2024 — Invited talk at Bar-Ilan University, Computer Science Department.
- Dec 2024 — Invited talk at the Technion, Mechanical Engineering Robotics Colloquium.
- Nov 2024 — Received the Fulbright Postdoctoral Fellowship.
- Oct 2024 — Defended my Ph.D. at Tel Aviv University.
- Summer 2023 — Visiting Graduate Researcher at Rutgers University, Robot Learning Lab.
- Summer 2022 — Robotics Intern Engineer at Unlimited Robotics.
- 2023 — Received Honorable Mention for Excellence in Teaching at Tel Aviv University.
- 2023 — Awarded the KLA Ph.D. Excellence Scholarship.
- 2022 — Awarded the Prof. Nehemia Levtzion Scholarship for Outstanding Doctoral Students.
Selected Publications
Full publication list on Google Scholar.
VIGOR: Visual Goal-In-Context Inference for Unified Humanoid Fall Safety
Osher Azulay, Zhengjie Xu, Andrew Scheffer, and Stella X. Yu.
Tech Report.
project page / paper
Unified fall mitigation and stand-up recovery distilled into a single egocentric-depth policy.
Embodiment-Agnostic Navigation Policy Trained with Visual Demonstrations
Nimrod Curtis*, Osher Azulay*, and Avishai Sintov.
Tech Report.
project page / paper / code / video
Learns adaptive, collision-free motion from just a few visual demonstrations using diffusion.
Visuotactile-Based Learning for Insertion with Compliant Hands
Osher Azulay, Dhruv Metha Ramesh, Nimrod Curtis, and Avishai Sintov.
IEEE RA-L & IROS, 2025.
project page / paper / code
Visuotactile policy learning for contact-rich insertion with zero-shot sim-to-real transfer.
AllSight: A Low-Cost and High-Resolution Round Tactile Sensor with Zero-Shot Learning Capability
Osher Azulay, Nimrod Curtis, Rotem Sokolovsky, Guy Levitski, Daniel Slomovik, Guy Lilling, and Avishai Sintov.
IEEE RA-L & ICRA, 2024.
paper / code / video
Introducing AllSight, an optical tactile sensor with a round 3D structure designed for robotic in-hand manipulation tasks.
Augmenting Tactile Simulators with Real-like and Zero-Shot Capabilities
Osher Azulay*, Alon Mizrahi*, Nimrod Curtis*, and Avishai Sintov.
ICRA, 2024.
paper / code
Bridges the sim-to-real gap for 3D-shaped, high-resolution tactile sensing using generative modeling.
Haptic-Based and SE(3)-Aware Object Insertion Using Compliant Hands
Osher Azulay, Max Monastirsky, and Avishai Sintov.
IEEE RA-L & ICRA, 2023.
paper / video
Exploring compliant hand characteristics for object insertion using haptic-based residual RL.
Learning to Throw With a Handful of Samples Using Decision Transformers
Max Monastirsky, Osher Azulay, and Avishai Sintov.
IEEE RA-L & IROS, 2023.
paper / video
Exploring Decision Transformers for throwing and their capacity for sim-to-real policy transfer.
Learning Haptic-based Object Pose Estimation for In-hand Manipulation Control with Underactuated Robotic Hands
Osher Azulay, Inbar Meir, and Avishai Sintov.
IEEE Transactions on Haptics, 2022.
paper / code / video
In-hand object pose estimation and manipulation using Model Predictive Control.
Open-Sourcing Generative Models for Data-driven Robot Simulations
Eran Bamani, Osher Azulay, Anton Gurevich, and Avishai Sintov.
Data-Centric AI Workshop, NeurIPS, 2021.
project page / paper
Exploring the possibility of investing recorded data in a generative model rather than feeding it directly to a regression model for real-robot applications.
Wheel Loader Scooping Controller Using Deep Reinforcement Learning
Osher Azulay and Amir Shapiro.
IEEE Access, 2021.
paper / code / video
A deep reinforcement learning-based controller for an unmanned ground vehicle with a custom-built
scooping mechanism.
Teaching Experience
- Advanced Topics in Computer Vision (EECS 542) - LEO Lecturer, University of Michigan, Winter 2026.
- Robotics and Control Lab - Course Designer and Teaching Assistant, Tel Aviv University, Spring 2021-2024.
- Introduction to Control Theory - Teaching Assistant, Tel Aviv University, Fall 2020-2024.
- Introduction to Electrical Engineering - Teaching Assistant, Ben-Gurion University, Spring 2019.
- C Programming - Teaching Assistant, Ben-Gurion University, Fall 2019.
- Introduction to Mechanical Engineering - Lab Instructor, Ben-Gurion University, Fall 2018.