Osher Azulay
I’m a Fulbright postdoctoral researcher at the
University of Michigan, working with Prof. Stella
Yu. I work at the intersection of robotics, computer vision, tactile sensing, and machine
learning, aiming to advance humanoid intelligence.
Previously, I earned my Ph.D. from Tel Aviv University in 2024, under the supervision of Dr. Avishai Sintov. My work focused on robotic
in-hand manipulation, developing methods that leverage multimodal cues to enable more adaptive
interaction.
Always happy to connect—feel free to reach out.
Email /
CV /
Scholar /
LinkedIn /
Github
News
- July 2025 — Started my postdoc at the University of Michigan.
- Winter 2025 — Visiting Scholar at UC Berkeley’s AUTOLab.
- April 2025 — Gave a talk at Columbia University’s ROAM Lab.
- Dec 2024 — Invited talk at Bar-Ilan University, Computer Science Department.
- Dec 2024 — Invited talk at the Technion, Mechanical Engineering Robotics Colloquium.
- Nov 2024 — Received the Fulbright Postdoctoral Fellowship.
- Oct 2024 — Defended my Ph.D. at Tel Aviv University.
- Summer 2023 — Visiting Graduate Researcher at Rutgers University, Robot Learning Lab.
- Summer 2022 — Robotics Intern Engineer at Unlimited Robotics.
- 2023 — Received Honorable Mention for Excellence in Teaching at Tel Aviv University.
- 2023 — Awarded the KLA Ph.D. Excellence Scholarship.
- 2022 — Awarded the Prof. Nehemia Levtzion Scholarship for Outstanding Doctoral Students.
Selected Publications:
VIGOR: Visual Goal-In-Context Inference for Unified Humanoid Fall
Safety
Osher Azulay,
Zhengjie Xu,
Andrew Scheffer,
and Stella X. Yu.
Under review.
project page
Unified fall mitigation and stand-up recovery distilled into a single egocentric-depth policy.
Embodiment-Agnostic Navigation Policy Trained with Visual
Demonstrations
Nimrod Curtis*, Osher Azulay* and Avishai Sintov.
Under review.
paper
/
code
/
video
We propose ViDEN, a framework that learns scalable, collision-free navigation from visual demonstrations.
Visuotactile-Based Learning for Insertion with Compliant Hands
Osher Azulay, Dhruv Metha Ramesh,
Nimrod Curtis and Avishai Sintov.
IEEE RA-L & IROS, 2025.
website
/
paper
/
code
Sim-to-real learning of robust precision insertion policies with compliant hands.
AllSight: A Low-Cost and High-Resolution Round Tactile Sensor with
Zero-Shot Learning Capability
Osher Azulay,
Nimrod Curtis, Rotem
Sokolovsky, Guy Levitski, Daniel Slomovik, Guy Lilling and
Avishai Sintov.
IEEE RA-L & ICRA, 2024.
paper
/
video
/
code
Introducing AllSight, an optical tactile sensor with a round 3D structure designed for
robotic in-hand manipulation tasks.
Augmenting Tactile Simulators with Real-like and Zero-Shot
Capabilities
Osher Azulay*, Alon
Mizrahi*,
Nimrod Curtis* and Avishai Sintov.
ICRA 2024.
paper
/
code
Tackling the sim-to-real gap for high-resolution round 3D sensors using bidirectional
Generative Adversarial Networks.
Haptic-Based and SE(3)-Aware Object Insertion Using Compliant Hands
Osher Azulay, Max Monastirsky and Avishai Sintov.
IEEE RA-L & ICRA, 2023.
paper
/
video
Exploring compliant-hand characteristics for object insertion using haptic-based residual RL.
Learning to Throw With a Handful of Samples Using Decision
Transformers
Max Monastirsky,
Osher Azulay and Avishai Sintov.
IEEE RA-L & IROS, 2023.
paper
/
video
Exploring the use of Decision Transformers for throwing and their capacity for sim-to-real policy
transfer.
Learning Haptic-based Object Pose Estimation for In-hand Manipulation
Control with Underactuated Robotic Hands
Osher Azulay, Inbar
Meir and Avishai
Sintov.
IEEE Transactions on Haptics, 2022.
paper
/
video
/
code
In-hand object pose estimation and manipulation using Model Predictive Control.
Open-Sourcing Generative Models for Data-driven Robot Simulations
Eran Bamani, Osher Azulay,
Anton Gurevich, and Avishai
Sintov.
Data-Centric AI Workshop, NeurIPS, 2021.
paper
/
oral
Exploring the possibility of feeding recorded data into a generative model rather than directly
into a regression model for real-robot applications.
Wheel Loader Scooping Controller Using Deep Reinforcement Learning
Osher Azulay and Amir Shapiro.
IEEE Access, 2021.
paper
/
video
/
code
A deep reinforcement learning-based controller for an unmanned ground vehicle with a custom-built
scooping mechanism.