Ryo Takizawa

I am a first-year PhD student at The University of Tokyo and a member of the Intelligent Systems and Informatics Laboratory (2023-). I received my Bachelor's degree from the Department of Precision Engineering, The University of Tokyo (2019-2023).

My main research goal is to make robots more intelligent. Rather than focusing solely on factory automation, I aim to develop methodologies that enable robots to adaptively expand their capabilities in real-world environments, much as humans do. With this objective, both my master's and doctoral research center on robotic object manipulation.

Email  /  Google Scholar  /  GitHub

profile photo

News

[Feb 9, 2025] Created this webpage.

Publications

Video CLIP Model for Multi-View Echocardiography Interpretation
Ryo Takizawa*, Satoshi Kodera, Tempei Kabayama, Ryo Matsuoka, Yuta Ando, Yuto Nakamura, Haruki Settai, Norihiko Takeda
arXiv, 2025  
arXiv / code (coming soon)

Enhancing Reusability of Learned Skills for Robot Manipulation via Gaze and Bottleneck
Ryo Takizawa*, Izumi Karino, Koki Nakagawa, Yoshiyuki Ohmura, Yasuo Kuniyoshi
arXiv, 2025  
website / arXiv / code (coming soon)

GazeBot enables high reusability of learned motions even when object positions and end-effector poses differ from those in the provided demonstrations. It achieves higher generalization performance than state-of-the-art imitation learning methods without sacrificing dexterity or reactivity, and its training process is entirely data-driven once a demonstration dataset with gaze data is provided.

Gaze-Guided Task Decomposition for Imitation Learning in Robotic Manipulation
Ryo Takizawa*, Yoshiyuki Ohmura, Yasuo Kuniyoshi
arXiv, 2025  
arXiv / code

A simple yet robust task decomposition method based on gaze transitions. The method leverages teleoperation, a common modality for collecting demonstrations in robotic manipulation, during which a human operator's gaze is measured and used for task decomposition. Notably, our method achieves consistent task decomposition across all demonstrations of each task, which is desirable in contexts such as deep learning.


website template