The platform of choice for cutting-edge AI & Robotics research
Franka Research 3 is a world-class, force-sensitive reference robot system that empowers researchers with easy-to-use robot features as well as low-level access to the robot's control and learning capabilities.
Franka Research 3's robot system includes the Arm and its Control. The force-sensitive and agile Arm features 7 DOF with torque sensors at each joint, industrial-grade pose repeatability of ±0.1 mm, and negligible path deviation even at high velocities. It offers a payload of 3 kg, a reach of 855 mm, and a workspace coverage of 94.5 %.
FCI is the ideal interface for exploring low-level programming and control schemes: it provides the current status of the robot and enables direct torque control at 1 kHz. On top of the C++ interface libfranka, integrations with the most popular ecosystems (ROS, ROS 2, and MATLAB & Simulink) are available.
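To give a flavor of what a 1 kHz torque-control callback computes each millisecond, here is a minimal joint-space PD torque law. This is an illustrative sketch in plain Python, not libfranka code; the function name `pd_torques`, the gains, and the joint values are all assumptions for the example.

```python
# Illustrative joint-space PD torque law: the kind of computation a
# 1 kHz control callback performs once per millisecond.
# Names and gain values are illustrative, not part of libfranka's API.

def pd_torques(q, dq, q_des, kp=50.0, kd=5.0):
    """Return one torque per joint for a 7-DOF arm.

    q     -- measured joint positions (7 floats, rad)
    dq    -- measured joint velocities (7 floats, rad/s)
    q_des -- desired joint positions (7 floats, rad)
    """
    return [kp * (qd - qi) - kd * dqi for qi, dqi, qd in zip(q, dq, q_des)]

# One control step: the arm is 0.1 rad away from the target on joint 4.
q = [0.0, -0.785, 0.0, -2.356, 0.0, 1.571, 0.785]
q_des = list(q)
q_des[3] += 0.1
dq = [0.0] * 7

tau = pd_torques(q, dq, q_des)  # only joint 4 receives a nonzero torque
```

On the real system the measured state would come from the robot at each cycle and the returned torques would be commanded back through the interface.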
"The Franka robot system is ideal for our research in reactive, robust manipulation in open-ended environments. It is also well suited for our work toward collaborative manipulation alongside people."
Dieter Fox, Senior Director - Robotics Research, NVIDIA
"Humans and robots are differently intelligent; by interacting, we can combine skills to do more than what we could achieve separately."
Leila Takayama, Associate Professor - Human-Robot Interaction, University of California
"A new era of robotic applications calls for a new robot generation. Franka is laying down a solid foundation for the new challenges in real-world robotics applications."
Oussama Khatib, Director - Robotics Lab, Stanford University
Customize your FR3
Franka Hand
The Hand is Franka Robotics' two-finger gripper with exchangeable fingertips, fully integrated with the software of Franka Research 3 for plug-and-play use. The fingertips can easily be changed and adapted to the objects to be grasped, e.g. by using 3D-printed fingertips.
App Package for FR3
Apps are modular building blocks that can be combined into App Workflows to prototype robot behaviors rapidly. Each App contains a context menu where the user is guided interactively to enter parameters like speed and force, as well as to set robot poses by demonstration.
RIDE
RIDE is the development interface for writing custom Apps and connecting third-party hardware and external resources. It's the ideal tool for customizing and extending the system’s capabilities.
NVIDIA® Isaac Sim™
NVIDIA Isaac Sim speeds up your development and testing by creating photorealistic, physically accurate virtual environments. The scalable robotics simulation and synthetic-data-generation tool is designed to integrate seamlessly with the latest robotic systems, including FR3. With this integration, you can replicate real-world scenarios and conduct comprehensive testing and analysis.
Franka Toolbox for MATLAB
Franka Toolbox for MATLAB provides all necessary control options and signals from the FR3 robot, resulting in a quick, intuitive, and robust way for students and researchers to evaluate their algorithms – whether in the laboratory or classroom. In the toolbox, users will find a rich set of MATLAB® scripts and Simulink® blocks, and a collection of advanced demos, covering a wide array of possibilities for controlling the robot.
ROS 2, the successor to the widely adopted ROS, unlocks new possibilities for researchers and industry professionals alike. In keeping with our promise to support researchers and developers with robust and versatile tools to shape the future of robotics, we have released a brand-new Franka ROS 2 package. This comprehensive package offers a broad range of functionalities to equip your FR3 robots with the full spectrum of opportunities opened up by ROS 2.
Franka AI Companion elegantly combines the hardware and software you need to streamline the setup and speed up the execution of your robotics and AI research, while also offering NVIDIA® GPU-accelerated edge computing power and real-time 1 kHz control.
Three access levels to the robot address different needs and skills, for the whole spectrum of robotics research.
DESK
Its ease of use and minimal programming time make Desk the most suitable interface for rapid prototyping, simple human-robot interaction studies, and demos.
It enables researchers to fully integrate the Franka Robotics system into experimental setups and exploit its integrated high-performance controllers. It is also a great tool for teaching introductory robotics.
FCI bypasses the robot's Control, letting researchers run their own control algorithms on external real-time-capable PCs at 1 kHz. It is the ideal interface to explore low-level planning and control schemes.
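The external PC's job is to hit that 1 kHz cycle consistently. The skeleton of such a fixed-rate loop can be sketched as follows; this is a plain-Python timing illustration under assumed names (`run_control_loop`, `step`), whereas a real setup would rely on a real-time kernel and the control interface's own blocking cycle rather than `sleep()`.

```python
# Sketch of the fixed-rate loop an external PC runs for 1 kHz control.
# Pure-Python illustration; function names here are hypothetical.
import time

def run_control_loop(step, cycles, rate_hz=1000.0):
    """Call step(t) at a fixed rate for a given number of cycles."""
    period = 1.0 / rate_hz
    next_deadline = time.perf_counter()
    for i in range(cycles):
        step(i * period)                  # compute and send torques here
        next_deadline += period
        delay = next_deadline - time.perf_counter()
        if delay > 0:                     # sleep off the remaining budget
            time.sleep(delay)

# Record the timestamps the loop hands to the control step.
ticks = []
run_control_loop(ticks.append, cycles=5)
```

Tracking an absolute deadline (rather than sleeping a fixed amount each iteration) keeps timing drift from accumulating across cycles.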
An open, global research ecosystem enabled by a powerful robotics platform, for quicker time to results and publication. Franka Research 3 is the reference platform for integrating existing research, sharing breakthroughs, collaborating on projects, replicating studies, and promoting papers within the community.
From AI, ML, Robot Control and Motion Planning, to Manipulation and HRI.
For researchers at the cutting edge of AI & Robotics, FRANKA RESEARCH 3 provides a reference force-sensitive robotic platform and powerful control interfaces for quick time to results and publication. The platform also offers a low barrier to entry for researchers in search of a robot arm to automate their experimental setup, as well as support for teaching robot control and automation courses.
Motion Reasoning for Goal-Based Imitation Learning
RLBench: The Robot Learning Benchmark
Constrained Probabilistic Movement Primitives for Robot Trajectory Adaptation
Reinforcement Learning for Robotic Rock Grasp Learning in Off-Earth Space Environments
Learning Generalizable Coupling Terms for Obstacle Avoidance via Low-Dimensional Geometric Descriptors
6-DOF Grasping for Target-driven Object Manipulation in Clutter
Provably Safe and Efficient Motion Planning with Uncertain Human Dynamics
A novel adaptive controller for robot manipulators using active inference
A Teleoperation Interface for Loco-manipulation Control of MOCA
Online Replanning in Belief Space for Partially Observable Task and Motion Problems
Object-Centric Task and Motion Planning in Dynamic Environments
Scaffold Learning: Learning to Scaffold the Development of Robotic Manipulation Skills
Learning to Generate 6-DoF Grasp Poses with Reachability Awareness
Learning Pregrasp Manipulation of Objects from Ungraspable Poses
Interaction Force Computation Exploiting Environment Stiffness Estimation for Sensorless Robot Applications
Describing Physics For Physical Reasoning: Force-based Sequential Manipulation Planning
{"author_name":"Learning and Intelligent Systems Lab, TU Berlin","author_url":"https://www.youtube.com/@IntelligentSystemsLabTUBerlin","height":"113","html":"","provider_name":"YouTube","provider_url":"https://www.youtube.com/","thumbnail_height":"360","thumbnail_url":"https://i.ytimg.com/vi/i8yyEbbvoEk/hqdefault.jpg","thumbnail_width":"480","title":"Deep Visual Reasoning: Learning to Predict Action Sequences for Task and Motion Planning from Images","type":"video","version":"1.0","width":"200"}{"brand_image":{"alt":"maxplanck.2fef5ef","height":946,"loading":"lazy","max_height":500.6615506747817,"max_width":2000,"size_type":"auto_custom_max","src":"https://24883234.fs1.hubspotusercontent-eu1.net/hubfs/24883234/Partner%20Logos%202023/maxplanck.2fef5ef.jpeg","width":3779},"brand_link":{"no_follow":true,"open_in_new_tab":true,"rel":"nofollow noopener","sponsored":false,"url":{"content_id":null,"href":"","href_with_scheme":"","type":"EXTERNAL"},"user_generated_content":false},"media_embed":{"height":113,"max_height":113,"max_width":200,"oembed_response":{"author_name":"Learning and Intelligent Systems Lab, TU Berlin","author_url":"https://www.youtube.com/@IntelligentSystemsLabTUBerlin","height":"113","html":"","provider_name":"YouTube","provider_url":"https://www.youtube.com/","thumbnail_height":"360","thumbnail_url":"https://i.ytimg.com/vi/i8yyEbbvoEk/hqdefault.jpg","thumbnail_width":"480","title":"Deep Visual Reasoning: Learning to Predict Action Sequences for Task and Motion Planning from Images","type":"video","version":"1.0","width":"200"},"oembed_url":"https://youtu.be/i8yyEbbvoEk","size_type":"auto","source_type":"oembed","supported_oembed_types":["photo","video"],"width":200},"video_caption":"Deep Visual Reasoning: Learning to Predict Action Sequences for Task and Motion Planning from Images"}
Deep Visual Reasoning: Learning to Predict Action Sequences for Task and Motion Planning from Images
{"author_name":"Arash Ajoudani","author_url":"https://www.youtube.com/@arashajoudani9213","height":"113","html":"","provider_name":"YouTube","provider_url":"https://www.youtube.com/","thumbnail_height":"360","thumbnail_url":"https://i.ytimg.com/vi/_Axmnu95TyQ/hqdefault.jpg","thumbnail_width":"480","title":"A Capability-Aware Role Allocation Approach to Industrial Assembly Tasks","type":"video","version":"1.0","width":"200"}{"brand_image":{"alt":"Delft-University.ac08994","height":2872,"loading":"lazy","max_height":1384.7637415621987,"max_width":2000,"size_type":"auto_custom_max","src":"https://24883234.fs1.hubspotusercontent-eu1.net/hubfs/24883234/Partner%20Logos%202023/Delft-University.ac08994.png","width":4148},"brand_link":{"no_follow":true,"open_in_new_tab":true,"rel":"nofollow noopener","sponsored":false,"url":{"content_id":null,"href":"","href_with_scheme":"","type":"EXTERNAL"},"user_generated_content":false},"media_embed":{"height":113,"max_height":113,"max_width":200,"oembed_response":{"author_name":"Arash Ajoudani","author_url":"https://www.youtube.com/@arashajoudani9213","height":"113","html":"","provider_name":"YouTube","provider_url":"https://www.youtube.com/","thumbnail_height":"360","thumbnail_url":"https://i.ytimg.com/vi/_Axmnu95TyQ/hqdefault.jpg","thumbnail_width":"480","title":"A Capability-Aware Role Allocation Approach to Industrial Assembly Tasks","type":"video","version":"1.0","width":"200"},"oembed_url":"https://youtu.be/_Axmnu95TyQ","size_type":"auto","source_type":"oembed","supported_oembed_types":["photo","video"],"width":200},"video_caption":"A Capability-Aware Role Allocation Approach to Industrial Assembly Tasks"}
A Capability-Aware Role Allocation Approach to Industrial Assembly Tasks
{"author_name":"Vidyasagar Rajendran","author_url":"https://www.youtube.com/@vidyasagarrajendran238","height":"113","html":"","provider_name":"YouTube","provider_url":"https://www.youtube.com/","thumbnail_height":"360","thumbnail_url":"https://i.ytimg.com/vi/0KglQmCkpAQ/hqdefault.jpg","thumbnail_width":"480","title":"A Framework for Human-Robot Interaction User Studies","type":"video","version":"1.0","width":"200"}{"brand_image":{"alt":"waterloo.4f899ab","height":1200,"loading":"lazy","max_height":1200,"max_width":1200,"size_type":"auto","src":"https://24883234.fs1.hubspotusercontent-eu1.net/hubfs/24883234/waterloo.4f899ab.png","width":1200},"brand_link":{"no_follow":true,"open_in_new_tab":true,"rel":"nofollow noopener","sponsored":false,"url":{"content_id":null,"href":"","href_with_scheme":"","type":"EXTERNAL"},"user_generated_content":false},"media_embed":{"height":113,"max_height":113,"max_width":200,"oembed_response":{"author_name":"Vidyasagar Rajendran","author_url":"https://www.youtube.com/@vidyasagarrajendran238","height":"113","html":"","provider_name":"YouTube","provider_url":"https://www.youtube.com/","thumbnail_height":"360","thumbnail_url":"https://i.ytimg.com/vi/0KglQmCkpAQ/hqdefault.jpg","thumbnail_width":"480","title":"A Framework for Human-Robot Interaction User Studies","type":"video","version":"1.0","width":"200"},"oembed_url":"https://youtu.be/0KglQmCkpAQ","size_type":"auto","source_type":"oembed","supported_oembed_types":["photo","video"],"width":200},"video_caption":"A Framework for Human-Robot Interaction User Studies"}
A Framework for Human-Robot Interaction User Studies
{"author_name":"iam-lab cmu","author_url":"https://www.youtube.com/@iam-labcmu","height":"113","html":"","provider_name":"YouTube","provider_url":"https://www.youtube.com/","thumbnail_height":"360","thumbnail_url":"https://i.ytimg.com/vi/lK0Zro_Yca0/hqdefault.jpg","thumbnail_width":"480","title":"Search-Based Task Planning with Learned Skill Effect Models for Lifelong Robotic Manipulation","type":"video","version":"1.0","width":"200"}{"brand_image":{"alt":"cmu.6b59dfc","height":99,"loading":"lazy","max_height":99,"max_width":565,"size_type":"auto","src":"https://24883234.fs1.hubspotusercontent-eu1.net/hubfs/24883234/cmu.6b59dfc.png","width":565},"brand_link":{"no_follow":true,"open_in_new_tab":true,"rel":"nofollow noopener","sponsored":false,"url":{"content_id":null,"href":"","href_with_scheme":"","type":"EXTERNAL"},"user_generated_content":false},"media_embed":{"height":113,"max_height":113,"max_width":200,"oembed_response":{"author_name":"iam-lab cmu","author_url":"https://www.youtube.com/@iam-labcmu","height":"113","html":"","provider_name":"YouTube","provider_url":"https://www.youtube.com/","thumbnail_height":"360","thumbnail_url":"https://i.ytimg.com/vi/lK0Zro_Yca0/hqdefault.jpg","thumbnail_width":"480","title":"Search-Based Task Planning with Learned Skill Effect Models for Lifelong Robotic Manipulation","type":"video","version":"1.0","width":"200"},"oembed_url":"https://youtu.be/lK0Zro_Yca0","size_type":"auto","source_type":"oembed","supported_oembed_types":["photo","video"],"width":200},"video_caption":"Search-Based Task Planning with Learned Skill Effect Models for Lifelong Robotic Manipulation"}
Search-Based Task Planning with Learned Skill Effect Models for Lifelong Robotic Manipulation
{"author_name":"idil ozdamar","author_url":"https://www.youtube.com/@idiltenis","height":"113","html":"","provider_name":"YouTube","provider_url":"https://www.youtube.com/","thumbnail_height":"360","thumbnail_url":"https://i.ytimg.com/vi/_Upnax8j7PQ/hqdefault.jpg","thumbnail_width":"480","title":"A Shared Autonomy Reconfigurable Control Framework for Telemanipulation of Multi-arm Systems","type":"video","version":"1.0","width":"200"}{"brand_image":{"alt":"iit.5a95f42","height":361,"loading":"lazy","max_height":361,"max_width":1000,"size_type":"auto","src":"https://24883234.fs1.hubspotusercontent-eu1.net/hubfs/24883234/iit.5a95f42.png","width":1000},"brand_link":{"no_follow":true,"open_in_new_tab":true,"rel":"nofollow noopener","sponsored":false,"url":{"content_id":null,"href":"","href_with_scheme":"","type":"EXTERNAL"},"user_generated_content":false},"media_embed":{"height":113,"max_height":113,"max_width":200,"oembed_response":{"author_name":"idil ozdamar","author_url":"https://www.youtube.com/@idiltenis","height":"113","html":"","provider_name":"YouTube","provider_url":"https://www.youtube.com/","thumbnail_height":"360","thumbnail_url":"https://i.ytimg.com/vi/_Upnax8j7PQ/hqdefault.jpg","thumbnail_width":"480","title":"A Shared Autonomy Reconfigurable Control Framework for Telemanipulation of Multi-arm Systems","type":"video","version":"1.0","width":"200"},"oembed_url":"https://youtu.be/_Upnax8j7PQ","size_type":"auto","source_type":"oembed","supported_oembed_types":["photo","video"],"width":200},"video_caption":"A Shared Autonomy Reconfigurable Control Framework for Telemanipulation of Multi-arm Systems"}
A Shared Autonomy Reconfigurable Control Framework for Telemanipulation of Multi-arm Systems