Validation: Dexterous manipulation and robot control

Benchmark for bimanual robotic manipulation of semi-deformable objects

Most of our everyday activities, such as using tools to grasp small objects, require coordinated motion of the arms (e.g., reaching) or the hands (e.g., manipulation). Performing these tasks with one arm is often infeasible, mainly because the dexterity and flexibility they require are beyond a single arm’s capabilities. Likewise, a single robot arm is simply incapable of meeting the requirements of such complex tasks. A dual- or multi-arm robotic system, on the other hand, extends the flexibility and capabilities of a single robot arm. It allows, for example, highly complex manipulation of small and deformable objects that would otherwise be infeasible for single-arm systems. Yet we are far from having robotic systems that can robustly achieve such dexterous manipulation skills.

One can, however, envision an extensive range of applications in households and factories that would benefit from such strategies. Examples include placing and closing lids, packaging boxes on pallets, and inserting a USB cable into a socket. Completing this type of task requires localizing both objects relative to one another (lid on top of a box, cable into a tube) and adapting the forces and movements of the two arms in coordination. Insertion of semi-deformable materials is particularly difficult because one cannot build an explicit model of the deformation and interaction forces. Machine learning provides a structured framework that allows robots to learn such difficult-to-model problems from previous experience, without explicit modeling of the task constraints, possibly taking advantage of noisy expert demonstrations. This benchmark proposes an evaluation of machine learning algorithms on a difficult multi-arm insertion task that involves collaboration among the arms to manipulate a small and semi-deformable object.

The watchmaking process highlights these shortcomings of current robotic systems and poses several challenges, including (but not limited to):

• Placing an irregularly shaped object in the correct groove,
• Handling a deformable object,
• Multi-arm coordination:

– One arm stabilizing the system, and the other(s) performing the manipulation,
– Performing object manipulation that requires two tools.

Under the SAHR project, we are working to provide the research community with a benchmark that evaluates machine learning algorithms on a difficult multi-arm insertion task that involves collaboration among the arms to manipulate a small and semi-deformable object from the watch assembly. This component is highlighted in the depicted CAD file of the watch assembly.

For the benchmark, and to increase the manipulability of the watch, we work with scaled-up versions of the actual watch at carefully designed scale factors of 3.5 and 5.8. These watches are 3D printed for the robotic setup.

In the benchmark, we will focus on the plate insertion task of the watch assembly process. Thus, the tasks that need to be learned are:

  1. Coordinating the arms in synchrony to orient the plate for ease of control and successful insertion,
  2. Adapting orientation and pressure in response to force feedback to insert the plate and its leg (see the sketch below).
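
As a rough illustration of the second skill, below is a minimal admittance-style sketch that shifts the commanded pose of the inserting arm in response to a measured contact force. The gains, the desired force, and the sensor interface are illustrative assumptions, not part of the benchmark specification.

```python
import numpy as np

# Minimal admittance-style force-feedback sketch (illustrative values only).
K_f = 0.002                          # compliance gain [m/(N*s)]
f_des = np.array([0.0, 0.0, -2.0])   # desired contact force along the insertion axis [N]

def admittance_step(x_ref, f_meas, dt=0.01):
    """Shift the commanded position to track the desired contact force."""
    f_err = f_des - f_meas           # force tracking error
    return x_ref + K_f * f_err * dt  # integrate a compliant offset into the reference

# Usage inside the inserting arm's control loop (hypothetical sensor API):
# x_ref = admittance_step(x_ref, force_sensor.read())
```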

Exploiting hand dexterity for multiple object grasping

Human and robotic hands holding multiple objects

Human hands are highly dexterous systems. Hand dexterity enables humans to grasp everyday objects in distinct poses for different manipulation purposes, and even to grasp multiple objects at the same time for complex manipulation tasks. One main reason for this dexterity lies in the exploitation of the human hand’s abundant degrees of freedom.


Inspired by human hand dexterity, we are currently developing algorithms to enhance the dexterity of robotic hands with redundant degrees of freedom. Our approach substantially improves a robotic hand’s capability to handle one or more objects in dexterous grasping and manipulation tasks.

Enhancing Dexterity in Confined Spaces: Real-Time Motion Planning for Multi-Fingered In-Hand Manipulation

Human hands excel at in-hand manipulation skills such as regrasping, finger gaiting, and sliding. In this work, we present a real-time motion planning framework that models the robotic hand’s collision-free space using a set of learned distance functions. These functions can be queried in real time and are integrated with a dynamical-systems control paradigm and sampling-based planning techniques, which accelerates the search for viable manipulation pathways. This combination not only enables the quick generation of in-hand manipulation skills, including advanced sliding movements, but also sets a new standard for efficiency and effectiveness in controlling robotic hands.
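
The sketch below illustrates, under stated assumptions, how a learned distance function could be combined with a DS controller and a sampling fallback: the DS proposes a step, the distance function vets it, and nearby samples are tried when the step would leave the safe set. The `dist_fn` stand-in and all thresholds are illustrative placeholders, not the framework’s actual models.

```python
import numpy as np

# Sketch: vetting dynamical-system (DS) steps with a learned distance
# function, with a sampling fallback. `dist_fn` is a dummy stand-in for
# the learned model; gains and thresholds are illustrative.
def dist_fn(q):
    """Stand-in for a learned distance from configuration q to the
    boundary of the hand's collision-free space (positive = free)."""
    return 0.05 - 0.01 * np.linalg.norm(q)

def ds_velocity(q, q_goal, gain=1.0):
    """Nominal linear DS attracting the fingers toward q_goal."""
    return -gain * (q - q_goal)

def safe_step(q, q_goal, dt=0.001, margin=0.01, n_samples=32):
    v = ds_velocity(q, q_goal)
    q_next = q + v * dt
    if dist_fn(q_next) < margin:
        # The DS step would leave the safe set: sample nearby motions and
        # keep the safe one that makes the most progress toward the goal.
        samples = [q + dt * np.random.randn(q.size) for _ in range(n_samples)]
        safe = [s for s in samples if dist_fn(s) >= margin]
        q_next = min(safe, key=lambda s: np.linalg.norm(s - q_goal), default=q)
    return q_next
```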

Dynamic Object Shape Exploration with a Robotic Hand

If our vision is occluded or the environment is dark, we humans use our hands to identify objects around us, e.g., finding an object behind a curtain. Similar situations frequently occur in robotic applications, where an object cannot be precisely recognized due to occlusion or poor lighting conditions. In this research, we propose an algorithm that imitates the human strategy of adjusting hand pose while exploring an unknown object. During the exploration, an informative path planning approach based on Gaussian Process Implicit Surfaces is adopted to reconstruct the object from tactile sensing feedback.
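
A minimal sketch of the GPIS idea is given below, assuming scikit-learn’s Gaussian process regressor: contact points are labeled zero, one interior point anchors the sign convention, the surface is the zero level set of the GP mean, and the next touch is chosen where the surface is likely but still uncertain. The data, kernel, and scoring heuristic are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Sketch: Gaussian Process Implicit Surface (GPIS) from tactile contacts.
# Contact points take implicit value 0; one interior point (-1) fixes the
# sign convention. All data and parameters here are illustrative.
contacts = np.random.rand(20, 3)            # touched surface points
interior = np.array([[0.5, 0.5, 0.5]])      # a point assumed inside the object
X = np.vstack([contacts, interior])
y = np.hstack([np.zeros(len(contacts)), [-1.0]])

gpis = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-4)
gpis.fit(X, y)

# The object surface is the zero level set of the GP mean. An informative
# next touch targets points whose mean is near zero (likely on the
# surface) but whose predictive uncertainty is still high (unexplored).
candidates = np.random.rand(500, 3)
mean, std = gpis.predict(candidates, return_std=True)
scores = std - 5.0 * np.abs(mean)           # uncertainty vs. surface proximity
next_touch = candidates[np.argmax(scores)]
```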

Self-Correcting Model-based Robot Control

In the context of multi-body robotic systems, the coordination of a substantial number of degrees of freedom and interaction forces is a critical task. This coordination presents a considerable challenge, particularly when operating in task space, even for seemingly straightforward tasks, such as those involving a bimanual robot interacting with an object. In contrast, humans and animals possess a remarkable ability to adapt to unforeseen situations and master new skills. They accomplish this feat by learning from their mistakes and engaging in a trial-and-error process.

We envision a similar robotic system that learns through trial and error: it tries to achieve the task with a quadratic programming (QP)-based controller, fails (e.g., the robot drops the box), and tries again until it manages to realize the goal. Our approach builds upon existing model learning techniques, such as learned inverse dynamics (ID) models, to improve the QP dynamics model over time. Moreover, we utilize adaptive control methods to regulate the feedback term. This is in contrast to prior approaches that focused solely on modifying the QP cost function. By combining trial-and-error learning with our proposed methodology, we anticipate that our robotic system will be able to efficiently learn and accomplish complex tasks.
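
Below is a minimal sketch of one QP step under this scheme, using cvxpy: the task-space objective is corrected by a learned residual and scaled by an adaptive feedback gain. The Jacobian, residual model, gains, and bounds are illustrative assumptions, not our actual controller.

```python
import numpy as np
import cvxpy as cp

# Sketch: one step of a task-space QP whose model is corrected by a
# learned residual and an adaptive feedback gain. The Jacobian, residual,
# gains, and bounds are illustrative assumptions.
n = 7                                   # joints of one arm
J = np.random.randn(6, n)               # task Jacobian at the current state
xdd_des = np.zeros(6)                   # desired task-space acceleration
residual = np.zeros(6)                  # learned model error, refit after each failed trial
k_fb = 1.0                              # feedback gain, adapted between trials
task_err = np.zeros(6)                  # current task-space tracking error

qdd = cp.Variable(n)
cost = cp.sum_squares(J @ qdd - (xdd_des + k_fb * task_err - residual))
prob = cp.Problem(cp.Minimize(cost + 1e-3 * cp.sum_squares(qdd)),
                  [qdd <= 10.0, qdd >= -10.0])   # illustrative acceleration bounds
prob.solve()
qdd_cmd = qdd.value                     # joint accelerations sent to the robot
```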

Real-Time Self-Collision Avoidance in Joint Space for Humanoid Robots

In this work, we introduce a method to help a humanoid robot avoid colliding with itself in real time. We do this by teaching the robot the safe regions where it can move its joints without crashing into other parts of its body. Then, we use this knowledge to make sure the robot’s movements stay collision-free while it operates.

As humanoid robots become more complex, with more ways they can move, it becomes harder to teach them to avoid collisions efficiently and accurately. To solve this problem, we break the robot down into smaller, more manageable sections. We test different advanced machine learning techniques to find the best way to teach the robot these safe movement boundaries.
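
A minimal sketch of this boundary-learning recipe, with scikit-learn and a dummy collision checker standing in for the offline mesh-based one, is shown below; in the actual method, one classifier per pair of body sections would be trained on only the relevant joints.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Sketch: learning joint-space self-collision boundaries from sampled
# configurations. The collision checker is a dummy stand-in for an
# offline mesh-based check; the architecture is illustrative.
def in_self_collision(q):
    """Stand-in for an offline mesh-based self-collision check."""
    return np.linalg.norm(q) > 2.5

Q = np.random.uniform(-np.pi, np.pi, size=(10000, 29))   # sampled 29-joint configurations
labels = np.array([in_self_collision(q) for q in Q])

# In the actual method, one classifier per pair of body sections is
# trained on only the joints relevant to that pair; here we fit one.
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300).fit(Q, labels)

# Online use: accept a candidate joint command only if predicted safe.
q_cmd = np.zeros(29)
is_safe = not clf.predict(q_cmd.reshape(1, -1))[0]
```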

We put our method to the test on the 29-joint iCub humanoid robot, and the results show that our approach works well in preventing collisions in real time.

Neural Joint Space Implicit Signed Distance Functions for Reactive Robot Manipulator Control

In this work, we introduce a method for teaching a computer system to quickly figure out how far different parts of a robot are from colliding with each other. This has been a tricky problem in robotics: when robots don’t know exactly how close they are to obstacles, they tend to be overly careful or make mistakes. This slows down their reactions and can make their tasks more expensive to perform.

Our method uses GPUs and the fact that the learned distance function is differentiable to quickly measure these distances and find the best ways for the robot to move without bumping into things. We demonstrate that this learned knowledge can be used to control the robot in real time. We formulate two ways the robot can use this function: first, as a collision-avoidance constraint in a quadratic programming inverse kinematics solver, and second, as a collision cost in a sampling-based joint-space model predictive controller (MPC).
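
The first formulation can be sketched as follows: linearize the learned distance function around the current configuration and project the nominal joint velocity onto the resulting halfspace constraint. The tiny untrained network, the margin, and the closed-form projection below are illustrative stand-ins for the learned function and the full QP solver.

```python
import torch

# Sketch: using a differentiable joint-space distance function as a
# linearized collision constraint in an IK step. The untrained network
# is a stand-in for the learned function; the closed-form projection
# replaces a full QP solver for this single constraint.
sdf = torch.nn.Sequential(torch.nn.Linear(7, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))

def ik_step(q, dq_nominal, margin=0.02, dt=0.01):
    """Project a nominal joint velocity so that the linearized constraint
    gamma(q) + dt * grad(gamma)^T dq >= margin holds."""
    q = q.clone().requires_grad_(True)
    gamma = sdf(q).squeeze()
    grad, = torch.autograd.grad(gamma, q)
    slack = gamma.detach() + dt * (grad @ dq_nominal) - margin
    if slack >= 0:
        return dq_nominal                 # nominal motion already satisfies the constraint
    # Minimal correction along the distance gradient.
    return dq_nominal - slack / (dt * (grad @ grad) + 1e-9) * grad

dq_cmd = ik_step(torch.zeros(7), torch.randn(7))
```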

When we put this to the test with a robot with seven joints and obstacles intentionally placed in its way, we achieve faster control than traditional methods: about 15% faster for one way of controlling the robot and 40% faster for another. This means the robot can react and move more quickly and efficiently to avoid obstacles.

Reactive Collision-Free Motion Generation in Joint Space via Dynamical Systems and Sampling-Based MPC

Dynamical systems are a way of planning a robot’s movement so that it doesn’t collide with things and can quickly react if the situation changes. They use a kind of math to reshape the robot’s planned movement around obstacles. But when the obstacles are non-convex, these methods can get stuck in certain situations, which makes them hard to use for more complicated robots with many degrees of freedom.

On the other hand, there’s a different method called Model Predictive Control (MPC) that also helps the robot find safe, collision-free paths. But this method can take a lot of time to compute, especially when the robot has many degrees of freedom and needs to consider many possible movements ahead of time.

We introduce a new way of combining these methods to control a robot in a crowded space with things moving around. Our method uses the best of both worlds: it reshapes the robot’s planned movements to avoid obstacles using a combination of the dynamical-system approach and Model Predictive Control. We make sure the robot doesn’t get stuck by adjusting its planned path with obstacle-aware velocities that guide it around obstacles; these velocities are derived from paths calculated by the Model Predictive Control.

We don’t run the Model Predictive Control all the time, only when the robot might get stuck. We tested this approach both in simulation and in the real world with a robot that has seven degrees of freedom. The results show that our method helps the robot avoid oddly shaped obstacles, even when the environment is changing quickly and the robot needs to react fast.
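
A minimal sketch of the switching logic, with placeholder modulation and MPC routines, is given below: the robot follows the modulated DS, and only when the modulated velocity collapses away from the goal does it query the sampling-based MPC for an escape direction.

```python
import numpy as np

# Sketch: hybrid DS + sampling-based MPC switching. The modulation and
# the MPC routine are placeholders; only the switching logic reflects
# the scheme described above.
def ds(q, q_goal):
    return -(q - q_goal)                  # nominal attractor dynamics

def modulate(v, q):
    """Stand-in for DS modulation that deflects v around obstacles; it
    can vanish near non-convex obstacles (a local minimum)."""
    return v * 0.0 if np.linalg.norm(q - 0.5) < 0.1 else v

def mpc_escape_velocity(q, q_goal, n_rollouts=64, dt=0.01):
    """Stand-in for a sampling-based MPC: return the first action of the
    sampled rollout ending closest to the goal."""
    samples = [np.random.randn(*q.shape) for _ in range(n_rollouts)]
    return min(samples, key=lambda v: np.linalg.norm(q + dt * v - q_goal))

def step(q, q_goal, dt=0.01, eps=1e-3):
    v = modulate(ds(q, q_goal), q)
    if np.linalg.norm(v) < eps and np.linalg.norm(q - q_goal) > eps:
        v = mpc_escape_velocity(q, q_goal)   # MPC runs only when the DS is stuck
    return q + dt * v
```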

Adaptive Task Planning and Action Tuning using Large Language Models

Large Language Models (LLMs) present a promising frontier in robotic task planning by leveraging extensive human knowledge. Nevertheless, the current literature often overlooks the critical aspects of adaptability and error correction within robotic systems. This work aims to overcome this limitation by enabling robots to modify their motion strategies and select the most suitable task plans based on the context. We introduce a novel framework termed “action contextualization”, aimed at tailoring robot actions to the precise requirements of specific tasks, thereby enhancing adaptability through LLM-derived contextual insights. Moreover, our framework supports online feedback between the robot and the LLM, enabling immediate modifications to the task plans and corrections of errors. Our framework achieved an overall success rate of 81.25% in extensive validation. Finally, when integrated with dynamical system (DS)-based robot controllers, the robotic arm-hand system demonstrates its proficiency in autonomously executing LLM-generated motion plans for sequential table-clearing tasks, rectifying errors without human intervention and showing robustness against external disturbances.
Our proposed framework can also be integrated with modular control approaches, significantly enhancing robots’ adaptability and autonomy in sequential tasks.
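
A minimal sketch of this online feedback loop is given below; `query_llm`, the skill strings, and the `execute` interface are hypothetical placeholders, not the framework’s actual API.

```python
# Sketch: the online robot-LLM feedback loop behind "action
# contextualization". `query_llm`, the skill strings, and `execute` are
# hypothetical placeholders, not the framework's actual interface.
def query_llm(prompt: str) -> list[str]:
    """Placeholder for an LLM call returning an ordered list of skills."""
    return ["grasp(cup)", "move_to(bin)", "release(cup)"]

def execute(skill: str) -> tuple[bool, str]:
    """Placeholder for a DS-based controller executing one skill."""
    return True, ""

def run_task(task: str, max_retries: int = 3) -> bool:
    context = f"Task: {task}. Scene: <perception summary>."
    for _ in range(max_retries):
        plan = query_llm(context)
        for skill in plan:
            ok, error = execute(skill)
            if not ok:
                # Feed the failure back so the next plan is contextualized.
                context += f" Failed at {skill}: {error}."
                break
        else:
            return True                   # all skills succeeded
    return False
```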

Publications

Chatzilygeroudis K., Fichera B., Lauzana I., Bu F., Yao K., Khadivar F., and Billard A. (2020). “Benchmark for Bimanual Robotic Manipulation of Semi-Deformable Objects.” IEEE Robotics and Automation Letters, 5(2), pp. 2443-2450. DOI: 10.1109/LRA.2020.2972837

Yao K. and Billard A. (2023). “Exploiting Kinematic Redundancy for Robotic Grasping of Multiple Objects.” IEEE Transactions on Robotics (T-RO). DOI: 10.1109/TRO.2023.3253249

Khadivar F.*, Yao K.*, Gao X., and Billard A. (2023). “Online Active and Dynamic Object Shape Exploration with a Multi-fingered Robotic Hand.” Submitted to Robotics and Autonomous Systems (RAS). *Equal authorship.

Koptev M., Figueroa N., and Billard A. (2021). “Real-Time Self-Collision Avoidance in Joint Space for Humanoid Robots.” IEEE Robotics and Automation Letters, 6(2), pp. 1240-1247. DOI: 10.1109/LRA.2021.3057024

Koptev M., Figueroa N., and Billard A. (2022). “Neural Joint Space Implicit Signed Distance Functions for Reactive Robot Manipulator Control.” IEEE Robotics and Automation Letters, 8(2), pp. 480-487. DOI: 10.1109/LRA.2022.3227860

Gao X., Yao K., Khadivar F., and Billard A. (2023). “Real-Time Collision-Free Motion Planning for In-Hand Manipulation with a Multi-Fingered Hand.” Submitted to Robotics: Science and Systems (RSS). Under review.