Our novel sampling-based controller demonstrates sliding, rolling, and pushing motions in real time. It efficiently navigates a highly non-convex object to a specified goal, without relying on any predefined motion patterns or tactile sensing. Our algorithm adapts dynamically to achieve precise and efficient 6-DoF contact-rich object manipulation. (The large GIF may take some time to load 🙂)
Read our preprint here. This work is currently under review. Here's a link to our more extensive project website.
ABSTRACT
To achieve general-purpose dexterous manipulation, robots must rapidly devise and execute contact-rich behaviors. Existing model-based controllers are incapable of globally optimizing in real time over the exponential number of possible contact sequences. Instead, recent progress in contact-implicit control has leveraged simpler models that, while still hybrid, make local approximations. However, the use of local models inherently limits the controller to only exploit nearby interactions, potentially requiring intervention to richly explore the space of possible contacts. We present a novel approach which leverages the strengths of local complementarity-based control in combination with low-dimensional, but global, sampling of possible end-effector locations. Our key insight is to consider a contact-free stage preceding a contact-rich stage at every control loop. Our algorithm, in parallel, samples end-effector locations to which the contact-free stage can move the robot, then considers the cost predicted by contact-rich MPC local to each sampled location. The result is a globally-informed, contact-implicit controller capable of real-time dexterous manipulation. We demonstrate our controller on precise, non-prehensile manipulation of non-convex objects using a Franka Panda arm.
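To make the control loop concrete, here is a minimal Python sketch of one iteration of the two-stage scheme described in the abstract. The three callables (`sample_ee`, `travel_cost`, `local_mpc_cost`) are hypothetical stand-ins for the sampler, the contact-free stage, and the complementarity-based MPC cost; they are not the interface of our actual implementation.

```python
import numpy as np

def choose_ee_target(x_robot, x_object, goal,
                     sample_ee, travel_cost, local_mpc_cost, n_samples=50):
    """One control loop of the two-stage scheme (illustrative sketch).

    Hypothetical callables standing in for the paper's components:
      sample_ee(x_object)                -> candidate end-effector location
      travel_cost(x_robot, ee)           -> cost for the contact-free stage
                                            to move the robot to `ee`
      local_mpc_cost(ee, x_object, goal) -> cost predicted by contact-rich
                                            MPC local to the sampled location
    """
    # Global stage: sample candidate end-effector locations in a
    # low-dimensional space (evaluated in parallel in practice).
    candidates = [sample_ee(x_object) for _ in range(n_samples)]
    # Local stage: score each candidate by travel cost plus the cost
    # predicted by contact-rich MPC local to that candidate.
    costs = [travel_cost(x_robot, ee) + local_mpc_cost(ee, x_object, goal)
             for ee in candidates]
    # Commit to the globally best candidate for this control loop.
    return candidates[int(np.argmin(costs))]
```

In the actual controller, the contact-free stage then moves the robot toward the selected location while the local contact-rich MPC handles the interaction.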
A preliminary version of this was presented at IROS 2023 where we were invited to give a spotlight talk at the Workshop on Leveraging Models for Contact-Rich Manipulation. Read our workshop paper here.
Here is a link to the current iteration of source code. Please note this is research-grade code; a public codebase is coming soon!
Our novel sampling-based controller demonstrates sliding, rolling, and pushing motions in real time. It efficiently navigates a highly non-convex object to a specified goal, without relying on any predefined motion patterns or tactile sensing. Our algorithm adapts dynamically to achieve precise and efficient contact-rich object manipulation. (The large GIF may take some time to load 🙂)
Read my MS thesis here.
ABSTRACT
Robots should be able to operate and fit into workflows in human-centric, unconstrained environments. In the manipulation domain, the ability to perform non-prehensile contact maneuvers remains a missing piece in enabling this. Prior work has approached the problem by splitting a manipulation task into sequential subtasks, which is restrictive and may not lead to optimal solutions. Moreover, this method does not scale well to more complex, contact-rich manipulation tasks. On the other hand, posing the problem as a global optimal control problem and implicitly reasoning about contact interactions is intractable for online, reactive control. Recent work uses local approximations to perform real-time contact-implicit control but is limited by the local nature of the underlying simplification. We address the need for a globally aware, contact-implicit controller capable of real-time control by breaking the problem down into two distinct hybrid decisions: (a) finding advantageous locations from which local contact-implicit control is possible, by sampling in a low-dimensional subspace of the state space, and (b) performing local contact-implicit control. We present results on real-time multi-contact control of a complex non-convex toy jack in simulation.
A preliminary version of this was presented at IROS 2023 where we were invited to give a spotlight talk at the Workshop on Leveraging Models for Contact-Rich Manipulation. Read our workshop paper here.
Here is a link to the current iteration of source code. Please note this is in-development, research-grade code.
We are currently working on demonstrating our algorithm on hardware. Stay tuned!
Check out my master’s thesis presentation where I explain our algorithm in detail.
Explore more videos showcasing our controller's capabilities. In the first video, you'll see an intriguing behavior in which the controller balances the object on two prongs to reach a raised target, demonstrating its ability to make intelligent control decisions adaptively. All of these demonstrations are also featured in my master's thesis presentation.
ABSTRACT
Bipedal robots are promising for traversing rough terrain quickly and efficiently because of their ability to make and break contact with the ground; however, real-time footstep planning over rough terrain remains a challenge. Exteroceptive sensing modalities such as RGB-D cameras are typically used to capture the terrain landscape for downstream footstep planning algorithms, but existing approaches rely on relatively brittle, handcrafted pipelines for elevation mapping and foothold segmentation. We therefore propose learning to adjust the optimal footstep location depending on the terrain. To leverage the capabilities of deep learning for robust vision processing in a way that is composable with existing model-based approaches, we learn a terrain-conditioned residual to the LQR action-value function (Q-function) for a low-dimensional, linear model of bipedal locomotion. By the end of the course, we had developed a complete simulation environment and data collection pipeline and completed several training, validation, and testing iterations. We also implemented an LQR controller with the learned residual, tested the closed-loop behavior in simulation in real time, and obtained preliminary results showing that the proposed approach can improve bipedal locomotion on rough terrain using vision input.
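As a rough illustration of the formulation, here is a minimal Python sketch of an LQR Q-function augmented with a learned, terrain-conditioned residual. The dynamics matrices and the `residual_net` callable are illustrative assumptions, not the project's actual model or network.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical discrete-time linear footstep model x' = A x + B u.
# These matrices are placeholders, not the project's actual model.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)  # state cost
R = np.eye(1)  # footstep-adjustment cost

# Optimal cost-to-go matrix from the discrete algebraic Riccati equation.
P = solve_discrete_are(A, B, Q, R)

def q_lqr(x, u):
    """LQR action-value: one-step cost plus cost-to-go of the next state."""
    x_next = A @ x + B @ u
    return float(x @ Q @ x + u @ R @ u + x_next @ P @ x_next)

def q_total(x, u, terrain_patch, residual_net):
    """Terrain-conditioned Q: analytic LQR term plus a learned residual.
    `residual_net(terrain_patch, x, u) -> float` is a hypothetical network."""
    return q_lqr(x, u) + residual_net(terrain_patch, x, u)

# Footstep selection minimizes the corrected Q over candidate footholds:
#   u_star = min(candidates, key=lambda u: q_total(x, u, patch, net))
```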
Please note that this work is unpublished at present. Video and report available on request.
Sharanya Venkatesh, Francesa Cimino, Ashna Khose, Razaq Aribidesi
Watch our Franka Emika Panda robot as it performs pick-and-place tasks using our custom arm control library. The library combines forward kinematics with Denavit-Hartenberg (DH) parameters, inverse kinematics with kinematic decoupling, and geometric Jacobian solutions for precise velocity control; a minimal FK sketch follows the list below.
In this video, you'll see:
Advanced Control Algorithms: Seamless integration of forward kinematics, inverse kinematics, and velocity control for optimal performance.
Tag-Based Pose Estimation: Accurate detection and positioning of blocks for precise pick and place operations.
Efficient Task Execution: Watch the robot pick blocks and stack them perfectly at the designated destination stage.
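To make the forward-kinematics component concrete, here is a minimal sketch of DH-based FK. The transform follows the standard DH convention; the `(a, alpha, d)` tuple format is an illustrative choice of ours, and the Panda's actual DH table is not shown.

```python
import numpy as np

def dh_transform(a, alpha, d, theta):
    """Homogeneous transform for one link, standard DH convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params, joint_angles):
    """Chain per-joint DH transforms into the base-to-end-effector pose.

    `dh_params` is a list of (a, alpha, d) tuples; in practice these would
    come from the robot's DH table (omitted here).
    """
    T = np.eye(4)
    for (a, alpha, d), theta in zip(dh_params, joint_angles):
        T = T @ dh_transform(a, alpha, d, theta)
    return T
```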
COMING SOON!