Technology

The Robotic Hand Rotates Objects through Touch rather than Vision

Tactile sensors can be embedded in a robotic hand’s fingertips or palm, allowing it to sense the shape, texture, and orientation of the object it is grasping. By analyzing changes in pressure distribution or contact points during manipulation, the hand can determine the object’s current orientation and calculate how much rotation is needed to reach the desired position.

Inspired by the effortless way humans handle objects without seeing them, a team led by engineers at the University of California San Diego has developed a new approach that allows a robotic hand to rotate objects solely through touch, without relying on vision.

The researchers used their technique to create a robotic hand that can smoothly rotate a variety of objects, including small toys, cans, and even fruits and vegetables, without bruising or squishing them. The robotic hand completed these tasks solely on the basis of touch information. The findings could aid in the development of robots capable of manipulating objects in the dark.

The team’s work was recently presented at the 2023 Robotics: Science and Systems Conference. The researchers created their system by attaching 16 touch sensors to the palm and fingers of a four-fingered robotic hand. Each sensor costs about $12 and has a simple function: it detects whether or not an object is touching it.

In-hand manipulation is a very common skill that humans have, but it is extremely difficult for robots to master. If we can teach robots this skill, it will broaden the range of tasks they can perform.

Xiaolong Wang

What makes this approach unique is that it relies on many low-cost, low-resolution touch sensors that use simple, binary signals – touch or no touch – to perform robotic in-hand rotation. These sensors are spread over a large area of the robotic hand.

This is in contrast to other approaches, which rely on a few high-cost, high-resolution touch sensors attached to a small area of the robotic hand, primarily at the fingertips. These approaches have several flaws, according to Xiaolong Wang, a professor of electrical and computer engineering at UC San Diego who led the current study. For starters, having only a small number of sensors on the robotic hand reduces the chance that any of them will make contact with the object, which limits the system’s ability to sense. Second, high-resolution touch sensors that provide texture information are extremely difficult to simulate, not to mention prohibitively expensive, which makes using them in real-world experiments more difficult. Finally, many of these approaches continue to rely on vision.

“Here, we use a very simple solution,” said Wang. “We show that we don’t need details about an object’s texture to do this task. We just need simple binary signals of whether the sensors have touched the object or not, and these are much easier to simulate and transfer to the real world.”
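The binary signals Wang describes are straightforward to reproduce in simulation. A minimal sketch of the idea, assuming hypothetical names and an illustrative force threshold (neither is from the paper): each simulated sensor site reports a contact force, and thresholding it yields the same touch/no-touch signal the $12 hardware sensors produce, which is what makes sim-to-real transfer easier.

```python
import numpy as np

# Illustrative threshold in newtons; the real cutoff would depend on the
# simulator and sensor hardware.
CONTACT_THRESHOLD = 0.1

def binarize_touch(contact_forces: np.ndarray) -> np.ndarray:
    """Map per-sensor contact forces to binary touch signals (1 = touching)."""
    return (contact_forces > CONTACT_THRESHOLD).astype(np.int8)

# Forces at 4 of the 16 sensor sites on the palm and fingers
forces = np.array([0.0, 0.25, 0.02, 1.3])
print(binarize_touch(forces))  # [0 1 0 1]
```

Because both the simulated and physical sensors emit the same coarse 0/1 signal, the policy trained in simulation sees observations of the same form when deployed on the real hand.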


According to the researchers, having a large coverage of binary touch sensors provides the robotic hand with enough information about the object’s 3D structure and orientation to successfully rotate it without vision.

They first trained the system in simulation, with a virtual robotic hand rotating a variety of objects, including those with irregular shapes. At any given moment during the rotation, the system determines which sensors on the hand are being touched by the object. It also evaluates the current positions of the hand’s joints and its previous actions. The system uses this information to tell the robotic hand which joint should go where at the next time step.
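The observe-act cycle described above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the joint count and the stand-in linear "policy" are assumptions (the real system uses a network trained with reinforcement learning in simulation), but the structure of the observation — binary touch signals concatenated with joint positions and the previous action — follows the description.

```python
import numpy as np

N_SENSORS = 16  # binary touch sensors on the palm and fingers
N_JOINTS = 16   # illustrative joint count for a four-fingered hand

rng = np.random.default_rng(0)
# Random weights stand in for the trained policy, purely for illustration.
W = rng.standard_normal((N_JOINTS, N_SENSORS + 2 * N_JOINTS)) * 0.01

def policy_step(touch: np.ndarray, joint_pos: np.ndarray,
                prev_action: np.ndarray) -> np.ndarray:
    """One control step: concatenate observations, output next joint targets."""
    obs = np.concatenate([touch, joint_pos, prev_action])
    delta = W @ obs                    # trained network in the real system
    return joint_pos + np.tanh(delta)  # bounded update toward new targets

touch = np.zeros(N_SENSORS)
touch[[2, 5, 11]] = 1.0               # object currently touching 3 sensors
joint_pos = np.zeros(N_JOINTS)
prev_action = np.zeros(N_JOINTS)
next_targets = policy_step(touch, joint_pos, prev_action)
print(next_targets.shape)  # (16,)
```

Running this loop at each time step, with the new joint targets fed back in as the previous action, is what lets the hand keep rotating the object using touch alone.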

The researchers then tested their system on the real-life robotic hand with objects the system had not encountered before. The robotic hand was able to rotate a variety of objects without stalling or losing its hold. The objects included a tomato, a pepper, a can of peanut butter, and a toy rubber duck, which was the most challenging object due to its shape. Objects with more complex shapes took longer to rotate. The robotic hand could also rotate objects around different axes.

Wang and his colleagues are now attempting to apply their approach to more complex manipulation tasks. They are currently working on techniques that will allow robotic hands to catch, throw, and juggle.

“In-hand manipulation is a very common skill that humans have, but it is extremely difficult for robots to master,” Wang explained. “If we can teach robots this skill, it will broaden the range of tasks they can perform.”