
Researchers Have Developed New Artificial Intelligence Software That Allows Robots to Grasp and Move Objects Quickly and Smoothly


Lockdowns and other COVID-19 safety precautions have made online shopping more popular than ever in the last year, but the surge in demand has left many merchants scrambling to deliver orders while protecting the safety of their warehouse workers.

Researchers at the University of California, Berkeley, have developed new artificial intelligence (AI) software that allows robots to grip and move objects quickly and smoothly, perhaps allowing them to assist people in warehouse situations in the near future.

The technology is presented in a report published today (Wednesday, Nov. 18, 2020) in the journal Science Robotics. The software relies on deep learning, an area of machine learning built around artificial neural networks, algorithms inspired by the structure and function of the brain.

Many tasks that come easily to humans, such as deciding where and how to pick up different kinds of objects and then coordinating the shoulder, arm, and wrist movements needed to move each object from one place to another, are surprisingly difficult for robots. On top of that, robotic motion tends to be jerky, which increases the risk of damaging both the products and the robots themselves.

Deep learning, meanwhile, has already proved its worth elsewhere: it is a critical component of self-driving cars, allowing them to detect a stop sign or distinguish a pedestrian from a lamppost, and it enables voice control in consumer electronics such as phones, tablets, televisions, and hands-free speakers.

“Warehouses are still operated primarily by humans, because it’s still very hard for robots to reliably grasp many different objects,” said Ken Goldberg, William S. Floyd Jr. Distinguished Chair in Engineering at UC Berkeley and senior author of the study.

“In an automobile assembly line, the same motion is repeated over and over again, so that it can be automated. But in a warehouse, every order is different.”

Goldberg and UC Berkeley postdoctoral researcher Jeffrey Ichnowski had previously developed a Grasp-Optimized Motion Planner that could determine how a robot should pick up an object and how it should move to transfer the object from one spot to another.


In deep learning, a computer model learns to perform classification tasks directly from images, text, or sound. Deep learning models can achieve state-of-the-art accuracy, in some cases exceeding human performance.
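As a rough illustration of what that looks like in code (this example is not from the Berkeley study; the framework, network shape, and data are arbitrary placeholders), a tiny image classifier in Python with PyTorch might be built like this:

    # Minimal image-classification sketch (illustrative only; the layers,
    # image size, and class count are arbitrary assumptions).
    import torch
    import torch.nn as nn

    class TinyClassifier(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, num_classes)

        def forward(self, x):
            # x: a batch of RGB images with shape (N, 3, H, W)
            return self.head(self.features(x).flatten(1))

    model = TinyClassifier()
    images = torch.randn(8, 3, 64, 64)         # stand-in for real image data
    predictions = model(images).argmax(dim=1)  # predicted class for each image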

However, the motions generated by this planner were jerky. The software's parameters could be tuned to produce smoother motions, but those calculations took about half a minute on average to complete.
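To give a concrete sense of the kind of computation involved, the sketch below treats the motion as a trajectory-optimization problem: a set of joint-space waypoints, a cost that penalizes long and jerky paths, and constraints pinning the trajectory to the pick and place configurations. This is a deliberately simplified illustration, not the actual Grasp-Optimized Motion Planner; the six-joint arm, waypoint count, and smoothness weight are assumptions, and raising the smoothness weight yields smoother motion at the price of a slower solve, mirroring the trade-off described above.

    # Simplified trajectory-optimization sketch (not the Berkeley planner's code):
    # find joint-space waypoints between a pick and a place configuration that
    # keep the path short and smooth by penalizing jerk (third differences).
    import numpy as np
    from scipy.optimize import minimize

    n_joints, n_steps = 6, 20        # assumed arm size and trajectory resolution
    q_pick = np.zeros(n_joints)      # assumed grasp configuration
    q_place = np.ones(n_joints)      # assumed release configuration

    def cost(flat, smoothness_weight=10.0):
        q = flat.reshape(n_steps, n_joints)
        path_length = np.sum(np.diff(q, axis=0) ** 2)  # keep the path short
        jerk = np.sum(np.diff(q, n=3, axis=0) ** 2)    # keep the motion smooth
        return path_length + smoothness_weight * jerk

    def endpoint_error(flat):
        # the trajectory must start at the pick pose and end at the place pose
        q = flat.reshape(n_steps, n_joints)
        return np.concatenate([q[0] - q_pick, q[-1] - q_place])

    x0 = np.linspace(q_pick, q_place, n_steps).ravel()  # straight-line initial guess
    result = minimize(cost, x0, constraints={"type": "eq", "fun": endpoint_error})
    trajectory = result.x.reshape(n_steps, n_joints)    # smooth pick-to-place motion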

By incorporating a deep learning neural network into the motion planner, Goldberg and Ichnowski, in partnership with UC Berkeley graduate student Yahav Avigal and undergraduate student Vishal Satish, drastically reduced the motion planner’s computation time.

Using a neural network, a robot can learn from examples. Afterward, the robot can often generalize to similar objects and motions.
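One plausible way to set up that learning, sketched below under assumptions rather than taken from the study's code, is supervised imitation: run the slow optimizing planner offline on many pick-and-place tasks, then train a network to map each task to the planner's trajectory. The layer sizes and the random stand-in data here are placeholders.

    # Supervised-imitation sketch (assumed setup, not the published training code):
    # a network learns to map a pick/place task to an approximate joint trajectory,
    # using trajectories produced offline by the slow optimizing planner as labels.
    import torch
    import torch.nn as nn

    n_joints, n_steps = 6, 20

    net = nn.Sequential(
        nn.Linear(2 * n_joints, 256), nn.ReLU(),  # input: pick and place configurations
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, n_steps * n_joints),       # output: flattened waypoint trajectory
    )
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Random stand-ins for (task, planner trajectory) training pairs.
    tasks = torch.randn(128, 2 * n_joints)
    planner_trajectories = torch.randn(128, n_steps * n_joints)

    for epoch in range(5):
        loss = loss_fn(net(tasks), planner_trajectories)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()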

The network's approximations, however, aren't always accurate enough on their own. Goldberg and Ichnowski realized that the neural network's approximation could then be refined and optimized by the motion planner.

“The neural network takes only a few milliseconds to compute an approximate motion. It’s very fast, but it’s inaccurate,” Ichnowski said. “However, if we then feed that approximation into the motion planner, the motion planner only needs a few iterations to compute the final motion.”
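In other words, the network's millisecond-scale prediction becomes the optimizer's starting point, so the planner only has to polish it instead of solving from scratch. The toy sketch below illustrates that warm-start idea; it is an assumption-laden illustration rather than the published implementation, and the "neural approximation" is faked with a slightly noisy straight-line guess.

    # Warm-start sketch (conceptual illustration, not the published implementation):
    # feed a fast approximate trajectory to the optimizer as its initial guess so it
    # needs only a few refinement iterations instead of starting from scratch.
    import time
    import numpy as np
    from scipy.optimize import minimize

    n_joints, n_steps = 6, 20
    q_pick, q_place = np.zeros(n_joints), np.ones(n_joints)

    def cost(flat):
        q = flat.reshape(n_steps, n_joints)
        return np.sum(np.diff(q, axis=0) ** 2) + 10.0 * np.sum(np.diff(q, n=3, axis=0) ** 2)

    def endpoint_error(flat):
        q = flat.reshape(n_steps, n_joints)
        return np.concatenate([q[0] - q_pick, q[-1] - q_place])

    def plan(initial_guess):
        start = time.perf_counter()
        result = minimize(cost, initial_guess,
                          constraints={"type": "eq", "fun": endpoint_error})
        return result, time.perf_counter() - start

    cold_start = np.zeros(n_steps * n_joints)   # naive guess: solve from scratch
    # Stand-in for the neural network's fast but slightly inaccurate approximation:
    warm_start = np.linspace(q_pick, q_place, n_steps).ravel()
    warm_start = warm_start + 0.01 * np.random.randn(warm_start.size)

    for name, guess in [("cold start", cold_start), ("warm start", warm_start)]:
        result, seconds = plan(guess)
        print(f"{name}: {result.nit} iterations, {seconds:.3f} s")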

By combining the neural network with the motion planner, the researchers cut the average computation time from 29 seconds to 80 milliseconds, less than one-tenth of a second.

Deep learning excels at problems whose inputs (and sometimes outputs) are rich data, such as the pixels of an image, the text of a document, or an audio file, rather than a handful of variables in a table. Thanks to this and other advances in robotic technology, Goldberg believes robots could be assisting in warehouse settings within the next several years.

“Shopping for groceries, pharmaceuticals, clothing and many other things has changed as a result of COVID-19, and people are probably going to continue shopping this way even after the pandemic is over,” Goldberg said. “This is an exciting new opportunity for robots to support human workers.”