Engineering

Versatile Robo-Dog Runs at 3 Meters Per Second Through the Sandy Beach

A versatile robo-dog is a robotic dog that can perform a wide variety of tasks and functions. These robotic dogs are designed to be adaptable and flexible, with a range of sensors and capabilities that enable them to complete tasks such as navigation, object recognition, and communication.

On the 25th, KAIST (President Kwang Hyung Lee) announced that a research group led by Professor Jemin Hwangbo of the Department of Mechanical Engineering had developed quadruped robot control technology that enables steady, fast walking even on deformable ground such as a sandy beach.

Professor Hwangbo's research group developed a technology that models, for a quadrupedal robot, the force a walking robot experiences on ground composed of granular materials such as sand.

The team also designed an artificial neural network structure that adapts to different types of ground in real time, without prior information about the terrain, and trained it through reinforcement learning.

By demonstrating robustness on changing terrain, such as moving at high speed on a sandy beach and walking and turning on soft ground like an air mattress without losing balance, the trained neural network controller is expected to broaden the range of applications for quadrupedal walking robots.

This research, with Ph.D. student Suyoung Choi of the KAIST Department of Mechanical Engineering as first author, was published in January in Science Robotics.

Reinforcement learning is an AI training technique in which an agent gathers data on the outcomes of various actions in an uncertain environment and uses that data to learn to perform a task.
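To make the idea concrete, here is a minimal, generic sketch of reinforcement learning (tabular Q-learning on a toy one-dimensional corridor). It illustrates the trial-and-error loop described above; it is not the algorithm or controller used in the paper, and all states, rewards, and hyperparameters are invented for illustration.

```python
import random

# Toy reinforcement learning: tabular Q-learning on a 1-D corridor.
# The agent starts at state 0 and earns a reward of 1 for reaching state 4.
N_STATES = 5            # states 0..4; state 4 is the goal
ACTIONS = [1, -1]       # step right or left
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: explore occasionally, otherwise act greedily.
        a = random.choice(ACTIONS) if random.random() < EPS \
            else max(ACTIONS, key=lambda act: q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(q[(nxt, b)] for b in ACTIONS)
        # Q-learning update: move the estimate toward reward + discounted value.
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = nxt

# After training, the greedy policy steps right (+1) toward the goal.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

The same gather-outcomes-then-improve loop, scaled up to a physics simulator and a neural network policy, underlies the controller described in the article.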

Because reinforcement learning requires an enormous amount of data, it is common practice to gather it through simulations that closely approximate real-world physics.

Some potential functions of a versatile robo-dog could include:

  • Assistance: A robo-dog could assist people with disabilities or mobility issues by performing tasks such as fetching items, opening doors, or providing stability while walking.
  • Search and Rescue: Robo-dogs could be used in search and rescue missions to locate survivors or deliver supplies to people in hard-to-reach locations.
  • Security: Robo-dogs could be used as security systems, patrolling areas to detect potential intruders or hazards.
  • Entertainment: Robo-dogs could be designed for entertainment purposes, performing tricks or playing games with their owners.
  • Education: Robo-dogs could be used in educational settings to teach children about robotics and programming.

In the field of walking robots in particular, controllers trained on data gathered in simulation have been deployed in real environments to successfully control walking across a variety of terrains.

Because a learning-based controller's performance declines rapidly when the real environment deviates from the simulation it was trained in, it is crucial that the simulation closely reproduces the real environment during the data-collection stage. Therefore, to develop a learning-based controller that can maintain balance on deforming terrain, the simulator must offer a comparable contact experience.

Based on a ground reaction force model that accounts for the added-mass effect of granular media identified in previous studies, the research team defined a contact model that predicts the force generated on contact from the motion dynamics of the walking body.

Furthermore, the deforming terrain was recreated efficiently by computing the force produced by one or more contacts at each time step.
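A hedged sketch of this idea follows: each foot's ground reaction force is predicted from its motion (penetration depth and velocity), including an added-mass term for the sand dragged along with the foot, and the forces of all feet in contact are summed at each time step. The functional form and all coefficients here are illustrative assumptions, not the model or values from the paper.

```python
# Illustrative granular-terrain contact model: force from foot motion.
# All coefficients are made up for demonstration purposes.

def contact_force(depth, velocity, foot_mass=0.3,
                  stiffness=4000.0, damping=60.0, added_mass_coeff=2.0):
    """Normal ground reaction force [N] for one foot penetrating sand.

    depth    : penetration below the surface [m] (> 0 means in contact)
    velocity : downward speed of the foot [m/s] (positive = moving down)
    """
    if depth <= 0.0:
        return 0.0                      # foot in the air: no force
    spring = stiffness * depth          # static resistance of the sand
    damper = damping * velocity         # rate-dependent resistance
    # Extra inertia from accelerating the granular medium with the foot.
    added_mass = added_mass_coeff * foot_mass * max(velocity, 0.0) ** 2
    return max(spring + damper + added_mass, 0.0)  # sand cannot pull the foot

def total_normal_force(feet):
    """Sum the contributions of every foot in contact at one time step."""
    return sum(contact_force(d, v) for d, v in feet)

# Example time step: two feet in the sand, two in the air.
feet = [(0.02, 0.1), (0.01, -0.05), (-0.1, 0.0), (-0.2, 0.3)]
print(total_normal_force(feet))
```

Evaluating such a closed-form force model per contact per time step is what keeps simulating deformable ground cheap enough for large-scale reinforcement learning.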

The research team also designed an artificial neural network structure that uses a recurrent neural network to process time-series data from the robot's sensors and implicitly estimate ground properties.
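The shape of such an architecture can be sketched as follows: a recurrent cell consumes sensor readings step by step, carrying a hidden state that can implicitly encode ground properties, and a head maps that state to joint targets. The layer sizes and the use of a plain Elman-style cell here are assumptions for illustration, not the paper's exact network.

```python
import numpy as np

# Sketch of a recurrent policy over time-series sensor data (shapes only).
rng = np.random.default_rng(0)

SENSOR_DIM, HIDDEN_DIM, ACTION_DIM = 48, 64, 12  # e.g. 12 joint targets

W_in = rng.normal(0, 0.1, (HIDDEN_DIM, SENSOR_DIM))
W_h = rng.normal(0, 0.1, (HIDDEN_DIM, HIDDEN_DIM))
W_out = rng.normal(0, 0.1, (ACTION_DIM, HIDDEN_DIM))

def policy_step(sensors, hidden):
    """One control step: update the hidden state, emit an action."""
    hidden = np.tanh(W_in @ sensors + W_h @ hidden)  # recurrent update
    action = W_out @ hidden                          # linear action head
    return action, hidden

# Run the controller over a short sequence of stand-in sensor readings.
hidden = np.zeros(HIDDEN_DIM)
for t in range(100):
    sensors = rng.normal(size=SENSOR_DIM)  # stand-in for IMU/joint data
    action, hidden = policy_step(sensors, hidden)

print(action.shape, hidden.shape)
```

Because the hidden state summarizes recent foot-ground interactions, no explicit terrain label ever has to be supplied to the controller.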

The learned controller was mounted on 'RaiBo', a robot developed in-house by the research team, and demonstrated high-speed walking of up to 3.03 m/s on a sandy beach where the robot's feet were completely submerged in the sand.

It also ran steadily on firmer surfaces, such as grassy fields and a running track, by adapting to the properties of the surface without any further programming or tuning of the control algorithm.

In addition, it rotated with stability at 1.54 rad/s (approximately 90° per second) on an air mattress, demonstrating quick adaptation even when the terrain suddenly turned soft.
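A quick unit check confirms the rounded figure quoted above:

```python
import math

# Convert the reported turning rate from radians to degrees per second.
omega = 1.54                     # rad/s on the air mattress
deg_per_s = math.degrees(omega)  # radians -> degrees
print(round(deg_per_s, 1))       # 88.2 deg/s, roughly a quarter turn per second
```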

The research team showed that the proposed recurrent neural network adapts the controller's walking method to the ground properties, and, by comparison with a controller that assumed rigid ground, demonstrated the importance of providing an appropriate contact experience during training.

The team's simulation and learning methodology is expected to aid the development of robots that can perform practical tasks by expanding the range of terrains that walking robots can traverse.

The first author, Suyoung Choi, said, “It has been shown that providing a learning-based controller with a close contact experience with real deforming ground is essential for application to deforming terrain.” He went on to add that “The proposed controller can be used without prior information on the terrain, so it can be applied to various robot walking studies.”

This research was carried out with the support of the Samsung Research Funding & Incubation Center of Samsung Electronics.