An Uphill Challenge

RunBot, already the world’s fastest bipedal robot, has now also learned to keep its balance when walking up ramps. “We have achieved a synthesis of different functionalities, between biomechanics, neuronal reflexive control, and adaptive control, which allows the machine to learn,” says Florentin Wörgötter, PhD, head of the Computational Neurosciences Group at the University of Göttingen in Germany, and leader of the team that built RunBot. The work was published in PLoS Computational Biology.

Creating a robot that walks as smoothly as a human is a long-standing challenge. Many walking robots are plodding and methodical, precisely calculating the trajectory of each step. The human gait is much more dynamic; our center of gravity is constantly shifting as we swing our legs forward. Last year, Wörgötter’s group produced RunBot, a successful dynamic walker that could swing its legs almost as quickly as a human. However, RunBot was limited to walking on level surfaces, unable to adjust its balance to walk up an incline.

To address that shortcoming, Wörgötter’s team added a learning mechanism that simulates synaptic plasticity, letting RunBot learn in much the way humans do. The mechanism allows RunBot to associate the signal from an infrared (IR) sensor, which detects changes in the angle of the floor, with the signal from an accelerometer, which detects the rapid acceleration of a fall.
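
To make that idea concrete, here is a minimal sketch, in Python, of a correlation-based plasticity rule of the kind described above: the weight given to the predictive IR signal grows whenever that signal is followed by the fall signal from the accelerometer, so the IR signal alone eventually drives the gait change. The function names, the differential-Hebbian form, and the learning rate are illustrative assumptions, not details taken from the RunBot paper.

    def update_ir_weight(w_ir, ir_signal, accel_now, accel_prev, rate=0.01):
        # Correlate the early-warning IR input with the *change* in the
        # later reflex (accelerometer) input. While the IR signal is
        # reliably followed by a fall, its weight grows; once the gait
        # change it drives prevents the fall, the weight stops changing.
        d_reflex = accel_now - accel_prev
        return w_ir + rate * ir_signal * d_reflex

    def slope_drive(w_ir, ir_signal, w_reflex, accel_now):
        # Combined drive for the gait adjustment: at first only the reflex
        # term (triggered by an actual fall) is active; after learning, the
        # IR term acts pre-emptively, before RunBot loses its balance.
        return w_ir * ir_signal + w_reflex * accel_now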

[Figure: On its first try at a slope, RunBot teeters backward and falls (top). But it learns from its mistakes: on subsequent efforts, RunBot makes it to the top of the hill (bottom). Courtesy of Florentin Wörgötter]

The first time RunBot’s IR sensor detected a change in slope, the signal had no meaning, and RunBot walked on as normal until it fell, triggering the accelerometer. Over the next few trials, RunBot learned that the IR signal calls for a change in gait if it is to avoid triggering the accelerometer. With guidance from the researchers, who predefined the direction in which RunBot could alter the parameters controlling its gait, RunBot experimented with different magnitudes of those parameters, producing different postures and stride lengths. After four or five trials, RunBot had learned to lean forward and take shorter steps, much as humans do when walking up a slope.
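
The trial-and-error search itself can be pictured with a short sketch like the one below. The parameter names, step sizes, and the stand-in for a walking trial are all assumptions made for illustration; in the actual experiments the researchers fixed only the direction of change, and RunBot found the magnitude over a handful of falls.

    def attempt_slope(lean, stride_scale):
        # Stand-in for a real walking trial: pretend RunBot stays upright
        # once it leans far enough forward and its steps are short enough.
        # The thresholds are illustrative, not measured values.
        fell = not (lean >= 0.15 and stride_scale <= 0.85)
        return fell

    lean, stride_scale = 0.0, 1.0      # start with the flat-ground gait
    for trial in range(1, 11):
        if not attempt_slope(lean, stride_scale):
            print(f"reached the top on trial {trial}")
            break
        # Direction of change predefined by the researchers: lean further
        # forward and shorten the steps after each fall.
        lean += 0.05
        stride_scale -= 0.05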

RunBot can change its gait easily because of the hierarchical structure of its control system. At the bottom level, each step is controlled by a reflexive neural network. Sensors in the feet, knees, and hips monitor the position of each joint relative to the other joints and the ground, and artificial motor neurons make minor adjustments to maintain stability. In this manner, the reflex control level autonomously generates a repetitive walking motion.
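
As a rough illustration of that bottom layer, the sketch below maps foot and hip sensors directly to motor commands with simple threshold rules, so a stepping rhythm emerges from the reflex chain rather than from a precomputed trajectory. The sensor names, thresholds, and gains are assumptions for illustration, not RunBot’s actual network, which is described in the paper.

    def reflex_step(sensors, gains):
        # One pass of the reflexive layer: sensor events trigger motor
        # neurons directly; no step trajectory is planned in advance.
        commands = {"left_hip": 0.0, "right_hip": 0.0,
                    "left_knee": 0.0, "right_knee": 0.0}
        # Ground contact swaps the legs' roles: the landing leg becomes the
        # stance leg, the other leg is driven forward into its swing.
        if sensors["left_foot_contact"]:
            commands["left_hip"], commands["right_hip"] = -gains["hip"], gains["hip"]
        if sensors["right_foot_contact"]:
            commands["right_hip"], commands["left_hip"] = -gains["hip"], gains["hip"]
        # Extend a knee once its hip has swung past a threshold angle,
        # so the leg is straight again before it lands.
        for side in ("left", "right"):
            if sensors[side + "_hip_angle"] > gains["knee_threshold"]:
                commands[side + "_knee"] = gains["knee"]
        return commands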

On top of the reflexive control lies an adaptive neural network, which controls RunBot’s posture. By tweaking the activation parameters of the reflexive motor neurons, the adaptive control system causes RunBot to lean forward and take shorter steps when its IR sensor detects an upcoming slope.
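
In code, the adaptive layer can be pictured as a thin wrapper that rescales the reflexive gains whenever the learned IR signal reports an upcoming slope, along the lines of the sketch below. The gain names and scale factors are illustrative assumptions, not parameters from RunBot’s controller.

    def adapt_gains(base_gains, ir_signal, w_ir):
        # Learned response to the slope: zero before learning (w_ir == 0),
        # growing as the IR signal becomes associated with falling.
        drive = w_ir * ir_signal
        adapted = dict(base_gains)
        adapted["hip"] = base_gains["hip"] * (1.0 - 0.3 * drive)   # shorter steps
        adapted["lean"] = base_gains["lean"] + 0.2 * drive         # lean forward
        return adapted

Before learning, the IR weight is zero and the flat-ground gait is left untouched; after a few falls, the same IR reading shifts the posture and shortens the stride before RunBot ever loses its balance.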
In addition to creating robots with a more human-looking stride, Wörgötter’s work may be applicable to prosthetic legs. His lab recently started working with a major supplier of prosthetic devices to apply similar neural networks in advanced prosthetics.

“RunBot is a successful demonstration of a small-scale 2-D biped that uses a controller that approximates a static neural network and a novel learning algorithm,” says Steven Collins, president of Intelligent Prosthetic Systems and a doctoral candidate at the University of Michigan.
