
teaching robots to walk, the evolutionary way

The best way to make robots move may be to just let them figure out how to do it.

While robots aren’t conquering the world anytime soon, they are finally learning to walk, and in the near future, the kind of bipedal locomotion that makes humanoid robots such an enormous engineering and maintenance challenge may get a lot easier. Not only are they learning how to walk, they’re learning it the way we do: through trial and error. Just as you don’t pause to run several million calculations before each step but simply let your motor neurons guide your muscles through synaptic connections strengthened over a lifetime of walking, neither do cyberneticist Josh Bongard’s machines. Instead of all that tedious computation, their algorithms search for the optimal set of movements for all of the robot’s joints and appendages, a set that can then simply be replayed whenever that particular robot has to move at the same speed. The same algorithms could probably teach it to run or jump as well, though a faster moving robot needs very strong joints and very powerful motors to withstand the repeated impact of hitting the ground with its full weight as it moves forward.
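To make that concrete, here’s a minimal sketch of that kind of trial-and-error search: a simple hill-climber over per-joint gait parameters. Everything here is invented for illustration; in particular, `simulate_walk` is a toy stand-in for what would really be a full physics simulation scoring how far the robot traveled.

```python
import math
import random

def simulate_walk(params):
    """Toy stand-in for a physics simulator. A real fitness function
    would run a rigid-body simulation and return the distance walked;
    this one just rewards strong strides with alternating joint phases."""
    amplitudes = params[0::2]
    phases = params[1::2]
    # Joints saturate at full deflection, so cap the useful amplitude.
    stride = sum(min(abs(a), 1.0) for a in amplitudes)
    # Penalize neighboring joints that aren't roughly half a cycle apart.
    mismatch = sum((phases[i] - phases[i - 1] - math.pi) ** 2
                   for i in range(1, len(phases)))
    return stride - 0.1 * mismatch

def evolve_gait(n_joints=6, generations=10_000, mutation=0.1):
    # The "genome": an (amplitude, phase) pair per joint, driving each
    # joint as amplitude * sin(t + phase) during the simulated walk.
    best = [random.uniform(-1.0, 1.0) for _ in range(2 * n_joints)]
    best_score = simulate_walk(best)
    for _ in range(generations):
        # Mutate a copy of the current best gait...
        child = [g + random.gauss(0.0, mutation) for g in best]
        score = simulate_walk(child)
        # ...and keep it only if the robot "walked" farther.
        if score > best_score:
            best, best_score = child, score
    return best

best_gait = evolve_gait()
print("amplitude, phase for joint 0:", best_gait[0], best_gait[1])
```

Note that the loop never models the robot’s dynamics explicitly; it just keeps whatever mutation walks farther, which is the essence of the evolutionary approach.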

So how is it done? Bongard is actually applying his earlier work on self-discovering robots, machines that figure out how they’re put together and learn to move no matter how they’re altered, to new morphologies and designs. Besides figuring out how to move, the robots in his simulations and lab are also working on balancing themselves and finding the optimal walk cycles for their particular bodies. Not only does this save valuable computing overhead when the machine is in action, it also addresses a very important problem in programming robots. Programmers can use drivers and DLLs, collections of algorithms and logic they include in their code, and set ranges of motion by hand. But without knowing the machine’s exact weight distribution at every step and the exact power of each motor and actuator, as well as how each affects balance, the robot would very likely fall the moment it tried to take its first step. One early solution in robotics was to cram as many sensors as possible into the machine and write complex logic to keep it upright. The one proposed by Bongard is far more elegant: let the robot figure it out for you. After all, it’s faster at computation, and in the time it takes you to try ten walking routines, it can try tens of thousands.
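The self-discovery half of that work, from the Science paper cited below, boils down to the robot running little experiments on itself: keep a population of candidate self-models, perform the action the candidates disagree about most, and throw away the models the real sensor reading contradicts. Here’s a heavily simplified toy version of that loop; the one-parameter “body”, the noise level, and the tolerance are all invented for illustration.

```python
import random

# Invented toy: the robot's "body" is a single unknown parameter (say,
# a leg length) that scales how far a motor command tilts the torso.
TRUE_LEG_LENGTH = 0.73

def real_robot(command):
    """One physical trial: a noisy sensor reading from the hardware."""
    return TRUE_LEG_LENGTH * command + random.gauss(0.0, 0.01)

def predict(model, command):
    """What a candidate self-model expects that sensor to read."""
    return model * command

# Start with candidate self-models spread across the plausible range.
models = [m / 100 for m in range(1, 101)]

for _ in range(5):
    if not models:
        break  # noise ruled everything out; a real system would restart
    # Choose the motor command the surviving models disagree about most,
    # so a single physical trial rules out as many candidates as it can.
    commands = [c / 10 for c in range(-10, 11)]
    test = max(commands, key=lambda c: max(predict(m, c) for m in models)
                                       - min(predict(m, c) for m in models))
    reading = real_robot(test)
    # Keep only the models whose prediction matches the observation.
    models = [m for m in models if abs(predict(m, test) - reading) < 0.05]

print(len(models), "candidate self-models remain:", models[:5])
```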

But wait, if the robot is figuring it all out for you, what else could it figure out? Well, not much, actually. The paper detailing the mechanics of the learning process shows that each sensor in the robot is assigned to an artificial motor neuron object, which in turn is connected to every other motor neuron object like it. The robot is then given a set of parameters optimized for it in simulation and a squashing function, an equation that keeps each neuron’s output within a fixed range so the network can be nudged closer to the desired result over as many iterations as necessary. And there’s more. It turns out that to teach a machine how to stand up, it’s actually very beneficial to get it crawling like a snake first, then hobbling spread-legged like a lizard, and only then get it to stand, each stage building on the last because the basic movements, forward motion and then motion on legs, are already computed and ready to apply to a new body type. Snakes can’t fall over, and lizards with widely spaced legs are anatomically very stable. Figure out how they move, posits Bongard, and you’re two thirds of the way to walking freely and keeping your balance. In his view, he’s following the evolutionary path we see in the fossil record, letting his robots evolve much the way animals did in the primeval past.
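A minimal sketch of what such a controller might look like, assuming a fully connected network and tanh as the squashing function (the paper’s actual network sizes, weights, and update rule aren’t reproduced here):

```python
import math
import random

class MotorNeuronNet:
    """Minimal fully-connected neuron layer: every neuron sees every
    sensor and every other neuron's previous output, squashed by tanh
    so activations stay bounded in (-1, 1)."""

    def __init__(self, n_sensors, n_neurons):
        self.n_neurons = n_neurons
        # One weight per (sensor -> neuron) and (neuron -> neuron) pair;
        # random here, but these are what the optimization would tune.
        self.w_in = [[random.uniform(-1, 1) for _ in range(n_sensors)]
                     for _ in range(n_neurons)]
        self.w_rec = [[random.uniform(-1, 1) for _ in range(n_neurons)]
                      for _ in range(n_neurons)]
        self.state = [0.0] * n_neurons

    def step(self, sensors):
        new_state = []
        for i in range(self.n_neurons):
            total = sum(w * s for w, s in zip(self.w_in[i], sensors))
            total += sum(w * a for w, a in zip(self.w_rec[i], self.state))
            # The squashing function: keeps each neuron's output bounded
            # no matter how large the weighted sum gets.
            new_state.append(math.tanh(total))
        self.state = new_state
        return new_state  # one command per motor neuron, in (-1, 1)

net = MotorNeuronNet(n_sensors=4, n_neurons=6)
commands = net.step([0.1, -0.3, 0.7, 0.0])
```

Because tanh squashes any weighted sum into (-1, 1), no single runaway weight can command a motor past its limits, which is what keeps the iterative tuning stable.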

That means we shouldn’t be designing robots by hand, or basing them on a rough idea of what we see in the natural world after an experiment or two and bolting that feature onto new machines, but should let the machines evolve from scratch in a simulator. After thousands of virtual failures, they’ll eventually master the task and give us a very good idea of the optimal layout and morphology. One very important caveat: the resulting machines will only be good at that task and very little else. Unlike a natural organism, an evolving robot doesn’t have to be decent at almost everything to survive and doesn’t need to adapt to a wide variety of environments and threats. With enough trial and error it will become the most efficient, best adapted robot for your needs and perform its task extremely well, but outside that task it will be virtually useless. Now, if you want a complex, rugged robot able to tackle a complicated environment, you’re looking at a far more sophisticated set of simulations, with multiple neural networks arranged into “cortices” and run through far more rigorous virtual conditions that take years and years of planning to create.
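The snake-to-lizard-to-biped staging can be sketched as an outer loop around the same kind of hill-climbing search shown earlier, with each body plan’s evolved controller seeding the next. The body names and fitness stubs below are invented placeholders, not Bongard’s actual simulation:

```python
import random

# Invented fitness stubs, one per body plan; each would really be a
# physics simulation of that morphology driven by the controller genome.
def fitness(body, genome):
    targets = {"snake": 0.2, "sprawled": 0.5, "upright": 0.9}
    return -sum((g - targets[body]) ** 2 for g in genome)

def hill_climb(body, genome, steps=5_000, mutation=0.05):
    score = fitness(body, genome)
    for _ in range(steps):
        child = [g + random.gauss(0.0, mutation) for g in genome]
        child_score = fitness(body, child)
        if child_score > score:
            genome, score = child, child_score
    return genome

# The scaffold: each stage starts from the previous stage's controller
# instead of from scratch, mirroring the snake -> lizard -> biped path.
genome = [random.uniform(-1.0, 1.0) for _ in range(8)]
for body in ("snake", "sprawled", "upright"):
    genome = hill_climb(body, genome)
    print(body, "stage done; sample genes:", [round(g, 2) for g in genome[:3]])
```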

See: Bongard, J. (2011). Morphological change in machines accelerates the evolution of robust behavior. Proceedings of the National Academy of Sciences. PMID: 21220304

Bongard, J., Zykov, V., & Lipson, H. (2006). Resilient machines through continuous self-modeling. Science, 314(5802), 1118–1121. DOI: 10.1126/science.1133687

# tech // artificial intelligence / computer science / robots

