Artificial intelligence is often seen as disembodied: a mind like a program, floating in a digital void. But the human mind is deeply connected to our body – and an experiment with virtual creatures performing tasks in simulated environments suggests that AI may benefit from a mind-body setup.
Stanford scientists were curious about the physical-mental interaction in our own evolution, from blobs to monkeys using tools. Could the brain's development be shaped by the capabilities of the body, and vice versa? The idea was suggested over a century ago, and it is certainly evident that with a gripping hand one learns to manipulate objects more quickly than with a less differentiated appendage.
It is difficult to know whether the same holds for an AI, since AI development is far more structured. Yet the question such a concept raises is compelling: could an AI learn better and adapt to the world if it evolved to do so from the start?
The experiment they designed resembles, in some ways, the simulated environments that have been used for decades to test evolutionary algorithms. You set up a virtual space and drop simple simulated creatures into it: just a few connected geometric shapes that move around randomly. Out of a thousand of these wriggling shapes, you pick the 10 that traveled the furthest, make a thousand variations on those, and repeat over and over. Pretty soon you have a handful of polygons doing a passable walk across the virtual surface.
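That select-and-mutate loop can be sketched in a few lines of Python. This is a toy illustration, not the Stanford team's code: the fitness function, mutation scheme and population sizes here are placeholder assumptions standing in for a real physics simulation.

```python
import random

def fitness(creature):
    # Placeholder: score how far the creature "walked".
    # A real simulator would run physics here.
    return sum(creature)

def mutate(creature):
    # Randomly perturb one of the creature's parameters.
    child = list(creature)
    i = random.randrange(len(child))
    child[i] += random.gauss(0, 0.1)
    return child

def evolve(generations=50, pop_size=1000, survivors=10):
    # Start with a population of random "creatures"
    # (here just vectors of 8 parameters).
    population = [[random.random() for _ in range(8)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the few that traveled furthest...
        best = sorted(population, key=fitness, reverse=True)[:survivors]
        # ...and refill the population with mutated variations on them.
        population = [mutate(random.choice(best)) for _ in range(pop_size)]
    return max(population, key=fitness)
```

Even with random mutation and a crude fitness score, repeated selection pulls the population toward higher-scoring bodies, which is the entire trick behind the decades-old experiments the article describes.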
It’s old hat, though: as the researchers explain, they needed to make their simulation more robust and variable. They weren’t just trying to create virtual creatures that could wander around, but to study how those creatures learned to do what they do, and whether some learn better or faster than others.
To find out, the team created a simulation similar to the old ones, dropping in their sims, which they called “unimals” (for “universal animals” … we’ll see if the term catches on), at first simply to learn to walk. The simple forms had a spherical “head” and a few jointed, branch-like limbs, with which they developed a number of interesting gaits. Some stumbled forward, some developed a sinuous, lizard-like walk, and others a choppy but effective style reminiscent of an octopus on land.
So far, so similar to older experiments. But here the similarities more or less end.
Some of these unimals grew up on different home planets, so to speak, with rolling hills or low barriers for them to climb. And in the next phase, unimals from these different terrains faced off at more complex tasks to see if, as is often said, adversity is the mother of adaptability.
“Almost all of the previous work in this area has evolved agents on simple flat ground. In addition, there is no learning, in the sense that the controller and/or the behavior of the agent is not learned via direct sensorimotor interactions with the environment,” co-author Agrim Gupta explained to TechCrunch. In other words, earlier agents evolved by surviving but didn’t really learn by doing. “This work allows for the first time simultaneous evolution and learning in complex environments such as terrains with steps, hills and ridges, and goes beyond locomotion to perform manipulation in these complex environments.”
The top 10 unimals in each environment then took on tasks ranging from navigating new obstacles to moving a ball towards a goal, pushing a box up a hill and patrolling between two points. It was here that these “gladiators” really showed their virtual mettle. Unimals that learned to walk on varied terrain learned their new tasks faster and performed them better than their flatlander cousins.
“In essence, we find that evolution rapidly selects morphologies that learn faster, thus allowing behaviors learned late in the life of early ancestors to be expressed early in the lives of their descendants,” the authors write in the paper, published today in the journal Nature.
It’s not just that they learned to learn faster; the evolutionary process selected body plans that let them adapt faster and apply their lessons sooner. On flat terrain, an octopus flop can get you to the finish line just as quickly, but hills and ridges select for a quick, stable and adaptable body plan. Bringing that body into the gladiatorial arena gave these school-of-hard-knocks unimals a leg up on the competition. Their versatile bodies were better able to apply the lessons their minds were learning, and soon they left the floppier competition in the dust.
What does it all mean, besides providing some entertaining GIFs of 3D stick figures galloping across virtual terrain? As the paper puts it, the experiment “opens the door to performing large-scale in silico experiments to provide scientific insights into how learning and evolution cooperatively create sophisticated relationships between environmental complexity, morphological intelligence and the learnability of control tasks.”
Suppose you have a relatively complicated task you would like to automate: climbing stairs with a crawling robot, say. You could design the movements manually, or combine hand-crafted movements with AI-generated ones, but the best solution might be to let an agent evolve its own movement from scratch. This experiment suggests there is a real potential advantage in evolving the body and the mind that controls it in tandem.
If you’re handy with code, you can run the whole operation on your own hardware: the research group has made all the code and data freely available on GitHub. Just make sure your high-end compute cluster or cloud container is up to the job: “The default settings assume you’re running code on 16 machines. Please make sure each machine has a minimum of 72 processors.”