An artificial intelligence learned to skilfully control digital humanoid soccer players by working through decades’ worth of soccer matches in just a few weeks

Technology 31 August 2022

AI learned to control digital humanoid soccer players

Liu et al., Sci. Robot. 7, eabo0235

Artificial intelligence has learned to play soccer. By learning from decades’ worth of computer simulations, an AI took digital humanoids from flailing tots to proficient players.

Researchers at the AI research company DeepMind taught the AI how to play soccer in a computer simulation through an athletic curriculum resembling a sped-up version of a human baby growing into a soccer player. The AI was given control over digital humanoids with realistic body masses and joint movements.

“We don’t put infants in an 11 versus 11 match,” says Guy Lever at DeepMind. “They first learn to walk around, then they learn to dribble a ball, then you might play one vs one or two vs two.”


The first phase of the curriculum trained the digital humanoids to run naturally by imitating motion-capture video clips of humans playing soccer. The second phase had them practise dribbling and shooting the ball through a form of trial-and-error machine learning that rewarded the AI for staying close to the ball.
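
The paper's actual reward design isn't reproduced in this article, but a minimal sketch gives a flavour of what a "stay close to the ball" shaping reward could look like. Everything below, from the function name to the decay constant, is an illustrative assumption rather than DeepMind's implementation.

```python
import numpy as np

def proximity_reward(player_pos, ball_pos, scale=5.0):
    """Toy shaped reward: largest when the humanoid is at the ball.

    player_pos, ball_pos: 2D pitch coordinates in metres (hypothetical).
    scale: distance in metres over which the reward decays; an assumed constant.
    """
    distance = np.linalg.norm(np.asarray(player_pos) - np.asarray(ball_pos))
    return float(np.exp(-distance / scale))  # 1.0 at the ball, falling towards 0 far away

# A player about 3 m from the ball earns far more reward than one about 36 m away.
print(proximity_reward((10.0, 5.0), (12.0, 7.2)))   # ~0.55
print(proximity_reward((40.0, 30.0), (12.0, 7.2)))  # ~0.0007
```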

The first two phases represented about 1.5 years of simulation training time, which the AI sped through in about 24 hours. But more complex behaviours beyond movement and ball control began emerging after five simulated years of soccer matches. “They learned coordination, but also they learned movement skills that we hadn’t explicitly set as training drills before,” says Nicolas Heess at DeepMind.
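
For a rough sense of the time compression involved, the figures quoted above imply a simulation running a few hundred times faster than real time. The arithmetic below is a back-of-the-envelope conversion, not a number reported in the paper.

```python
# Rough speed-up implied by the article's figures: about 1.5 years of simulated
# practice compressed into roughly 24 hours of wall-clock time.
simulated_hours = 1.5 * 365 * 24   # ~13,140 hours of simulated experience
wall_clock_hours = 24
speedup = simulated_hours / wall_clock_hours
print(f"Effective speed-up: ~{speedup:.0f}x real time")  # ~548x
```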

The third phase of training challenged the digital humanoids to score goals in two-on-two matches. Teamwork skills, such as anticipating where to receive a pass, emerged over the course of about 20 to 30 simulated years of matches, or the equivalent of two to three weeks in the real world. This led to measurable improvements in the digital humanoids’ off-ball scoring opportunity ratings, a real-world measure of how often a player gets into a favourable position on the pitch.
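
Off-ball scoring opportunity is a statistic borrowed from real-world football analytics, and its precise definition isn't spelled out in this article. Purely to illustrate the idea, a toy proxy might count how often a player who doesn't have the ball occupies space near the opposition goal; every function name and threshold below is a hypothetical choice, not the paper's metric.

```python
import numpy as np

def off_ball_opportunity_rate(player_xy, ball_xy, goal_xy=(52.5, 0.0),
                              near_goal_m=25.0, off_ball_m=5.0):
    """Toy proxy: fraction of timesteps a player is (a) not on the ball and
    (b) within near_goal_m metres of the opposition goal.

    player_xy, ball_xy: sequences of (x, y) pitch positions over time, in metres.
    All thresholds are illustrative assumptions, not values from the study.
    """
    player_xy, ball_xy = np.asarray(player_xy, float), np.asarray(ball_xy, float)
    dist_to_ball = np.linalg.norm(player_xy - ball_xy, axis=1)
    dist_to_goal = np.linalg.norm(player_xy - np.asarray(goal_xy), axis=1)
    favourable = (dist_to_ball > off_ball_m) & (dist_to_goal < near_goal_m)
    return float(favourable.mean())

# Example over a made-up four-step trajectory (goal centred at x = 52.5 m):
player = [(30, 0), (40, 3), (45, -2), (50, 1)]
ball = [(10, 0), (12, 1), (44, -1), (20, 0)]
print(off_ball_opportunity_rate(player, ball))  # 0.75 in this toy example
```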

Such simulations won’t immediately lead to flashy soccer-playing robots. The digital humanoids were trained under simplified rules that permitted fouls, surrounded the pitch with a wall-like boundary and left out set pieces such as throw-ins and goal kicks.

The long learning times make the work more difficult to directly transfer to real soccer robots, says Sven Behnke at the University of Bonn in Germany. However, it would be interesting to see if DeepMind’s approach is competitive in the annual RoboCup 3D Simulation League, he says.

The DeepMind team has begun teaching real robots how to push a ball toward a target and plans to investigate whether the same AI training strategy works beyond soccer.

Journal reference: Science Robotics, DOI: 10.1126/scirobotics.abo0235
