After chess and Go, DeepMind brings its new AI to football

These virtual gamers aren’t ready for competition yet, but the technology behind them could benefit the robotics of tomorrow.

DeepMind, the company specializing in artificial intelligence, is one of the world's leading players in the field, with a whole host of exceptional contributions to its credit. It has already produced systems capable of beating any human at various games, controlling nuclear fusion, writing computer programs, and determining the 3D structure of proteins. It is now tackling a new discipline: football.

At first glance, this is surprising: these programs work wonders in abstract disciplines such as chess or Go, but the problem is not quite the same when it comes to a physical sporting activity. To make its AI "play" football, the Alphabet subsidiary built a system based on Neural Probabilistic Motor Primitives (NPMP).

The program begins by analyzing videos and sensor readings produced by a motion capture (MoCap) system. Today, these systems are used, for example, to produce ultra-realistic animations in films and big-budget video games.

But here, the purpose is quite different. The DeepMind algorithm dissects this data in order to learn to imitate certain complex movements in a physics simulator, via a virtual avatar.
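To give a rough idea of what "learning to imitate" in a physics simulator can look like, here is a minimal sketch of a tracking-style reward that compares the simulated avatar's pose to the motion-capture reference at each timestep. The function name, joint representation and reward shaping are assumptions made for this example, not DeepMind's actual NPMP formulation.

```python
import numpy as np

def tracking_reward(sim_joints, mocap_joints, scale=5.0):
    """Toy imitation reward: the closer the simulated avatar's joint positions
    are to the motion-capture reference at this timestep, the higher the reward.
    Illustrative sketch only; names and shaping are assumptions, not the paper's."""
    # Mean squared distance between corresponding joints (arrays of shape [n_joints, 3]).
    error = np.mean(np.sum((np.asarray(sim_joints) - np.asarray(mocap_joints)) ** 2, axis=-1))
    # Exponential shaping keeps the reward in (0, 1]; large deviations earn almost nothing.
    return float(np.exp(-scale * error))

# Example: a pose that matches the reference exactly earns the maximum reward of 1.0.
reference = np.zeros((20, 3))
print(tracking_reward(reference, reference))  # -> 1.0
```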

In principle, it resembles the reinforcement learning systems that already exist throughout the world of AI. The concept is quite simple: it is a training method that consists of "rewarding" the algorithm when it meets the expected objectives and "punishing" it when it fails, adjusting a few variables from one iteration to the next. Very briefly, this allows the program to learn from its mistakes on its own.
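As a concrete (and deliberately tiny) illustration of this reward-and-punishment loop, the sketch below trains an agent on a three-armed bandit: each attempt is rewarded or punished, and a few variables are adjusted between iterations. The task and the update rule are assumptions chosen for readability; DeepMind's football agents use far more elaborate methods.

```python
import random

# A minimal "reward and punish" learning loop, for illustration only.
ARM_PROBABILITIES = [0.2, 0.5, 0.8]   # hidden chance of success for each action
values = [0.0, 0.0, 0.0]              # the agent's current estimate per action
LEARNING_RATE = 0.1

for step in range(5000):
    # Mostly pick the action currently believed best, sometimes explore.
    if random.random() < 0.1:
        action = random.randrange(3)
    else:
        action = max(range(3), key=lambda a: values[a])

    # "Reward" on success, "punish" (negative reward) on failure.
    reward = 1.0 if random.random() < ARM_PROBABILITIES[action] else -1.0

    # Adjust the variable tied to that action a little, based on the outcome.
    values[action] += LEARNING_RATE * (reward - values[action])

print(values)  # the estimate for the most promising action ends up highest
```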

For example, in 2017, DeepMind made waves by presenting an AI whose avatar had learned to walk, jump and climb without any prior information. In that case, the program was "punished" for each fall and "rewarded" as soon as it cleared an obstacle.

An AI trained from real movements

But that work had obvious limitations. The solutions found by the AI during its training were completely decoupled from reality, and the result was therefore rather fanciful. The objective of this new work is to follow the same path while limiting learning to coherent movements.

To test the capabilities of their system, the researchers taught it to imitate the movements of humans playing with a ball. "We optimized teams of agents to play simulated football using reinforcement learning, restricting the solution to plausible movements that the AI learned using MoCap data," the researchers explain.

At the start of the process, these avatars were just empty shells, barely able to wiggle on the ground. After about fifty hours of training, these virtual footballers had learned to get up and sprint. And after a few days, they were able to react to the positioning of the ball and other players.

It is important to clarify that these avatars do not really "play" football; they understand neither the rules nor the strategic stakes. They simply reproduce, as best they can, the behaviors they observed during their training and are rewarded as soon as the ball crosses the goal line. It is a fairly rudimentary conception of the sport, but one that nevertheless allowed them to take part in a proper "match".
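That "rewarded as soon as the ball crosses the goal line" idea corresponds to what practitioners call a sparse reward. A minimal sketch, assuming a simple 1D coordinate convention of my own rather than the paper's actual reward definition, could look like this:

```python
def goal_reward(ball_x, goal_line_x, attacking_direction=+1):
    """Toy sparse reward: 1.0 once the ball is past the opponent's goal line, else 0.0.
    The coordinate convention and names are assumptions for this sketch,
    not the reward actually defined in the paper."""
    crossed = (ball_x - goal_line_x) * attacking_direction > 0
    return 1.0 if crossed else 0.0

# Example: attacking in the +x direction toward a goal line at x = 52.5 meters.
print(goal_reward(ball_x=53.0, goal_line_x=52.5))  # -> 1.0 (goal scored)
print(goal_reward(ball_x=10.0, goal_line_x=52.5))  # -> 0.0 (play continues)
```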

"The result is a team of coordinated humanoid players that exhibit complex behaviors at multiple scales," the team explains in its research paper.

An interest in the robotics of tomorrow

Unlike its AlphaZero AI, which already makes short work of chess grandmasters, the objective of this work is not performance; DeepMind is not seeking to train professional players, whether of real or virtual football. Instead, the researchers are trying to figure out how to optimize these systems' capacity to make decisions according to context and to predict the consequences.

As these predictive systems mature, they may become more and more interesting in practice. This is precisely the kind of approach that could work wonders once incorporated into a physical robot.

One can imagine, for example, that a company like Boston Dynamics could use this concept to improve its famous canine robot by training it to imitate real dogs. This approach could also help enhance utility robots, such as Elon Musk's future Tesla Bot. It will therefore be interesting to keep this work in mind at a time when these technologies are advancing at high speed.
