Sony's Gran Turismo is one of the biggest racing game series of all time, having sold over 80 million copies globally. But the fastest player is no longer one of those millions of humans.
In a new breakthrough, a team led by Sony AI – the company's artificial intelligence (AI) research division – developed an entirely artificial player powered by machine learning, capable not only of learning and mastering the game, but also of outcompeting the world's best human players.
The AI agent, called Gran Turismo Sophy, used deep reinforcement learning to practice the game (the Gran Turismo Sport edition), controlling up to 20 cars at a time to speed up data collection and accelerate its own learning.
After just a few hours of learning how to control the game's physics – mastering how to balance acceleration and braking to stay on the track – the AI was faster than 95 percent of the human players in a reference dataset.
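To make that training setup concrete, here is a minimal sketch in Python of the broad idea described above: many simulated cars running side by side, each driven with continuous steering and throttle/brake actions, pooling their experience into one shared buffer. Everything in it – the CarEnv stub, the observation and action dimensions, and the random placeholder policy – is a hypothetical illustration, not Sony's actual system or API.

```python
import numpy as np

# Toy stand-in for one car's simulator instance. The real GT Sophy system
# interfaced with Gran Turismo Sport itself; this stub only mimics the shape
# of the interaction: a continuous observation in, a continuous action out.
class CarEnv:
    OBS_DIM = 8   # e.g. speed, heading, distances to track edges (illustrative)
    ACT_DIM = 2   # e.g. steering in [-1, 1], combined throttle/brake in [-1, 1]

    def __init__(self, seed):
        self.rng = np.random.default_rng(seed)
        self.state = self.rng.normal(size=self.OBS_DIM)

    def step(self, action):
        # Fake dynamics: drift the state and reward staying "on track"
        # (small state norm). Purely illustrative, not real vehicle physics.
        self.state = 0.9 * self.state + 0.1 * self.rng.normal(size=self.OBS_DIM)
        reward = -float(np.linalg.norm(self.state))
        return self.state.copy(), reward

def policy(obs, rng):
    # Placeholder for the neural-network policy: here, random continuous
    # steering and throttle/brake values.
    return rng.uniform(-1.0, 1.0, size=CarEnv.ACT_DIM)

# Run 20 car instances in parallel, pooling their transitions into one
# replay buffer -- the same broad idea as controlling many cars at once
# to speed up data collection.
envs = [CarEnv(seed=i) for i in range(20)]
rng = np.random.default_rng(0)
replay_buffer = []

for step in range(100):               # 100 steps x 20 cars = 2,000 transitions
    for env in envs:
        obs = env.state.copy()
        action = policy(obs, rng)
        next_obs, reward = env.step(action)
        replay_buffer.append((obs, action, reward, next_obs))

print(f"collected {len(replay_buffer)} transitions")  # -> 2000
```

In a real deep reinforcement learning setup, the random policy would be a neural network updated from batches sampled out of that shared buffer; running 20 collectors simply fills it 20 times faster.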
Not to be outdone by that pesky 5 percent, GT Sophy doubled down.
"It trained for another nine or more days – accumulating more than 45,000 driving hours – shaving off tenths of seconds, until its lap times stopped improving," the team explains in a new research paper describing the project.
"With this training procedure, GT Sophy achieved superhuman time-trial performance on all three tracks … with a mean lap time about equal to the single best recorded human lap time."
It's far from the first time we've seen AI learn how to outcompete human players of games. Over the years, the conquests have piled up, with varying agents figuring out how to best mere mortals at all sorts of games.
Atari, chess, StarCraft, poker, and Go may have all been designed by human hands, but human hands are no longer the best at playing them.
Of course, those games are all either strategy-oriented games, or relatively simplistic in terms of their gameplay (in the case of Atari games). Gran Turismo – lauded by its fans not just as a video game, but also as a realistic driving simulator – is a different kind of beast.
"Many potential applications of artificial intelligence involve making real-time decisions in physical systems while interacting with humans," the researchers write in their study.
"Automobile racing represents an extreme example of these conditions; drivers must execute complex tactical maneuvers to pass or block opponents while operating their vehicles at their traction limits."
For GT Sophy, though, the challenge wasn't just mastering the game's tactics and traction. The AI also had to excel in racing etiquette – learning how to outcompete opponents within the principles of sportsmanship, respecting other cars' driving lines and avoiding at-fault collisions.
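One common way to encode that kind of etiquette in reinforcement learning is reward shaping: penalising unsporting behaviour so heavily that it never pays off. The sketch below is purely illustrative – the function name, inputs, and weights are all assumptions, and the article does not describe how GT Sophy's actual reward was constructed.

```python
# Illustrative reward terms for one timestep. All names and weights here are
# hypothetical; the article only says the agent had to avoid at-fault
# collisions and respect other cars' lines, not how this was weighted.
def shaped_reward(progress_m, off_track, at_fault_collision, cut_opponent_line):
    """Combine lap progress with sportsmanship penalties.

    progress_m         -- metres of track progress gained this step
    off_track          -- True if the car left the track limits
    at_fault_collision -- True if the agent caused contact with another car
    cut_opponent_line  -- True if the agent barged across an opponent's line
    """
    reward = 1.0 * progress_m          # go fast: reward forward progress
    if off_track:
        reward -= 5.0                  # discourage corner cutting
    if at_fault_collision:
        reward -= 20.0                 # make dirty overtakes a losing strategy
    if cut_opponent_line:
        reward -= 10.0                 # respect the other car's racing line
    return reward

# Example: a clean, fast step vs. a step that gains distance by causing contact.
print(shaped_reward(3.2, False, False, False))  # 3.2
print(shaped_reward(4.0, False, True,  False))  # -16.0: contact wipes out the gain
```

The design point is the relative scale: if an at-fault collision costs more reward than any distance it could ever gain, a dirty overtake becomes a losing move by construction.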
Ultimately, none of this proved to be a problem. In a series of racing events staged in 2021, the AI took on some of the world's best Gran Turismo players, including a triple champion, Takuma Miyazono.
In a July contest, the AI bested the human players in time trials, but lost the head-to-head races. After further tuning by the researchers, the agent's racing improved, and it handily won a rematch in October.
Despite these achievements, GT Sophy's creators acknowledge there are many areas where the AI could still improve, particularly in strategic decision-making.
Even so, in one of the most advanced racing games ever to be released, it's already a better driver than the best of us.
What that means for the future remains unknown, but it's very possible that systems like this could one day control real-world vehicles with better handling than expert human drivers. In the virtual world, that day has already arrived.
"Simulated automobile racing is a domain that requires real-time, continuous control in an environment with highly realistic, complex physics," the researchers conclude.
"The success of GT Sophy in this environment shows, for the first time, that it is possible to train AI agents that are better than the top human racers across a range of car and track types."
The findings are reported in Nature.