Gran Turismo 7 Update 1.57 introduced yet another surprising upgrade to GT Sophy, the sophisticated artificial intelligence system developed for the game. GT Sophy 2.1 is the most ambitious release yet, allowing players to use it in custom races with player-tuned cars. We had the chance to speak with Sony AI researchers Dr. Kaushik Subramanian and Dr. Takuma Seno to learn more about version 2.1 and how it works.
Unlike the more “traditional” rules-based systems that control computer opponents in racing games, Sophy is an artificially intelligent agent trained using machine learning techniques. This means that instead of being programmed with explicit rules, it has been “taught” how to go fast around a track by driving thousands of laps in the game, literally learning as it goes along.
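To make that idea a little more concrete, here is a deliberately tiny sketch of the trial-and-error loop involved. To be clear, this is our own toy example rather than Sony AI’s code: Sophy uses deep reinforcement learning on the actual game, while this sketch uses simple tabular Q-learning on an invented one-dimensional “track”.

```python
# Our own toy illustration of learning to lap by trial and error.
# NOT Sony AI's code: Sophy uses deep RL on the real game; this is
# tabular Q-learning on a made-up 10-segment "track".

import random
from collections import defaultdict

ACTIONS = ["brake", "coast", "throttle"]   # coarse stand-ins for car controls
TRACK_LENGTH = 10                          # discretized track segments
CORNERS = (4, 8)                           # pretend these segments are corners

def step(position, speed, action):
    """Toy dynamics: reward forward progress, end the lap on a 'crash'."""
    speed += {"brake": -1, "coast": 0, "throttle": 1}[action]
    speed = max(0, min(3, speed))
    if position in CORNERS and speed > 1:  # too fast through a corner: crash
        return position, speed, -10.0, True
    position += speed
    return position, speed, float(speed), position >= TRACK_LENGTH

q = defaultdict(float)                     # Q[(state, action)] -> expected return
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for lap in range(5000):                    # "thousands of laps"
    position, speed, done = 0, 0, False
    while not done:
        state = (position, speed)
        if random.random() < epsilon:      # sometimes explore...
            action = random.choice(ACTIONS)
        else:                              # ...otherwise exploit what it knows
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        position, speed, reward, done = step(position, speed, action)
        target = reward if done else reward + gamma * max(
            q[((position, speed), a)] for a in ACTIONS)
        q[(state, action)] += alpha * (target - q[(state, action)])
```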
We’ve explored this before — check out our in-depth article on how Sophy works — but just because Sophy knows how to drive a particular car on a particular track doesn’t necessarily mean it can translate those skills to cars with different performance characteristics. This limited the number of cars Sophy could drive, and changing the cars’ tuning setups was not an option, either.

That all changes with Sophy 2.1, and we were curious to know how the Sony AI team pulled this off.
“I can highlight two points here,” Seno explained. “First, we expanded the observation inputs of the agents. We added a lot of information about car tuning, how the car is tuned or customized. […] So, the agent ‘knows’ what changes the user made when [driving] the car.”
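In practice, “expanding the observation inputs” generally means concatenating extra features onto the state the agent sees at every moment. The sketch below illustrates only that general idea; every field name here is our invention, as Sophy’s real observation space has not been published at this level of detail.

```python
# Purely illustrative: extending an agent's observation with tuning information.
# All field names are invented; they are not Sophy's actual inputs.

from dataclasses import dataclass

@dataclass
class TuningInfo:
    power_hp: float            # engine output after upgrades
    mass_kg: float             # weight after ballast / weight reduction
    downforce_front: float
    downforce_rear: float
    tire_grip: float           # tire compound expressed as a grip coefficient

@dataclass
class DrivingState:
    speed_mps: float
    slip_angle: float
    distance_to_centerline: float
    upcoming_curvature: list   # look-ahead samples of the track shape

def build_observation(state: DrivingState, tune: TuningInfo) -> list:
    """Concatenate the driving state with the car's tuning parameters so one
    policy can condition its behavior on how the car has been set up."""
    return [
        state.speed_mps,
        state.slip_angle,
        state.distance_to_centerline,
        *state.upcoming_curvature,
        tune.power_hp,
        tune.mass_kg,
        tune.downforce_front,
        tune.downforce_rear,
        tune.tire_grip,
    ]
```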
“Second, we developed a new training scheme to adaptively change how we collect data on how Sophy performs. For example, if Sophy is struggling with a specific part of a race, we would increase the amount of data for that scenario so that, in the end, Sophy is experiencing all of the data [it] needs to learn from,” he continued.

Subramanian expanded on this point, adding that “when we’re able to recognize that a certain car is difficult to drive, we want to give the agent more experience with that car. That was a key component to make sure that we can deliver on different kinds of cars.”
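A common way to implement this kind of adaptive data collection is to sample training scenarios in proportion to how much the agent is currently struggling with them. The sketch below shows that general idea with invented scenario names and numbers; it is not Sony AI’s actual scheme.

```python
# General idea only: sample scenarios in proportion to how much the agent is
# currently struggling with them. Names and numbers are invented for this sketch.

import random

class ScenarioSampler:
    def __init__(self, scenarios):
        # Start with equal "difficulty" estimates; higher means more struggle.
        self.difficulty = {s: 1.0 for s in scenarios}

    def sample(self):
        names = list(self.difficulty)
        weights = [self.difficulty[n] for n in names]
        return random.choices(names, weights=weights)[0]

    def report(self, scenario, performance_gap, smoothing=0.9):
        # performance_gap: e.g. seconds off a reference lap time (0 = solved).
        # A small floor keeps "solved" scenarios from vanishing entirely.
        new_estimate = performance_gap + 0.01
        self.difficulty[scenario] = (smoothing * self.difficulty[scenario]
                                     + (1 - smoothing) * new_estimate)

sampler = ScenarioSampler(["hairpin_in_the_wet", "tight_chicane", "long_straight"])
for _ in range(1000):
    scenario = sampler.sample()
    # Pretend the agent still loses two seconds in the wet hairpin:
    gap = {"hairpin_in_the_wet": 2.0, "tight_chicane": 0.3, "long_straight": 0.0}[scenario]
    sampler.report(scenario, gap)

print(sampler.difficulty)  # the wet hairpin now dominates future sampling
```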
There is still work to do, of course.
“I’ll be very honest, it’s not perfect,” Subramanian admitted. “There are definitely cases where people can tune cars that Sophy is unable to drive, but it’s pretty good in terms of the range that it has.”
Despite Sophy’s increased sophistication, there have been few surprises for the team along the way. “That’s largely due to the fact that there is a whole diverse range of cars available in the game, and once you give the agent access to those cars, the generalization actually works quite well,” Subramanian said.
As Sophy becomes more generalized, we were curious what the testing process looks like behind the scenes, and GT7’s popular engine swap feature turns out to be an important part of it.
“We actually have scenarios to test engine-swapped cars,” Seno revealed. “An engine swap is an easy way to produce extreme edge cases, so we have some specialized scenarios to test [the agent’s] ability with [these cars] to see how [it] performs.”
It’s still quite a process, though.

“The entire testing pipeline is actually pretty massive, because we need to make sure the agent is doing OK under any conditions that could possibly happen [on] the user side,” he continued.
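We can only guess at what that pipeline looks like internally, but a simplified regression check over extreme configurations might be shaped something like this. Everything below, from the car list to the telemetry fields, is hypothetical.

```python
# Entirely hypothetical sketch of edge-case regression testing: run the agent
# through extreme configurations (like engine swaps) and check basic sanity.

import random

EDGE_CASE_CARS = [
    {"name": "kei_car_with_v12_swap", "power_hp": 600, "mass_kg": 720},
    {"name": "heavy_suv_race_engine", "power_hp": 900, "mass_kg": 2400},
]

def run_agent_lap(car):
    """Stand-in for launching a simulated lap; real telemetry would come
    from the game itself."""
    return {"finished": True, "off_track_events": random.randint(0, 1)}

def test_edge_case_cars():
    for car in EDGE_CASE_CARS:
        telemetry = run_agent_lap(car)
        assert telemetry["finished"], f"{car['name']}: agent did not finish"
        assert telemetry["off_track_events"] <= 2, f"{car['name']}: too many excursions"

if __name__ == "__main__":
    test_edge_case_cars()
    print("All edge-case scenarios passed.")
```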
The Sony AI team works closely with Polyphony Digital and sends new versions of Sophy to the Gran Turismo developers in Japan every week.
“[We have] meetings with them very often,” Subramanian revealed. “We chart out a plan that we talk through and try to find ways in which we can make the player experience as fun as possible. We work closely with them to make this a reality.”
There is clearly a lot going on behind the scenes, and in the rapidly evolving world of AI, it can be tough for the team to keep up with the latest research while pushing the development of GT Sophy forward.

“We are mostly reading papers based on reinforcement learning research,” Seno said. “Fortunately, reinforcement learning is now very popular in the machine learning domain and it is used in a wide variety of applications. Each of those papers is actually developing and making reinforcement learning better.”
One area of research, though, has been of special interest to the Sony AI team. “One of the things that we are particularly interested in is trying to find ways in which we can make the training more efficient,” Subramanian added.
“What this means is, ‘Can we train faster?’, ‘Can we train with fewer resources?’ That process is something that’s going to benefit us all, because right now we only have a certain amount of time that we can take. […] Any savings that we can get with that just allows us to iterate faster and faster.”
Subramanian and Seno remained tight-lipped on what’s next for Sophy, but it has come a long way since it was first introduced, and we’re looking forward to the next iteration. As always, we’ll keep an eye out for more news and bring it to you as soon as we have it.
Thanks, once again, to the Sony AI team for sitting down to speak with us!