Gran Turismo Sophy: Sony AI x Polyphony Digital

  • Thread starter Magog
  • 1,719 comments
  • 194,653 views
I’ve read this thread almost completely, and a lot of the things I thought about over the last couple of days after watching the Sophy presentation have already been mentioned.

One thing I haven’t seen mentioned yet is the fact that Sophy does not drive with the caution we humans do. Let me try to explain.

As Fraga, Gallo and other top drivers have said, Sophy brakes later and harder, and it still makes the apex, keeps more speed and doesn’t lose time. In fact, it seems to gain more time under braking and turning than on corner exit.

We humans tend to sacrifice corner entry and prioritise corner exit for a few reasons. We can’t dive-bomb while racing, as we’ve often seen Sophy do, because other cars are in front of or around us, and our reaction time is not a perfect 100 ms 100% of the time (our best would be twice as slow); neither is the reaction time of the people in front of us, so we always need to leave a margin (call it racecraft). It’s always easier to correct while accelerating than under braking, something Sophy doesn’t have to care about at the moment because it operates with no margin of error at the level we’ve seen;

we drive and connect to the game through several different types of hardware (TV, controller, wheel and pedals) and through an internet connection, each adding its own input lag and its own friction/force-feedback feel, while Sophy is in the game itself, making its 100 ms reaction time look even more inhuman;

we humans can’t keep our focus sharp 100% of the time we race. It’s not humanly possible to react to everything, brake at every brake marker and accelerate at every apex the same way every time, with the same reaction time. Our attention to what is around us fluctuates, giving us some margin of error even when we do TT and our laps are very close in lap time (we blink, too);

Sophy apparently knows the precise grip levels on and off the track (we saw it with two wheels on dirt and grass), while we don’t; we feel for traction and see the consequences of our actions on the screen;

our visibility differs if we’re racing at night, at sunset or at sunrise, for example, making it harder to react to some of the things that happen in a race.

So, even though everything that was shown is impressive and promises a good future for racing AI in a GT game, the hardest part is yet to be developed. Sophy has yet to learn racecraft, and how to adapt to dynamic weather and track conditions, tyre wear, and fuel consumption/economy.

I’m excited to see how it develops.
 
Fatigue and the like can be fudged by invoking whatever scalability ends up in the final AI.

One thing I've been thinking about is the 10 Hz figure. Yes, that implies a 100 ms reaction time, which is certainly optimistic for a human being, but it also doesn't really reflect the fine analogue control humans are capable of when they aren't purely reacting.

The best drivers anticipate; they have a built-in model of the car, tyres and track surface, and can pre-empt the car's reactions to specific inputs to a certain extent (which is more or less how this AI has been trained). Although we might react relatively slowly, we can process data quite quickly, especially visual data: our reaction time might be closer to 200 ms, but a game running at 5 fps would be deemed unplayable. Combine anticipation with that finer, more gradual control and you have something that looks rather different to a perfect stepwise reactive input at 10 Hz.

You can see artefacts of the AI's discretised input in the way it makes some pretty drastic manoeuvres at times, with some snap steering for instance. If it were allowed to modulate its control at a faster rate, but not given any genuinely new information any quicker (or, equivalently, with a delay introduced), I think it might look more natural.


And this comes full circle to the issue of pace, because a real driver (especially a novice) may struggle to make such rapid inputs, so constraining the rate of control changes is important in making the AI's performance scalable. If you reduce the AI's ability to steer quickly into a corner, for example, it will have to slow down.
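To make that rate constraint concrete, here's a minimal sketch, assuming a policy that decides at 10 Hz while the game applies its commands at 60 Hz. All the names and numbers here are made up for illustration, not anything from the Sophy paper:

```python
# Hypothetical sketch: slew-rate limiting a 10 Hz policy's steering output.
# The policy still only "decides" ten times per second (zero-order hold),
# but the game applies each command through a rate limiter at 60 Hz, so
# the wheel can never snap faster than max_rate allows. Lowering max_rate
# is one knob for scaling the AI down.

def slew_limit(target: float, current: float, max_rate: float, dt: float) -> float:
    """Move `current` toward `target`, changing by at most max_rate * dt."""
    step = max_rate * dt
    return current + max(-step, min(step, target - current))

policy_command = 1.0     # policy asks for full right lock in one 10 Hz tick
steering = 0.0
dt = 1.0 / 60.0          # physics frame time
max_rate = 2.0           # full-lock units per second (a "human arm" limit)
for _ in range(6):       # six 60 Hz frames = one 10 Hz decision interval
    steering = slew_limit(policy_command, steering, max_rate, dt)
print(steering)          # after 100 ms the wheel is only ~20% of the way there
```

The same limiter doubles as a difficulty knob: a low `max_rate` forces slower corner entry, which is exactly the scaling effect described above.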
 
But a purely algorithmic AI has effectively learned the rules you taught it, along with the biases you unwittingly wrote into them, and those only reveal themselves in testing.

The PP will depend on the track, but the classic solution is to lap several tracks to cover the bases: maybe a skid pan, a slalom and a 0-100-0 type test first (which can actually be calculated directly), then some representative tracks. That has nothing to do with the AI model itself.

The deep learning process in question (the name escapes me, but it's in the papers) has been used to teach an AI to land on a representation of the Moon. Without further training, that AI can land successfully on other planets with other types of local terrain. It's not like the neural nets in CMR2 in that respect. And yes, amusingly, landing on the Moon is not quite as complex a problem as racing game AI.

Given enough training material, it doesn't need to know the vehicle's limits in advance, any more than you or I do, and the correct PP calculation process sidesteps any orientation period. But those limits can actually be calculated and fed to it anyway as part of the model (as with all conventional AI).
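As an aside, here's what "calculated directly" can look like for the 0-100-0 test: a crude closed-form estimate from basic car parameters. This is purely illustrative (it ignores drag, gearing, traction limits off the line and load transfer), and it is certainly not PD's PP formula:

```python
# Illustrative only: a crude closed-form 0-100-0 estimate from basic car
# parameters. Just the kind of figure a fixed benchmark could compute
# directly instead of running an AI lap.

G = 9.81                          # gravitational acceleration, m/s^2

def zero_100_zero(mass_kg: float, power_w: float, mu: float) -> float:
    v = 100 / 3.6                 # 100 km/h in m/s
    t_accel = 0.5 * mass_kg * v**2 / power_w   # constant-power model: KE / P
    t_brake = v / (mu * G)                     # grip-limited braking
    return t_accel + t_brake

# e.g. a ~1400 kg, ~300 kW car on sticky tyres (purely illustrative numbers)
print(round(zero_100_zero(1400, 300e3, 1.1), 2))   # roughly 4.4 s
```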
Perhaps I am overcomplicating things, but to address one point about knowing a vehicle's limits: to drive a car on a track, you don't need to learn its limits; to drive it well, perhaps you don't either; but to drive it very well, you do. Would Sophy not be the same, in that it would need to practice with a vehicle and track to get the best from them? I get that some vehicles will be similar to one another, with the odd exception, but some vehicles handle very differently to others.
 
Just to address this separately, I don't think it is in GT7 at launch; in fact, they were clear that they hope it will make it into GT7 in some form at some point.

In addition, I don't think it would ever be used to determine a car's PP value (and it shouldn't). Because it's a learning AI, you could calculate a car's PP with it, then, without changing anything, recalculate the PP and get a higher number because the AI has learned the car better and put in a better lap.

You'd be much better off with a fixed AI than a learning AI for something that needs to produce a consistent measurement; honestly, it could be a relatively simple AI.
I think I did not explain what I meant by parts of it being in the game. I agree that a fixed AI is needed as a reference point. What I meant is that this is a big two-year project, and the learning side of it was not just what Sophy was doing on the track: the devs will have been learning things themselves about building AIs for a racing game. That knowledge is not developed in a vacuum and would be shared amongst the staff at PD.
 
Perhaps I am overcomplicating things, but to address one point about knowing a vehicle's limits: to drive a car on a track, you don't need to learn its limits; to drive it well, perhaps you don't either; but to drive it very well, you do. Would Sophy not be the same, in that it would need to practice with a vehicle and track to get the best from them? I get that some vehicles will be similar to one another, with the odd exception, but some vehicles handle very differently to others.
That comes down to how you train it and how fast you want it to be.

You can make the training contingent on some physical parameters of the car / tyre / track / setup etc., and it can explore that space fully (more fully than any game might actually produce in practice) to develop a model independent of those properties - i.e. a model of vehicle physics generally.

Then when it comes time to pit out, all it needs to know are those same parameters for its current situation and its model is still optimised. To introduce some fun scalability, you can, for example, neglect to update the weather related aspects as a race progresses to create an AI "character" who is useless in changeable conditions.

As before (and as in the case of CMR2), this is fundamentally an issue of defining the problem well from the start.
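As a sketch of that parameter-conditioned idea (my own toy illustration, nothing from the actual Sophy architecture): feed the policy a context vector of car/track/weather parameters, and create a handicapped "character" simply by never refreshing part of that context:

```python
# Toy illustration, not Sophy's actual architecture: the policy takes a
# context of car/weather parameters alongside its observation, and an AI
# "character" who is useless in changeable conditions is made by simply
# never updating the weather entry of its context.

def policy(observation: float, context: list) -> float:
    # stand-in for a trained network: any function of (observation, context)
    return sum(context) * 0.01 + observation * 0.1

class Driver:
    def __init__(self, updates_weather: bool):
        self.updates_weather = updates_weather
        self.context = {"grip": 1.0, "mass": 1.2, "wet": 0.0}

    def sense(self, true_wet: float) -> None:
        if self.updates_weather:
            self.context["wet"] = true_wet   # sees current conditions
        # otherwise it keeps driving on its stale weather estimate

    def act(self, observation: float) -> float:
        return policy(observation, list(self.context.values()))

alert, rain_blind = Driver(True), Driver(False)
for d in (alert, rain_blind):
    d.sense(true_wet=1.0)                    # a downpour starts mid-race
print(alert.context["wet"], rain_blind.context["wet"])   # 1.0 vs 0.0
```

Both drivers share the same trained policy; only the freshness of their inputs differs, which is what makes this kind of scalability cheap.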
 
I will have the chance to speak with Kazunori Yamauchi and the Sony AI team about Gran Turismo Sophy in the coming days — let me know if you guys have any specific questions about Sophy that you'd like answered. 👍
The way SOPHY and pro players drive to get the fastest times in GTS is very extreme and would not work in real life (this needs wording more delicately, lol). Has this event highlighted any changes to the physics that you want to make, or have already made?

I'm late but something like this seems good to ask.
 
So true :lol:


Unfortunately, and as has been proven in practice with other machine-learning AIs, it's very complicated. That's not to say the team aren't up to the task (time will tell on that one), but it's likely to be some way off actually being implemented in GT7.

It's most definitely not on rails, and it does appear to be using the in-game physics; however, that also means it's taking advantage of the quirks in the physics engine too. I'm not sure if you watched all of the demonstration, but it was racing in a pack with other AI and human drivers. It was very impressive, though not suitable for mass implementation in GT7 in its current state; Kaz himself admitted it wasn't ready yet.

I wonder, is that Kazunori being negative, perhaps? :scared:
Let's say we want B-level AI drivers. Shouldn't it work the same as making them as fast as possible: simply tweak their goals to fit a slower driver and put in natural limits like reaction time, etc.?
 
Let's say we want B-level AI drivers. Shouldn't it work the same as making them as fast as possible: simply tweak their goals to fit a slower driver and put in natural limits like reaction time, etc.?
In theory, but many games have tried to make a learning AI that they can give human traits and whose performance they can scale, and it's definitely not easy to scale it and make it behave like a human. That said, it's not impossible, and there are people like @Griffith500 who seem to have a more solid grasp of learning AI.
 
In theory, but many games have tried to make a learning AI that they can give human traits and whose performance they can scale, and it's definitely not easy to scale it and make it behave like a human. That said, it's not impossible, and there are people like @Griffith500 who seem to have a more solid grasp of learning AI.
Absolutely; the AI graveyard would be huge if it weren't for the delete function :)
But from my understanding, the fundamental idea is that the AI drivers are trained based on variables. So instead of having one model in the game whose data you tinker with, you would have several models that were trained before they were added to the game.

And that would give PD endless possibilities to tweak the AI drivers.
 
How much console CPU power does Sophy need compared to the old AI in previous GT titles?

Great question. And an extension would be can Sophy be applied to all the AI cars in the race? Is there sufficient computing power with a PS5 to achieve this?
 
Great question. And an extension would be can Sophy be applied to all the AI cars in the race? Is there sufficient computing power with a PS5 to achieve this?
A good question indeed. Another question off the back of that: would the PS5 we play on need that power to have the AI constantly learning, or could they get the AI to a certain level and then effectively bottle it into the existing AI algorithm and make it scalable? That would potentially require much less power than an AI that's always learning, let alone 20 of them.
 
As far as I know, the computing power should be sufficient; the real computing-power requirements don't come from running the AI per se, they come from developing it and training it quickly. For example, one of the things done to train it quickly is to have it running many different simulations of itself in different situations simultaneously. That isn't possible on, say, a PS4, but once the AI is developed and trained to the degree PD want before implementing it in GT7, you don't need to keep running those simulations any more; you can just let it continue to learn at a slower pace, or disable the learning entirely.

Machine-learning AIs have been running fine (as in, the AI program works; whether the AI itself is any good is more of a mixed bag) in PC and console games for well over a decade. The original Xbox could do it, so there's no reason the PS4 shouldn't be able to run Sophy. It's just about getting it developed to the right point before implementation.
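A toy sketch of that train-big / ship-small split (my own illustration; the real training setup is described in the Sophy paper): training evaluates many perturbed rollouts per step, standing in for the parallel simulations, while the shipped agent is just frozen parameters plus a cheap inference call:

```python
# Toy sketch, all names illustrative: heavy hardware is only needed for
# the training loop below; the deployed agent is a frozen parameter set
# and a cheap policy call, with the learner removed entirely.

import random

random.seed(0)

def rollout(params: float) -> tuple:
    """One simulated episode: returns (params tried, score achieved)."""
    noise = random.gauss(0, 0.1)
    return params, -(params - 3.0) ** 2 + noise   # best possible params: 3.0

# --- training phase: the part that needs the big hardware ---
params = 0.0
for _ in range(200):
    batch = [rollout(params + random.gauss(0, 0.5)) for _ in range(16)]
    best = max(batch, key=lambda e: e[1])         # crude evolutionary update
    params = 0.9 * params + 0.1 * best[0]

# --- deployment phase: learning disabled, inference only ---
FROZEN = params                                   # the "bottled" agent

def drive(observation: float) -> float:
    return FROZEN * observation                   # cheap fixed policy call
```

Once `FROZEN` is baked in, nothing about the 16-way rollout batch survives into the shipped game, which is why the runtime cost can be so much lower than the training cost.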
 
As far as I know, the computing power should be sufficient; the real computing-power requirements don't come from running the AI per se, they come from developing it and training it quickly. For example, one of the things done to train it quickly is to have it running many different simulations of itself in different situations simultaneously. That isn't possible on, say, a PS4, but once the AI is developed and trained to the degree PD want before implementing it in GT7, you don't need to keep running those simulations any more; you can just let it continue to learn at a slower pace, or disable the learning entirely.

Machine-learning AIs have been running fine (as in, the AI program works; whether the AI itself is any good is more of a mixed bag) in PC and console games for well over a decade. The original Xbox could do it, so there's no reason the PS4 shouldn't be able to run Sophy. It's just about getting it developed to the right point before implementation.

What games on the OG Xbox used machine learning AI?
 
Not that I don't believe you, but do you have a link to them describing the process?




There are some interesting tidbits of info in there from across the Forza series relating to its AI.

Forza has used its Drivatar system since the very first game; it's gone through changes over time, naturally.

rFactor is probably the first racing game I can recall playing that had learning AI drivers, and that also came out in 2005. I didn't play the original Forza until a while after launch, close to FM2's release, IIRC.
 
Machine-learning AIs have been running fine (as in, the AI program works; whether the AI itself is any good is more of a mixed bag) in PC and console games for well over a decade. The original Xbox could do it, so there's no reason the PS4 shouldn't be able to run Sophy. It's just about getting it developed to the right point before implementation.
Just because whatever ML AI Turn 10 has runs on limited hardware doesn't mean that this new implementation can. It's a completely different program, and the fact that it was created by machine learning tells you exactly zero about the hardware requirements to run it. That's like saying that because Polyphony wrote GT1 and it ran on the PS1, GT7 must run on a PS1 as well.

Machine learning is a method of refining a program's variables and responses based on outside input. That's all. If you want to know the specifics of the hardware currently required to run Sophy, read the published paper. I've already posted a snippet that outlines the broad hardware specs they're running it on, and they're far from trivial.
 
Just because whatever ML AI Turn 10 has runs on limited hardware doesn't mean that this new implementation can. It's a completely different program, and the fact that it was created by machine learning tells you exactly zero about the hardware requirements to run it. That's like saying that because Polyphony wrote GT1 and it ran on the PS1, GT7 must run on a PS1 as well.

Machine learning is a method of refining a program's variables and responses based on outside input. That's all. If you want to know the specifics of the hardware currently required to run Sophy, read the published paper. I've already posted a snippet that outlines the broad hardware specs they're running it on, and they're far from trivial.
This is true; my point isn't that it's no more demanding than the AI from 2005, just that it's very possible to make machine-learning AI work on PS4/PS5. There's no reason to believe Sophy (or a version of it) won't be possible on PS4, or that it'll be too demanding for the system.

From my understanding, a lot of the hardware Sophy is currently running on is for the purpose of the development and learning of the AI.
 
I doubt it; he said it's hopefully coming to GT7, and implied in the future, not at launch.

Also, having a machine-learning AI do something like that wouldn't be reliable: cars tested as it progressively learns will have inflated PP scores, because the AI is driving better each time. It would need to be a fixed AI, likely one that doesn't make any mistakes and doesn't learn. There's no need for it to be learning; in fact, learning in that application would cause unreliable results.
Anything happening within GT7 will be a fixed AI pattern, or a snapshot of Sophy's behaviour at instant X. The actual deep-learning process alone required more than 10 times the power we'll have to run the whole game on our consoles. I would assume every possibility would have been pre-simulated in that case.
 
Anything happening within GT7 will be a fixed AI pattern, or a snapshot of Sophy's behaviour at instant X. The actual deep-learning process alone required more than 10 times the power we'll have to run the whole game on our consoles. I would assume every possibility would have been pre-simulated in that case.
That's very much an assumption; there's a chance it would retain some element of learning, granted the majority of that learning will have been done beforehand. Ultimately we don't know what state it will be in if/when it gets added to GT7; we can only speculate.

As for its use to test a car's PP rating, read the discussion following that post. There's clearly more to consider than I was allowing for at the time of the quoted post.
 




There are some interesting tidbits of info in there from across the Forza series relating to its AI.

Forza has used its Drivatar system since the very first game; it's gone through changes over time, naturally.

rFactor is probably the first racing game I can recall playing that had learning AI drivers, and that also came out in 2005. I didn't play the original Forza until a while after launch, close to FM2's release, IIRC.


It's incredibly refreshing to watch someone talk about a game who is actually a good communicator. It's informative and concise, explains technical concepts without devolving into jargon, and avoids vague and abstract philosophising.
 
This is true; my point isn't that it's no more demanding than the AI from 2005, just that it's very possible to make machine-learning AI work on PS4/PS5. There's no reason to believe Sophy (or a version of it) won't be possible on PS4, or that it'll be too demanding for the system.

From my understanding, a lot of the hardware Sophy is currently running on is for the purpose of the development and learning of the AI.
The reason is the hardware that it's currently running on. Yes, the backbone of that hardware is for the "learning" part of the AI (the trainer), but even the hardware that's handling the interface between the PS4s and the trainer is non-trivial.

I've said this before, but 2 cores and 3.3GB RAM to interface with one AI is significant. That's just the part running the PS4 control and forwarding data to the trainer. It's going to take improving that by a couple of orders of magnitude in order to get a small handful of AI running alongside the actual game.

That's a really, really good reason to believe that it won't be possible on PS4. I get that people want to be optimistic, but there's plenty to justify why we're unlikely to be seeing this in public games any time soon. Saying that there's no reason to think that it won't be possible on PS4 is just ignoring the actual facts of the paper.
[Attachment 1112871: hardware specs snippet from the Sophy paper]

A V100 is a significant piece of hardware, and 8 CPUs and 55GB memory is substantial for the training hardware. In an environment where the agent is no longer being trained presumably that bit could be omitted and the rollout worker's compute node would handle "driving" the AI, but two CPUs and 3.3GB of RAM is still a fair bit compared to what modern consoles have available.

However, I'd assume that they've made next to no effort to optimise this; it's far easier in research to just throw extra hardware at the problem than to waste time optimising the agent. It's entirely possible that significant improvements can be made, but even so they'd have to be massive in order to get, say, 10+ AI into a race. Still, even if they can't do it for PS5, that sort of hardware requirement is potentially reasonable for hardware within the next 5-10 years (PS5 Pro or PS6, maybe).

Based on the above, there's no way a PS4 could run a game like GTS and this agent, let alone multiple copies of it. A PS5, maybe, if they optimise, simplify and run a limited number of agents, but it's hard to say.
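To put numbers on that, here's the back-of-envelope arithmetic for the per-agent figures quoted above (2 CPU cores and 3.3 GB of RAM per rollout worker), scaled to a full grid. The console totals below are approximate public specs, not the actual budget a game has free after graphics, audio and physics:

```python
# Back-of-envelope scaling of the per-agent rollout-worker cost to a
# full grid of AI opponents, compared against rough PS5 totals.

PER_AGENT_CORES = 2
PER_AGENT_RAM_GB = 3.3
GRID_SIZE = 19                    # 19 AI opponents alongside one human

need_cores = PER_AGENT_CORES * GRID_SIZE     # 38 cores
need_ram_gb = PER_AGENT_RAM_GB * GRID_SIZE   # ~62.7 GB

PS5_CORES, PS5_RAM_GB = 8, 16                # approximate public specs
print(f"{need_cores} cores vs {PS5_CORES}: {need_cores / PS5_CORES:.1f}x over")
print(f"{need_ram_gb:.1f} GB vs {PS5_RAM_GB} GB: {need_ram_gb / PS5_RAM_GB:.1f}x over")
```

Even before reserving anything for the game itself, an unoptimised full grid is several times over the whole console on both axes, which is the "couple of orders of magnitude" gap described above.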
 
Sometimes Sophy made very sharp, very discrete wheel movements, which look unrealistic and arcade-like. Sophy turns the wheel too fast, imo. Human arms have inertia, and we first need to see the result of a sudden, sharp wheel movement before we can correct it to the same side or the opposite side. A constant 10 Hz is a very fast brain; it needs correcting.
And Sophy needs some additional kilograms of car weight and lower-grip tyres to make his/her/its actions smoother.
 