Gran Turismo Sophy: Sony AI x Polyphony Digital

  • Thread starter Magog
  • 1,719 comments
  • 194,639 views
If you disable the learning entirely, you might get very odd results when it comes up against a car/track/tune/settings combo it hasn’t practiced before.

In addition, what if it’s learnt to be more proficient at RWD cars over FWD cars at the point the learning is disabled? To be honest, I see a plethora of possible issues with using it to get a consistent PP rating.

It’s a lot simpler to just use a simple AI to do a simple task consistently rather than try to dumb down a complex one.
They can use a fixed model for the PP and something else for racing. That is the trivial part.

Besides, in reality, it won't need to practice, it'll ship with all the practice it needs. PD can issue updates with extra learning in it, tweaks etc. It makes sense to keep the overall package consistent for every player, and add in variety via other means.

Let's separate the machine learning process from the learned model that it creates. If we even get to use that model as opponent AI at all, it'd be more likely to be used for PP first than for opponents ever.


In those live races against humans, it's very likely that the AI was iterated in the background between the events, perhaps even each race. That's because they're demonstrating the machine learning algorithm itself. No need for that in the game, although you could adapt its pace in response to anything you choose once you make the model itself scalable.

If you want to teach it new tricks, e.g. to counter exploits or bugs as they appear, then it makes more sense to leave that to PD to update at regular intervals. Maybe send them replay data or something.
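To make that separation concrete, here's a toy sketch (all names hypothetical, nothing to do with PD's or Sony AI's actual code) of the difference between the learning process and the frozen model that ships: weights change during training, but the deployed copy is frozen and always maps the same state to the same action, so every player sees identical behaviour until PD pushes a new snapshot.

```python
class Policy:
    """Toy stand-in for a learned driving model (illustrative only)."""

    def __init__(self, weights):
        self.weights = dict(weights)
        self.frozen = False

    def act(self, state):
        # Deterministic mapping from state features to a control value.
        return sum(self.weights[k] * v for k, v in state.items())

    def learn(self, state, error, lr=0.1):
        # Gradient-style update; disabled once the model ships frozen.
        if self.frozen:
            return
        for k, v in state.items():
            self.weights[k] -= lr * error * v

# "Training phase": weights change between episodes.
policy = Policy({"speed": 0.5, "curvature": -0.2})
policy.learn({"speed": 1.0, "curvature": 0.3}, error=0.4)

# "Shipping": freeze the model. Further learn() calls are no-ops,
# so the behaviour is perfectly repeatable for every player;
# PD could still ship a new frozen snapshot in a patch.
policy.frozen = True
before = policy.act({"speed": 1.0, "curvature": 0.3})
policy.learn({"speed": 1.0, "curvature": 0.3}, error=0.4)  # no-op now
after = policy.act({"speed": 1.0, "curvature": 0.3})
assert before == after  # frozen model is deterministic
```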
 
They can use a fixed model for the PP and something else for racing. That is the trivial part.
That’s why using Sophy makes zero sense; they don’t need to work hard to get it tweaked right, they just need a really simple AI that’s consistent.
Besides, in reality, it won't need to practice, it'll ship with all the practice it needs. PD can issue updates with extra learning in it, tweaks etc. It makes sense to keep the overall package consistent for every player, and add in variety via other means.
That’s not feasible; getting Sophy to be equally competent in every possible car, track, tune and settings combination is nigh on impossible. Therefore you’ll get inconsistent PP ratings.
Let's separate the machine learning process from the learned model that it creates. If we even get to use that model as opponent AI at all, it'd be more likely to be used for PP first than for opponents ever.


In those live races against humans, it's very likely that the AI was iterated in the background between the events, perhaps even each race. That's because they're demonstrating the machine learning algorithm itself. No need for that in the game, although you could adapt its pace in response to anything you choose once you make the model itself scalable.

If you want to teach it new tricks, e.g. to counter exploits or bugs as they appear, then it makes more sense to leave that to PD to update at regular intervals. Maybe send them replay data or something.
It’s completely unnecessary to shoe-horn a machine learning AI into a task it’s not suited to, one that’s relatively simple to achieve with much less work.
 
...some are so negative about this AI, Sophy. Can't wait for it to be integrated into the game so we can see if you guys really are faster than the winners of the GT Sport championship or not...
Exactly how many of those who have offered constructive criticism have ever claimed to be?

None.

So please don't post flame-bait.
 
...some are so negative about this AI, Sophy. Can't wait for it to be integrated into the game so we can see if you guys really are faster than the winners of the GT Sport championship or not...
What negativity are you seeing? I'm seeing a lot of positivity and some constructive discussion in here.
 
Which is strange to me, because another research study was done last May and didn't seem to get much recognition, doing essentially the same thing in GT Sport - which I assume this research was baselined from.
I mean, you could just read the introductions of said papers, where they clearly describe the current state of the art and contrast it with their approach. But I will spare you the trouble. The reason this is a big deal is, firstly, that it is the culmination of those previous papers: they go from building a certain model-free learning method, to super-human performance on time trials (at the time of publishing they mention that, to the best of their knowledge, there was no other similar AI) and 1v1 trials, and then on to beating the top drivers in the game in real time in an actual race, with all the complexities that entails. Secondly, you might have heard of other AIs like Deep Blue that have also achieved super-human performance, but one key aspect here is that, to the best of my knowledge, that was done in games that do not require continuous actions like GT does. What I mean is that in chess, the AI makes a move after the opponent and the game is essentially "paused" until it has to act; no input is required until the AI plays its turn. Whereas in GT, the AI needs to act continuously, and there is no safe default action either, like holding full throttle or no throttle while waiting for the next set of inputs. This indicates very robust behavior.
 
...some are so negative about this AI, Sophy. Can't wait for it to be integrated into the game so we can see if you guys really are faster than the winners of the GT Sport championship or not...
If I had a nickel for every time people raised (valid) concerns about how an experiment (because that's what this is!) would actually be implemented on a mass level, and how potential problems might be solved, while also acknowledging that this still doesn't fix the short-term problem of AI in GT being bad (and it hasn't been shown to be fixed that effectively, though time of course will tell)...

Then I'd probably be a richer man than I am.
 
The possibilities certainly are endless; it's just easier said than done without getting pretty wonky results. But hopefully it's a task they are up to. If they pull this off, it has the potential to be a long-term game changer. But machine learning AI isn't new, and it's notoriously difficult to make it feel natural and scalable without making it do strange things at the same time. Time, technology and techniques move on, of course, so it's an exciting project.
If it does wonky stuff it will be just another GT player 👀

I watched some of the races and our AI champions as they are now would be a fantastic improvement. And I would think that slowing them down to B level wouldn't be too complicated.
 
...some are so negative about this AI, Sophy. Can't wait for it to be integrated into the game so we can see if you guys really are faster than the winners of the GT Sport championship or not...
I would think it's quite easy to program a car to go round a track quickly (as if it was on rails). This new AI seems to me not to be based on realistic physics in its implementation. The real test will be how multiple cars behave with each other, hopefully not in a long line of sheep like in all previous GT games.
 
If it does wonky stuff it will be just another GT player 👀
So true :lol:

I watched some of the races and our AI champions as they are now would be a fantastic improvement. And I would think that slowing them down to B level wouldn't be too complicated.
Unfortunately, and as has been proven in practice with other machine learning AIs, it's very complicated. That's not to say the team aren't up to the task, time will tell on that one, but it's likely to be some way off actually being implemented in GT7.
I would think it's quite easy to program a car to go round a track quickly (as if it was on rails). This new AI seems to me not to be based on realistic physics in its implementation. The real test will be how multiple cars behave with each other, hopefully not in a long line of sheep like in all previous GT games.
It's most definitely not on rails and it does appear to be using the in-game physics; however, that also means it's taking advantage of the quirks in the physics engine too. I'm not sure if you watched all of the demonstration, as it was racing in a pack with other AI and human drivers. It was very impressive, though not suitable for mass implementation in GT7 in its current state; an admission was made by Kaz that it wasn't ready yet, too.

I wonder, is that Kazunori being negative perhaps :scared:.
 
I think this is a byproduct of a setting that affects grip when going off road. When set to realistic, even when dipping 2 wheels in the dirt, you have to slow down or you are ****ed.
Only with all four wheels in the dirt is grip reduced, and then only for a period of time, even when set to real grip. That's why Sophy likes to put two wheels in the dirt, cutting its driving lines. Normally even top drivers don't choose this option, because it can easily upset the car's behavior.
AI driving inputs are so quick, so much faster than a top real driver's, that the AI can take this option. In my opinion. (Also, Igor Fraga was talking about something along these lines during his Le Mans test.)
 
I think turning on tire wear will be the biggest challenge for Sophy. Optimizing lines based on consistent inputs and outputs over thousands of kilometers is one thing, but will it understand and deal with how grip subtly (and not so subtly) changes over time (i.e. racing at the limit on lap 1 vs. lap 15 will have very different results), and at what point it needs to back off the throttle or pit? Dealing with all of those more real-world changing variables will be interesting to see.
 
That’s why using Sophy makes zero sense; they don’t need to work hard to get it tweaked right, they just need a really simple AI that’s consistent.

That’s not feasible; getting Sophy to be equally competent in every possible car, track, tune and settings combination is nigh on impossible. Therefore you’ll get inconsistent PP ratings.

It’s completely unnecessary to shoe-horn a machine learning AI into a task it’s not suited to, one that’s relatively simple to achieve with much less work.
You're against the idea of using an AI model trained by "machine learning" - fine.

That does not mean it is at all unsuitable.

To continue to claim it is so shows a lack of understanding of the difference between the machine learning process and the actual resulting model.


This is a non-issue since Sophy is technically external research anyway and it says nothing about what approach PD have taken for PP calculation and opponent AI in GT7 proper, so you really need not fret at this stage.

I still think it warrants discussion and further research, it's fun. I personally believe the eventual GT7 implementation of Sophy will be a driving assist and training tool.
 
You're against the idea of using an AI model trained by "machine learning" - fine.

That does not mean it is at all unsuitable.

To continue to claim it is so shows a lack of understanding of the difference between the machine learning process and the actual resulting model.


This is a non-issue since Sophy is technically external research anyway and it says nothing about what approach PD have taken for PP calculation and opponent AI in GT7 proper, so you really need not fret at this stage.

I still think it warrants discussion and further research, it's fun. I personally believe the eventual GT7 implementation of Sophy will be a driving assist and training tool.
No, I understand it well. There are problems with getting a machine learning AI like Sophy to perform consistently for every possible vehicle, track, tune and settings combination. At its simplest it learns through trial and error, and then human input tweaks the values that determine what it wants to do or avoid.

It would take millions if not billions of tests to get it to try every combination once, let alone to drive well using every combination, and then you'd have to make sure it hasn't become more proficient with one combination than with any of the others. It would be considerably easier to just create a basic AI to do the job consistently without that hassle, and since we know Sophy will only come to GT7 in a future update, if at all, we know they aren't using it for the PP calculations anyway.

I'm not slamming Sophy at all; I think it's an exciting development and I'm keen to see how the project progresses. Rather, I'm saying using it for the PP scores is an unnecessarily complex way to solve a simple problem, therefore it makes no sense.

In a testing environment it could be worth exploring how consistently you can get it to pick up a new car, track, tune and settings combination compared to one it's used to, but it's certainly not worth using it in GT7 to replace a system that we have no reason to believe won't work at launch.
 
R3E is known for its excellent AI that can seem almost human at times. We don't need super advanced AI like Sophy, but as a concept I think it's cool. I like that it could potentially be used by real race drivers to find new and unintuitive racing lines. But you know, if this brings better AI to GT7, I'm happy too.
 
No, I understand it well. There are problems with getting a machine learning AI like Sophy to perform consistently for every possible vehicle, track, tune and settings combination. At its simplest it learns through trial and error, and then human input tweaks the values that determine what it wants to do or avoid.

It would take millions if not billions of tests to get it to try every combination once, let alone to drive well using every combination, and then you'd have to make sure it hasn't become more proficient with one combination than with any of the others. It would be considerably easier to just create a basic AI to do the job consistently without that hassle, and since we know Sophy will only come to GT7 in a future update, if at all, we know they aren't using it for the PP calculations anyway.

I'm not slamming Sophy at all; I think it's an exciting development and I'm keen to see how the project progresses. Rather, I'm saying using it for the PP scores is an unnecessarily complex way to solve a simple problem, therefore it makes no sense.

In a testing environment it could be worth exploring how consistently you can get it to pick up a new car, track, tune and settings combination compared to one it's used to, but it's certainly not worth using it in GT7 to replace a system that we have no reason to believe won't work at launch.
How do you know it's performing consistently in the first place? How would you measure that? Target lap times? Set by whom, or what? How do you know that is consistent?

Game AI is fundamentally a problem of defining the problem. So it actually doesn't matter what approach you use, it's always a lot of hassle to tune an AI. No matter what. If you want good results I mean. Because you have to race it to determine that. A lot.
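To put "how would you measure that?" in concrete terms, here's one possible sketch (my own illustration, not anything PD has described): express each car/track combo's AI lap time as a ratio to a human-set reference lap, then look at the spread of those ratios across combos. A low spread means the AI is similarly matched to the reference everywhere, which is exactly the consistency a PP system would need.

```python
from statistics import mean, stdev

def consistency(ai_times, ref_times):
    """Spread of AI-vs-reference lap-time ratios across combos.

    ai_times / ref_times: lap times in seconds, one entry per
    car/track combo. Returns (mean ratio, standard deviation):
    the mean is the AI's overall pace relative to the reference,
    and a low standard deviation means that pace holds everywhere.
    """
    ratios = [a / r for a, r in zip(ai_times, ref_times)]
    return mean(ratios), stdev(ratios)

# Hypothetical numbers: three combos, AI laps vs reference laps.
pace, spread = consistency([90.0, 121.0, 63.5], [91.0, 120.0, 64.0])
```

A threshold on `spread` would then be the pass/fail criterion, though of course that just pushes the question back to who sets the reference laps.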


How many runs do you think it took just to figure out how to drive a car in the first place? The deep learning process learned the underlying physics, not the car or track. Granted, it may not come across every issue or exciting opportunity just by driving one car/track combo. But it's actually not much more work to have it drive every track, not once it can already drive one track. Where are you getting the figure of millions or billions more runs? How many did it need to learn to turn one lap? How long did that take in real time?

The PP problem is far from simple if you actually want consistency. A "simple AI" (??) is not guaranteed to be more consistent across cars, tracks, states of tune etc. You need an AI that can account for those things without bias - one with deep learning experience of the underlying physics, perhaps. It's not a simple problem if you actually try to tackle it.


But it's moot because Sophy won't be in GT7 at launch.


Did I mention that Colin McRae Rally 2.0 in 2000 had a neural net AI?
 
How do you know it's performing consistently in the first place? How would you measure that? Target lap times? Set by whom, or what? How do you know that is consistent?

Game AI is fundamentally a problem of defining the problem. So it actually doesn't matter what approach you use, it's always a lot of hassle to tune an AI. No matter what. If you want good results I mean. Because you have to race it to determine that. A lot.


How many runs do you think it took just to figure out how to drive a car in the first place? The deep learning process learned the underlying physics, not the car or track. Granted, it may not come across every issue or exciting opportunity just by driving one car/track combo. But it's actually not much more work to have it drive every track, not once it can already drive one track. Where are you getting the figure of millions or billions more runs? How many did it need to learn to turn one lap? How long did that take in real time?

The PP problem is far from simple if you actually want consistency. A "simple AI" (??) is not guaranteed to be more consistent across cars, tracks, states of tune etc. You need an AI that can account for those things without bias - one with deep learning experience of the underlying physics, perhaps. It's not a simple problem if you actually try to tackle it.


But it's moot because Sophy won't be in GT7 at launch.


Did I mention that Colin McRae Rally 2.0 in 2000 had a neural net AI?
You make a valid point, any AI can be inconsistent. But I still see a learning AI as a complex solution to the problem. As for the deep learning process, it learns the physics and the cars and tracks. The vehicles do not have the same limits or behave the same way at those limits so the AI will learn that. It's all part of the process.

I would argue an AI that doesn't learn and hasn't been taught has less bias. Not infallible, sure, but the PP rating itself is going to be heavily dependent on the track it's set on. Like you say, it's all moot. And yes, I know about the AI in CMR 2.0.
 
But a purely algorithmic AI has effectively learned the rules you taught it, along with the biases you unwittingly wrote into them, which will only reveal themselves in testing.

The PP will depend on the track, but the classic solution is to lap several tracks to cover the bases - maybe a skid pan, slalom and 0-100-0 type test first (which can actually be calculated directly), then some representative tracks. That has nothing to do with the AI model itself.

The deep learning process in question, whose name I forget but which is in the papers, has been used to teach an AI to land on a representation of the moon. Without further training, that AI can land successfully on other planets with other types of local terrain. It's not like the neural nets in CMR2 in that respect. And yes, landing on the moon is not quite as complex a problem as racing game AI, amusingly.

Given enough training material, it doesn't need to know the vehicle's limits in advance, any more than you or I do - and the correct PP calculation process sidesteps any orientation period. But those limits can actually be calculated and fed to it anyway as part of the model (as is done with all conventional AI).
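The test battery idea above could be sketched roughly like this (all tests, weights and numbers are hypothetical, purely to show the shape of the calculation): each directly-calculable test produces a normalised score, and a weighted blend is scaled into a PP-style number. Representative track laps would simply be further terms in the sum.

```python
def pp_score(car, tests, weights):
    """Blend several measurable performance tests into one rating.

    Each test callable returns a normalised 0-1 score (higher =
    faster) for the given car; the weighted sum is then mapped
    onto a familiar PP-style band.
    """
    total = sum(weights[name] * tests[name](car) for name in tests)
    return round(300 + 700 * total)  # map 0-1 onto a 300-1000 band

# Hypothetical car described by directly-measurable quantities -
# no AI driver is needed for these tests at all.
car = {"grip_g": 1.1, "accel_0_100_0_s": 9.5, "top_speed_kmh": 280}

tests = {
    "skid_pan":  lambda c: min(c["grip_g"] / 1.5, 1.0),
    "0_100_0":   lambda c: min(6.0 / c["accel_0_100_0_s"], 1.0),
    "top_speed": lambda c: min(c["top_speed_kmh"] / 400, 1.0),
}
weights = {"skid_pan": 0.4, "0_100_0": 0.4, "top_speed": 0.2}

score = pp_score(car, tests, weights)  # roughly mid-range for this car
```

The interesting design question is entirely in choosing the tests and weights, not in the arithmetic, which is the point being made: the hard part of PP is defining the problem.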
 
Why is this information being released right on the doorstep of GT7's release, since it appears GT7 will be using its original AI, and this appears to be for some future game? Releasing this information might be misinterpreted as applying to GT7.
Considering that GT7 will be around for years, I think you are probably off base.

If this AI gets implemented into GT7, that would be cool. If this AI is also as fast as or faster than the drivers in the GTC, I'm in trouble, lol. Hopefully if they get implemented they are more aware and give you a serious battle, and don't feel robotic and programmed to play follow-the-leader. Also hope there are various difficulty settings to choose from for this. Numbers, or being able to scale the difficulty, would be good. As of now all we get is three options.
It would seem to me all they have to do is run x amount of passes to find a desired lap time or driving level
 
This new Sophy A.I. felt like it just came out of nowhere for me. Polyphony and Sony, all along behind closed doors, were creating a new A.I. system that learns and drives as well as some of the best GT Sport players in the world. What a day, I wish it were coming to GT7 at launch. hehe

Though to be fair, this has sort of been on course for Polyphony lately in some cases. Their car sounds got major improvements in GT Sport, we went from not having a proper livery editor to having one of the best ones out there in GT Sport, and especially now in GT7, and from a lacking customization system in GT to quite a remarkable one in GT7.
Kaz had said at some point that they were working on A.I. improvements.

I guarantee that if this is ever actually implemented, it will basically lead to the same events that we have now, except the rabbits are now cheetahs.

How much money was dumped into this, when hiring a few programmers with experience in racing game AI could have sufficed? It seems like such a roundabout way to potentially fix a problem.
They will be using the information and technology for other things, you get that, right? As for cheetahs, all they should have to do is use an AI, or whatever you want to call it, with less track time, and lock it in there.
 
Yeah, and then there’s conserving tyres and knowing when to go in for fuel. All this, applied to 400+ cars and 90+ track layouts. Very time consuming I’d say.

Just imagine how long it’d take for it to learn one good pit strategy in one car on a 20 lap La Sarthe time trial, let alone race alongside 15 other cars.
There are tons of variables, but I think a simple algorithm would solve it pretty quickly. However, it would probably just be easier to trigger the event once the tires have reached X% wear or the fuel has dropped to X%.
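A threshold trigger like that is trivial to write; here's a minimal sketch (thresholds are made-up placeholders, since the actual X% values are exactly what would need tuning):

```python
def should_pit(tire_wear_pct, fuel_pct, wear_limit=80.0, fuel_limit=15.0):
    """Rule-based pit call: no learning involved, just fixed thresholds.

    tire_wear_pct: 0 (fresh) to 100 (fully worn).
    fuel_pct: remaining fuel as a percentage of tank capacity.
    """
    return tire_wear_pct >= wear_limit or fuel_pct <= fuel_limit

assert should_pit(85.0, 60.0)      # worn tires force a stop
assert should_pit(40.0, 10.0)      # low fuel forces a stop
assert not should_pit(40.0, 60.0)  # otherwise stay out
```

The catch is that fixed thresholds encode nothing about how the wear accrues or whether a longer stint at reduced pace would be faster overall, which is where a learned strategy would differ.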
 
There are tons of variables, but I think a simple algorithm would solve it pretty quickly. However, it would probably just be easier to trigger the event once the tires have reached X% wear or the fuel has dropped to X%.
But then it doesn’t learn how the tires get to X% and how long it takes for the fuel to run out, does it?
 
I've just sat through the video again and I still can't get my head around what SOPHY can do with the car. I can watch a top human driver setting an exceptional lap and believe what I'm seeing but with SOPHY, it feels like I'm seeing GT's physics engine foibles laid bare and exploited.

It's bloody clever stuff but the way they put a lap in looks like a parody of a human's efforts.
If the AI drove the same way in Assetto Corsa, it would crash a lot.
 
Kaz talked about updating the GT7 AI in one of the recent interviews. He also mentioned this in the new videos.

5187-A6-F1-2-C1-F-47-DB-9-D71-BEAB8-C37-D76-D.jpg
Do you know where this picture is from? I've already tried to scour the Internet for the video of Kaz presenting this, yet I didn't seem to find it.
 
I will have the chance to speak with Kazunori Yamauchi and the Sony AI team about Gran Turismo Sophy in the coming days — let me know if you guys have any specific questions about Sophy that you'd like answered. 👍
How much console CPU power does Sophy need compared to the old AI in previous GT titles?
 
I'm not versed in AI programming, but two of the factors I see left out are adrenaline and fatigue. Program the AI as if it’s using a controller, then we’ll see what’s what. ;)
 
This new Sophy A.I. felt like it just came out of nowhere for me. Polyphony and Sony, all along behind closed doors, were creating a new A.I. system that learns and drives as well as some of the best GT Sport players in the world. What a day, I wish it were coming to GT7 at launch. hehe

Though to be fair, this has sort of been on course for Polyphony lately in some cases. Their car sounds got major improvements in GT Sport, we went from not having a proper livery editor to having one of the best ones out there in GT Sport, and especially now in GT7, and from a lacking customization system in GT to quite a remarkable one in GT7.
Sony AI was busy training Sophy. Sony's consumer businesses were busy training Kaz.
 