Gran Turismo Sophy: Sony AI x Polyphony Digital

  • Thread starter Magog
  • 1,719 comments
  • 194,631 views
This is very fascinating research.

Interesting that the AI has developed its skills from scratch. It seems like the secret sauce here has been the very careful selection of reward parameters to develop the agent, whereas more traditional agents that train off supplied data sets are more dependent on the data set provided.

It's always interesting to watch how an AI, an agent with no comprehension of physical reality, approaches these sorts of things. The AI is incredibly aggressive in some of the steering angles and lines it takes in a way that is not immediately intuitive to a human but is clearly faster. On one hand, it's probably not realistic in an absolute sense - it's more like Sophy exploiting flaws in the physics simulation. On the other hand, it's impressive to see a rule set being pushed to its absolute limit.

The time trial laps feel a bit like TAS speedruns - impressive but if you spend enough time iterating and fine tuning with near perfect input precision then you can get some pretty startling results. The racing seems more interesting, particularly with how they're thinking about the reward parameters for courteous driving. That will potentially also have applications to penalty systems among real players.

It's great to see evidence of a really good AI associated with Gran Turismo, but I have reservations about how immediately applicable to the game it will be. I think Kaz is excited by the prospect, but I wouldn't assume that he has a good grasp on how computationally complex it would be to just dump something like this into a PS5. The PS5 is a great machine for a gaming console, but it's orders of magnitude weaker than the hardware they're currently using for this work. There is no guarantee that even a single cut down Sophy agent will run on a PS5 at the same time as the rest of the game.

This feels like something that is more applicable to the next generation of GT games than this one, but I hope the work continues so that when the time comes to implement it then it's as good as it can be.
And being only 10 Hz should also mean it's easier to implement on console hardware.
It should do. It's currently running on completely separate hardware though:

[Attachment: screenshot of the compute allocation: roughly a V100 GPU, 8 vCPUs and 55 GB of memory for the trainer, plus 2 vCPUs and 3.3 GB for each rollout worker]


A V100 is a significant piece of hardware, and 8 CPUs and 55GB memory is substantial for the training hardware. In an environment where the agent is no longer being trained presumably that bit could be omitted and the rollout worker's compute node would handle "driving" the AI, but two CPUs and 3.3GB of RAM is still a fair bit compared to what modern consoles have available.

However, I'd assume that they've made next to no effort to optimise this. It's far easier in research to just throw extra hardware at the problem rather than waste time trying to optimise the agent. It's entirely possible that significant improvements can be made, but even so they'd have to be massive in order to get say, 10+ AI into a race. Still, even if they can't do it for PS5 that sort of hardware requirement is something that is potentially reasonable for hardware within the next 5-10 years (PS5Pro or PS6 maybe).
Perhaps one of the most relevant questions is whether or not a bone-stock PS4 can run this?
Based on the above, it seems there's no way a PS4 could run a game like GTS and this agent, let alone multiple copies of it. PS5, maybe, if they optimise, simplify, and run a limited number of agents, but it's hard to say.
@Tidgney said in his ACC stream the other day that the developers of Gran Turismo do in fact play other titles, PC and console. Not earth-shaking news by any means, since they'd be stupid not to. But it was interesting nonetheless that he confirmed it, and later commented that their goal is to take the best from all titles and platforms and Gran Turismo-fy it.
If I recall, Kazunori has said in the past that they don't play or look at other racing games to compare with or inspire Gran Turismo. That's the only reason it's news: it's assumed that the developers of every other racing game are at least familiar with the competition. It was a bit of a kerfuffle when it came up way back when, because as you say they'd be stupid not to.
 
Is the AI going to make mistakes like a real person? Is it going to feel pressure when being followed closely? Is it going to slow and quicken its pace based on its gap? Or is it just going to run perfect fast laps over and over again and be smart enough to move out of the way and get around a real human? Because without failure, it doesn't sound human at all; it sounds like Skynet.
 
The information gathered from this experiment must be used to improve the dismal AI that we saw in GT Sport. If this is what PD and Sony are capable of then there's really no excuse for poor AI anymore.
 
There won't be any excuses. It'll just be "we can do better", or "it's not perfect", or "it's not where we want it to be", or "we're making improvements", I guess.
 
I was thinking of this earlier, having an AI which can consistently do the perfect lap could be a game changer for BOP. In theory these simulated laps can be run to create the perfect BOP to the thousandth of a second, it can even be made circuit specific if the simulations can be done quickly enough.
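As a rough sketch of how that could work (the function name, the linear lap-time-to-power relation, and the sensitivity constant are all invented purely for illustration, nothing here comes from PD):

```python
# Hypothetical BOP sketch: given each car's AI-driven "perfect" lap time,
# scale engine power so everyone converges on the slowest car's time.
# Assumes lap time shrinks roughly in proportion to a power increase,
# which is a gross simplification of real BOP tuning.
def bop_power_multipliers(perfect_laps, sensitivity=0.5):
    """perfect_laps: dict of car name -> best possible lap in seconds.
    Returns car name -> power multiplier (< 1.0 means cut power)."""
    target = max(perfect_laps.values())  # slowest perfect lap sets the bar
    return {car: round(1.0 - sensitivity * (target - t) / target, 4)
            for car, t in perfect_laps.items()}
```

Run it per track and you get circuit-specific BOP, exactly as suggested, provided the AI's perfect laps actually resemble what humans can achieve.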
 
The time trial laps feel a bit like TAS speedruns - impressive but if you spend enough time iterating and fine tuning with near perfect input precision then you can get some pretty startling results. The racing seems more interesting, particularly with how they're thinking about the reward parameters for courteous driving. That will potentially also have applications to penalty systems among real players.
You know, that's probably the best way of putting it. This really does feel like the racing game AI equivalent of a TAS run - as an exercise in what can be done if you let machines do their thing, it's quite impressive. But as anything other than that, especially when it comes to practical use cases? That's when things start to break down a bit.
 
I was thinking of this earlier, having an AI which can consistently do the perfect lap could be a game changer for BOP. In theory these simulated laps can be run to create the perfect BOP to the thousandth of a second, it can even be made circuit specific if the simulations can be done quickly enough.
Yeah, but it will only work (be perfect) if you can recreate the different nuances from car to car. I mean, I’d be fast as hell in the GR.3 458 if I didn’t have to tiptoe around every corner avoiding lift-off oversteer.
 
The information gathered from this experiment must be used to improve the dismal AI that we saw in GT Sport. If this is what PD and Sony are capable of then there's really no excuse for poor AI anymore.
Hm. I'm not sure it quite works like that. This sort of approach isn't necessarily amenable to running directly on consumer hardware, at least not in the short term.

There was no excuse for poor AI already, but I don't think this changes anything for GT7. It's cutting edge research and a showpiece that is clearly getting them a lot of good publicity. But it doesn't mean we should start expecting this AI in real games any more than we should start expecting the AoE4 AI to play like AlphaStar.

These things are a demonstration of what is possible, not a demonstration of what is practical. We put a man on the moon in 1969, but that doesn't mean that there's consumer level moon travel available. At best it's an example of what can be achieved when money and resources are (essentially) no object, and so it remains to be seen how effective it can still be when money and resources are taken into account.

Technology Readiness Level is a way of describing the progress of projects like this that potentially lead to real products. What we have here is not an actual system, nor is it a prototype. It's probably not even component verification, as using big stacks of server grade hardware isn't going to be a practical application of the technology. It's probably about TRL3: an experimental proof of concept. It demonstrates that this theoretical idea also works in practice, without getting bogged down in the technical details of the implementation.

TRL3 is years from any real implementation of a technology. The last project I worked on took 5 years to go from TRL3 to full production, and it was significantly less complex and groundbreaking than this. Even taking into account that this is not NASA/military and so doesn't necessarily have the same administrative overheads/general foot dragging, and that software developers in general quite like the "move fast and break things" model of development, this is nowhere near practical use for a consumer.

This is not to say that it's not amazing technology because it is. But people should very much keep their expectations in check for what we will see of this in the near future. We might get a time trial against Sophy ghosts. If we're very lucky we might get a race against one or a very small number of Sophys, and they will probably have to run the Sophy AI in the cloud and treat it as a multiplayer opponent. That's not going to scale well to millions of players - a true implementation will need to run natively on the console. Based on current information, they're a long way from running a single Sophy on a PS5 let alone multiples.
 
I will have the chance to speak with Kazunori Yamauchi and the Sony AI team about Gran Turismo Sophy in the coming days — let me know if you guys have any specific questions about Sophy that you'd like answered. 👍

Does each specific Sophy AI agent develop its own unique driving style through the training they undergo? We saw in the Le Mans race a couple of Sophy drivers that seemed more aggressive, but I wasn't sure if that was just due to the specific circumstances each faced in the race.
 
Question to the nerds.

How much more complicated is this new AI system compared to other systems that ACC and the like use?

Where I’m going with this… is what if PD supplemented an AI system like other games use, with Sophy. You could have a grid of 20. 14 would be standard AI, one would be the player, 5 cars could be Sophys. Would it be less taxing on the hardware?
 
I was thinking of this earlier, having an AI which can consistently do the perfect lap could be a game changer for BOP. In theory these simulated laps can be run to create the perfect BOP to the thousandth of a second, it can even be made circuit specific if the simulations can be done quickly enough.
Balance of Perfomansophy. Seems logical. :)
 
This is very fascinating research.

Interesting that the AI has developed its skills from scratch. It seems like the secret sauce here has been the very careful selection of reward parameters to develop the agent, whereas more traditional agents that train off supplied data sets are more dependent on the data set provided.

It's always interesting to watch how an AI, an agent with no comprehension of physical reality, approaches these sorts of things. The AI is incredibly aggressive in some of the steering angles and lines it takes in a way that is not immediately intuitive to a human but is clearly faster. On one hand, it's probably not realistic in an absolute sense - it's more like Sophy exploiting flaws in the physics simulation. On the other hand, it's impressive to see a rule set being pushed to its absolute limit.

The time trial laps feel a bit like TAS speedruns - impressive but if you spend enough time iterating and fine tuning with near perfect input precision then you can get some pretty startling results. The racing seems more interesting, particularly with how they're thinking about the reward parameters for courteous driving. That will potentially also have applications to penalty systems among real players.

It's great to see evidence of a really good AI associated with Gran Turismo, but I have reservations about how immediately applicable to the game it will be. I think Kaz is excited by the prospect, but I wouldn't assume that he has a good grasp on how computationally complex it would be to just dump something like this into a PS5. The PS5 is a great machine for a gaming console, but it's orders of magnitude weaker than the hardware they're currently using for this work. There is no guarantee that even a single cut down Sophy agent will run on a PS5 at the same time as the rest of the game.

This feels like something that is more applicable to the next generation of GT games than this one, but I hope the work continues so that when the time comes to implement it then it's as good as it can be.

It should do. It's currently running on completely separate hardware though:

[Attachment: compute spec table]

A V100 is a significant piece of hardware, and 8 CPUs and 55GB memory is substantial for the training hardware. In an environment where the agent is no longer being trained presumably that bit could be omitted and the rollout worker's compute node would handle "driving" the AI, but two CPUs and 3.3GB of RAM is still a fair bit compared to what modern consoles have available.

However, I'd assume that they've made next to no effort to optimise this. It's far easier in research to just throw extra hardware at the problem rather than waste time trying to optimise the agent. It's entirely possible that significant improvements can be made, but even so they'd have to be massive in order to get say, 10+ AI into a race. Still, even if they can't do it for PS5 that sort of hardware requirement is something that is potentially reasonable for hardware within the next 5-10 years (PS5Pro or PS6 maybe).

Based on the above, it seems there's no way a PS4 could run a game like GTS and this agent, let alone multiple copies of it. PS5, maybe, if they optimise, simplify, and run a limited number of agents, but it's hard to say.

If I recall, Kazunori has said in the past that they don't play or look at other racing games to compare with or inspire Gran Turismo. That's the only reason it's news: it's assumed that the developers of every other racing game are at least familiar with the competition. It was a bit of a kerfuffle when it came up way back when, because as you say they'd be stupid not to.
I would never expect Sophy to run locally on any PlayStation hardware. I feel like this is purely seen by Polyphony as a cloud-based solution, and with GT7 requiring a network connection for most things that seems within the realm of possibility to include, especially as the agents only need new race data at 10 Hz.
 
Do we want a better AI? Yes.
Do we want the AI to be unbeatable, even if we pull off some Igor Fraga skills? No.
Do we want to be friends with an AI named Sophy, especially when it's unbeatable and get the feeling it helps society? Wtf...

Come on PD. :lol::lol::lol:
 
I will have the chance to speak with Kazunori Yamauchi and the Sony AI team about Gran Turismo Sophy in the coming days — let me know if you guys have any specific questions about Sophy that you'd like answered.
Do AI drivers make mistakes that cause them to go off track or cause them to have collisions with other AI drivers.

Not referring to a typical crowded turn one but other places during a race
 
First of all, hats off to Sony & PD for this collab. Even though we've had the 2 previous studies showing AI driving faster than humans in a time trial setting, this is much larger in scope. I didn't expect to see AI machine learning outracing the best GT Sport drivers so soon. If they can implement this in the game in a realistic time frame, it would probably be GT's biggest contribution to the racing game genre since the first game brought simulation to the masses.

Now, a couple of my rambling thoughts (some might already have been covered in the thread, I didn't have time to read every post in detail):

- In terms of pace, the AI is stupendously fast. But we already know it's capable of doing this from the previous study at Tokyo Expressway. The AI can be perfect in its input all the time, and it can react literally frame-by-frame with the physics engine. That's why it can pull off the 4 wheel drift kerb entry at DTS' hairpin. A human will never be able to do this with any sort of consistency, because our reaction times and input devices are limited. In a twisted way, I find it very satisfying to see the top GT players feel the desperation us plebs feel when watching their replays. I've always said that the difference between a good driver and an alien is just accuracy. Well, the difference between AI and an alien is just superhuman accuracy ;)

- Taking the above then, an easy way to make the AI more "human" would be to limit its input frequency (e.g. if it is now running at 60 Hz, possibly tone it down to 5 Hz - the fastest human reaction time is around 0.2 secs, I believe). So a difficulty slider in-game would basically just be a frequency slider. Then they can add extra parameters like mistake probability, overtaking/defence aggressiveness, etc.

- Moving on from that, the weird way the AI drives shows that GT physics still has many unrealistic loopholes. For example the DTS kerb drift and using the grass on Maggiore last corner entry and Sarthe Dunlop curves. So in a way, having the AI push the limits will also help PD plug those loopholes in the future (hopefully). Back in GT5/6, drifting the corner entry used to be the fastest way to go round the track (just look up the GT Academy replays from 2012-2014). I thought this was finally gone, but it seems it's still there, just more difficult for humans to do.

- Same with abusing track limits (Sarthe pit entry lol). It's comical that PD thought training the AI on the track with the worst track limit exploits would make for good advertisement. Fix your penalty boundaries PD!!!

- All 3 races are done with no tyre/fuel consumption. It would be interesting to see if bringing these 2 factors in would make the AI take more realistic racing lines, because 4 wheel drifting every corner entry would definitely kill the tyres quicker than the conventional line.

- For the racecraft, it is genuinely impressive to see the AI improve from the July race to the October one. However I think they pushed it a step too far in aggressiveness to compensate. The much higher number of penalties in the second round shows that. Because the AI is so much faster, it can pressure the human player into a mistake, and then just crowds/divebombs them at the nearest opportunity. Once it gets ahead, game over. It feels very forced and not very nuanced. It's not a classic one-on-one battle like with a similarly skilled human player, where you can be trading blows for multiple corners and not know who will come out ahead. The battle with Yamanaka at La Sarthe in the first round makes you think they can, but Yamanaka is just hanging back here for the final overtake into the Porsche Curves (common strategy for Sarthe in slipstream dependent cars). It's not the AI's brilliance that kept it in the battle for so long.

- In terms of commercial implementation in the game, this is where the biggest barrier lies IMO. The time (and computational effort) to run the simulations to get the AI to this point would be enormous taking into account all the cars/tuning/tracks/weather/tyre/fuel possibilities in the game. In the article it says it takes 300,000 km (10-12 days) for an AI to get to alien level. Just taking GT Sport, we have 338 cars and 82 raceable layouts. That gives 27,716 combinations. Multiply by 10 days = 759.3 YEARS of simulations :eek: I don't know about you, but I think none of us would be alive by then :lol: Of course you can speed up the process by having more computational power, but how much money are Sony/PD willing to spend to get this done? If it takes 1000 virtual PS4 machines to run one combo, to get everything done in 5 years you'd need roughly 152,000 virtual PS4s. And would you accept if shifting their budget means we have less car/track licenses for example? And whether that's good use of computing power and energy instead of solving other world problems/scientific research?
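The combinatorics estimate above can be sanity-checked in a few lines. The 10 days/combo figure is from the article; the assumption that there is zero transfer learning between combos (so every one trains from scratch) is probably pessimistic:

```python
# Back-of-envelope check of the numbers in the post above.
cars, layouts = 338, 82
days_per_combo = 10                    # from the article: ~10-12 days per combo
combos = cars * layouts                # 27,716 car/layout combinations
sequential_years = combos * days_per_combo / 365   # ~759 years end to end

# The parallelism estimate: 1,000 virtual PS4s per combo,
# everything finished within 5 years.
ps4s_per_combo = 1000
parallel_combos = combos * days_per_combo / (5 * 365)
total_ps4s = parallel_combos * ps4s_per_combo      # ~152,000 virtual PS4s
```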


Future thoughts/possibilities:

- Could the AI be used for tuning cars? No need for us humans to test manually; just let the AI run every permutation of every suspension/LSD/downforce/gearing setting to find the perfect tune. But as we've seen, the AI doesn't drive like humans, so it could result in a very fast, but also very unstable car.

- B-spec! Let Big Brain Bob complete the races that are too difficult for you (and also grind money while you're AFK :lol:). But if all the AI are super, then Bob is just back to being its average self again...

- Strategy. Similar to tuning, no need to run your own practice race to figure out the best strategy for FIA. Just let the AI pound round and figure out the best tyre/fuel management.

- More of a curiosity, I'd love to see the AI take the Red Bull X1 and Tomahawk X around Nordschleife to see what a perfect lap would look like. Human record is just under 3 minutes currently (in fact there's a TT in GTS right now).

- I'd love to see Sony/PD team enter Roborace. Look it up if you don't already know ;)

- And lastly, I hope this doesn't turn into Skynet/Ultron :lol:
 
Disappointing that this isn't available at launch. It looks promising if it can be adjusted and used for the career mode races. I especially hope GT7's AI isn't as slow as GT Sport's; it makes the whole career mode super boring. Luckily the online racing was brilliant, which is why GT Sport is my favorite GT game.
 
The battle between Sophy Violette and T. Miyazono in the final corner was really entertaining and telling of the system they are developing - very promising. Super clean by Violette through there!


This 100% --- Crazy Ol' Maurice, they called him! Do not speculate positive things about Gran Turismo, fools!



They published their first study in August 2020. You're correct. These years under covid have dragged brutally. It felt like longer.
Why is it ludicrous? I posted it mostly so that people like the person I originally quoted can see the backstory to the Sony AI PD collaboration if they are unaware of it.
Whaaat!!?? You admitted you were wrong, and were able to find why you were wrong and showed everyone --- HOW DARE YOU!!! Who do YOU think you are, hmmmm?
 
From what we've seen so far, the AI was shown off using three specific car/track combos (AMG/Dragon Trail, Porsche/Maggiore, RedBull/Le Mans).

If they were to implement the model shown in the video straight to GT7, how capable would the AI be if they were made to use different cars/race on different tracks? Loved seeing the AI in action, but kinda having a hard time imagining how it would handle in different environments.
 
Whaaat!!?? You admitted you were wrong, and were able to find why you were wrong and showed everyone --- HOW DARE YOU!!! Who do YOU think you are, hmmmm?

You do realize he's talking to one of the admins and one of the highest-ranking members of staff that isn't Jordan, right?
 
Yea, especially since I’ve never seen “Sophie” spelled “Sophy” in my friggin’ life
Sony and Polyphony. That's the name. It's also quite cute and charming. Better than Siri, Cortana, Alexa and Bixby anyway.
 
Taking the above then, an easy way to make the AI more "human" would be to limit their input frequency (e.g. if they are now running at 60 Hz, possibly tone it down to 5 Hz
The agent was running at 10 Hz; see comment #391 above yours.

The time (and computational effort) to run the simulations to get the AI to this point would be enormous taking into account all the cars/tuning/tracks/weather/tyre/fuel possibilities in the game
I don't think Sony needs to train the agent on every track for every car and condition. The situation seems similar to DLSS: as I understand it, DLSS was originally trained on some games and can be added to any game just by dropping some DLLs into the game folder and enabling them.

---------

GT's career design is not for Sophy - races are only about 3 laps and you start in last position, so you have only 5-8 minutes to push forward to 1st. Sophy would have to get on the throttle more slowly out of turns and brake far earlier before them; otherwise your laps could never be 5 seconds faster than Sophy's.
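One cheap way to dial an agent like this down for short career races, echoing the decision-frequency discussion earlier in the thread, would be a wrapper that only lets the policy pick a new action every N physics frames and holds the previous action in between. Everything here (the class, the mistake knob, the action format) is a hypothetical sketch, not anything from the paper:

```python
import random

class ThrottledAgent:
    """Hypothetical difficulty wrapper: query the underlying policy only
    every `period` physics frames, holding the previous action in between.
    At a 60 Hz physics rate, period=6 gives the 10 Hz decision rate
    mentioned in this thread; larger periods mean a slower 'driver'."""

    def __init__(self, policy, period, mistake_prob=0.0, seed=0):
        self.policy = policy              # any callable: state -> (steer, throttle)
        self.period = period
        self.mistake_prob = mistake_prob  # chance of fumbling a fresh action
        self.rng = random.Random(seed)
        self._frame = 0
        self._held = None

    def act(self, state):
        if self._held is None or self._frame % self.period == 0:
            self._held = self.policy(state)
            if self.rng.random() < self.mistake_prob:
                steer, throttle = self._held
                # crude 'mistake': jitter the steering input
                self._held = (steer + self.rng.uniform(-0.2, 0.2), throttle)
        self._frame += 1
        return self._held
```

A difficulty slider would then just map to `period` and `mistake_prob`, without retraining anything.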
 
Hm. I'm not sure it quite works like that. This sort of approach isn't necessarily amenable to running directly on consumer hardware, at least not in the short term.

There was no excuse for poor AI already, but I don't think this changes anything for GT7. It's cutting edge research and a showpiece that is clearly getting them a lot of good publicity. But it doesn't mean we should start expecting this AI in real games any more than we should start expecting the AoE4 AI to play like AlphaStar.

These things are a demonstration of what is possible, not a demonstration of what is practical. We put a man on the moon in 1969, but that doesn't mean that there's consumer level moon travel available. At best it's an example of what can be achieved when money and resources are (essentially) no object, and so it remains to be seen how effective it can still be when money and resources are taken into account.

Technology Readiness Level is a way of describing the progress of projects like this that potentially lead to real products. What we have here is not an actual system, nor is it a prototype. It's probably not even component verification, as using big stacks of server grade hardware isn't going to be a practical application of the technology. It's probably about TRL3: an experimental proof of concept. It demonstrates that this theoretical idea also works in practice, without getting bogged down in the technical details of the implementation.

TRL3 is years from any real implementation of a technology. The last project I worked on took 5 years to go from TRL3 to full production, and it was significantly less complex and groundbreaking than this. Even taking into account that this is not NASA/military and so doesn't necessarily have the same administrative overheads/general foot dragging, and that software developers in general quite like the "move fast and break things" model of development, this is nowhere near practical use for a consumer.

This is not to say that it's not amazing technology because it is. But people should very much keep their expectations in check for what we will see of this in the near future. We might get a time trial against Sophy ghosts. If we're very lucky we might get a race against one or a very small number of Sophys, and they will probably have to run the Sophy AI in the cloud and treat it as a multiplayer opponent. That's not going to scale well to millions of players - a true implementation will need to run natively on the console. Based on current information, they're a long way from running a single Sophy on a PS5 let alone multiples.
I'm not suggesting a direct carryover of technology onto the PS5 because it would be impractical. However I'd be extremely disappointed if the AI doesn't improve.
 
Question to the nerds.

How much more complicated is this new AI system compared to other systems that ACC and the like use?

Where I’m going with this… is what if PD supplemented an AI system like other games use, with Sophy. You could have a grid of 20. 14 would be standard AI, one would be the player, 5 cars could be Sophys. Would it be less taxing on the hardware?
If you could provide me some reference material for ACC's in-game AI, maybe I could say something, but at this point I know nothing except that it is likely similar to many other games' AI: it adapts to various inputs using things like decision trees, but most of it is pre-baked. It might have some code stating what the ideal line is, some code to make it change its behaviour if it's about to lose traction, and other code that governs its behaviour for overtakes. For example: if the upcoming straight is long enough, try to pass with a slingshot if in range. I may be very wrong here though.

So, as I have not seen anyone else tackle this, I will take a crack at it, armed with my very, very limited knowledge of ML but a graduate's level of understanding of computational methods and optimization algorithms.


People here have a shaky grasp of what is happening with ML, specifically the reinforcement learning used here, and of what words like "AI" or "model" even mean in ML literature. I have not worked with ML or deep learning, BUT I have worked with evolutionary algorithms (CMA-ES, https://en.wikipedia.org/wiki/CMA-ES), which have also previously been used for similar optimizations, as the abstract of the paper states. Fortunately, there are some similarities in these approaches even though the implementations differ. Let me try to explain a bit here:

Evolutionary algorithm: Usually you are looking to optimize some cost function (let's assume here it's lap time, average speed, or some function that incorporates several of these desirable factors). As the name suggests, this is done through "evolution" and "selection" of candidates that best optimize the cost function. You start with some random guess at a good candidate, then the algorithm essentially "remixes" the initial generation to make a new generation according to some criteria; these candidates are evaluated, and the best ones are crossed together (always with some pre-determined randomness) to make the next generation, and so on.

The CMA-ES algorithm can also "move in the right direction", so to speak, meaning the crossing is done such that the new generation lands closer to where the algorithm thinks the true optimum lies. Picture a 3-D graph that looks like a mountain: the algorithm tries to drive the cost function to the peak by looking at which direction slopes upward, rather than randomly wandering in all directions looking for the peak (a very basic strategy, and not a very robust one).

This is all done with numerical weights on the functions that generate each new generation and on the evaluation of each generation. These weights are quite similar to the "rewards" in deep reinforcement learning: some weights raise the evaluation for good behaviours, like incurring no penalty, while bad ones, like contact, lower it. You can tune these weights to make the algorithm faster or behave differently, or even have the weights tune themselves automatically by looking at the slope of the mountain I mentioned earlier, which is what CMA does.
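The loop described above can be reduced to a toy one-dimensional sketch (my own illustration, maximising f(x) = -(x - 3)^2, whose peak is at x = 3; a real CMA-ES also adapts the sampling covariance, while this keeps sigma fixed to show just the remix/select/recenter cycle):

```python
import random

# Toy (mu, lambda)-style evolutionary loop: sample a generation around
# the current guess, keep the elite, recenter the guess on them.
def evolve(f, x0=0.0, sigma=0.5, pop=20, elite=5, gens=60, seed=0):
    rng = random.Random(seed)
    mean = x0
    for _ in range(gens):
        # "remix": sample a new generation around the current best guess
        generation = [mean + rng.gauss(0, sigma) for _ in range(pop)]
        # "selection": evaluate, keep the elite, recenter on them
        best = sorted(generation, key=f, reverse=True)[:elite]
        mean = sum(best) / elite
    return mean
```

With the toy cost function above, the returned mean settles near the true optimum at 3.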

Deep reinforcement learning: Similar, but here, instead of using evolution by mixing candidates, the current agent's parameters are tuned using data that it can "learn" from as we feed it to the algorithm. Sophy is fed data about the track and its opponents, and we keep rewarding it positively for good things like course progress and negatively for behaviours we want to discourage, like contact or going off-track. These rewards are the ones cited in Sony's paper, along with others. Sophy can basically run the track many times to try to find the optimum with the help of the algorithm.
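To make the reward idea concrete, shaping is just a weighted sum over per-timestep events. The component names and weights below are illustrative placeholders, not the actual values from the paper:

```python
# Illustrative reward shaping, NOT the paper's actual weights. The agent
# only ever sees the scalar sum, so "courtesy" has to be encoded as
# negative weight on contact and off-course events.
REWARD_WEIGHTS = {
    "course_progress":  1.0,   # metres gained along the track this tick
    "off_course":      -0.5,   # 1.0 if any wheel left the track
    "wall_contact":    -1.0,
    "car_contact":     -2.0,   # punting an opponent costs more than it gains
    "overtake":         0.5,   # small bonus for a completed pass
}

def shaped_reward(events):
    """events: dict of component name -> measured value this timestep."""
    return sum(REWARD_WEIGHTS[name] * value
               for name, value in events.items() if name in REWARD_WEIGHTS)
```

Tuning the balance between the progress weight and the contact penalties is exactly the "aggression vs. courtesy" trade-off discussed earlier in the thread.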

In fact, the weights used for Sophy are listed in Table 1 and are as follows:
[Attachment: Table 1 from the paper, listing Sophy's reward weights]


With all this said, you might ask what exactly is the "thing" that gets saved, or what you get as the optimum solution. Well, a matrix, really. In Sophy's case they use a modified QR (quantile regression) Q-function (disclaimer: I am not very familiar with this algorithm or function myself): https://en.wikipedia.org/wiki/Q-learning

I will point out that the rub here is that they have modified this to accept continuous actions as inputs, instead of discrete numbers defining a state at each step.
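For reference, here is the plain discrete-action version of the Q-learning update from that link. Sophy's modification swaps the table for a neural network over continuous actions, but each update has the same shape: nudge Q(s, a) toward the reward plus the discounted value of the best next action. A minimal sketch:

```python
# Plain tabular Q-learning backup (discrete states/actions only).
def q_update(Q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.99):
    """Q: dict keyed by (state, action) -> value. Mutates and returns Q.
    alpha is the learning rate, gamma the discount factor."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return Q
```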

Still reading? Well, the answer to your question is basically here. This Q-function is what a trained agent will have, and it will obviously need some work to be implemented as AI in the game, as it will have to be consistent with whatever the game's current AI model is.

So, while I cannot say for certain (who can, in any scientific matter?), once the model has been trained, running it on local hardware as pre-baked AI should not be much more taxing than running the current AI cars.
 
I will have the chance to speak with Kazunori Yamauchi and the Sony AI team about Gran Turismo Sophy in the coming days — let me know if you guys have any specific questions about Sophy that you'd like answered. 👍
1. It was said that it takes one day to learn the track and another day or two to get into the top 5%. Does this process need to be repeated for every car with every setup specifically, or will the process be faster once the AI has general knowledge of the specific car or circuit?
2. More or less the same question for the racing etiquette. Does the etiquette need to be learned anew for every combination of car, track, opponents, track conditions, etc., or does it only need fine-tuning once learned in principle?
3. At which point of time does the AI begin to take superhuman lines on the grass and past kerbs that not even the best GT Sport players would dare to take? Does it do that on day one already or when trying to reach the limit of the car?
4. Did you also test tyre wear and fuel consumption and how it affects the AI? Are humans using less fuel and tyres (because the way the AI drives in this Schumacher-esque way I would assume it is using more fuel and tyres)? And can the AI be trained to conserve tyres and fuel?
5. How does the AI fare in changeable conditions? Will it adapt quickly enough to not push past the limit once grip levels reduce? Or would it make mistakes more often? Generally, is the AI mistake-prone and does it tend to make mistakes like a human does, or is it acting more supernatural, with almighty knowledge of every factor imaginable?
 
I will have the chance to speak with Kazunori Yamauchi and the Sony AI team about Gran Turismo Sophy in the coming days — let me know if you guys have any specific questions about Sophy that you'd like answered. 👍
Will this be implemented for b-spec as well? (If B-spec even comes to GT7)
 