GrindGPT: AI for Sardegna 800WTC

  • Thread starter NismoGT
  • 14 comments
  • 1,875 views
So, I've been doing Sagrinda™ for quite a while now, trying to collect all the cars. Since I've golded the whole game, it's pretty much the only major goal left for me.
After going through this event with different cars and setups, I settled on the Nissan R92CP with Turbodatsun's tune for the quickest time. And after several hundred races there it started to feel a little mechanical, you know.

Then this idea started to crawl into my mind: what if I could train a neural network to do this race for me? That would be a fun and impressive thing to do, wouldn't it?
Sure, training a full-blown racing AI isn't feasible unless you have a whole corporation backing you up with its resources and R&D for years. But I don't need a general-purpose racing agent, I just need to teach it to drive one particular track with one particular car, a fixed tune and a fixed race strategy.

Unfortunately, I'm not an ML scientist and have zero expertise in building an AI. I just have a general understanding of what is needed to build and train one. And some free time. :dopey:
Since we have a giant community of racing enthusiasts here, I would love to run a sanity check before wasting a lot of time on something that is impossible. :lol:

Here are the main aspects of training such an AI and running it afterwards.


Training data
Since I'm already doing this race dozens of times daily, it's possible to accumulate a dataset of 100-200 attempts in a month or so. (A rough sketch of the recorder I have in mind follows the list below.)
  1. Video recording of the race.
    Fixed view, no looking backwards / to the sides. No need for audio. 60 FPS is more than enough.
    Also it should be safe to downscale the frames from 4K (3840×2160) to 720p (1280×720) to lower the size of the model (and its inputs) and improve the calculation/training speed.
  2. Recording of the telemetry.
    Since PD is generous enough to provide us with precise telemetry, it can be used for training, so there's no need to extract those values with OCR from the video output.
    The values I see as particularly useful: Speed, Engine RPM, Fuel Level, Current Lap, Best/Last Lap Time, some of the flags (Rev Limiter, Handbrake), Current Gear, Brake, Throttle.
    Maybe some others too, but I do think the fewer inputs, the simpler the training.
  3. Recording of the controller inputs.
    Since there's no steering data in the telemetry, and I need the AI to be able to navigate the Pit Stop menu (plus the post-race menus) and adjust the Fuel Map during the race, I have to record all the controller inputs as well.
    These inputs will also be used as the reference in the training data (the AI will need to learn to produce similar inputs as a result of its work).
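
To make the synchronisation part less abstract, here's the rough shape of the logger I have in mind. Purely a sketch: it assumes the telemetry arrives as UDP packets I can already decode elsewhere, and that the controller is somehow visible to the PC (pass-through adapter or similar); video would be recorded separately (OBS or a capture card) and aligned later via the timestamps written here.

```python
import csv, socket, time
import pygame  # reads the pad, assuming the PC can actually see it

TELEMETRY_PORT = 33740  # placeholder: whatever port your telemetry listener uses

def record_session(out_path="session.csv"):
    pygame.init()
    pygame.joystick.init()
    pad = pygame.joystick.Joystick(0)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", TELEMETRY_PORT))
    sock.setblocking(False)

    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t", "steer", "throttle", "brake", "buttons", "telemetry_hex"])
        while True:
            t = time.monotonic()
            pygame.event.pump()
            steer = pad.get_axis(0)      # axis numbers differ per pad / OS
            throttle = pad.get_axis(5)
            brake = pad.get_axis(4)
            buttons = [pad.get_button(i) for i in range(pad.get_numbuttons())]
            try:
                packet, _ = sock.recvfrom(4096)  # store raw, decode offline
            except BlockingIOError:
                packet = b""
            writer.writerow([t, steer, throttle, brake, buttons, packet.hex()])
            time.sleep(1 / 60)           # one row per video frame at 60 FPS
```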

Learning process
The main trick is to teach the model to do the race not just on the clear laps (with no traffic), but also to overtake the field, change the Fuel Map after the Pit Stop, and do the Pit Stop itself.
The main goal of the model is to finish the race in the minimum time possible while keeping the Clean Race bonus.
I'll have to manually mark all 15 laps in the training data for every attempt - this way the AI should pick up the repetitive nature of the task, plus it will be possible to reinforce it for a better time on a particular lap.
The laps themselves are different within one race:
  • Lots of overtaking on laps 1-2, 6-9 and 13-15, (almost) no traffic on other laps;
  • Fuel Map 2 with shortshifting on laps 1-8, Fuel Map 1 with no shortshifting on laps 9-15;
  • Mandatory Pit Stop at the end of lap 8.
But Lap 1 of one attempt is the same as Lap 1 of another attempt. So it makes sense to compare the times of the same lap across different attempts, not different laps within a single attempt.
Plus comparing the overall race time, of course.
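
Just to illustrate what I mean by comparing the same laps across attempts (column names invented for the example):

```python
import pandas as pd

# toy lap-time table: one row per (attempt, lap)
laps = pd.DataFrame({
    "attempt": [1, 1, 1, 2, 2, 2],
    "lap":     [1, 2, 3, 1, 2, 3],
    "time_s":  [101.2, 98.7, 99.1, 100.8, 98.9, 98.6],
})

per_lap_best = laps.groupby("lap")["time_s"].min()           # best Lap 1, Lap 2, ... across attempts
per_attempt_total = laps.groupby("attempt")["time_s"].sum()  # overall race time per attempt
delta_to_best = laps.set_index(["attempt", "lap"])["time_s"].sub(per_lap_best, level="lap")
print(per_lap_best, per_attempt_total, delta_to_best, sep="\n\n")
```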

Pit Stop / menu navigation doesn't have any variation, so that part should be learned pretty quickly.

Having a dataset with desired outputs (recorded from my own plays) should also speed up the learning process significantly. Although there has to be a way to mark some inputs as "mandatory" (Pit Stop / menu navigation), "desired" (faster laps) or "undesired" (crashes, going off track, ignoring a Yellow Flag and so on).
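
I have no idea yet what the "right" mechanism for that is. The naive option I can picture is plain behaviour cloning with per-sample weights, where "undesired" segments are simply dropped from the loss and "mandatory" ones weighted up - a sketch, nothing more:

```python
import torch
import torch.nn.functional as F

# made-up weights for the labels mentioned above
WEIGHTS = {"mandatory": 3.0, "desired": 1.5, "normal": 1.0, "undesired": 0.0}

def weighted_bc_loss(pred_actions, target_actions, labels):
    """pred_actions / target_actions: (batch, action_dim); labels: list of tag strings."""
    w = torch.tensor([WEIGHTS[l] for l in labels],
                     dtype=pred_actions.dtype, device=pred_actions.device)
    per_sample = F.mse_loss(pred_actions, target_actions, reduction="none").mean(dim=1)
    return (w * per_sample).sum() / w.sum().clamp(min=1e-8)
```

A zero weight just ignores the bad samples; actually pushing the model away from them would need something fancier (more RL-ish), which is exactly the part I'd like to discuss.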


Output data
The trained AI should emulate controller inputs while being fed live video of the game plus live telemetry.
I imagine it working the following way:
Turn on the game, pick the car and the event, put the cursor on the Start button. Connect the video and telemetry to the computer, connect the emulated controller to the PS5. Start the model execution (and optionally the recording of new training data - we still want to improve the model's behaviour, right?), wait through the race, stop the execution (and recording) when the game arrives back at the starting position. Retrain the model with the latest data. Rinse and repeat.
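
Very roughly, the live loop would look something like this - with preprocess, telemetry_source and virtual_pad as stand-ins for pieces I haven't figured out yet (they map directly to the open questions below):

```python
import cv2
import torch

def run_race(model, telemetry_source, virtual_pad, preprocess, capture_index=0):
    cap = cv2.VideoCapture(capture_index)            # capture card exposed as a camera device
    model.eval()
    with torch.no_grad():
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frame = cv2.resize(frame, (1280, 720))   # same 720p downscale as in training
            telem = telemetry_source.latest()        # stand-in: most recent decoded packet
            action = model(*preprocess(frame, telem))  # stand-in: frame + telemetry -> model tensors
            virtual_pad.apply(action)                # stand-in: whatever injects inputs into the PS5
```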


Open questions
Now that you've finally read all of this, here are the open questions that I have in mind:
  1. Is there a way to reliably record video, telemetry and controller inputs at the same time as live playing the game?
    I assume it's possible since many streamers do this daily, but what's the easiest and simplest way to do it? I'm especially concerned with keeping the controller inputs in sync with all the other data.
  2. Is there a way to emulate controller inputs coming from a connected computer?
    I don't need to emulate feedback or trigger effects, I just want the game to accept the generated inputs and not complain about them.
  3. Do any of you have an ML background, so I can discuss the model architecture and the training / fine-tuning process and not waste months of my life blindly probing every possible variation? (A rough starting sketch follows below.)
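To have something concrete to poke holes in, this is the kind of architecture I'd naively start from: a small CNN over a further-downscaled frame, concatenated with the telemetry vector, predicting the controller state for the next tick. Sizes are guesses, not recommendations.

```python
import torch
import torch.nn as nn

class DriverNet(nn.Module):
    def __init__(self, telemetry_dim=10, action_dim=8):
        super().__init__()
        self.vision = nn.Sequential(                  # expects (B, 3, 90, 160) frames
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),    # -> (B, 64)
        )
        self.telemetry = nn.Sequential(nn.Linear(telemetry_dim, 64), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(64 + 64, 128), nn.ReLU(),
            nn.Linear(128, action_dim),               # steer, throttle, brake + a few buttons
        )

    def forward(self, frame, telem):
        z = torch.cat([self.vision(frame), self.telemetry(telem)], dim=1)
        return self.head(z)
```

Whether a single frame is enough (vs a short stack of frames or a recurrent layer) is exactly the kind of thing I'd love input on.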
Disclaimer
My goal here is not to create a cheat to beat the game. Yes, I'm wildly frustrated with the necessity of grinding for days just to buy an overpriced old virtual car that is barely useful for in-game racing. But that's not the point.
The point is to make grinding fun and do something different while I am at it. And maybe, MAYBE, create something beautiful along the way.
Modern problems require modern solutions.
 
There is nothing off-the-shelf that can do this for you. There's a reason Sophy has taken years and millions of dollars to develop and it still can't do strategy. So I sure hope you have some VCs backing you and some top engineering talent in the hiring pipeline.
 
About the inputs: many people already use the Remote Play function with an app that automates some commands - I used one once for a grind. I don't remember the names, but they should be easy to find.
 
Save yourself the trouble and just run the Beat at Daytona with a rubber band around your controller. Hell, you'll probably make money faster that way.
 
While I agree with those saying "just use what we have" for making money...
I also see the other side of "trying something new because I can"...

But unfortunately I can't, so I just wanted to add that I would like to see the final product trying its best, like Yosh's AI for Trackmania :)
 
I understand how you would be able to create a "keylogger" type AI that finishes a certain race by following previously recorded inputs. I don't, however, understand how you would train it to avoid obstacles, overtake cars and generally stay within the track limits. All it takes is a tiny amount of wheelspin or oversteer to make all the pre-recorded inputs totally useless.
 
I understand how you would be able to create a "keylogger" type AI that finishes a certain race by following previously recorded inputs. I don't, however, understand how you would train it to avoid obstacles, overtake cars and generally stay within the track limits. All it takes is a tiny amount of wheelspin or oversteer to make all the pre-recorded inputs totally useless.
He wants to create an AI, not a script that replays the races.
That is the difference between "recorded inputs to replay" and "machine learning that generates inputs based on the situation it detects ahead, using what the software agent has learned".
 
He wants to create an AI, not a script that replays the races.
That is the difference between "recorded inputs to replay" and "machine learning that generates inputs based on the situation it detects ahead, using what the software agent has learned".
I understood that, but the first post really only mentions recording the inputs. These are two very different things: replaying pre-recorded actions vs "thinking" and acting on what the model sees.
I just don't see how this could work, based on what was described in the first post.
The "Learning process" paragraph seems very unclear to me - what makes it go from replaying the recorded inputs to making its own decisions and actions?

Not trying to shoot down the idea.
 
I've seen people on YT create AIs for Trackmania and apparently it takes tens of thousands of laps/tries to get the AI to even learn the track, let alone run it against opponents that adjust their driving according to how you are driving. So best of luck, but I'll stick to autogrinding Daytona if I need credits, which I don't.
 
create AIs for Trackmania
There is a difference between TM and GT that has to be noted:
TM is deterministic: there is no wind that changes each time the player starts an event, so acceleration, braking and cornering are always the same for identical inputs - with wind this changes in all directions.
Also there is no change in physics from tyre wear or from the car getting lighter as fuel burns off.
And there is no opponent AI acting as random obstacles.

It would be easier to create a "The Pass" AI than an actual racing AI, because "The Pass" missions are also deterministic (otherwise they would alternate between easy and impossible in some rare extreme cases). That could probably be used as a first learning tool to watch the AI pick up the basics, before moving it to another track to master.
 
I've seen people on YT create AIs for Trackmania and apparently it takes tens of thousands of laps/tries to get the AI to even learn the track, let alone run it against opponents that adjust their driving according to how you are driving. So best of luck, but I'll stick to autogrinding Daytona if I need credits, which I don't.
Could you please point me in the direction of the currently best autogrinding thingy?
 
Could you please point me in the direction of the currently best autogrinding thingy?


He does not mention the rubber band, but it is the same method, plus you use motion steering: you lean the controller with a bit of tilt to the right, which keeps the car close to the wall, and a rubber band to hold the accelerator for you.

After that it's up to you to decide how many laps you want to do; I think there are diminishing returns in terms of prize money after 22-24 laps.

Rinse and repeat.

The same method applies to the Honda S660 too (having a low-PP settings sheet and a high-PP settings sheet and switching between them after the prize is calculated).
 
There is a difference between TM and GT that has to be noted:
TM is deterministic: there is no wind that changes each time the player starts an event, so acceleration, braking and cornering are always the same for identical inputs - with wind this changes in all directions.
Also there is no change in physics from tyre wear or from the car getting lighter as fuel burns off.
And there is no opponent AI acting as random obstacles.

It would be easier to create a "The Pass" AI than an actual racing AI, because "The Pass" missions are also deterministic (otherwise they would alternate between easy and impossible in some rare extreme cases). That could probably be used as a first learning tool to watch the AI pick up the basics, before moving it to another track to master.
That's the thing. Trackmania is static. There aren't any changes. An AI that not only needs to learn the track but can also adapt to how the game's AI performs, weather changes, wind direction, tyre wear, fuel weight, etc. would be a pretty massive thing to try to create. It's probably why PD's AI has been an ever-evolving thing for decades and many still think it's not that great. Sophy is supposed to be their next big step, but PD hasn't even been confident enough to release it on all of their tracks, and then only for races of 3-5 laps.

All this effort to create an AI for grinding credits, when you can just rubber-band a controller and go around Daytona with no effort, seems like a very long walk for a small drink of water.
 