I'm not sure it's nothing special. Remember, Google made quite a halo out of their AI AlphaZero beating non-AI chess software, and IMO a racing game is more complex than chess.
Have you heard of OpenAI? Prior to GPT, DALL-E and GitHub Copilot, OpenAI built a few iterations of a Dota 2 bot that learns in roughly the same way Sophy does for GT7.
In a reduced-feature-set version of Dota 2, this bot could defeat professional players.
This bot was first demonstrated live in 2017.
(They later greatly expanded the feature set of the game, which reduced its effectiveness, but it still held a 99.4% win rate over 42,729 games.)
You're probably right about a racing game being more complex than chess.
But ... Sophy really doesn't "play a racing game".
It is fed a significantly reduced set of data, some of which would be considered cheating if a player had access to it.
Sophy also only plays "60-turn games": it only plans for the next 6 seconds, acting at 10 Hz, which works out to 60 decisions.
It does not see the track, but instead how much track is to the left & right, lap distance & incline.
It does not see the movement of the car, but instead tyre load, tyre slip angle, angular velocities, car velocity & acceleration.
This comes to around 100 data inputs, and there are only 3 data outputs (throttle, brake, steer; it uses AT gears).
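For a sense of scale, here's a rough sketch of what that input/output contract amounts to. This is purely illustrative: the field names, groupings and counts are my guesses from the description above, not Sony's actual interface.

```python
from dataclasses import dataclass
from typing import Tuple

# Illustrative sketch only -- NOT Sony's actual API.
# Sizes are rough: the point is ~100 scalar inputs in, 3 scalar outputs out,
# with one decision made every 0.1 s (10 Hz).

@dataclass
class Observation:
    track_left: float                          # how much track is to the left of the car
    track_right: float                         # how much track is to the right
    lap_distance: float                        # distance along the lap, not a map position
    incline: float                             # local track gradient
    velocity: Tuple[float, float, float]       # car velocity
    acceleration: Tuple[float, float, float]   # car acceleration
    angular_velocity: Tuple[float, float, float]  # yaw / pitch / roll rates
    tyre_load: Tuple[float, float, float, float]      # per-tyre load
    tyre_slip_angle: Tuple[float, float, float, float]  # per-tyre slip angle
    # ...plus lookahead samples of the upcoming course, bringing the total
    # to roughly 100 numbers.

@dataclass
class Action:
    throttle: float  # 0.0 .. 1.0
    brake: float     # 0.0 .. 1.0
    steer: float     # -1.0 .. 1.0 (gears left on automatic)

def control_step(obs: Observation) -> Action:
    """Called at ~10 Hz; the policy only has to map ~100 numbers to 3 numbers."""
    raise NotImplementedError  # the trained network would go here
```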
A game like chess has ~400 possible positions every 2 moves (roughly 20 legal moves each for White and Black).
AlphaZero has to reason about each move with the whole rest of the game in mind, because a move in chess can invisibly cause disaster more than 100 turns later. There are a whole lot more possible moves too, which means a whole lot more data outputs.
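To put rough numbers on that (the "~20 legal moves per side" figure is a commonly quoted average, not an exact constant):

```python
# Back-of-the-envelope only: average chess branching factor assumed to be ~20 per ply.
BRANCHING_PER_PLY = 20

positions_after_one_move_pair = BRANCHING_PER_PLY ** 2   # ~400, the figure above
positions_after_10_move_pairs = BRANCHING_PER_PLY ** 20  # ~1e26 -- far too many to enumerate

sophy_horizon_steps = 6 * 10  # Sophy: 6 seconds of planning at 10 Hz = 60 decisions, 3 outputs each

print(positions_after_one_move_pair)   # 400
print(positions_after_10_move_pairs)   # 104857600000000000000000000
print(sophy_horizon_steps)             # 60
```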
Obviously, this doesn't mean making Sophy is "easy".
And you'll still need to be a research company to train it, because you still need a few spare hundred-thousand-dollar supercomputers lying around.
But nothing Sophy does couldn't have been done 5+ years ago.
(And it's not going to be your friend, and it likely wouldn't be a particularly enjoyable experience as your B-spec driver either.)