The AI most likely won't have any kind of learning capability of its own by the time it makes it into the game. PD could harvest replay data to keep updating things in the background and issue patches to the AI's knowledge base. Maybe they could also utilise "the cloud" to make that near-concurrent.
It'll be a completed, fixed, trained model that will easily run on any hardware, the same as any other procedural AI model does. It won't need to be a learning AI to appear adaptive to the conditions. I don't believe the AI was learning during the races themselves; rather, the experience was folded in between races to amuse the human racers and test the researchers' approach.
The reasons for the hardcore hardware in the interface are twofold:
1. It's a pure research prototype - no optimisation, no integration; it's all hacks.
2. It's reproducing and processing a video stream, not just pure game data (because of prior research for self driving applications).
Also it's easier just to use a computer you have to hand, even if it is overkill, rather than design and fab and program a bespoke ASIC, just for the purposes of the demonstration. A bit like using a whole computer for your NAS instead of a bespoke compact package, even if Sony are paying.
In any case, they would need to start the learning from scratch, but they've probably done that hundreds of times already. They have time to tweak all of those things to make it look more natural. And they can decide whether to keep it visually based or translate to pure game data so it can be used for opponents without the need for external hardware.
Ironically, the erratic steering is because 10 Hz is too slow. They should use a delay instead, or mixed rates of control application and information feedback, probably both. Simple limits on input speeds would make it look better, but it wouldn't be as competitive on track.

Sometimes Sophy made very sharp, very discrete wheel movements, which look unrealistic and arcade-like. Sophy's speed of turning the wheel is too fast, imo. Human arms have inertia, and we first need to see the result of a sudden, sharp wheel movement before we can correct the turn to the same or the opposite side. A constant 10 Hz is a very fast brain for Sophy, and needs correcting.
And Sophy needs some additional kilograms of car weight and lower-grip tyres to make its actions smoother.
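The "simple limits on input speeds" idea above is basically a slew-rate limiter on the steering command. A minimal sketch of what that could look like (illustrative only; the function name, units, and rate value are assumptions, not anything from Sophy itself):

```python
def limit_steering(target: float, current: float,
                   max_rate: float, dt: float) -> float:
    """Clamp how far the steering angle may move in one control tick.

    target:   the angle the AI asks for (e.g. -1.0 .. 1.0, full lock each way)
    current:  the angle actually applied last tick
    max_rate: maximum allowed change in angle per second
    dt:       length of one control tick in seconds
    """
    max_step = max_rate * dt          # largest change allowed this tick
    delta = target - current
    if delta > max_step:
        return current + max_step     # ramp up instead of snapping
    if delta < -max_step:
        return current - max_step     # ramp down instead of snapping
    return target                     # small correction: apply directly

# Example: the AI requests full lock from centre at 10 Hz (dt = 0.1 s)
# with a limit of 2.0 units/s, so the wheel moves at most 0.2 per tick.
angle = 0.0
for _ in range(3):
    angle = limit_steering(1.0, angle, max_rate=2.0, dt=0.1)
# after three ticks the angle has only ramped to about 0.6, not 1.0
```

This makes discrete 10 Hz commands look like a human arm with inertia, at the cost of slower reactions, which is exactly the competitiveness trade-off mentioned above.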