A lot of what was said about machine learning a year ago has turned out to remarkably undersell its immediate and short-term potential, across all applications.
There have been some decisive steps forward in pure research on how to design, measure and train these systems for interesting and impressive emergent behaviours. Following that, new models, new hardware, new software and so on have vastly improved AI development outside of pure research.
How much things like language models and image generation have improved in the last 12 months is pretty mind-boggling. Scary, even, for some. Something something singularity...
I guess it stands to reason that, on the back of that progress, PD / Sony found some way to translate the image / video based initial versions of Sophy into something that uses the internal game state data just as effectively.
Perhaps machine learning even helped them find the optimised data representation in the first place. Better still, all that original video-based work could then be reused to train the efficient internal model, so there was no need to start from scratch.
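To make the "reuse the video-based work" idea concrete: one common pattern is distillation, where an expensive image-based teacher model labels data that a compact state-based student is then trained on. This is purely a hypothetical sketch of that general idea in NumPy; the function names, shapes, and the linear teacher/student here are illustrative assumptions, not anything from PD / Sony's actual Sophy pipeline.

```python
# Hypothetical sketch: distilling a vision-based "teacher" policy into a
# compact state-based "student" via supervised learning on the teacher's
# outputs. All names and shapes are illustrative, not Sophy's real setup.
import numpy as np

rng = np.random.default_rng(0)

def teacher_policy(frame):
    # Stand-in for an expensive image-based model: a fixed linear map over
    # flattened pixels, squashed to a steering-like value in (-1, 1).
    w = np.linspace(-1.0, 1.0, frame.size)
    return np.tanh(frame.ravel() @ w / frame.size)

# Pretend each video frame has a paired low-dimensional game-state vector.
frames = rng.random((256, 8, 8))          # tiny stand-in "video" frames
states = frames.reshape(256, -1)[:, :16]  # stand-in internal game state
actions = np.array([teacher_policy(f) for f in frames])  # teacher labels

# Student: a simple least-squares fit from game state to teacher actions.
W, *_ = np.linalg.lstsq(states, actions, rcond=None)

pred = states @ W                          # cheap state-based predictions
mse = float(np.mean((pred - actions) ** 2))
```

The point is just that the teacher's outputs become free training labels, so the earlier vision work still pays off even after switching the input representation.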
As always, we temper our expectations, but AI does look very different now compared to a year ago.