Enhancing Photorealism Enhancement
I saw this and was actually going to post it. It's very interesting. I noticed they were fairly selective about what footage they show, and the viewpoint is always in motion. There's an inherent smudginess I associate with machine-learning-produced imagery, and I think motion makes that effect less noticeable. For instance, the process does wonders to make the road and vegetation look more lifelike in motion, but I wonder if they would look strange sitting still. There's also a bit of uncanny valley going on, particularly in the first clip. The footage from 6:15 on is less compelling, and I think that's likely because the dataset was a lot less useful for that imagery. I also don't love how unspecific the results are - they totally eliminate the feeling of a particular place (or rather substitute the feeling of another place) - though I appreciate this was more of an academic proof of concept.
It's pretty clear, though, that using this process selectively (with individual buffer layers applied in post) delivers some startlingly good results, and I'd love to see it integrated further...especially for things that are near-impossible to model, like complex vegetation. Ultimately I wonder if ray tracing could produce a similar or better-looking result, with higher accuracy, more intentionality, and less "noise" than an approach like this...at the cost of a lot more computing power. Perhaps you could parse out the vegetation (or better yet, render a simpler proxy version of it and do the same) in the G-buffer and then apply machine learning in a far more straightforward way - like each object gets a species ID, and the machine learning has a large tagged dataset of photos of plant species...now that seems like it could produce spectacular results.
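To make the "selective application" idea above concrete, here's a minimal sketch of what compositing an ML enhancement pass through a G-buffer mask might look like. Everything here is hypothetical - `selective_enhance`, the ID values, and the stand-in enhancement function are illustrative assumptions, not anything from the actual paper - but it shows the basic mechanic: enhance the full frame, then composite the result back only onto pixels whose G-buffer semantic ID matches a target class (e.g. vegetation).

```python
import numpy as np

def selective_enhance(frame, gbuffer_ids, enhance_fn, target_ids):
    """Apply an enhancement pass only to pixels whose G-buffer
    semantic ID is in target_ids, leaving the rest of the frame
    untouched. frame is HxWx3 float, gbuffer_ids is HxW int."""
    mask = np.isin(gbuffer_ids, list(target_ids))  # HxW boolean mask
    enhanced = enhance_fn(frame)                   # full-frame ML pass (stand-in here)
    out = frame.copy()
    out[mask] = enhanced[mask]                     # composite selectively
    return out

# Toy usage: ID 7 stands in for "vegetation"; the "enhancement"
# is just a placeholder brightness boost.
frame = np.zeros((4, 4, 3), dtype=np.float32)
ids = np.array([[0, 0, 7, 7],
                [0, 0, 7, 7],
                [0, 0, 0, 0],
                [0, 0, 0, 0]])
result = selective_enhance(frame, ids, lambda f: f + 0.5, {7})
```

In a real pipeline the mask would come from the renderer's material/object ID buffer, and `enhance_fn` would be the learned model - the point is just that the blend is a cheap per-pixel composite, so the "noise" stays confined to the classes you opt in.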