This depends on the method they are using, the asset, and how it will be seen.
Most hardware versions I have seen are basically improved displacement mapping ( real polygons = better detail ), or they just "smooth" stuff with more polys. And anything you can do on hardware can be done in software, so if they could "automate" it, they already could have ( it would only hurt speed ).
Regardless, the model would need to be made with the tessellation in mind.
The tessellation system they use would not need to vary based on UV mapping style. It would either work for both, or fail for both. UV stretching is UV stretching.
There are many ways to do tessellation. You can start with a low res model and "add" detail based on some sort of map ( think a displacement map or a height field map that adds polygons ). Or you can have a high res model and reduce the polycount dynamically. Or you can just subdivide the thing to make it look smoother near the camera ( I think they did this, or some hybrid of it ).
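As a rough sketch of that last, camera-distance approach (the thresholds and function name here are made up for illustration, not anything from an actual engine):

```cpp
#include <algorithm>
#include <cmath>

// Pick a subdivision depth from camera distance: full detail up close,
// one level dropped every time the distance doubles.
int subdivisionLevel(float distanceToCamera,
                     float nearDist = 5.0f, // full detail inside this range
                     int   maxLevel = 4)    // each level roughly 4x the polys
{
    if (distanceToCamera <= nearDist)
        return maxLevel;
    int drop = static_cast<int>(std::log2(distanceToCamera / nearDist));
    return std::max(0, maxLevel - drop);
}
```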
Also, every GPU uses triangles; it's a lot faster mathematically. So they are using triangles as well. The meshes they show in the "pictures" are not converted to game meshes yet (and some of them are WIP shots).
It's also not really a hierarchy ( that is more how traditional LODs work ), but more of a mathematical formula.
I used to work in this industry ( freelance, but I did get a few full time offers ), and was developing my own software: a never finished voxel based rendering engine, and a game asset texturing tool, also never finished.
I'm pretty sure GPUs can deal with quads; that is, you can define and render quads through a graphics API, but obviously I don't know what the API / GPU then does with that information. It's perfectly possible it "translates" them to triangles somehow, so I'll take your point. Besides, I read a bit more and the method I linked to applies to general meshes, as suspected.
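For what it's worth, that "translation" is cheap either way; a sketch of how an indexed quad list can be split into triangles (roughly what drivers used to do with legacy GL_QUADS submissions; the function name is mine):

```cpp
#include <cstdint>
#include <vector>

// Each quad (a, b, c, d) becomes two triangles sharing the a-c diagonal.
std::vector<uint32_t> quadsToTriangles(const std::vector<uint32_t>& quads)
{
    std::vector<uint32_t> tris;
    tris.reserve(quads.size() / 4 * 6);
    for (size_t i = 0; i + 3 < quads.size(); i += 4) {
        uint32_t a = quads[i], b = quads[i + 1], c = quads[i + 2], d = quads[i + 3];
        tris.insert(tris.end(), { a, b, c, a, c, d });
    }
    return tris;
}
```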
PD are clearly using a hybrid subdivision and simplification system; i.e. the mesh resolution scaling is bidirectional from the actual "default" mesh. The subdivision process (one of many tessellation approaches) is pretty trivial to understand, and most people have focused on that alone in the discussion of PD's implementation of "tessellation". What people generally miss is the "adaptive / progressive mesh" aspect they proudly demo'd in that trailer and in screen grabs on their website.
The paper I linked to describes how a progressive mesh works (for one particular implementation, but it describes the challenges overall), with extra complication in the form of view-dependency, which is probably a good idea these days. There is a later paper that describes improvements in the dynamic process (now using geomorphs to smooth transitions), specifically for terrain (a 2D special case), and an even later one that deals with parallelisation of the task (there is a lot of cross-dependency in the vertex, edge and face lists that makes them difficult to modify in parallel), crucially without tessellation hardware.
That latter paper describes the additional data required over a traditional "indexed triangle list"; that extra data consists of a precomputed vertex hierarchy to facilitate the core processes (vertex splitting and edge collapsing) involved in maintaining a progressive mesh (this part is crucial to comprehend). The hierarchy is precomputed by half-collapsing edges in the default mesh and reordering vertex indices to produce new faces; it is a tree / "forest" structure storing which vertex begets which vertices in the next "level", forming which faces, for each collapse (naturally the process is reversible). That hierarchy informs which splits and collapses are "legal", in terms of maintaining visual quality and mesh integrity, for the current state of the progressive mesh.
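To make that concrete, here is a minimal sketch of what one record in such a precomputed hierarchy might hold; the field names are mine, and the papers define the real structures formally:

```cpp
#include <cstdint>
#include <vector>

// One record per half-edge collapse in the precomputed "forest".
// Collapsing merges 'child' onto 'parent' and removes two faces;
// the reverse operation (a vertex split) re-creates them.
struct CollapseRecord {
    uint32_t parent;    // surviving vertex
    uint32_t child;     // vertex removed by the collapse, restored by a split
    uint32_t faceLeft;  // the two faces that vanish on collapse
    uint32_t faceRight; // ...and reappear on split
};

// The hierarchy is the ordered list of collapses; reordering the vertex
// indices lets record i refer to vertices by position alone.
using VertexHierarchy = std::vector<CollapseRecord>;
```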
For this reason, they also store a "vertex state" texture (whether a vertex is "collapsed" or "split", etc.) at run time, so that the splitting and collapsing can run simultaneously in parallel (via the legality checks), collision free. They also double-buffer the vertex list for separate rendering / update duties, stretching the process out over several frames if necessary. Remember: a hierarchy is a mathematical construct; that is, the hierarchy itself is formally defined in those papers in terms of set theory.
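A minimal sketch of how that double-buffered, collision-free pass could look, assuming a per-vertex screen-space error metric and a parent array taken from the hierarchy; the names and the exact legality test are placeholders, not the paper's scheme:

```cpp
#include <cstdint>
#include <vector>

enum class VState : uint8_t { Active, Collapsed };

// Reads last frame's states and writes next frame's, so no iteration depends
// on a value written in this pass: each one could be its own GPU thread.
void updatePass(const std::vector<VState>&   prev,        // read-only buffer
                std::vector<VState>&         next,        // write-only buffer
                const std::vector<uint32_t>& parent,      // parent[v] == v for roots
                const std::vector<float>&    screenError, // view-dependent metric
                float                        threshold)
{
    for (uint32_t v = 0; v < prev.size(); ++v) {
        next[v] = prev[v];
        // "Legality" stand-in: only change state while the parent is active,
        // keeping the mesh consistent with the precomputed hierarchy.
        if (prev[parent[v]] != VState::Active)
            continue;
        if (prev[v] == VState::Active && screenError[v] < threshold)
            next[v] = VState::Collapsed; // fold this vertex onto its parent
        else if (prev[v] == VState::Collapsed && screenError[v] >= threshold)
            next[v] = VState::Active;    // split it back out
    }
}
```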
Other approaches forgo that requirement for a pre-computed vertex hierarchy (e.g. vertex clustering), but yield poorer results in the simplified meshes and have different run-time requirements also.
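For contrast, a sketch of the vertex clustering idea, which needs no hierarchy at all: snap every vertex to a uniform grid and merge whatever lands in the same cell (all names here are illustrative):

```cpp
#include <cmath>
#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

struct Vec3 { float x, y, z; };

// Returns old-index -> merged-index; faces that become degenerate after
// remapping are simply dropped in a later pass.
std::vector<uint32_t> clusterVertices(const std::vector<Vec3>& verts,
                                      float cellSize,
                                      std::vector<Vec3>& outVerts)
{
    std::map<std::tuple<int, int, int>, uint32_t> cells;
    std::vector<uint32_t> remap(verts.size());
    for (size_t i = 0; i < verts.size(); ++i) {
        auto key = std::make_tuple(
            static_cast<int>(std::floor(verts[i].x / cellSize)),
            static_cast<int>(std::floor(verts[i].y / cellSize)),
            static_cast<int>(std::floor(verts[i].z / cellSize)));
        auto [it, inserted] =
            cells.try_emplace(key, static_cast<uint32_t>(outVerts.size()));
        if (inserted)
            outVerts.push_back(verts[i]); // first vertex in a cell represents it
        remap[i] = it->second;
    }
    return remap;
}
```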
Thinking back to PD's video, it seems that the refinement is performed using morphs, which hides the splits and collapses somewhat; however, the hierarchy can clearly be seen, as collapses and splits won't occur until the correct "environment" exists around e.g. a given face, which results in "islands" of coarse faces in fine-faced surroundings (for comparison's sake, watch any of the videos for the second paper I linked to: the terrain one).
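To illustrate what a geomorph does: rather than popping into place, a newly split vertex is drawn at a position blended between its parent's and its own over a few frames. A trivial sketch (names mine):

```cpp
struct Vec3 { float x, y, z; };

// t runs from 0 (still at the parent) to 1 (fully split) over the transition.
Vec3 geomorph(const Vec3& parentPos, const Vec3& targetPos, float t)
{
    return { parentPos.x + (targetPos.x - parentPos.x) * t,
             parentPos.y + (targetPos.y - parentPos.y) * t,
             parentPos.z + (targetPos.z - parentPos.z) * t };
}
```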
So my point was that it should just be a case of running the precomputation on the existing "traditional" meshes (the GT5 carryovers) to store that additional data. A mesh is a mesh; converting between storage formats should be trivial, and adding extra information about its structure equally so. Plus, all the car models in GT5 were made with the PS4 in mind. I suspect the "tessellation" pipeline would be slightly different on PS4 due to the dedicated tessellation hardware, but there is still the requirement for the progressive mesh processing (which would be handled by stream processors). Obviously, the PS3 does it all on the SPUs.