Cars in GT6 that have PS4-ready graphics.

  • Thread starter syntex123
  • 664 comments
  • 57,060 views
Yes. This is actually not a very new thing. Even without tessellation, games commonly reduce the "Level of Detail" (LoD) of distant objects. Tessellation is more about interpolating polygons in between what's actually modelled, not reducing poly count, although I guess tessellation can also be used to reduce the LoD more gradually, so that we don't get "LoD pops".

I imagine the full poly model of any (premium) car in GT5 or GT6 is only really loaded when you've paused a replay and started the photo mode, or when using Photo Travel. Poly count has very diminishing returns, especially on non-organic things like cars. This is why I don't really want a higher poly count in GT7: it would just increase development time even more, and make it harder to hit a stable 60fps. Put the resources saved on polygons into better lighting and atmospheric (including weather) effects instead.
Indeed.

Typical LoD systems are usually discrete, whereas the new system on some of the Premium cars is "practically continuous" (down to the vertex level).
There are in fact two technologies at play: adaptive tessellation, as we all know, and progressive meshes - although it's likely they are linked in the renderer and data structure (mesh "format").


The tessellation subdivides the highest-quality mesh to add definition to details according to pre-made primitives applied to the car when it was made: most often different classes of circles (which automatically generalise to any conic section) and straight edges, along with their revolutions, extrusions etc. This is extra content per car, just stored more efficiently.
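To illustrate what a stored primitive buys you, here's a toy sketch (Python, purely illustrative, not PD's code) of adaptively tessellating a circular arc: keep subdividing until the chord deviates from the true arc by less than some error budget, so the polygon count falls out of the error tolerance rather than being baked into the mesh.

```python
import math

def tessellate_arc(radius, start, end, max_error):
    """Recursively subdivide a circular arc until the chord's
    deviation from the arc (the sagitta) is within max_error.
    Returns the list of vertex angles along the arc."""
    half_angle = (end - start) / 2.0
    sagitta = radius * (1.0 - math.cos(half_angle))
    if sagitta <= max_error:
        return [start, end]
    mid = (start + end) / 2.0
    left = tessellate_arc(radius, start, mid, max_error)
    right = tessellate_arc(radius, mid, end, max_error)
    return left + right[1:]  # drop the duplicated midpoint

# A quarter circle of radius 100: a loose error budget gives a coarse
# mesh, a tight one gives a fine mesh - same stored "primitive".
coarse = tessellate_arc(100.0, 0.0, math.pi / 2, 0.5)   # 9 vertices
fine = tessellate_arc(100.0, 0.0, math.pi / 2, 0.01)    # 65 vertices
```

The same stored arc serves every detail level; only the error budget (effectively the pixel footprint) changes per frame.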

The progressive mesh just defines a model in terms of its lowest LoD (tens of faces, see PD's press images), then the file streams out a list of vertices to add, and edges to move and add, generating new faces (triangles, or quads in PD's case) in order to gradually recover the highest detail mesh that the modeler authored. This is mostly a matter of data representation, so any mesh can be converted to work "progressively", but topological issues in the mesh can get in the way, causing ugly transitions (most likely only applies to the current Standard cars).
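A toy sketch of that streaming idea (Python, purely illustrative; real vertex splits also remap existing faces, which is where the topological issues come in):

```python
# Toy progressive mesh: a tiny base mesh plus an ordered stream of
# split records. Replaying the first k records gives an intermediate
# detail level; replaying all of them recovers the full mesh.
class ProgressiveMesh:
    def __init__(self, base_vertices, base_faces, splits):
        self.base_vertices = list(base_vertices)
        self.base_faces = list(base_faces)
        # Each record: (new vertex position, new faces it generates).
        self.splits = list(splits)

    def extract(self, k):
        """Rebuild the mesh using only the first k split records."""
        vertices = list(self.base_vertices)
        faces = list(self.base_faces)
        for new_vertex, new_faces in self.splits[:k]:
            vertices.append(new_vertex)
            faces.extend(new_faces)
        return vertices, faces

pm = ProgressiveMesh(
    base_vertices=[(0, 0), (1, 0), (0, 1)],
    base_faces=[(0, 1, 2)],
    splits=[((0.5, 0.5), [(1, 3, 2)]),
            ((0.25, 0.75), [(2, 3, 4)])],
)
lod_low = pm.extract(0)    # 3 vertices, 1 face
lod_full = pm.extract(2)   # 5 vertices, 3 faces
```

The point is that every prefix of the stream is a valid mesh, so the "detail slider" has as many stops as there are records.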

A typical discrete LoD system stores the mesh as separate models with highest detail, then half resolution, then half resolution again etc. Something like the geometric equivalent of "mip levels" in textures. Anyone familiar with geometric series should know that this quickly approaches double the memory usage of just the highest detail mesh.

For each LoD level having half the detail, total memory for the mesh scales as follows:
Code:
n  s     f
-----------
1  1.00  1/1
2  1.50  1/2
3  1.75  1/4
4  1.88  1/8
5  1.94  1/16
6  1.97  1/32
n is the number of levels in the discrete LoD system; s is the total mesh data relative to the highest LoD alone; f is the fraction of data added at that level.
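The table is just the partial sums of a geometric series; a quick sketch (Python) to reproduce it:

```python
def discrete_lod_memory(levels, ratio=0.5):
    """Total mesh data, relative to the highest-detail mesh alone,
    when each LoD level holds `ratio` times the data of the last."""
    return sum(ratio ** k for k in range(levels))

table = [(n, round(discrete_lod_memory(n), 2)) for n in range(1, 7)]
# Reproduces the s column: 1.0, 1.5, 1.75, 1.88, 1.94, 1.97 - the
# partial sums approach 1 / (1 - ratio) = 2, i.e. the half-res chain
# never costs more than double the full-detail mesh.
ten_percent = discrete_lod_memory(2, ratio=0.1)  # the "10% only" case
```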

You'd rarely see fewer than three levels using this method, but other ratios can be used: e.g. full resolution and 10% only -> 10% extra memory required. But this exacerbates another problem to do with the pixel / polygon resolution ratio - in that respect, more levels are much better.

With the discrete system, the game has to load all the meshes and simply switches display between them according to any pre-determined set of criteria.

With the progressive system, the game loads the full mesh once, and traverses through the data structure to enable / disable certain faces to reach a particular detail level.


Ignoring the visual effect of transitions (both methods can be jarring and both can be near-perfect, depending on the implementation), what typically happens with the discrete method is that you either have too little detail on screen for the pixel footprint of the mesh, or too much, wasting performance - GT5 and GT6 make this compromise in different directions during a race. The progressive system can more effectively match the polygon resolution to the screen's pixel resolution, because it's so much more scalable, effectively having as many "detail levels" as it has polygons.

So the progressive mesh representation alone allows for more memory to do other cool things, and for the polygon budget to be very finely controlled from moment to moment (with an overhead in manipulating the mesh structure) - and it scales more or less continuously over a very wide range.

The tessellation minimises a lot of the fiddly detail stuff by using "vector" primitives and real-time subdivision, also reducing the size and complexity of the meshes (streaming).


All of this means that the next-gen cars will be very highly detailed, but also incredibly scalable to suit any high-stress situation. The non-tessellated cars, assuming any persist, will simply not be as detailed close-up; the performance aspects will be the same, i.e. extremely scalable.

That's a very powerful feature to have, indeed. 👍
 
Wow ... you are talking about performance, and the game you were comparing a few pages back runs at 30fps.
Actually I was talking about car models; those who wish to distract from that keep banging on about 30fps, something that doesn't actually have any bearing on the models.


Moreover, barely 1% of the game runs at less than 50fps: GT5P, GT5 and some instances in GT6 where there are lots of cars jam-packed together with weather, or particular sections of track.
Citation required for the 1%.


Looking at their first PS3 game, they were in a league of their own - and remember, the PS3 has a gimped GPU. PD are proficient in graphics and at getting the best out of a console :bowdown:
And yet at the time the PS3 was released, PD were saying how it would be the platform to allow them to do everything they couldn't on the PS2, which is why I am personally cautious (oh, and GT5P was easily the best of the three PS3 titles in terms of frame rate and stability; rather telling that things got worse for the next title, and still didn't get back up to the same level for the one after that).

I've also acknowledged that PD did indeed get the most out of the PS and PS2; it's the compromises on the PS3 that I have an issue with, which makes me suspect that you haven't bothered to actually read what I have posted in this thread and just jumped straight into defence mode.
 
More cars, yes. On the second part, 800 standards, vacuum cleaner engine sounds, broken camber, ride height and general tuning, aero and more, say hello.

The things you mentioned have nothing to do with car modeling/details/quality!

And the 800 standard cars already exist, which also has nothing to do with making way more (premium) cars in great quality than every other game has ;)

And it's better to have 450 beautiful cars plus 800 bad-looking ones than to have only the 450!
 
The things you mentioned have nothing to do with car modeling/details/quality!

And the 800 standard cars already exist, which also has nothing to do with making way more (premium) cars in great quality than every other game has ;)

And it's better to have 450 beautiful cars plus 800 bad-looking ones than to have only the 450!
I see. So when you said, "PD goes with more cars + attention to detail and accuracy", the latter part only refers to the 1/3 of the car inventory that still qualifies as having attention to detail and accuracy. Makes sense.
 
Not a thread about standards, we have enough of those already, lets get it back on track please guys.
 
I see. So when you said, "PD goes with more cars + attention to detail and accuracy", the latter part only refers to the 1/3 of the car inventory that still qualifies as having attention to detail and accuracy. Makes sense.
As that 1/3 of the car inventory means 450 cars (which is more than every other game has (except for Forza 4, maybe? I don't really know...)) - yes, PD ARE making quality cars without being low in quantity ;)
 
So I guess my request to get the thread back on track was missed.

So let's try that again, back on track.

Not doing so will be seen as an AUP violation.
 
Turns out I was wrong.

No specific need to use vectors or any other kind of parametric system to define tessellation regions. The techniques have existed in the CAD world for some time. For example, you can quite easily identify what are known as "tessellation edges" (via local functions, e.g. angle between faces), and from that you can even develop a sophisticated algorithm that works very much like the vertex-folding of progressive mesh generation, only with excellent results in cleanly removing warts and holes ("topology simplification").
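A minimal sketch of that local-function idea (Python; the 30° threshold is an arbitrary choice of mine, not from any paper): compute the dihedral angle between two faces sharing an edge, and flag the edge as a "tessellation edge" when the angle exceeds the threshold.

```python
import math

def unit_normal(a, b, c):
    # Cross product of two edge vectors, normalised.
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = math.sqrt(sum(x * x for x in n))
    return [x / length for x in n]

def dihedral_angle_deg(tri_a, tri_b):
    """Angle between the face normals of two triangles, in degrees."""
    na, nb = unit_normal(*tri_a), unit_normal(*tri_b)
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(na, nb))))
    return math.degrees(math.acos(dot))

def is_tessellation_edge(tri_a, tri_b, threshold_deg=30.0):
    return dihedral_angle_deg(tri_a, tri_b) > threshold_deg

# Two coplanar triangles: no crease, nothing worth subdividing.
flat_a = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
flat_b = ((1, 0, 0), (1, 1, 0), (0, 1, 0))
# Fold the second triangle up: the shared edge becomes a candidate.
bent_b = ((1, 0, 0), (1, 1, 1), (0, 1, 0))
```

A real implementation would run this (or a smarter local function) over every interior edge of the mesh, but the principle is the same: the geometry itself tells you where the curvature lives.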

One example can be found here, the first entry in the 1997-1998 section. It's basically one method that delivers both the progressive rendering of increased detail, and the identification of edges that ought to be subdivided to go beyond the resolution of the stored information. The very kind of thing that is exhibited in the "tessellated Premiums". I imagine such subdivision would be more limited than a parametric approach, but it might work very well for certain objects.

It implies that automated methods could in fact be used to convert the pre-existing Premiums to work like the "new" tessellated Premiums.


Note the date of the paper I linked to. This applies to all areas of graphics: things like shaders are written decades in advance of their being used in real-time "computer game" applications. As such, getting them to work is often "simply" a matter of copying the researchers' "pseudo-code", and voilà.

The hard part is stripping it of its layers of physical authenticity and still getting visually representative results; a sort of "artistic optimisation" in addition to the base "machine optimisation" of the data packaging and instruction calls for a given piece of processor hardware. That's the real magic.
 
So, basically, the pre-existing Premium cars that aren't as recent as the newest Premiums, which are shown to use adaptive tessellation, can be converted to the standard of the adaptive-tessellation Premium cars through some sort of automated method?
 
So, basically, the pre-existing Premium cars that aren't as recent as the newest Premiums, which are shown to use adaptive tessellation, can be converted to the standard of the adaptive-tessellation Premium cars through some sort of automated method?
The potential appears to be there, yes. I'd expect the tools would need to be supervised by skilled eyes, and those tools still need to actually be developed and tested (not something PD is shy of doing, it would seem).
 
I know it's not about tracks, but I was thinking Mid-Field looks PS4 ready. On my TV I think it's the best looking track in the game, in terms of low jaggies, and really smooth textures with clearer high detail than others. I don't know jack about the technical part but it appears a level higher to me, even on the PS3, in overall quality and detail.
 
Actually I was talking about car models; those who wish to distract from that keep banging on about 30fps, something that doesn't actually have any bearing on the models.



Citation required for the 1%.



And yet at the time the PS3 was released, PD were saying how it would be the platform to allow them to do everything they couldn't on the PS2, which is why I am personally cautious (oh, and GT5P was easily the best of the three PS3 titles in terms of frame rate and stability; rather telling that things got worse for the next title, and still didn't get back up to the same level for the one after that).

I've also acknowledged that PD did indeed get the most out of the PS and PS2; it's the compromises on the PS3 that I have an issue with, which makes me suspect that you haven't bothered to actually read what I have posted in this thread and just jumped straight into defence mode.


The video you posted only drops below 50 in changeable weather conditions. The London track is always 60, even in 1080p. So, as I said, it depends on the specific track conditions and situations where it drops. In that 4-minute video it goes below 50 for a few seconds. The 1080p mode adds too many pixels for the PS3, though, and it comes down to which the player prefers. But anyway, my point is that even on the PS3 they managed to pull off such graphics and performance, which is an amazing feat. That bodes well for the PS4, as it is a lot easier for developers.
 
The video you posted only drops below 50 in changeable weather conditions. The London track is always 60, even in 1080p. So, as I said, it depends on the specific track conditions and situations where it drops. In that 4-minute video it goes below 50 for a few seconds.
You really shouldn't make claims that are so easily shown to be false.

The first minute of the video (the entire section in the rain) is well below 50fps; that is not a few seconds, that is just over 20% of the video. It's actually 1:30 into the 4:50 video before it even hits 60fps (30% of the video). Keep in mind those are not the total time it spends below those frame rates; that's how much of the video elapses before it first hits those frame rates - the total time spent below them is higher still.

Based on that, your claim (which you have still not substantiated) of 'barely 1%' would seem to be rather wide of the mark.

Oh, and you are also forgetting that the video is actually biased in PD's favour, being that it's hood-cam; change that to cockpit view and the additional demands that places make matters worse.

BTW - London:

[Image: London sub 50.jpg]


42 isn't 60. Once again, please do not make things up (we have a bit in the AUP about that).

That's two factual claims you have made in one paragraph, both of which are easily proven to be false, so either you didn't watch the video or you think I'm stupid enough not to have watched the video. Whichever it is, it's a form of behaviour that has no place here. If you can't support your point without having to make things up, then your time here at GTP may well be very short.

The 1080p mode adds too many pixels for the PS3, though, and it comes down to which the player prefers.
It's not true 1080p for a start (if it was, the hit would be even greater), and these are compromises that PD willingly made in the game.

I also find it odd that you are suggesting that if a player prefers rain to be on, and wants to use the cockpit cam, then it's almost their fault it drops below 60fps?

PD chose 'shiny' over a locked frame rate; you seem quite happy with the idea that it can fluctuate wildly from the 20s to the 60s, but many others are not.

But anyway, my point is that even on the PS3 they managed to pull off such graphics and performance, which is an amazing feat. That bodes well for the PS4, as it is a lot easier for developers.
I totally disagree, and I know I'm not alone in this.

This is not just about how easy a platform is to develop for, it's about the choices that PD have made across the PS3 generation, for me the main one being an unlocked frame rate. What galls even more is that we have an unlocked frame rate with inconsistent models. I could almost understand if it were unlocked because all the models were Premium, yet you can get a grid with half of them Standard and still have a frame rate that's heading up and down. That, for me, is a lose/lose situation: I don't get a locked frame rate (which is a must for a racing title, be it 30fps in an arcade title or 60fps in a sim), and I don't even get models of a consistent standard.
 
You know, sometimes locked 30fps is actually better than a framerate that tumbles around anywhere between 35 and 55. Variable framerates make the game look choppier than it is, and it introduces screen tearing, and it also makes the input lag variable. Locked at 30, the game appears smoother because the motion is always at the same framerate without dips or peaks, we get no screen tearing, and the input lag becomes predictable instead of variable.

In my opinion, if you can't have your game run at 60fps 90% of the time, with the remaining 10% being above 50, you might as well lock it to 30.
 
You know, sometimes locked 30fps is actually better than a framerate that tumbles around anywhere between 35 and 55. Variable framerates make the game look choppier than it is, and it introduces screen tearing, and it also makes the input lag variable. Locked at 30, the game appears smoother because the motion is always at the same framerate without dips or peaks, we get no screen tearing, and the input lag becomes predictable instead of variable.

In my opinion, if you can't have your game run at 60fps 90% of the time, with the remaining 10% being above 50, you might as well lock it to 30.
I agree (as I've consistently said) a locked frame rate is critical for me. Going from the PS2 to the PS3 the variable frame rate and the tearing that accompanies GT5 and GT6 have been the biggest issue for me.

It may be that I am able to pick up on it more than some, but it's distracting to a point that is basically game-breaking for me (it was also an issue with the Ferrari Challenge titles - Monza's first corner was a slideshow in that).

Now a bit more 'on-topic', DC released the replay update yesterday, which allows the car models to be shown in the environment far better than static Photomode shots or video captured just from the gameplay cameras.

So for discussion in that regard here's an example, a short extract from a Scottish point-to-point location, an Audi RS5 in the rain.

 
You know, sometimes locked 30fps is actually better than a framerate that tumbles around anywhere between 35 and 55. Variable framerates make the game look choppier than it is, and it introduces screen tearing, and it also makes the input lag variable. Locked at 30, the game appears smoother because the motion is always at the same framerate without dips or peaks, we get no screen tearing, and the input lag becomes predictable instead of variable.

In my opinion, if you can't have your game run at 60fps 90% of the time, with the remaining 10% being above 50, you might as well lock it to 30.
There are better ways to do it. Lock to 60 Hz and tear the first late frame (which you likely won't see), then scale the resolution (if it's draw-limited) and / or the polygon count (if it's transform-limited) for the next frame to make sure it's not late again. Bring the resolution and LoD back up once the deltas are sufficiently positive again. The slowdown in GT5 / GT6 is primarily due to polygon count, but the weather / particle effects also cause problems (especially in combination).
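That scaling idea is just a feedback loop on frame time; a toy sketch (Python, illustrative values only, with a single scale factor standing in for resolution or LoD):

```python
TARGET_MS = 1000.0 / 60.0  # 16.7 ms budget per frame at 60 Hz

def adjust_detail(scale, last_frame_ms, step=0.05, lo=0.5, hi=1.0):
    """One step of a feedback loop: drop the detail scale after a
    late frame, creep back up when there's clear headroom."""
    if last_frame_ms > TARGET_MS:
        return max(lo, scale - step)
    if last_frame_ms < 0.9 * TARGET_MS:
        return min(hi, scale + step)
    return scale

# Simulated frame times: a heavy (weather) stretch, then a quiet one.
scale = 1.0
for ms in [18.0, 19.0, 17.5, 14.0, 13.0, 12.0]:
    scale = adjust_detail(scale, ms)
# Detail dips during the heavy frames and recovers afterwards.
```

A real renderer would use the measured GPU deltas rather than last frame's total, but the control loop is the same shape.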


I described this to you before, but an even better way is to avoid a locked frame rate in the first place; have the frame be displayed on-screen only once it's ready. The difference between 50 and 60 Hz is less than 4 ms; 40 to 60 Hz is about 8 ms. 60 Hz itself is 16 ms between frames, so a 50% stutter maximum in GT6's case - much better than v-sync stutter (100%), and no torn frames at all. The hardware exists (i.e. nVidia's G-sync, marketing link); I've personally never tried it, but it's supposed to be quite impressive. Ask yourself this: why are 60 or 30 Hz the only two options?
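The millisecond figures above are straightforward frame-interval arithmetic; for anyone who wants to check them (Python):

```python
def frame_interval_ms(hz):
    return 1000.0 / hz

# 60 Hz leaves ~16.7 ms between frames; holding a frame to 50 Hz
# adds ~3.3 ms over that, and to 40 Hz ~8.3 ms:
stutter_50 = frame_interval_ms(50) - frame_interval_ms(60)
stutter_40 = frame_interval_ms(40) - frame_interval_ms(60)
# A missed v-sync at 60 Hz repeats the old frame for a whole extra
# interval (~16.7 ms): the "100% stutter", versus ~50% at worst when
# a late frame is displayed as soon as it's ready.
vsync_stutter = frame_interval_ms(60)
```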

Once that sort of thing gets going, why stop at whole frames? Why not individual pixels? The perceived increase in response will be off the scale.
 
If you had displays that supported changing the refresh rate on the fly, that would be fine. The reason for locking at either 30 or 60 is to ensure a smooth flow of frames without tearing on a 60 Hz display. If the display can be set to any refresh rate, it naturally doesn't really matter which framerate you aim for. I'm not sure how others have it, but I think I might be very sensitive to screen tearing. When I'm with friends, I can feel like the amount of tearing is driving me insane, while my friends barely notice it even when they focus on it.

However, I still want the game to supply frames at a steady rate rather than highly variable, because there are still things like input lag to consider. Some games require more precise timing than others, so the importance of this may vary from game to game.

Just refreshing parts of the screen sounds cool, but I have no idea how far into the future that is, so I'm not sure how relevant it is for a PS4 game. I can totally see that there are some areas of the screen that don't need the same level of response or accuracy. Does the cockpit need to update as often as the environment you can see through the window, for example?
 
I'm not sure how others have it, but I think I might be very sensitive to screen tearing. When I'm with friends, I can feel like the amount of tearing is driving me insane, while my friends barely notice it even when they focus on it.
Same here. Playing GTA5 on the PS4 with a friend I was moaning about the tearing and he just couldn't see it at all. Made me feel quite odd.
 
If you had displays that supported changing the refresh rate on the fly, that would be fine. The reason for locking at either 30 or 60 is to ensure a smooth flow of frames without tearing on a 60 Hz display. If the display can be set to any refresh rate, it naturally doesn't really matter which framerate you aim for. I'm not sure how others have it, but I think I might be very sensitive to screen tearing. When I'm with friends, I can feel like the amount of tearing is driving me insane, while my friends barely notice it even when they focus on it.

I'm quite sensitive to screen tearing, but I prefer it over lower framerates, generally. I hate stutter, too.

However, I still want the game to supply frames at a steady rate rather than highly variable, because there are still things like input lag to consider. Some games require more precise timing than others, so the importance of this may vary from game to game.
For a console game running on a modern TV, input lag can often be mostly determined by the display, not the game.

Having a multiple frame delay (buffering in the TV for "effects") is massive compared with the 16 millisecond single-frame delay you get from the renderer at 60 Hz, and the I/O ops on the console also tend not to exceed that delay. So the "game" mode some TVs come with is very important, and it's questionable how many people are aware of the issue.

Going to 30 Hz increases input lag dramatically, whereas screen tearing does not affect it at all, unless the tear line has made it beyond (approx.) the middle of the screen, or consistently wipes across it - still implicitly less than the lag at 30 Hz, in line with the numbers I posted previously (but applied to the portion of the screen actually having been updated this frame).

A tear that propagates over the very top of the screen and then quickly recedes (e.g. as detail is scaled to compensate) is not an issue in terms of input / feedback.

Just refreshing parts of the screen sounds cool, but I have no idea how far into the future that is, so I'm not sure how relevant it is for a PS4 game. I can totally see that there are some areas of the screen that don't need the same level of response or accuracy. Does the cockpit need to update as often as the environment you can see through the window, for example?
Yes, these are the kinds of things you can do, along with special renderer specific tricks like warping previous images and re-rendering only the parts of the image's underlying makeup that change, e.g. lighting in the cockpit applied as a projection / overlay. Asynchronous rendering of different components already happens a lot - the shadows in the PS3 Assassin's Creed games obviously "walked" at a rate of 1-2 seconds, not 30 Hz.

Since true graphics-led screen refreshing isn't going to happen on consoles, because of TVs, the only real solution is scalable detail. On the other hand, the raster process itself could preferentially draw the "active" portion of the screen first. This would make tears perhaps far more obvious (good candidate for warping; actually that applies to any tear), but the major benefit is that if the whole frame ends up being late, the more important information gets through first.
 
Going to 30 fps wouldn't increase lag as dramatically if the processing time added by the TV is very high. I mean, if a TV introduced a lag of 50 ms, you'd have 16+50 ms in a 60fps game, and 33+50 ms in a 30fps game, right?

There's also a possibility of a particular game buffering pre-rendered frames in order to keep framerates more stable, in case some frames take more time to render than others. If you aim for a high framerate that you want to keep stable, you might use two buffered frames, which would add 33 ms to your "lag chain" if the game was running at 60fps and just barely managing to keep it at that high a framerate. If the game instead aimed for 45-ish fps (with 35-ish in a worst case scenario) and then capped the framerate to 30, you might not need any prerendered frames buffered at all, because you'd never risk going below 30 anyway. That would reduce lag a bit too.
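The arithmetic in these two paragraphs can be wrapped up in a rough lag-budget calculator (Python; a deliberate simplification that ignores input sampling, game logic, pixel response etc.):

```python
def input_lag_ms(fps, buffered_frames=0, tv_fixed_ms=0.0,
                 tv_delay_frames=0):
    """Rough input-to-screen budget: one render interval, plus any
    frames the game queues ahead, plus TV processing (either a fixed
    delay or a number of whole display frames)."""
    frame_ms = 1000.0 / fps
    return (frame_ms * (1 + buffered_frames + tv_delay_frames)
            + tv_fixed_ms)

lag_60 = input_lag_ms(60, tv_fixed_ms=50.0)  # 16.7 + 50 ~ 66.7 ms
lag_30 = input_lag_ms(30, tv_fixed_ms=50.0)  # 33.3 + 50 ~ 83.3 ms
# Two pre-rendered frames at 60 fps add another ~33 ms:
lag_60_queued = input_lag_ms(60, buffered_frames=2, tv_fixed_ms=50.0)
```

With a fixed 50 ms TV, dropping from 60 to 30 fps adds only ~17 ms to the total, but queueing two frames at 60 fps costs about twice that, which is the trade-off described above.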

I'm not actually sure how common this is in console games, but I know the "default" setting for pre-rendered frames in my nvidia control panel settings used to be either 2 or 3 frames, until the default was changed to "application controlled". I would generally set this to 1.
 
Going to 30 fps wouldn't increase lag as dramatically if the processing time added by the TV is very high. I mean, if a TV introduced a lag of 50 ms, you'd have 16+50 ms in a 60fps game, and 33+50 ms in a 30fps game, right?

If the TV has multiple frame delay, and you increase the time between frames, the lag is doubled on that basis. If the TV processing is "constant" regardless of framerate, then no. But the rendering lag is still doubled.

There's also a possibility of a particular game buffering pre-rendered frames in order to keep framerates more stable, in case some frames take more time to render than others. If you aim for a high framerate that you want to keep stable, you might use two buffered frames, which would add 33 ms to your "lag chain" if the game was running at 60fps and just barely managing to keep it at that high a framerate. If the game instead aimed for 45-ish fps (with 35-ish in a worst case scenario) and then capped the framerate to 30, you might not need any prerendered frames buffered at all, because you'd never risk going below 30 anyway. That would reduce lag a bit too.

Triple buffering is a thing, yes - it requires more memory, and is more common with PCs, plus it typically adds another frame of latency in practice. I don't think consoles use it, because the hardware isn't a variable and so specific framerates are easier to "target".

It's actually not a solution to the response (lag) and timeliness of the rendering process - i.e. stick with double buffering (in "page flipping" guise) to minimise frame latency, and scale the performance to stay within that allotted latency window.

I'm not actually sure how common this is in console games, but I know the "default" setting for pre-rendered frames in my nvidia control panel settings used to be either 2 or 3 frames, until the default was changed to "application controlled". I would generally set this to 1.

Yes, that's this double / triple buffering of the screen. Setting it to 1 frees up some VRAM, but tearing will be hideous. See here, and bear in mind that data read and write for display purposes is actually serial: some gets processed each cycle, whilst the rest is left waiting for the next clock cycle - the display's read-out location can easily cross over the graphics' write location because it moves at its own rate.
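A toy model of that read / write race (Python; it assumes the renderer writes rows top to bottom at a constant rate, which real renderers don't, so it's only illustrative of why the tear line moves):

```python
def tear_line(scanout_ms, write_start_ms, write_ms, height=1080):
    """First row at which the display reads new-frame data. The
    display scans rows top to bottom over `scanout_ms`; the renderer
    starts overwriting the front buffer `write_start_ms` into the
    scanout and spreads its row writes over `write_ms`. Returns None
    if the write never catches the beam (no tear this refresh)."""
    for row in range(height):
        read_time = scanout_ms * row / height
        write_time = write_start_ms + write_ms * row / height
        if write_time <= read_time:
            return row
    return None

# Renderer finishes mid-scan: the tear shows up partway down.
mid_tear = tear_line(16.67, write_start_ms=5.0, write_ms=8.0)
# Writing from the very start of scanout: the "tear" sits at row 0,
# i.e. effectively a clean flip.
top_tear = tear_line(16.67, write_start_ms=0.0, write_ms=8.0)
# A slow write that starts late never catches the beam this frame.
no_tear = tear_line(16.67, write_start_ms=10.0, write_ms=20.0)
```

Shift the write's start time or speed slightly and the crossing row moves, which is exactly why unsynchronised tear lines wander up and down the screen.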
 
I actually set it to 1 to reduce input lag, not to free up VRAM (I tend to get video cards with more VRAM than is common for a particular GPU :P), but I also usually reduce my game settings to a level where I get a stable 60fps.
 
I actually set it to 1 to reduce input lag, not to free up VRAM (I tend to get video cards with more VRAM than is common for a particular GPU :P), but I also usually reduce my game settings to a level where I get a stable 60fps.
There shouldn't be any additional lag with page-flipping, so if double-buffering is implemented this way in the games you play (API / hardware dependent, possibly), then it'd make sense to use it to reduce tearing. :)

Using a second buffer can in theory (API / hardware dependent) increase your framerate by a modest amount, because you can safely start writing the next frame whilst the old one is being read out to the display. You can of course write the new one into the same buffer the old one is still being read from (many game developers did this for "special effects" back in the day), but I don't think that's default behaviour, and it can add even more tearing if things get out of sync.
 
Triple buffering is a separate option that you can use independently of the pre-rendered frames option, so I'm not sure if they are the same thing.
 
Triple buffering is a separate option that you can use independently of the pre-rendered frames option, so I'm not sure if they are the same thing.
Having actually looked it up this time, it seems it's to do with the CPU preparing frames, nothing to do with actual rendering, despite its name (It's a DirectX API thing, queueing instructions through the various layers of software to the hardware.)

It's effectively keeping a buffer of "frame information" before sending it to the GPU in time for rendering. So if your CPU is struggling to ready and feed the graphical portions of the "game states" at frame intervals to the GPU, perhaps it helps provide "smoothness" and extra fps, but it's obvious how it contributes to input lag.

How this kind of thing is handled on consoles will be game-specific, and with the fixed hardware advantage (and fewer software layers to battle through / pure game-oriented layers), there is probably little need to buffer anything in that regard, as everything can be very tightly scheduled.
 
Hm, yeah. The actual description in the control panel seems to indicate the same thing, that frames are prepared for rendering, but not actually rendered yet.

However, it does also specifically state that increasing the value might give you smoother gameplay at lower framerates, and that you should reduce it if you experience lag from input devices. I guess when the data that is going to be rendered is sent to the gpu for preparation, the system can't really change its mind anymore, so effectively you gain input lag equal to however much time is spent on rendering each frame, multiplied by the value of this setting.

If you're interested, here is the exact wording:
[Image: nvidiacp.png]
 
Nonsense. With the bonnet camera, or hood camera, you appear as if you sit IN the driver's seat, unlike the cockpit view, where it appears as if you sit in the back seat behind the driver.

Good for you. Enjoy the very limited view.

It's also possible to change the cockpit view to 'Narrow' or 'Narrowest' which helps eliminate the "back seat" view. I think I've settled on the 'Narrow' cockpit view as default, but I'm still torn on which camera is the true GT view as seen by the developers when driving with a wheel. There's evidence that even Kaz uses the 'Normal' "bumper cam" view (as seen in his self titled documentary), but it seems like a waste to ignore the detailed cockpit view that PD has taken the time to illustrate.

With the cockpit you get a good sense of blind spots and obviously the actual interior of the car, but the Normal view really gives you a liberating open air feel (which unfortunately gives the impression of sitting on the grill with no hood in front).

Tough call.
 
It's also possible to change the cockpit view to 'Narrow' or 'Narrowest' which helps eliminate the "back seat" view. I think I've settled on the 'Narrow' cockpit view as default, but I'm still torn on which camera is the true GT view as seen by the developers when driving with a wheel. There's evidence that even Kaz uses the 'Normal' "bumper cam" view (as seen in his self titled documentary), but it seems like a waste to ignore the detailed cockpit view that PD has taken the time to illustrate.

With the cockpit you get a good sense of blind spots and obviously the actual interior of the car, but the Normal view really gives you a liberating open air feel (which unfortunately gives the impression of sitting on the grill with no hood in front).

Tough call.
One would hope that GT7 moves into the same realm as PC sims with full adjustability of cockpit seating position and Field of View.
 