GT5 going to be in 3D?

I mean, are there buffers or predictions that just don't work anymore? Does something like MPEG compression happen, in that information is saved and reused without the need to recalculate it from scratch because it's accurate enough for a normal view, but jumping to rapidly changing viewpoints forces everything to be recalculated every time?

I always figured something like this might account for framerate drops in rapidly changing situations, like taking a turn where a LOT of stuff changes very fast, versus going in a straight line where much of the scene is pretty close to what it was the previous frame...

Frame drops and tearing are caused by the CPU/GPU's inability to render a full scene within 1/60th of a second, 1/30th, etc. (whatever the game's target frame rate is).

The time it takes to 'copy' a scene to the front buffer is inconsequential as it's just a blit, which is extremely fast! Scene rendering time can suffer for a variety of changing reasons: the number of vertices, textures, reflections, shadow mapping, and the variety of other rendering effects and techniques in use at any particular moment. Methods such as LoD, backface culling, texture streaming and cube maps for reflections are used to simplify what would otherwise be very CPU/GPU intensive routines. The game engine does the work on whatever is thrown at it, and a good engine cannot make up for "bad" modelling... In a game like GT5 it is time intensive to test every possibility of car + track!
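For anyone wondering what LoD looks like in practice, here's a minimal, hypothetical C++ sketch of distance-based LOD selection. The thresholds and mesh levels are invented for illustration, not anything from PD's engine:

```cpp
// Hypothetical sketch: picking a level of detail by camera distance
// so far-away cars cost fewer vertices. Thresholds are made up.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

float distance(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

int selectLod(const Vec3& camera, const Vec3& object) {
    float d = distance(camera, object);
    if (d < 25.0f)  return 0;  // full-detail mesh
    if (d < 100.0f) return 1;  // reduced mesh
    if (d < 400.0f) return 2;  // low-poly mesh
    return 3;                  // billboard / imposter
}

int main() {
    Vec3 cam{0.0f, 1.0f, 0.0f};
    Vec3 rival{0.0f, 1.0f, 180.0f};
    std::printf("LOD level for rival car: %d\n", selectLod(cam, rival));
}
```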

It'll be interesting to see how that ties in with the delay introduced by a display's processing...
No big deal. Many HDTVs have response times in the single-digit milliseconds (e.g. 2ms on the Samsung 55-inch UN55B8000) with a 120Hz or 240Hz refresh, versus a game frame rate of 60fps (16.7ms per frame). 3D HDTVs are guaranteed to manage this. The agreed 3D standard mandates that games must run at 60fps (i.e. to assure 30fps per eye) and 1080p output (not frame buffer resolution), but of course we already know that GT5P and the GT Academy demo drop frames...
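As a quick sanity check on those numbers, here's a sketch of the frame-time budgets implied by frame-sequential 3D; the 30fps-per-eye split comes from the post above, not from any official spec:

```cpp
// Back-of-envelope frame budgets (numbers from the post, not a spec doc).
#include <cstdio>

int main() {
    const double mono_budget_ms = 1000.0 / 60.0;   // 16.7 ms per displayed frame
    // Frame-sequential 3D at 60fps output: each eye gets every other frame,
    // so a given eye is refreshed every 33.3 ms (30fps per eye), but the
    // renderer still has to finish *some* eye's frame every 16.7 ms.
    const double per_eye_interval_ms = 1000.0 / 30.0;
    std::printf("Render deadline per frame: %.1f ms\n", mono_budget_ms);
    std::printf("Update interval per eye:   %.1f ms\n", per_eye_interval_ms);
}
```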
 
The time it takes to 'copy' a scene to the front buffer is inconsequential as it's just a blit, which is extremely fast!

You wouldn't do any such copy at all, although you might render to a buffer other than the back buffer for post-processing reasons.
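A minimal sketch of that idea, using plain arrays in place of real GPU surfaces: the scene is drawn into an offscreen target, post-processing writes into the back buffer, and presentation is a pointer swap rather than a copy:

```cpp
// Sketch only: render into an offscreen target, post-process into the
// back buffer, then "flip" by swapping buffers so no full-screen copy
// to the front buffer is ever needed.
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

using Buffer = std::vector<std::uint32_t>;  // one 32-bit pixel per element

void renderScene(Buffer& target) {
    for (auto& px : target) px = 0xFF336699u;  // stand-in for real drawing
}

void postProcess(const Buffer& src, Buffer& dst) {
    for (std::size_t i = 0; i < src.size(); ++i)
        dst[i] = src[i] | 0x000000FFu;  // stand-in for bloom/blur/etc.
}

int main() {
    const std::size_t pixels = 1280 * 720;
    Buffer offscreen(pixels), back(pixels), front(pixels);

    renderScene(offscreen);        // draw into the intermediate target
    postProcess(offscreen, back);  // post-process into the back buffer
    std::swap(back, front);        // "flip": swap roles, no blit
}
```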
 
Does anyone know for a fact that the stereoscopic stuff demoed by Sony so far (GT5P and WipEout, or anything else) has been rendered with alternating viewpoints at half frame rate to each eye?
You can see this in action here... 30fps per eye, both displayed simultaneously on a special polarized TV.

......

I don't see where it says anything about rendering alternating frames, and I was under the impression that such TV systems were independent of the source framerate and would alternate the left/right frames at a much higher frequency.
 
Trackmania Nations Forever had a 3D Glasses option. It sucked.

But these new 3D glasses are much better than the old ones. So the only problem is rendering, and there's no way the PS3 can render two full-resolution GT5 frames at once. Maybe if we hook two PS3s together :).
 
I don't see where it says anything about rendering alternating frames, and I was under the impression that such TV systems were independent of the source framerate and would alternate the left/right frames at a much higher frequency.
That particular display is a prototype. Once the TV has the image frames it can process & display at a higher refresh, yes, but that doesn't affect console output. The video shows where the swapped frames appear simultaneously. The lens put over the camera and then taken off just shows a single 'eye' view.

Ask Andy Oliver at Blitz Games for confirmation of the 3D frame rate standard.

Stonemonkey
FSAA is not a copy or a blit. And the front buffer is never a destination.
Those are your terms. The front buffer is what I intend to output; back buffer = original scene. FSAA = blit the back buffer to the front buffer whilst scaling down to get AA. You might do it differently...
 
OK, back buffer to another back buffer then, and front buffer as display buffer, if that suits you and the rest of the computer graphics world better. :sly: I don't like my HUD with AA.

Well, draw the HUD to the final back buffer after the AA filter.
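Here's a toy sketch of the ordering being argued about, assuming simple 2x supersampling over plain pixel arrays (not any real console's AA path): downsample first, then composite the HUD at native resolution so it stays sharp:

```cpp
// Sketch: render at 2x, box-filter down to screen resolution (the "FSAA
// blit"), then draw the HUD afterwards so it is never filtered.
#include <cstdint>
#include <vector>

using Buffer = std::vector<std::uint32_t>;

// Average each 2x2 block of the hi-res buffer into one output pixel.
std::uint32_t box2x2(const Buffer& hi, int hiW, int x, int y) {
    std::uint32_t r = 0, g = 0, b = 0;
    for (int dy = 0; dy < 2; ++dy)
        for (int dx = 0; dx < 2; ++dx) {
            std::uint32_t p = hi[(y * 2 + dy) * hiW + (x * 2 + dx)];
            r += (p >> 16) & 0xFF; g += (p >> 8) & 0xFF; b += p & 0xFF;
        }
    return ((r / 4) << 16) | ((g / 4) << 8) | (b / 4);
}

int main() {
    const int w = 640, h = 360;              // toy screen size
    Buffer hiRes(w * 2 * h * 2, 0xFF8040u);  // scene rendered at 2x
    Buffer screen(w * h);

    for (int y = 0; y < h; ++y)              // downsample = the AA step
        for (int x = 0; x < w; ++x)
            screen[y * w + x] = box2x2(hiRes, w * 2, x, y);

    // HUD drawn last, at 1:1, so it never gets softened by the filter.
    for (int x = 10; x < 110; ++x) screen[10 * w + x] = 0xFFFFFFu;
}
```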
 
I'm not sitting in my living room wiv a plastic wheel and 3D glasses. The men in the white coats will take me away.
 
I'm not sitting in my living room wiv a plastic wheel and 3D glasses. The men in the white coats will take me away.

You forgot the car seat, crash helmet, and overalls with fireproof underwear.

Wonder if we'll get support for a smell generator for oil/gas and burning rubber.
 
Just saw Avatar in 3D. Spectacular... I hear Sony is bringing out 3D televisions and a 3D firmware update for the PS3. Let's see, my current TV is 4 years old; time for a new one.

GT5 in 1080p 3D, FTW... :D
 
Frame drops and tearing are caused by the CPU/GPU's inability to render a full scene within 1/60th of a second, 1/30th, etc. (whatever the game's target frame rate is).

The time it takes to 'copy' a scene to the front buffer is inconsequential as it's just a blit, which is extremely fast! Scene rendering time can suffer for a variety of changing reasons: the number of vertices, textures, reflections, shadow mapping, and the variety of other rendering effects and techniques in use at any particular moment. Methods such as LoD, backface culling, texture streaming and cube maps for reflections are used to simplify what would otherwise be very CPU/GPU intensive routines. The game engine does the work on whatever is thrown at it, and a good engine cannot make up for "bad" modelling... In a game like GT5 it is time intensive to test every possibility of car + track!

I wasn't talking about copy time; I was just wondering whether, in rendering a scene, there are shortcuts used that depend on the camera being relatively near where it was before...

For instance, are there algorithms that predict where the camera will be and, if the prediction holds, bypass certain calculations because the calculations from the frame before are still accurate enough (perhaps not recalculating the reflection map if your POV has changed by less than x)? Perhaps a prediction routine that pages certain textures out to free up RAM if the camera's movement indicates that texture will not likely be on screen in the next frame.

The sort of thing that can only work if the camera moves in a predictable, stable pattern, not jumping back and forth between two points of view every frame?

For instance, let's say there is a check on the camera's movement over the last few frames, and based on that it predicts at what distance certain things will become "noticeably" wrong (sort of like anisotropy). So it doesn't recalculate the distance, shadow map or reflection map of objects more than x units away from the camera, because the change is inconsequential and the precision of the variable holding that position doesn't warrant a recalculation.

For the vast majority of the time the camera behaves predictably and smoothly enough that the algorithm works well, cycles are saved and no one is the wiser. However, when sudden unexpected things happen (rapid changes in screen elements, less common camera movements, etc.), fewer or no cycles can be saved and slowdown occurs.

If that sort of shortcut is taken in graphics processing (and I can't imagine it's not, although graphics is not my world, so I don't know), then it would seem that rapidly shifting the camera between two points of view would nullify a lot of that kind of optimization and make rendering stereoscopic views notably more than twice the work.
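Something like the shortcut being asked about might look like this hypothetical sketch; the 0.5-unit tolerance and the ReflectionMap structure are invented for illustration, not taken from any real engine:

```cpp
// Hypothetical sketch: only rebuild an expensive map when the camera
// has moved more than some threshold since the cached version was built.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

float dist(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

struct ReflectionMap {
    Vec3 builtAt{1e9f, 1e9f, 1e9f};  // camera position the map was built for
    void rebuild(const Vec3& cam) { builtAt = cam; /* expensive render here */ }
};

// Assumed tolerance: reuse the cached map if the camera moved < 0.5 units.
void updateIfNeeded(ReflectionMap& map, const Vec3& cam) {
    if (dist(cam, map.builtAt) < 0.5f) {
        std::puts("reuse cached map");   // cheap path, cycles saved
    } else {
        map.rebuild(cam);                // slow path
        std::puts("rebuilt map");
    }
}

int main() {
    ReflectionMap map;
    updateIfNeeded(map, {0.0f, 0.0f, 0.0f});  // first frame: rebuilt
    updateIfNeeded(map, {0.1f, 0.0f, 0.0f});  // small move: reused
    updateIfNeeded(map, {5.0f, 0.0f, 0.0f});  // big jump: rebuilt
}
```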

No big deal. Many HDTVs have response times in the single-digit milliseconds (e.g. 2ms on the Samsung 55-inch UN55B8000) with a 120Hz or 240Hz refresh, versus a game frame rate of 60fps (16.7ms per frame). 3D HDTVs are guaranteed to manage this. The agreed 3D standard mandates that games must run at 60fps (i.e. to assure 30fps per eye) and 1080p output (not frame buffer resolution), but of course we already know that GT5P and the GT Academy demo drop frames...

I think currently it's not much of a deal anymore, but it wasn't long ago that signal processing could introduce very visible lag, up to a second on some displays, I believe. I am not talking about pixel response time; I am talking about the scaler processing the image in the display.
 
I wasn't talking about copy time; I was just wondering whether, in rendering a scene, there are shortcuts used that depend on the camera being relatively near where it was before...

For instance, are there algorithms that predict where the camera will be and, if the prediction holds, bypass certain calculations because the calculations from the frame before are still accurate enough (perhaps not recalculating the reflection map if your POV has changed by less than x)? Perhaps a prediction routine that pages certain textures out to free up RAM if the camera's movement indicates that texture will not likely be on screen in the next frame.

The sort of thing that can only work if the camera moves in a predictable, stable pattern, not jumping back and forth between two points of view every frame?

For instance, let's say there is a check on the camera's movement over the last few frames, and based on that it predicts at what distance certain things will become "noticeably" wrong (sort of like anisotropy). So it doesn't recalculate the distance, shadow map or reflection map of objects more than x units away from the camera, because the change is inconsequential and the precision of the variable holding that position doesn't warrant a recalculation.

For the vast majority of the time the camera behaves predictably and smoothly enough that the algorithm works well, cycles are saved and no one is the wiser. However, when sudden unexpected things happen (rapid changes in screen elements, less common camera movements, etc.), fewer or no cycles can be saved and slowdown occurs.

Yes, there are methods used to reduce workload based on predictable continuity in some games, especially racing games where you can only go one way or another along a track.

The movement of the camera for stereoscopic rendering is not very much though, and there's not much difference in what is actually visible in each view; the switching is also not unexpected, it's perfectly predictable.

Environment maps and shadow maps are not camera dependent, and while they are maybe not always updated at the refresh rate, the calculations for applying them, which are camera dependent, should be done every frame, so there's no difference there between mono and stereo.
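To put numbers on how small that stereo camera movement is, here's a minimal sketch of deriving the two eye positions from the mono camera; the 6.5cm separation is a typical human figure, not a GT5 value:

```cpp
// Sketch: each eye is just the mono camera shifted half the eye
// separation along its right vector. Everything downstream (view
// matrix, culling) runs exactly as in mono, just twice.
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 offset(const Vec3& pos, const Vec3& right, float amount) {
    return { pos.x + right.x * amount,
             pos.y + right.y * amount,
             pos.z + right.z * amount };
}

int main() {
    const float eyeSeparation = 0.065f;   // metres, assumed human average
    Vec3 camPos{10.0f, 1.2f, -4.0f};
    Vec3 camRight{1.0f, 0.0f, 0.0f};      // camera's right vector

    Vec3 leftEye  = offset(camPos, camRight, -eyeSeparation * 0.5f);
    Vec3 rightEye = offset(camPos, camRight, +eyeSeparation * 0.5f);

    std::printf("left  eye x: %.4f\n", leftEye.x);
    std::printf("right eye x: %.4f\n", rightEye.x);
}
```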
 
Yes, there are methods used to reduce workload based on predictable continuity in some games, especially racing games where you can only go one way or another along a track.

The movement of the camera for stereoscopic rendering is not very much though, and there's not much difference in what is actually visible in each view; the switching is also not unexpected, it's perfectly predictable.

Environment maps and shadow maps are not camera dependent, and while they are maybe not always updated at the refresh rate, the calculations for applying them, which are camera dependent, should be done every frame, so there's no difference there between mono and stereo.

Perhaps my examples were bad as to what things wouldn't be recalculated, but I do have to wonder whether, predictable or not, the change of camera (small as it is, it's probably still greater than the average change of camera location between frames of normal action) doesn't mean that things which could otherwise be left in memory have to get erased and recalculated a notable number of times.

I mean, just for things like the hood and dashboard... when only a single view is rendered, you can pretty much guarantee very little change in a lot of it. But with two views, you have to keep bouncing back and forth, calculating everything for each viewpoint. And as predictable as that jumping back and forth is, you still either need double the space to store everything (one copy for each view, so the data persists in case you want to reuse what was figured for the last frame rather than recalculating) or you need to dump all the data for each view somewhere to make room for the next view and then get it back again.

Another example (probably bad, but hopefully it illustrates what I am getting at): just the dash. Let's say certain parts are not visible due to the point of view, but it's close enough to the camera that each eye ends up with different things not being visible (be it because the wheel is in the way, a hand is in the way, the curvature of the dash blocks something, etc.).

In a single-view world, you do all your calculations for your current view, figure out what can't be seen and don't calculate that stuff, leave all those values in memory (obviously minus the things you can't see and thus didn't calculate and render), and on the next pass you can decide what needs to be recalculated; everything else just uses the value left in memory.

Now, for two views, chances are that none or few of the calculations from the first view are going to be suitable for the second view (especially on close things like the dashboard). So you have to dump everything out of memory from the first view to make room for the calculated second view. Unless you were using less than half the memory to store these values in single-view mode, you will not be able to store all the values for a second view without displacing your stored values from the first view.

This means that when you come back to render the first view on the next frame, you don't have the benefit of using what's left in memory from your last pass, as it's not there anymore.

I suppose you could check which things from the first view are actually still usable in the second view if you really wanted, and try to optimize that way, but then it seems like it's getting pretty complicated, and you are essentially optimizing twice for two different systems.

So frame 1, left eye: you figure out that pixel group ABC is not visible because it's blocked by the wheel and hand, so you don't render those. Then you calculate lighting and reflections and save those in area DEF.

Now, if you only have to render one view, next frame you just check whether the steering angle has changed notably, and if not, you don't bother rechecking what can't be seen; you know the hand and wheel are still blocking the same pixels. Again, leave those out of the rendering. Also, you calculate that the angle of the car in game is the same as last frame, the POV is the same, and no new shadows are being cast, so you just use the exact same lighting and reflections from the last frame.

But when you have to do frame 1, right eye, all of a sudden you have to check what can't be seen again, and probably have to overwrite the info from frame 1, left eye (again, unless you just have a lot of memory left over, which seems unlikely in any of today's games). You have to recalculate all reflections for that eye and, again, overwrite what was in memory for the left eye.

Now frame 2, left eye comes along... you have nothing left from frame 1, left eye, even if 90% of the rendered scene is exactly the same. No optimization is possible now.

Basically, unless you really do calculate everything every pass and never reuse values from the previous pass to optimize away work, I don't see how rendering from two views can avoid being more taxing than rendering from the same view twice as often...

See what I am saying?
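A hypothetical sketch of the memory trade-off being described: one cached slot per eye avoids the thrashing, at the cost of double the storage. None of this is GT5's actual scheme; the structures are invented for illustration:

```cpp
// Sketch: keep one cached result per eye (double the memory, no
// thrashing) instead of one shared slot that each eye keeps overwriting.
#include <cstdio>
#include <vector>

struct EyeCache {
    bool valid = false;
    std::vector<int> hiddenPixels;   // stand-in for occlusion/lighting data
};

enum Eye { LEFT = 0, RIGHT = 1 };

void renderEye(Eye eye, EyeCache cache[2]) {
    if (cache[eye].valid) {
        std::printf("eye %d: reused last frame's data\n", (int)eye);
        return;                           // cheap path: nothing evicted it
    }
    cache[eye].hiddenPixels = {1, 2, 3};  // expensive recomputation
    cache[eye].valid = true;
    std::printf("eye %d: recomputed\n", (int)eye);
}

int main() {
    EyeCache cache[2];                // one slot per eye = 2x the memory
    for (int frame = 0; frame < 2; ++frame) {
        renderEye(LEFT, cache);       // frame 2 hits the cache for both
        renderEye(RIGHT, cache);      // eyes, because neither was evicted
    }
}
```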
 
Could the delay for GT5 be because Sony want to flagship PS3 3D with GT5?
It's all confirmed that next year the PS3 is going to do 3D, and GT5 has been shown at CES running in the format. Just an idea...

It's most definitely going to be in HD, lolz.
 
It doesn't matter that there might only be small movement in parts of the scene such as the dash; any movement at all and the geometry/lighting all have to be recalculated. The only thing I can really think of being reused but updated in the way you suggest is possibly scene sorting.

A car moving forward at 200mph travels at something like 90 m/s; at 60fps that's about 150 cm/frame. The movement of switching from eye to eye is something like 5 cm/frame, which is roughly the equivalent of updating the frames while skidding sideways at about 7 mph.
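For anyone who wants to check the arithmetic, a quick sketch using only the numbers from the post (200mph, 60fps, 5 cm/frame), nothing game-specific:

```cpp
// Checking the numbers in the post above.
#include <cstdio>

int main() {
    const double mph_to_ms = 0.44704;           // 1 mph in m/s
    const double forward = 200.0 * mph_to_ms;   // ~89.4 m/s
    std::printf("forward: %.0f cm/frame\n", forward / 60.0 * 100.0);   // ~149

    const double eyeJump = 0.05 * 60.0;         // 5 cm/frame, in m/s
    std::printf("eye swap: %.1f mph sideways\n", eyeJump / mph_to_ms); // ~6.7
}
```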
 
It doesn't matter that there might only be small movement in parts of the scene such as the dash; any movement at all and the geometry/lighting all have to be recalculated. The only thing I can really think of being reused but updated in the way you suggest is possibly scene sorting.

A car moving forward at 200mph travels at something like 90 m/s; at 60fps that's about 150 cm/frame. The movement of switching from eye to eye is something like 5 cm/frame, which is roughly the equivalent of updating the frames while skidding sideways at about 7 mph.

Really? Everything gets recalculated if there is any change? There is no optimization to reuse old information under certain circumstances?

Like I said, graphics processing isn't my strong suit, so if that's how it is, that's how it is.
 
Really? Everything gets recalculated if there is any change? There is no optimization to reuse old information under certain circumstances?

Like I said, graphics processing isn't my strong suit, so if that's how it is, that's how it is.

You could maybe reuse and modify sort lists, and maybe the same for some occlusion, but as for geometry, it'd be more work, probably slower, and would use up a lot of memory trying to keep track of it all, compared to just recalculating everything.

EDIT: They will have all sorts of tricks in use that I have no idea about, but that's something I'm pretty sure of.
 
You could maybe reuse and modify sort lists, and maybe the same for some occlusion, but as for geometry, it'd be more work, probably slower, and would use up a lot of memory trying to keep track of it all, compared to just recalculating everything.

EDIT: They will have all sorts of tricks in use that I have no idea about, but that's something I'm pretty sure of.

Well, I guess it's all academic then... As I said, graphics is not something I got much into, but everything I have read from those who do the fancy stuff seems to imply there is as much optimization and code trickery in graphics as there is in general programming, if not more, and it has always seemed to me that a lot of that trickery comes from knowing exactly how something is going to happen and being able to rely on that to skip, bypass or reuse code.
 
You could maybe reuse and modify sort lists, and maybe the same for some occlusion, but as for geometry, it'd be more work, probably slower, and would use up a lot of memory trying to keep track of it all, compared to just recalculating everything.

EDIT: They will have all sorts of tricks in use that I have no idea about, but that's something I'm pretty sure of.

Tricks indeed. There's something called "tiling" you should read up on. It effectively splits the screen into multiple chunks (tiles); Forza 3 uses 2 tiles, for example, while Resident Evil 5 uses 6 tiles, I believe. It's a technique that's been used for many years now. Be 100% certain "the scene" is recalculated every frame though; it has to be, even if the frame buffers are put together differently.
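A toy CPU-side sketch of that idea, assuming simple horizontal strips (real tiling implementations differ): the scene is processed once per tile, so any per-tile working buffer only needs to cover a fraction of the screen:

```cpp
// Sketch of tiling: split the framebuffer into horizontal chunks and
// render once per chunk, so working memory only holds one tile at a time.
#include <cstdint>
#include <vector>

using Buffer = std::vector<std::uint32_t>;

// Stand-in for rendering just the rows [y0, y1) of the scene.
void renderRegion(Buffer& fb, int width, int y0, int y1) {
    for (int y = y0; y < y1; ++y)
        for (int x = 0; x < width; ++x)
            fb[y * width + x] = 0xFF202020u;
}

int main() {
    const int w = 1280, h = 720, tiles = 2;  // 2 tiles, as claimed for Forza 3
    Buffer framebuffer(w * h);

    for (int t = 0; t < tiles; ++t) {
        int y0 = h * t / tiles, y1 = h * (t + 1) / tiles;
        renderRegion(framebuffer, w, y0, y1); // full scene pass per tile
    }
}
```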

Oh, and it looks like the shutter-based glasses will be linked to the new line of 3D Bravia HDTVs coming out this year, not the PS3. CES 2010 provides new information & clarification. 👍 The GT5P 3D prototype was definitely building 2 independent frames (at 30fps per eye) and switching between them quickly, though. I couldn't see a GT5 update or a GT6 game doing 3D on the PS3 any differently.

@Devedander: keep in mind that GT5 will have car interior views/panning and head tracking. That should offer some insight into how camera movements & angles really aren't what matters; what's in the scene is. As I've said before here, camera positioning is insignificant compared to other things the game will be doing, and the camera has to be positioned for every scene anyway. Texture streaming (I think you referred to this in one of your posts) is an often-used trick. The PS3's southbridge hooks directly into one of the Cell's 2 I/O interfaces, and the Cell can read data simultaneously from the Blu-ray drive and the HDD. That means data can be temporarily loaded from the BD to the PS3's HDD-based system cache (2GB) and then streamed into memory at the same time that data is streamed directly from the BD to memory.
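A minimal sketch of that kind of streaming, using a background task as a stand-in for the BD/HDD reads; the file name and the one-section lookahead are made up for illustration:

```cpp
// Sketch: load textures for the upcoming track section on a background
// thread while the renderer keeps going on the current one.
#include <cstdio>
#include <future>
#include <string>
#include <vector>

std::vector<char> loadTexture(const std::string& name) {
    std::printf("streaming %s in the background\n", name.c_str());
    return std::vector<char>(1024);   // stand-in for a BD/HDD read
}

int main() {
    // While section N is being rendered, prefetch section N+1's textures.
    auto pending = std::async(std::launch::async, loadTexture,
                              std::string("track_section_02.tex"));

    std::puts("rendering section 01...");    // main thread keeps rendering

    std::vector<char> next = pending.get();  // ready before we need it
    std::printf("section 02 textures resident: %zu bytes\n", next.size());
}
```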
 