Reduced OS memory footprint

  • Thread starter gtbillyboy
  • 38 comments
  • 3,623 views
This might be the solution for jagged shadow textures.

Yeah, probably this, but I wouldn't suspect much else. From my estimates based on Prologue, they MIGHT be able to squeeze another car onto the grid with that much extra power for the game, but it'd be close.

Hopefully they'll just use it for higher-res shadows, as seen in the TGS demos (which were probably running this lower-footprint OS anyway), and that'll banish jaggies to the past once and for all. The in-game AA is fine, and I doubt that it's enough to enable rain on its own (though if rain was already half squeezed in, it would give it a lot more power to work with, and make it a lot better). Maybe they could use it for something like better smoke or particle effects? Or some additional effects (or a higher LOD) in replays or something.
 
This might be the solution for jagged shadow textures.

More memory will not help with the jagged shadows. Shadows are not textures stored in memory; they are calculated in real time. So you'd need more CPU/GPU cycles rather than more memory.

I'll try to stop the smart-ass-ness now :)
 
Surely this is a bit too late to have a big impact on GT5? I would guess only tiny things could be tweaked, rather than big things like adding new cars... unless PD knew about this and planned ahead.
 
More memory will not help with the jagged shadows. Shadows are not textures stored in memory; they are calculated in real time. So you'd need more CPU/GPU cycles rather than more memory.

I'll try to stop the smart-ass-ness now :)

That's correct, but remember what the shadows are projected as, and what they're projected from.

Also consider the possibility of some re-allocation; what may previously have had to be generated, could now be pre-made and stored.
 
More memory will not help with the jagged shadows. Shadows are not textures stored in memory; they are calculated in real time. So you'd need more CPU/GPU cycles rather than more memory.

I'll try to stop the smart-ass-ness now :)

That's correct, but remember what the shadows are projected as, and what they're projected from.

Also consider the possibility of some re-allocation; what may previously have had to be generated, could now be pre-made and stored.

Wrong, and Wrong. I'll make this simple for the people trying to out-smartass the guy who was right in the first place.

Everything rendered on your screen is stored in memory.
The shadows used in GT5's cockpit view are clearly shadow maps because they are low-quality and have no anti-aliasing.
Ray-traced shadows would take too much processing power and memory to achieve in a game engine.
Shadow maps are usually the low quality option in 3D content creation but for games they are what is most commonly used.
Shadow maps ARE textures.
The way shadow maps are calculated is complicated but let me break it down for you:
A shadow map is created by rendering the scene from the view of the light source and checking whether a pixel is visible from this view by comparing it to a z-buffer (which is stored as a texture).
An important part of the quality of the shadow map is the resolution at which the picture is rendered from the light's point of view. Since this AND the depth map both need to be stored in memory, they are directly affected by how much is available. The more memory, the higher the quality the shadow map can be rendered at, because the depth maps can be rendered and stored at higher resolutions.
A shadow map is calculated for every frame of the render.
In a game this means that the shadow map must be recalculated for every frame. In most games this is 30Hz or 60Hz.
Since the shadow map is rendered separately from the main scene, you can have a game that runs at 60 fps but shadow maps that update only 30 times a second, at which point the shadow map looks more jittery in comparison to the rest of the game.
I've seen this happen many times in many games.
Anti-aliasing shadows again means more memory, since a much higher resolution render of the shadow map needs to be done in order to sample the images.
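To put that depth comparison in concrete terms, here is a rough sketch (plain Python, purely illustrative; the function name, map layout and bias value are all made up, nothing to do with PD's actual engine):

```python
# Rough illustration of the shadow-map depth test (hypothetical, not GT5's code).
# Pass 1 renders the scene from the light and keeps only the nearest depth per
# texel -- that grid of depths is the shadow map. Pass 2 re-projects every
# screen pixel into light space and compares its depth against the stored one.

def in_shadow(shadow_map, light_uv, light_depth, bias=0.005):
    """Return True if this pixel is occluded according to the shadow map."""
    height, width = len(shadow_map), len(shadow_map[0])
    # Snap the normalised [0,1] light-space coordinates to a texel.
    x = min(int(light_uv[0] * width), width - 1)
    y = min(int(light_uv[1] * height), height - 1)
    nearest = shadow_map[y][x]
    # Something closer to the light was stored here, so this pixel is shadowed.
    return light_depth - bias > nearest
```

The jaggies come from that snap-to-texel step: every screen pixel that lands in the same shadow-map texel gets the same yes/no answer, so the lower the map's resolution, the blockier the shadow edge.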

I hope this was enough smart-ass-ness for you.
 
If the rumors about the PS4 are right, there will be no real competition in next-gen graphics if the same GPU technology is not used in the MS console.
 
Wrong, and Wrong. I'll make this simple for the people trying to out-smartass the guy who was right in the first place.
...
I hope this was enough smart-ass-ness for you.

That was a big explanation, but I don't see how it proves the jagged shadows are caused by a lack of memory.

Improving the shadows would of course mean using a bigger shadow map (higher res, more precision), and hence some more memory would need to be allocated. But the extra memory would be nothing compared to the computational payload you add.

I think you can have a pretty good shadow map at about 2 MB (1000x1000 pixels, only a 'cockpit map' is needed, 2 bytes of precision). Simply finding a few extra MBs could drastically improve the quality here.
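As a quick back-of-the-envelope check of those numbers (my own rough figures, assuming a plain 16-bit depth value per texel):

```python
# Rough memory cost of a square shadow map at 2 bytes (16-bit depth) per texel.
BYTES_PER_TEXEL = 2

for size in (1000, 2048, 4096):
    mb = size * size * BYTES_PER_TEXEL / (1024 * 1024)
    print(f"{size}x{size}: {mb:.1f} MB")

# 1000x1000: 1.9 MB   (the ~2 MB figure above)
# 2048x2048: 8.0 MB
# 4096x4096: 32.0 MB  (doubling the resolution quadruples the memory)
```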

From the computational point of view, you have to calculate this larger, more precise map, and then apply this larger, more precise map to the scene. I don't think GT's graphics engine has a lot of resources left for this.

In other words, to me it seems memory is not the bottleneck in drawing higher-quality shadows; the CPU/GPU is.

PS: (I know, semantics...) But a shadow map is NOT a texture. A shadow map is a block of memory containing depth values. A texture contains visual data, colors...
 
That was a big explanation, but I don't see how it proves the jagged shadows are caused by a lack of memory.

Improving the shadows would of course mean using a bigger shadow map (higher res, more precision), and hence some more memory would need to be allocated. But the extra memory would be nothing compared to the computational payload you add.

I think you can have a pretty good shadow map at about 2 MB (1000x1000 pixels, only a 'cockpit map' is needed, 2 bytes of precision). Simply finding a few extra MBs could drastically improve the quality here.

From the computational point of view, you have to calculate this larger, more precise map, and then apply this larger, more precise map to the scene. I don't think GT's graphics engine has a lot of resources left for this.

In other words, to me it seems memory is not the bottleneck in drawing higher-quality shadows; the CPU/GPU is.

PS: (I know, semantics...) But a shadow map is NOT a texture. A shadow map is a block of memory containing depth values. A texture contains visual data, colors...

I wasn't alluding to semantics. A shadow map is a texture.
I don't know quite how else to explain this to you, if my explanation wasn't enough.
I don't mean to sound condescending or patronising, but it seems as though you don't know quite as much as you think you do on this matter.
Firstly, a texture does not simply contain visual data.
A texture can be used for an entire plethora of purposes. You can create a bump map or a normal map (I am sure you have heard of these); bump maps and normal maps give no colour information to the model whatsoever. They only imbue the model with topological information that the render engine uses in its calculations of light and shadow. How do you account for this in your idea of a texture?
There exist specular maps, which are textures, that again contain no colour information but are completely greyscale and are used to indicate the specular intensity of various areas of the model.
I could go on but the list is long and I don't have time.
I can point you here, which talks about a shadow map being redrawn every frame: http://www.technology.scee.net/files/presentations/acgirussia/Case_Studies_ACGI_09.pdf

My problem with what you said is not whether GT5 is bottlenecked by the CPU or Memory.
The problem is that a Shadow map IS a texture, albeit one created by the rendering engine, it is a texture nonetheless.
That "block of memory containing depth values" has a name: it is called a z-buffer, or a depth map, depending on how the engine handles depth calculation.
And much to your dismay, that too is a texture.
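If it helps, here is a tiny made-up illustration of that point: a colour texture, a greyscale specular map and a depth map are all stored the same way; only the shader that samples them decides what the numbers mean (the names and sizes below are invented for the example):

```python
# Made-up example: three "textures" with the same kind of storage, different meaning.
import array

SIZE = 4  # tiny 4x4 maps, just for illustration

albedo   = array.array('B', [200, 80, 40] * (SIZE * SIZE))  # RGB colour data
specular = array.array('B', [128] * (SIZE * SIZE))           # greyscale intensity, no colour
depth    = array.array('H', [65535] * (SIZE * SIZE))         # 16-bit depth values (a z/shadow map)

# All three are flat blocks of numbers in memory; the engine interprets one as
# colour, one as shininess and one as distance from the light.
```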
Also, what you probably don't know is that developers have access to all these maps from their engine. They can be rendered directly to a file so that during development they can see how their shadows are doing.
 
If the rumors about the PS4 are right, there will be no real competition in next-gen graphics if the same GPU technology is not used in the MS console.

What rumors are you talking about? Care to explain, I'm kinda lost here.
 
I wasn't alluding to semantics. A shadow map is a texture....

Dude, I think you're missing his point: that the extra memory won't really help with the shadows.

Since to get better shadows you need more calculations and better filtering... which would be on the CPU/GPU side of things, not memory. Which is what he said in the first place.
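For what it's worth, "better filtering" here usually means something like percentage-closer filtering: instead of one depth comparison per pixel you take several around it and average them, which softens the stair-stepping at the cost of extra GPU work rather than much extra memory. A rough sketch of the idea (hypothetical Python, nothing to do with GT5's actual shaders):

```python
def pcf_shadow(shadow_map, x, y, pixel_depth, bias=0.005, radius=1):
    """Average several nearby depth tests to soften the shadow edge (PCF)."""
    height, width = len(shadow_map), len(shadow_map[0])
    shadowed, samples = 0, 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            sx = min(max(x + dx, 0), width - 1)
            sy = min(max(y + dy, 0), height - 1)
            if pixel_depth - bias > shadow_map[sy][sx]:
                shadowed += 1
            samples += 1
    # 0.0 = fully lit, 1.0 = fully in shadow; values in between give soft edges.
    return shadowed / samples
```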
 
What rumors are you talking about? Care to explain, I'm kinda lost here.

Rumor says the PS4 GPU will have the same architecture as the Dreamcast's, which reduces GPU load quite well. Essentially, the GPU only has to calculate things that you see on screen. No doubt there will be shared internal memory like the 360 has now.
There have not been any CPU rumors yet, but it won't be a Cell, that is for sure (the Cell program was shut down by Toshiba and IBM). Cell is just a bad choice of CPU for a gaming system. Cell will likely continue to have a nice market in physics and chemistry modelling, etc.
 
Dude, I think you're missing his point: that the extra memory won't really help with the shadows.

Since to get better shadows you need more calculations and better filtering... which would be on the CPU/GPU side of things, not memory. Which is what he said in the first place.

What he says!
+
You may call shadow maps textures, but I don't.

EDIT:
Found this on Wikipedia about shadow maps: "This depth map is often stored as a texture in graphics memory."
Let's say this explains the confusion :)
 
Rumor says the PS4 GPU will have the same architecture as the Dreamcast's, which reduces GPU load quite well. Essentially, the GPU only has to calculate things that you see on screen. No doubt there will be shared internal memory like the 360 has now.
There have not been any CPU rumors yet, but it won't be a Cell, that is for sure (the Cell program was shut down by Toshiba and IBM). Cell is just a bad choice of CPU for a gaming system. Cell will likely continue to have a nice market in physics and chemistry modelling, etc.

Ohh alright, thanks for explaining it to me 👍
 
Raitziger, all that I heard about the PS4 has nothing to do with what you said, but I don't look for news like this. I hear that the PS4's processor will have an architecture like Cell, but with more cores.
But as I said, I don't look for news about this, so I could be wrong.
 
Hmmm... I see. But where did you get that they will leave the Cell architecture?
I really doubt they will leave it. The developers are getting used to Cell and getting more and more out of it, so a more powerful version of a processor that is already powerful seems like a very wise choice.
 
Oh, the possibilities... Reverse lights, skid marks, dirt on the cars, drivers picking their noses, etc. (Things that don't matter, basically.)

I don't think it will change much. Maybe with more memory at hand it wouldn't get so confused when you get several cars on screen. Stabilising the framerate, I feel, would be the best use of the extra memory. Although, with Polyphony Digital being an in-house developer rather than a third-party developer, they will have been notified of the changes well in advance, so the extra RAM may already have been built into their engine. The time trial demo is not the full engine, of course: it lacks AI cars, weather and damage (and some say reduced graphics as well, which is possible), etc.

Maybe they could improve the anti-aliasing, but anyone thinking there will be absolutely no jaggies is just kidding themselves. There will be imperfections in this game, that is almost certain.
 
To me this sounds more like PD being given a little bit of fresh air to optimize the engine a bit further and make sure it's as smooth as it can be, instead of adding useless stuff. Perfect the current thing, have a good grid on track with weather, etc. :P
 
The time trial demo is not the full engine, of course: it lacks AI cars, weather and damage (and some say reduced graphics as well, which is possible), etc.

I'm 100% certain that the demo is graphically weaker than GT5P. There's no smoke at all, barely any dirt, the trees are just blank, there's no grass at all (all I see is a coat of green), etc. The demo was simply made as graphically simple as possible so that it would be small in size (which explains the 200 MB size).
 
Yes, rofajole, the smoke and dirt really are missing; even GT PSP has more smoke effects than this demo. And in the TGS demo the smoke effects were in and looked pretty good, even better than in GT5P.
But I really hope they put backfire and sparks in the final version; backfire and sparks just add so much to the driving experience (but I'm getting hopeless, since so far not one video has shown backfire or anything like that). *-*
 
Whether or not increased memory will allow for reduced jaggies and better shadows really depends on what walls GT is hitting in the hardware and how it deals with them.

AA, shadows and textures in general are the major video RAM sinks in PC gaming. I know because I have the 320 MB version of the 8800GTS and I'm always hitting the card's RAM limit :P So it may be that GT is hitting a wall in that department, and increased RAM at their disposal may allow for better shadows or AA.
 
Since the demo dropped to 52 frames just from a single additional car on the track (the ghost), I really don't think they need more RAM so much as more computational power. I'm also very worried about how they are going to maintain the promised 60 fps when they are already falling behind in a two-car demo lacking a proper environment and particles.
We know that in GT5P there are moments when you actually see the framerate drop drastically, to levels that take away from the "real driving experience".
Therefore I can't imagine seeing 30-40 vehicles in NASCAR, or actually having a stable 60 fps in full-grid races (16 cars).
As for the shadow jaggies, I reckon they won't be fixed.
 