Yes, there are methods used to reduce workload based on the predictable continuity in some games, especially in racing games where you can only go one way or another along a track.
The camera movement for stereoscopic rendering is quite small, though, and there's not much difference in what is actually visible in each view; the switching is also not unexpected, it's perfectly predictable.
Environment maps and shadow maps are not camera dependent, and while they may not always be updated at the refresh rate, the calculations for applying them (which are camera dependent) should be done every frame, so no difference there between mono and stereo.
Perhaps my examples were bad as to what things wouldn't be recalculated, but I do have to wonder whether, predictable or not, the change of camera (small as it is, it's probably still greater than the average change of camera location between frames of normal action) doesn't mean that things which could otherwise be left in memory end up getting erased and recalculated a notable number of times.
I mean, just for things like the hood and dashboard: when only a single view is rendered, you can pretty much guarantee very little change in a lot of it. But with two views, you have to keep bouncing back and forth, calculating everything for each viewpoint. And as predictable as that jumping back and forth is, you still either need double the space to store everything (one set per view, so the data persists in case you want to just reuse what was figured for the last frame rather than recalculating), or you need to dump all the data for each view somewhere to make room for the next view and then fetch it back again.
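To put a rough picture on the "double the space" option: it amounts to keeping one persistent buffer per eye. This is only a toy sketch, and the buffer size and structure are invented for illustration; the point is just the storage cost.

```python
# Toy sketch of the "double the space" option: one persistent buffer per
# eye. The size here is invented; the point is just the storage cost.
GROUPS = 10_000  # hypothetical number of cached pixel groups per view

def make_buffer():
    # One view's worth of stored visibility/lighting results.
    return [None] * GROUPS

mono_buffers = {"center": make_buffer()}
stereo_buffers = {"left": make_buffer(), "right": make_buffer()}

mono_entries = sum(len(b) for b in mono_buffers.values())
stereo_entries = sum(len(b) for b in stereo_buffers.values())
print(stereo_entries // mono_entries)  # 2: double the space, as described
```

Keeping both buffers resident is what lets each eye reuse its own previous frame instead of recalculating.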
Another (probably bad, but hopefully it illustrates what I'm getting at) example: just the dash. Let's say certain parts are not visible due to the point of view, but it's close enough to the camera that each eye ends up with different things not being visible (be it because the wheel is in the way, a hand is in the way, the curvature of the dash blocks something, etc.).
In a single-view world, you do all your calculations for your current view, figure out what can't be seen and skip calculating that stuff, leave all those values in memory (obviously minus the things you couldn't see and thus didn't calculate and render), and on the next pass you can decide what needs to be recalculated; for everything else, you just use the value left in memory.
Now with two views, chances are few if any of the calculations from the first view are going to be suitable for the second view (especially on close things like the dashboard). So you have to dump everything out of memory from the first view to make room for the calculated second view. Unless you were using no more than half the memory to store these values in single-view mode, you will not be able to store all the values for the second view without displacing your stored values from the first view.
This means when you come back to render the first view on the next frame, you don't have the benefit of using what's left in memory from your last pass, as it's not there anymore.
I suppose you could check which things from the first view are actually still usable in the second view if you really wanted, and try to optimize that way, but then it seems like it's getting pretty complicated and you are essentially optimizing twice for two different systems.
So Frame 1, left eye: you figure out pixel group ABC is not visible because it's blocked by the wheel and hand, so you don't render those. Then you calculate lighting and reflections and save those in area DEF.
Now, if you only have to render one view: next frame you just check whether the steering angle has changed notably, and if not, you don't bother re-checking what can't be seen; you know the hand and wheel are still blocking the same pixels. Again, leave those out of the rendering. Also, if the angle of the car in game is the same as last frame, the POV is the same, and no new shadows are being cast, you just use the exact same lighting and reflections from the last frame.
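That single-view reuse logic could look something like this (the state fields, the threshold, and the stand-in occlusion and lighting functions are all invented for illustration; no real engine works exactly like this):

```python
# Hypothetical sketch of the frame-to-frame reuse check described above.
# Field names, the threshold, and the stand-in functions are made up.

def occlusion_test(state):
    # Stand-in: pretend pixel groups A, B, C are hidden by the wheel/hand,
    # leaving D, E, F visible.
    return {"D", "E", "F"}

def compute_lighting(state, visible):
    # Stand-in for the expensive lighting/reflection pass.
    return {group: "lit" for group in visible}

def render_frame(state, prev_state, cached):
    """Return (visible_groups, lighting), reusing cached results when valid."""
    reuse_visibility = (
        prev_state is not None
        and abs(state["steering_angle"] - prev_state["steering_angle"]) < 1.0
    )
    reuse_lighting = (
        reuse_visibility
        and state["car_heading"] == prev_state["car_heading"]
        and not state["new_shadows"]
    )
    if reuse_visibility:
        visible = cached["visible"]      # wheel and hand still block the same pixels
    else:
        visible = occlusion_test(state)  # recompute what can't be seen
    if reuse_lighting:
        lighting = cached["lighting"]    # exact same lighting/reflections as last frame
    else:
        lighting = compute_lighting(state, visible)
    return visible, lighting

frame1 = {"steering_angle": 0.0, "car_heading": 90.0, "new_shadows": False}
vis1, light1 = render_frame(frame1, None, {})
# Next frame, nothing changed: both results come straight from the cache.
frame2 = dict(frame1)
vis2, light2 = render_frame(frame2, frame1, {"visible": vis1, "lighting": light1})
print(light2 is light1)  # True: lighting reused, not recalculated
```

The whole scheme only pays off while `cached` actually still holds last frame's results, which is exactly what the second eye breaks.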
But when you have to do Frame 1, right eye, all of a sudden you have to check what can't be seen again, and probably have to overwrite the info from Frame 1, left eye (again, unless you just have a lot of memory left over, which seems unlikely in any of today's games). You have to recalculate all reflections for that eye and, again, overwrite what was in memory for the left eye.
Now Frame 2, left eye comes along... you have nothing left from Frame 1, left eye, even if 90% of the rendered scene is exactly the same. No optimization is possible now.
Basically, unless you already calculate everything from scratch every pass and never reuse values from the previous pass to optimize away work, I don't see how rendering from two views can avoid being more taxing than rendering the same view twice as often...
See what I am saying?