Rumor: Microsoft cancels E3 post-press-conference roundtable with media.
We'll see if MS has a response to the news before or during E3. So far, the Xbox One rumors are not positive.
"Xbox One being underclocked to deal with overheating issues:"
http://www.gamingvlog.net/2013/06/05/xbox-one-being-underclocked-to-deal-with-overheating-issues/
"TRUTHFACT: MS having eSRAM yield problems on Xbox One, RUMOR: downclocking GPU:"
http://www.neogaf.com/forum/showthread.php?t=576869
How come the console wasn't having trouble during the reveal event, when it was on almost the whole time? Or did the problems not start until they began running games? Hmm..
Hopefully this doesn't mean 5 years of hardware failure...
Edit - Also, what are "yield problems?"
Originally Posted by Piggus:
Underclocking the GPU wouldn't really make sense to me if they're having problems with the eSRAM. I could be wrong, but yields usually have to do with the number of working chips coming off the assembly line and not their thermal performance. If the GPU is getting too hot and that is causing problems for the eSRAM (which seems pretty unlikely to me) then yeah, underclocking could mitigate that a bit. But it really just sounds like the eSRAM is too complex (that's A LOT of transistors) and therefore is more difficult to produce. Sony ran into the same issue on PS3 with CELL and they ended up disabling one of the SPUs since many chips came off the line with at least one non-working SPU.
If MS is indeed having trouble with just the eSRAM then it could definitely lead to shortages. There isn't anything on the APU that they could disable as far as I know that would improve yields.
Think of it this way.
--
Sony's APU is 3 billion transistors. MS's is 5 billion, and more complex because of the added parts.
Because everything is on one die, one ****ed up part means the whole APU is affected by it. And since 25% of the APU's die is dedicated to ESRAM (in my opinion this is not smart at all; with that many transistors MS should have just put in more GPU power outright), the heat issue can be quite real.
And since the ESRAM's bandwidth is directly tied to the GPU clock (lower the GPU clock and you lower the ESRAM bandwidth, and with it the thermal load; see the quick arithmetic below), a downclock seems plausible. But I still don't think MS could be THAT stupid as to miss this at the blueprint stage.
If they did, then this is a colossal **** UP.
Come to think of it, the yield for this chip must be horrible.
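To put rough numbers on that clock/bandwidth link: peak ESRAM bandwidth is just bytes moved per cycle times clock rate. A minimal sketch, assuming the rumored 800 MHz GPU clock and a 128-byte-per-cycle interface (which together give the oft-quoted 102.4 GB/s figure); none of these numbers are confirmed:

```python
# Quick arithmetic on why a GPU downclock drags ESRAM bandwidth down with it.
# 800 MHz and 128 bytes/cycle are the figures floating around at the time;
# treat all of this as illustration, not official spec.

BYTES_PER_CYCLE = 128  # assumed ESRAM interface width in bytes per clock

def esram_bandwidth_gbs(clock_mhz: float) -> float:
    """Peak bandwidth in GB/s: bytes per cycle times cycles per second."""
    return BYTES_PER_CYCLE * clock_mhz * 1e6 / 1e9

for clock_mhz in (800, 750, 700):  # rumored downclock scenarios
    print(f"{clock_mhz} MHz -> {esram_bandwidth_gbs(clock_mhz):5.1f} GB/s")
# 800 MHz -> 102.4 GB/s, 750 MHz -> 96.0 GB/s, 700 MHz -> 89.6 GB/s
```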
Originally Posted by Y2Kev:
I read the OP but I can't find the explanation. Why would an esram issue cause a gpu downgrade?
There are many theories. IMO, the more I think about this one, the more sense it makes, and I generally discount a lot of what gets posted on B3D:
Originally Posted by Brad Grenz:
It's been suggested on Beyond3D that the SRAM array may literally be too large for signals to travel the physical distance in time to be considered valid within a single clock cycle. No one has ever made a pool of SRAM this large before. IBM uses EDRAM for their large CPU caches; maybe that's in part because they were not sure SRAM could scale effectively due to such issues. If correct, that means the normal measures you'd take to improve yields (additional cooling, deactivating defective regions, even increasing production runs) would not be effective. But lowering the clock would give a signal more time to travel through a wire.
This explanation would require one to believe that MS either originally designed the eSRAM pool on a razor's edge, with maximum transistor density in mind, and the design failed for leakage/thermal/yield reasons, or designed it with a smaller process in mind, yields sucked, and they had to shift to a larger process.
If this turns out to be true, the story of this misadventure will be very very interesting.
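For a sense of scale on that wire-delay theory, here's a back-of-the-envelope sketch showing only how much extra settling time per cycle a downclock buys, using the rumored clock figures:

```python
# Lowering the clock stretches the cycle, giving a signal more time to
# cross the SRAM array. Clock figures are rumors, purely for illustration.

def cycle_time_ns(clock_mhz: float) -> float:
    """Length of one clock period in nanoseconds."""
    return 1000.0 / clock_mhz

base = cycle_time_ns(800)
for clock_mhz in (800, 750, 700):
    t = cycle_time_ns(clock_mhz)
    print(f"{clock_mhz} MHz -> {t:.3f} ns/cycle ({t / base - 1:+.0%} vs. 800 MHz)")
# 800 MHz -> 1.250 ns (+0%), 750 MHz -> 1.333 ns (+7%), 700 MHz -> 1.429 ns (+14%)
```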
Basically they create a large circular silicon wafer and put the conducting pathways for the (processor or memory) chip on it. (http://www.geek.com/chips/from-sand-...s-made-832492/ if you want to know how it works)
On one of these wafers they can fit many chips which they then cut out (like baking a pizza and cutting it into identical slices)
There will, however, be defects on the wafer; the chips that sit on the defective areas are either useless or can be sold as lower-end parts by disabling part of the chip
(e.g. if a GPU has 18 compute units, you could ship it with only 16 enabled when the defects land on the others; the PS4 already does this, using only 18 of its 20 CUs so it has two spares for redundancy to improve yields).
Not all chips come out equally good either; some can handle more voltage than others, so those can be clocked higher.
Now, the problem comes when you have really large chips: the larger the chip, the bigger the chance that any given chip has a defect (exponentially so; the toy model after this post puts numbers on it).
If you have 10 defects on your wafer but fit 100 small dies on it, then at most 10, i.e. ten percent, will be throwaways or get salvaged as lower-end parts. But if you only fit 10 big dies on the wafer, it's likely that many of them will catch at least one of those defects.
You could easily have to throw away 30-50 percent or even all of them.
The Xbox One uses an APU, which is a really big die that needs to house the CPU, the GPU AND, in Xbox One's case, also the ESRAM, which takes up a huge number of transistors and therefore physical space on the chip.
They end up with relatively little hardware power yet a huge (5-billion-transistor) die, so they get low yields.
Normally low-end hardware only needs a tiny little die, so yields are good, but for reasons only MS's engineers or suits/beancounters can know, they decided to go for this huge-ass APU with ESRAM.
From my limited knowledge, Sony ended up with a lot more bang for their buck... they put their die space into a bit more GPU power and didn't design their APU around needing ESRAM (since they didn't cheap out on the VRAM, which is a collection of separate chips mounted on the PCB and connected to the APU through a memory bus).
Meanwhile MS seems to have thrown the baby out with the bathwater.
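A toy Poisson yield model (Y = e^(-D*A)) puts numbers on the "exponentially so" claim above. The defect density and die areas here are illustrative guesses, not actual foundry or Xbox One figures:

```python
import math

# Expected yield falls off exponentially with die area under a simple
# Poisson defect model. D and the areas below are invented for illustration.

def poisson_yield(defects_per_cm2: float, die_area_mm2: float) -> float:
    """Expected fraction of dies that come off the wafer defect-free."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

D = 0.5  # assumed defects per cm^2
for area_mm2 in (100, 200, 350):  # small die vs. a large APU-sized die
    print(f"{area_mm2} mm^2 -> {poisson_yield(D, area_mm2):.0%} defect-free dies")
# 100 mm^2 -> 61%, 200 mm^2 -> 37%, 350 mm^2 -> 17%
# Salvage binning (fusing off a defective unit, like PS4 keeping 2 of its
# 20 CUs as spares) rescues some of the defective dies from the scrap bin.
```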
It was only on for about an hour at the event. And that's if it was actually a working console and not just a simulation.
With Xbox One you can game offline for up to 24 hours on your primary console, or one hour if you are logged on to a separate console accessing your library. Offline gaming is not possible after these prescribed times until you re-establish a connection, but you can still watch live TV and enjoy Blu-ray and DVD movies.
Give your games to friends: Xbox One is designed so game publishers can enable you to give your disc-based games to your friends. There are no fees charged as part of these transfers. There are two requirements: you can only give them to people who have been on your friends list for at least 30 days and each game can only be given once.
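Reduced to code, the announced checks might look something like this minimal sketch; it's a guess at the stated policy, not Microsoft's actual implementation, and every name in it is invented:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical encoding of the announced rules; nobody outside MS knows how
# the real DRM is implemented.

CHECKIN_WINDOW_PRIMARY = timedelta(hours=24)  # offline play on your own console
CHECKIN_WINDOW_REMOTE = timedelta(hours=1)    # playing your library elsewhere
MIN_FRIEND_AGE = timedelta(days=30)           # gifting eligibility

def can_play_offline(last_checkin: datetime, on_primary_console: bool) -> bool:
    """True while the console is still inside its offline grace window."""
    window = CHECKIN_WINDOW_PRIMARY if on_primary_console else CHECKIN_WINDOW_REMOTE
    return datetime.now(timezone.utc) - last_checkin <= window

def can_gift_game(friend_since: datetime, already_gifted: bool) -> bool:
    """Both announced restrictions: 30-day friendship, one gift per game."""
    old_enough = datetime.now(timezone.utc) - friend_since >= MIN_FRIEND_AGE
    return old_enough and not already_gifted
```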
Usually those post-conference roundtables are where they spill all the gritty details on how the systems really work. If this is true, it could be MS trying to avoid revealing controversial plans widely considered anti-consumer, or facing tough questions about DRM, overheating problems, privacy concerns, and other things they'd rather not address right away.
Microsoft at its best. What are they trying to achieve with all those restrictions?
Setting themselves up for a potential DDoS attack. Imagine if that happened on launch day...
Talk about being a victim of bad timing: they put this BS up just as the Washington Post published an article saying the NSA is data-mining MS servers.
"Oh, Microsoft won't spy on you, but the NSA will!"
http://www.washingtonpost.com/inves...0c0da8-cebf-11e2-8845-d970ccb04497_story.html
Is the XBone still a videogame console? Or is it a DVR and DVD/Blu-ray player that can also function as a client for an online gaming network?