ATI
ATI developer relations manager Richard Huddy waxed technical about Microsoft's upcoming Xbox 360 and ATI's role in the machine. The 360 will be using ATI's Xenos graphics processor, and Huddy's job is to chat with potential developers and then help them develop the tools that best use Xenos' capabilities.
With regard to the console's architecture, Huddy says, "It's way better than I would have expected at this point in the history of 3D graphics." He sees the unified pipeline, rather than segregated pixel and vertex engines, giving the 360 a huge advantage in accessible processing power.
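The load-balancing advantage Huddy describes can be sketched with a toy model (all numbers are illustrative, not real Xenos or RSX figures): dedicated vertex and pixel units sit idle whenever the workload is lopsided, while a unified pool of the same size can be assigned to whatever work is pending.

```python
# Toy model: why a unified shader pipeline can expose more usable
# processing power than a fixed vertex/pixel split.
# Unit counts and workloads below are invented for illustration.

def split_throughput(vertex_work, pixel_work, vertex_units, pixel_units):
    """Work completed per cycle when units are dedicated to one task."""
    return min(vertex_work, vertex_units) + min(pixel_work, pixel_units)

def unified_throughput(vertex_work, pixel_work, total_units):
    """Unified units can be assigned to whichever work is pending."""
    return min(vertex_work + pixel_work, total_units)

# A pixel-heavy frame: 4 units' worth of vertex work, 20 of pixel work.
print(split_throughput(4, 20, 8, 16))   # dedicated 8+16 split: 4 vertex units idle
print(unified_throughput(4, 20, 24))    # same 24 units, unified: none idle
```

With the same 24 units, the split design finishes 20 units of work per cycle while the unified design finishes 24, which is the "accessible processing power" argument in miniature.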
Huddy goes on to shed some light on backward compatibility. Each Xbox game is written with specific Xbox hardware in mind, and the 360's move to PowerPCs and ATI graphics doesn't jibe with the Xbox's Intel chips and Nvidia graphics processors. To add to the difficulty, the 360 wasn't designed for backward compatibility early on in development.
To solve this, Microsoft has implemented the use of emulator programs that will let the Xbox 360 play Xbox games. According to Huddy, "emulating the CPU isn't really a difficult task. ...the real bottlenecks in the emulation are GPU calls--calls made specifically by games to the Nvidia hardware in a certain way. General GPU instructions are easy to convert: an instruction to draw a triangle in a certain way will be pretty generic. However, it's the odd cases--the proprietary routines--that will cause hassle." Once complete, the Xbox emulators could come preloaded on the unit's hard drive or be downloadable via Xbox Live.
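The translation problem Huddy describes can be sketched as a call-mapping table: generic GPU calls convert one-to-one, while each proprietary routine needs its own hand-written special case. All call names below are invented for illustration; they are not real Nvidia or ATI APIs.

```python
# Hypothetical sketch of GPU-call translation in an Xbox emulator.
# Every identifier here is made up to illustrate Huddy's point.

GENERIC_CALLS = {
    "draw_triangle": "ati_draw_primitive",   # easy 1:1 conversions
    "set_texture":   "ati_bind_texture",
}

SPECIAL_CASES = {
    # Each proprietary path needs its own hand-tuned translation --
    # these are the "odd cases" that cause the hassle.
    "nv_shadow_buffer": lambda: ["ati_depth_copy", "ati_compare_pass"],
}

def translate(call):
    if call in GENERIC_CALLS:
        return [GENERIC_CALLS[call]]
    if call in SPECIAL_CASES:
        return SPECIAL_CASES[call]()
    raise NotImplementedError(f"no translation for {call}")

print(translate("draw_triangle"))
print(translate("nv_shadow_buffer"))
```

The emulation effort is dominated by how many entries end up in the second table rather than the first.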
Huddy also dispels the notion that the PlayStation 3's higher graphics clock speed (550MHz versus the 360's 500MHz) means the Sony console will outperform the Xbox 360. He believes it is ATI's unified pipeline that will make the biggest difference between the Xbox 360 and the PS3. ATI archrival Nvidia, which is providing the RSX graphics processor for the PS3, has chosen not to go the route of a unified pipeline.
"This time around, [Nvidia doesn't] have the architecture, and we do. So they have to knock it and say it isn't worthwhile. But in the future, they'll market themselves out of this corner, claiming that they've cracked how to do it best. But RSX isn't unified, and this is why I think PS3 will almost certainly be slower and less powerful."
Sony
Impress PC Watch: Will the PS3's backward compatibility with the PlayStation and PlayStation 2 be done through hardware?
Ken Kutaragi: It will be done through a combination of hardware and software. We can do it with software alone, but it's important to make it as close to perfect as possible. Third-party developers sometimes do things that are unimaginable. For example, there are cases where their games run, but not according to the console's specifications. There are times when games pass our tests, but are written in ways that make us say, "What in the world is this code?!" We need to support backward compatibility for those kinds of games as well, so trying to create compatibility with software alone is difficult. Some things will be required of the hardware. However, with the power of [a machine like] the PS3, some parts can be handled by hardware and some parts by software.
IPW: What about the endian (byte order) when emulating CPU codes with software?
KK: The Cell is bi-endian (has the ability to switch between usage of big endian and little endian ordering), so there are no problems.
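The byte-order issue Kutaragi dismisses is easy to see concretely: the same 32-bit value is laid out in opposite byte orders under big-endian and little-endian conventions, so a CPU fixed to one order would have to swap bytes in software when emulating code written for the other, while a bi-endian CPU can simply switch modes.

```python
import struct

# The same 32-bit integer serialized in both byte orders.
value = 0x12345678
big    = struct.pack(">I", value)   # big endian:    bytes 12 34 56 78
little = struct.pack("<I", value)   # little endian: bytes 78 56 34 12

print(big.hex())     # "12345678"
print(little.hex())  # "78563412"
```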
IPW: The Xbox 360's backward compatibility will be done by software, since [there is] no other choice, as they don't manufacture their own chips...
KK: The current Xbox will become antiquated once the new machine comes out this November. When that happens, the Xbox will be killing itself. The only way to avoid that is to support 100 percent compatibility from its [Xbox 360's] launch date, but Microsoft won't be able to commit to that. It's technically difficult.
IPW: The most surprising thing about the PS3's architecture is that its graphics are not processed by the Cell. Why didn't you make a Cell-based GPU?
KK: The Cell's seven Synergistic Processor Elements (SPE) can be used for graphics. In fact, some of the demos at E3 were running without a graphics processor, with all the renderings done with just the Cell. However, that kind of usage is a real waste. There are a lot of other things that should be done with the Cell. One of our ideas was to equip two Cell chips and to use one as a GPU, but we concluded that there were differences between the Cell to be used as a computer chip and as a shader, since a shader should be graphics-specific. The Cell has an architecture where it can do anything, although its SPE can be used to handle things such as displacement mapping. Prior to PS3, real-time rendered 3D graphics might have looked real, but they weren't actually calculated in a fully 3D environment. But that was OK for screen resolutions up until now. Even as of the current time, most of the games for the Xbox 360 use that kind of 3D. However, we want to realize fully calculated 3D graphics in fully 3D environments. In order to do that, we need to share the data between the CPU and GPU as much as possible. That's why we adopted this architecture. We want to make all the floating-point calculations including their rounded numbers the same, and we've been able to make it almost identical. So as a result, the CPU and GPU can use their calculated figures bidirectionally.
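Kutaragi's point about matching floating-point rounding can be illustrated with a small Python sketch (the values are arbitrary): if one processor computes in single precision, as GPU shaders commonly do, and the other in double precision, the "same" sum produces different results, so shared figures diverge unless rounding behavior is made identical on both sides.

```python
import struct

def to_f32(x):
    """Round a Python float to the nearest IEEE-754 single-precision value."""
    return struct.unpack("f", struct.pack("f", x))[0]

a, b = 0.1, 0.2
full   = a + b                          # double-precision sum
narrow = to_f32(to_f32(a) + to_f32(b))  # same sum carried out in single precision

# The two results differ in their low bits -- enough to break
# bidirectional sharing of calculated figures between CPU and GPU.
print(full, narrow, full == narrow)
```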
IPW: We were predicting that eDRAM was going to be used for the graphics memory, but after hearing that the PS3 will support the use of two HDTVs, we understood why it wasn't being used.
KK: Fundamentally, the GPU can run without graphics memory since it can use Redwood (the high-speed interface between Cell and the RSX GPU) and YDRAM (the code name for XDR DRAM). YDRAM is unified memory. However, there's still the question of whether the [bandwidth and cycle time] should be wasted by accessing memory that's located far away when processing the graphics or using the shader. And there's also no reason to use up the Cell's memory bandwidth for normal graphics processes. The shader does a lot of calculations of its own, so it will require its own memory. A lot of VRAM will especially be required to control two HDTV screens at full resolution (1920x1080 pixels). For that, eDRAM is no good. eDRAM was good for the PS2, but for two HDTV screens, it's not enough. If we tried to fit enough eDRAM [to support two HDTV screens] onto a 200-to-300-square-millimeter chip, there wouldn't be enough room for the logic, and we'd have had to cut down on the number of shaders. It's better to use the logic in full, and to add on a lot of shaders.
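Kutaragi's sizing argument checks out with back-of-the-envelope arithmetic. The buffer count below is an illustrative assumption (front, back, and depth buffers), not a published PS3 spec, but even a conservative estimate dwarfs a PS2-class eDRAM budget (the PS2's Graphics Synthesizer had 4 MB).

```python
# Rough framebuffer memory needed for two full-HD screens.
# Assumes 32-bit pixels and three buffers per screen (illustrative).

width, height = 1920, 1080
bytes_per_pixel = 4        # 32-bit RGBA
screens = 2
buffers = 3                # front + back + depth (assumption)

total = width * height * bytes_per_pixel * screens * buffers
print(total, "bytes")
print(round(total / 2**20, 1), "MiB")   # roughly 47.5 MiB
```

Tens of megabytes of eDRAM on a single die would indeed crowd out shader logic at 2005-era process sizes.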
IPW: First of all, why did you select Nvidia as your GPU vendor?
KK: Up until now, we've worked with Toshiba [for] our computer entertainment graphics. But this time, we've teamed with Nvidia, since we're making an actual computer. Nvidia has been thoroughly pursuing PC graphics, and with their programmable shader, they're even trying to do what Intel's processors have been doing. Nvidia keeps pursuing processor capabilities and functions because [Nvidia chief scientist] David Kirk and other developers come from all areas of the computer industry. They sometimes overdo things, but their corporate culture is very similar to ours. Sony and Nvidia have agreed that our goal will be to pursue [development of] a programmable processor as far as we can. I get a lot of opportunities to talk to Nvidia CEO Jen-Hsun [Huang] and David, and we talk about making the ideal GPU. When we say "ideal," we mean a processor that goes beyond any currently existing processor. Nvidia keeps going in that direction, and in that sense, they share our vision. We share the same road map as well, as they are actually influenced by our [hardware] architecture. We know each other's spirits and we want to do the same thing, so that's why [Sony] teamed with Nvidia. The other reason is that consumers are starting to use fixed-pixel displays, such as LCD screens. When fixed-pixel devices become the default, it will be the age when TVs and PCs merge, so we want to support everything perfectly. Aside from backward compatibility, we also want to support everything from legacy graphics to the latest shader. We want to do resolutions higher than WSXGA (1680x1050 pixels). In those kinds of cases, it's better to bring everything from Nvidia rather than for us to create [a build] from scratch.
IPW: Microsoft decided to use a unified-shader GPU by ATI for its Xbox 360. Isn't unified shader more cutting edge when it comes to programming?
KK: The vertex shader and pixel shader are unified in ATI's architecture, and it looks good at first glance, but I think it will have some difficulties. For example, some question where the results from the vertex processing will be placed, and how they will be sent to the shader for pixel processing. If one point gets clogged, everything is going to get stalled. Reality is different from what's painted on canvas. If we're taking a realistic look at efficiency, I think Nvidia's approach is superior.
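Kutaragi's clogging concern can be modeled as a two-stage pipeline sharing a bounded buffer (all rates and sizes below are invented for illustration): when the downstream stage can't drain the buffer fast enough, the upstream stage is forced to stall, which is the scenario he is warning about.

```python
from collections import deque

# Toy two-stage pipeline: vertex results pass through a shared,
# bounded buffer to pixel processing. Numbers are illustrative.

def run(cycles, buffer_size, vertex_rate, pixel_rate):
    buf, done, stalls = deque(), 0, 0
    for _ in range(cycles):
        # Vertex stage: produce results unless the shared buffer is full.
        if len(buf) + vertex_rate <= buffer_size:
            buf.extend([1] * vertex_rate)
        else:
            stalls += 1              # clogged: upstream work stalls
        # Pixel stage: consume what it can from the buffer.
        for _ in range(min(pixel_rate, len(buf))):
            buf.popleft()
            done += 1
    return done, stalls

# Pixel stage slower than vertex stage -> frequent upstream stalls.
print(run(100, buffer_size=8, vertex_rate=4, pixel_rate=2))
```

Whether this is a real problem in practice depends on buffer sizing and scheduling, which is exactly what the two camps disagreed about.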
The big comparison
KK: The vertex shader and pixel shader are unified in ATI's architecture, and it looks good at first glance, but I think it will have some difficulties. For example, some question where the results from the vertex processing will be placed, and how they will be sent to the shader for pixel processing. If one point gets clogged, everything is going to get stalled. Reality is different from what's painted on canvas. If we're taking a realistic look at efficiency, I think Nvidia's approach is superior.
Huddy: "This time around, [Nvidia doesn't] have the architecture, and we do. So they have to knock it and say it isn't worthwhile. But in the future, they'll market themselves out of this corner, claiming that they've cracked how to do it best. But RSX isn't unified, and this is why I think PS3 will almost certainly be slower and less powerful."
Now, while both are speculating, Kutaragi is a lot more professional and logical in his appeal. Instead of sounding professional, ATI sounded like kids: "The PS3 doesn't have what we have, so it's going to be weaker and slower."