Coprocessors are also part of the architecture, and they will differ between Xbox and PS.
What "coprocessors," and what real world situation do you think will occur that will cause them to eliminate performance differences caused by a more powerful GPU and CPU? What "bottlenecks" is the PS5 going to overcome because of these "coprocessors" that the XBox will not be able to, assuming the XBox has the bottlenecks to begin with? Because the biggest bottleneck for PC hardware that has become much more prevalent since the PS4/Xbone came out is boost frequency and the subsequent performance losses from heat, and that is one that Sony seems more concerned about than Microsoft is.
The only "coprocessors" I've seen mentioned for the PS5 so far are integrated into the SSD's I/O controller, which does explain the higher performance of it compared to the Xbox SeX's presumably more "off the shelf" integration; but if I built a game console with a 3800x and a 2080 hooked up to a SATA-6 SSD it
is going to perform better than a 3700/2070 Super with an M.2. The differences aren't going to be big, but the the SSD speed isn't going to do much of anything to offset them regardless.
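To put rough numbers on that load-time gap, here's a purely illustrative sketch; the ~550 MB/s and ~3,500 MB/s throughputs are ballpark assumptions for a SATA-6 SSD and a PCIe 3.0 M.2 NVMe drive, not measured console specs:

```python
# Toy load-time comparison. Throughputs are illustrative assumptions,
# not measured figures for any console or specific drive.
SATA6_MBPS = 550    # assumed: typical SATA 6 Gb/s SSD sequential read
NVME_MBPS = 3500    # assumed: typical PCIe 3.0 M.2 NVMe sequential read

def load_time_seconds(asset_size_gb: float, throughput_mbps: float) -> float:
    """Seconds to stream a given amount of asset data at a fixed throughput."""
    return asset_size_gb * 1024 / throughput_mbps

for size_gb in (5, 20):
    sata = load_time_seconds(size_gb, SATA6_MBPS)
    nvme = load_time_seconds(size_gb, NVME_MBPS)
    print(f"{size_gb} GB of assets: SATA ~{sata:.1f}s vs NVMe ~{nvme:.1f}s")
```

Even at a ~6x throughput difference, the drive mainly changes how fast assets stream in; it doesn't add frames per second, which is why it can't offset a faster GPU/CPU.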
So they don't use the exact same architecture. They do use the same CPU/GPU platform, but there is more to it than that.
Not terribly much more, in an industry that has been using x86 boxes for 7 years now; so much so that you can generally make a pretty good guess at which design AMD started from when designing their custom solutions for the console makers (Cerny has even said there's a chance there will be a PC equivalent to the PS5's GPU on the market when the PS5 launches). Last console generation, the most "exotic" thing either of them did was Microsoft sticking some eSRAM on the GPU for the XBone, but the PS4 still walked it because it was still basically the same hardware with better specs.
On a side note, based on what's here:
The PS5 has fewer CUs, but a higher frequency.
The Xbox has more CUs, but a lower frequency.
Makes me wonder if, under max load, both consoles would end up about the same.
Would the Xbox have to lower its TF due to heat, while the PS5's increases because it has better thermal headroom?
Traditionally, it has been the other way around. That is, the higher-clocked GPU of the PS5 would be more sensitive to heat than the slower-clocked but "higher"-specced one in the XBox. AMD GPUs have been space heaters for a while now, so hopefully Big Navi has addressed some of that and it won't be that big of a deal.
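For what it's worth, the paper math behind that quoted post is easy to check. A quick sketch using the publicly stated figures (36 CUs at up to 2.23 GHz for the PS5, 52 CUs at 1.825 GHz for the Series X; 64 shaders per CU and 2 FLOPs per cycle via FMA are the standard RDNA factors):

```python
# Peak FP32 throughput: CUs x 64 shaders/CU x 2 FLOPs/cycle (FMA) x clock (GHz),
# which yields GFLOPS; divide by 1000 for TFLOPS.
def peak_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

print(f"PS5:      {peak_tflops(36, 2.23):.2f} TF at max boost")  # ~10.28 TF
print(f"Series X: {peak_tflops(52, 1.825):.2f} TF sustained")    # ~12.15 TF
```

So even when the PS5 holds its maximum boost, the Series X keeps a paper advantage of roughly 12.15 vs 10.28 TF; the open question is how often each machine actually holds those clocks under load.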
Keep in mind that even though Microsoft didn't say the system has variable clock rates like Sony did, that doesn't necessarily mean it will run full clocks all the time even when there's no load. I'm speculating here, but I'd guess what Microsoft is doing (since they didn't outright say the clocks would vary) is specifying the clock rates that both chips will hold, whereas Sony is relying on boost frequency for the performance and specs they gave. Basically, the XBox will run hotter but hold onto its clocks (and lower them when there's no load), whereas the PS5 will boost to a certain speed but pare back more as temperatures increase. So long as the PS5's cooling solution is adequate for the design, there's not really a definite advantage to either approach. For Zen 2 in particular, there's been a lot of argument since it debuted about whether PBO provides more performance than a traditional manual overclock.
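To make the contrast between the two strategies concrete, here's a toy model of the behavior I'm describing; every number in it (throttle window, clock floor) is invented for illustration, since neither console's real power/thermal curves are public:

```python
# Toy model of the two clock strategies described above. All numbers are
# invented for illustration; they are not real console specifications.
def fixed_clock(temp_c: float) -> float:
    """Series X-style (as speculated above): hold the rated clock regardless
    of temperature, up to whatever hard thermal limit forces a shutdown."""
    return 1.825  # GHz

def variable_boost(temp_c: float) -> float:
    """PS5-style (as speculated above): boost to a ceiling, then pare back
    as temperature rises toward the end of an assumed throttle window."""
    cap, floor = 2.23, 2.0                      # assumed ceiling / worst-case floor (GHz)
    throttle_start, throttle_end = 70.0, 90.0   # assumed throttle window (deg C)
    if temp_c <= throttle_start:
        return cap
    frac = min(1.0, (temp_c - throttle_start) / (throttle_end - throttle_start))
    return cap - frac * (cap - floor)

for temp in (60, 75, 85, 95):
    print(f"{temp}C: fixed {fixed_clock(temp):.3f} GHz, "
          f"variable {variable_boost(temp):.3f} GHz")
```

Under this toy model, the fixed scheme wins when cooling keeps up and the variable scheme degrades gracefully as things heat up, which is exactly the trade-off at the heart of the PBO-versus-manual-overclock argument.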