
[Digital Foundry] PlayStation 5 Pro Hands-On: 11 Games Tested, PSSR and RT Upgrades Revealed, Developers Interviewed!

winjer

Gold Member
It can request the information required for a new frame sooner, but that doesn't mean the CPU is ready to deliver that information. The question was whether the rate at which the CPU is able to generate the required information increases at lower rendering resolutions.

The number of draw calls does not depend on resolution. It depends on the number of unique objects, their materials and meshes, and on how these are batched.
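A minimal sketch of that point (hypothetical scene data, no real graphics API): the draw-call count falls out of how many unique mesh/material batches the scene contains; the render resolution never enters the count.

```cpp
// Toy model: one draw call per unique (mesh, material) batch.
#include <cstdio>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct Object { std::string mesh; std::string material; };

int countDrawCalls(const std::vector<Object>& scene) {
    std::map<std::pair<std::string, std::string>, int> batches;
    for (const auto& o : scene) batches[{o.mesh, o.material}]++;
    return static_cast<int>(batches.size());
}

int main() {
    std::vector<Object> scene = {
        {"rock", "stone"}, {"rock", "stone"}, {"tree", "bark"},
        {"tree", "leaves"}, {"rock", "mossy"},
    };
    // Same scene, same batching -> same CPU-side draw-call count,
    // whether the GPU rasterises at 720p or 4K.
    for (int height : {720, 1080, 2160})
        std::printf("render height %4d -> %d draw calls\n",
                    height, countDrawCalls(scene));
}
```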
 

FireFly

Member
The number of draw calls does not depend on resolution. It depends on the number of unique objects, their materials and meshes, and on how these are batched.
Yes, in which case the mechanism PaintTinJr is describing does not help to alleviate CPU bottlenecks (where the GPU is waiting on the CPU).
 

PaintTinJr

Member
It can request the information required for a new frame sooner, but that doesn't mean the CPU is ready to deliver that information. The question was whether the rate at which the CPU is able to generate the required information increases at lower rendering resolutions.
The CPU's main core gets blocked less when the GPU has smaller workloads, because the GPU returns control more consistently and more frequently. That gives the CPU more time slices to react and prepare work, puts less strain on caches, and reduces latency per task, providing headroom to generate more, smaller workloads per second, effectively yielding an 'increase at lower rendering resolutions'.
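If I follow the claim, it can be modelled as a toy lock-step loop (all timings hypothetical): the CPU's cost per frame is unchanged, but a shorter GPU frame hands control back to the CPU more times per second.

```cpp
// Toy lock-step frame loop: the main core prepares work, then waits on the GPU.
#include <cstdio>

int main() {
    const double cpuPrepMs = 4.0;              // CPU cost to prepare one frame
    for (double gpuMs : {16.0, 8.0}) {          // e.g. native vs reduced resolution
        double frameMs = cpuPrepMs + gpuMs;     // serialised CPU -> GPU -> CPU ...
        std::printf("gpu %4.1f ms -> frame %4.1f ms -> CPU gets %.0f turns/s\n",
                    gpuMs, frameMs, 1000.0 / frameMs);
    }
}
```

Note that in this model the per-frame CPU cost itself never changes; only the number of turns per second does, which is the point FireFly keeps pressing below.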
 
Last edited:

PaintTinJr

Member
Yes, in which case the mechanism PaintTinJr is describing does not help to alleviate CPU bottlenecks (where the GPU is waiting on the CPU).
The number of workloads a main CPU core on a console can issue in a complex graphics rendering situation - as modern AAA games on modern consoles are - is generally inversely proportional to the size and complexity of the workloads given to the GPU.
 
You can ignore my posts.

Pro (just like PS4/PS5/Xbox S, X) is built from almost retail PC parts. And the Pro can only be compared to the PS5/SX or something more powerful (a PC) to see where it lands in performance and image quality.

And I was interested in the Pro before Sony fucked up pricing in Europe; I like to have the best possible experience even on consoles. Believe it or not, I'm not a PS Pro hater, I just want to bring some logic, because some PS fans are making this thing out to be much more powerful than it really is.

PC is my platform of choice but I still play like 50/50 on PS5 as well thanks to physical versions of games.
This mf thinks he's pcmr batman lmao 🤣
 

FireFly

Member
The CPU's main core gets blocked less when the GPU has smaller workloads, because the GPU returns control more consistently and more frequently. That gives the CPU more time slices to react and prepare work, puts less strain on caches, and reduces latency per task, providing headroom to generate more, smaller workloads per second, effectively yielding an 'increase at lower rendering resolutions'.
It seems to me that this can be true whether or not the capacity of the CPU to generate more draw calls is increased.

My question was specifically whether the CPU workload of generating a draw call is reduced at lower resolutions. If not, then lowering the resolution will not reduce the time the GPU spends waiting for the CPU in "CPU limited" scenarios.
The number of workloads a main CPU core on a console can issue in a complex graphics rendering situation - as modern AAA games on modern consoles are - is generally inversely proportional to the size and complexity of the workloads given to the GPU.
That can be true, and it can also be true that reducing a given GPU workload does not reduce the corresponding CPU workload.
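The disagreement is easier to see with the standard bottleneck model (numbers hypothetical): with CPU and GPU work pipelined, frame time is roughly max(cpu, gpu), so shrinking only the GPU term stops helping once the CPU term dominates.

```cpp
// max(cpu, gpu) model: lowering resolution shrinks only the GPU term.
#include <algorithm>
#include <cstdio>

int main() {
    const double cpuMs = 12.0;                      // per-frame CPU cost (fixed)
    for (double gpuMs : {33.0, 16.0, 8.0, 4.0}) {   // GPU cost shrinking with resolution
        double frameMs = std::max(cpuMs, gpuMs);
        std::printf("cpu %4.1f ms, gpu %4.1f ms -> %5.1f fps\n",
                    cpuMs, gpuMs, 1000.0 / frameMs);
    }
}
```

Once gpuMs falls below cpuMs, the fps plateaus at 1000/cpuMs, which is the "CPU limited" scenario in question.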
 

S0ULZB0URNE

Member
You can ignore my posts.

Pro (just like PS4/PS5/Xbox S, X) is built from almost retail PC parts. And the Pro can only be compared to the PS5/SX or something more powerful (a PC) to see where it lands in performance and image quality.

And I was interested in the Pro before Sony fucked up pricing in Europe; I like to have the best possible experience even on consoles. Believe it or not, I'm not a PS Pro hater, I just want to bring some logic, because some PS fans are making this thing out to be much more powerful than it really is.

PC is my platform of choice but I still play like 50/50 on PS5 as well thanks to physical versions of games.
Hmm..

 

Gaiff

SBI’s Resident Gaslighter
The CPU's main core gets blocked less when the GPU has smaller workloads, because the GPU returns control more consistently and more frequently. That gives the CPU more time slices to react and prepare work, puts less strain on caches, and reduces latency per task, providing headroom to generate more, smaller workloads per second, effectively yielding an 'increase at lower rendering resolutions'.
Well, yes, but doesn’t that describe alleviating a GPU bottleneck by reducing the resolution?

What we’re asking is, if the CPU cannot issue enough draw calls to keep the GPU busy (CPU limitation), how will reducing the resolution of those draws help? Sure, the GPU will complete them faster, but the CPU won’t issue them any faster, no?
It seems to me that this can be true whether or not the capacity of the CPU to generate more draw calls is increased.

My question was specifically whether the CPU workload of generating a draw call is reduced at lower resolutions. If not, then lowering the resolution will not reduce the time the GPU spends waiting for the CPU in "CPU limited" scenarios.

That can be true, and it can also be true that reducing a given GPU workload does not reduce the corresponding CPU workload.
This. I’m probably missing something, but PaintTinJr seems to always describe a scenario where the GPU is the limiting factor and the CPU is waiting on it to finish before issuing more draw calls.
 

RJMacready73

Simps for Amouranth
Almost lost my sweet 80-year-old neighbor over it


My very healthy and very unvaccinated 35-year-old son got it this summer, spent two weeks in the hospital with a few days in the ICU, and was the sickest he has ever been

Sorry to hear, chap. It's no joke having a sick child in hospital, no matter their age. Has he changed his mind re: vaccination?

Anyways, not to derail: the quality difference that's actually made me go "hmmm, I might just purchase this" has been F1. The Pro and base versions are a night and day difference, a proper leap in graphical quality. Pity it's F1 and I have zero interest in that game; I need to see me some better examples
 
Last edited:
Sorry to hear, chap. It's no joke having a sick child in hospital, no matter their age. Has he changed his mind re: vaccination?

Anyways, not to derail: the quality difference that's actually made me go "hmmm, I might just purchase this" has been F1. The Pro and base versions are a night and day difference, a proper leap in graphical quality. Pity it's F1 and I have zero interest in that game; I need to see me some better examples
Nahh, he feels he wants to let his immune system do the heavy lifting, and no matter my beliefs I always let him make his own decisions.

And I feel the more footage we get of the Pro, the more people are going to want one, and I can’t wait until we see new games built to take full advantage of the Pro
 

jroc74

Phone reception is more important to me than human rights
what I saw with my eyes on PS4 Pro was a convincingly native 1800p looking image in Deus Ex.
I saw a convincingly 4k looking image in Rocket League.
both with smoother performance as well.

so the subjective difference for the PS4 Pro, at least at first, was pretty damn dramatic.
1080p to 4k or near 4k subjective image quality. of course over time it seemed like Devs used CBR less and less as the Pro was the minority of users, so the base version got the majority of attention, and the pro version just got a quick and dirty resolution increase in many cases.
I mean...using specific numbers just proves my point tho.
 

Dorfdad

Gold Member
In regards to Dragon's Dogma 2, has the performance been improved with the latest patch or no? I read it has, but not sure I entirely believe it. Only thing stopping me from getting it. Same for FF Rebirth.

I am absolutely skipping the PS5 Pro until there is a price drop. Feels like my base PS5 will be just fine until it dies (knock on wood) and I have no inclination to upgrade at that price point.
Yes, there is a patch and it helps a lot in bigger areas. Still not 60, but 50s on base PS5, which feels better than before.

They did mention a Pro feature as well, so I'm hoping for that locked 60
 

Det

Member
Yon can ignore my posts.

Pro (just like PS4/PS5/Xbox S, X) is build of almost retail PC parts..

“One of the exciting aspects of console hardware design is that we have freedom with regards to what we put in the console," Cerny begins. "Or to put that differently, we’re not trying to build a low-cost PC, and we aren’t bound by any particular standards. So if we have a brainstorm that audio can become much more immersive and dimensional if there’s a dedicated unit that’s capable of complex math, then we can do that. Or if the future feels like high-speed SSDs rather than HDDs, we can put an end-to-end system in the console – everything from the flash dies to the software interfaces that the game creators use – and get 100% adoption.

I like to think that occasionally we’re even showing the way for the larger industry, and that our efforts end up benefiting those gaming on PC as well.

It's the average Steam PC that's a low-powered version of the PS5, not the other way around.
PlayStation is AMD's main client for GPUs; it is the PC that uses what is developed with Sony.
 
Last edited:

Dorfdad

Gold Member
Nahh, he feels he wants to let his immune system do the heavy lifting, and no matter my beliefs I always let him make his own decisions.

And I feel the more footage we get of the Pro, the more people are going to want one, and I can’t wait until we see new games built to take full advantage of the Pro
Sadly, I don’t think that outside of Sony we’re going to see one game built with the Pro in mind. The best bet would be some tacked-on “extras”, which will be nice, but developers are notorious for not putting the effort into niche configurations.

I’m still at a loss as to why they couldn’t just put in a slightly larger CPU of the same variant to give a 30% boost, which would have allowed almost all games to reach 60fps. The 10% bump in the Pro isn’t even going to be felt in most titles.
 

Bojji

Member
“One of the exciting aspects of console hardware design is that we have freedom with regards to what we put in the console," Cerny begins. "Or to put that differently, we’re not trying to build a low-cost PC, and we aren’t bound by any particular standards. So if we have a brainstorm that audio can become much more immersive and dimensional if there’s a dedicated unit that’s capable of complex math, then we can do that. Or if the future feels like high-speed SSDs rather than HDDs, we can put an end-to-end system in the console – everything from the flash dies to the software interfaces that the game creators use – and get 100% adoption.

I like to think that occasionally we’re even showing the way for the larger industry, and that our efforts end up benefiting those gaming on PC as well.

It's the average Steam PC that's a low-powered version of the PS5, not the other way around.
PlayStation is AMD's main client for GPUs; it is the PC that uses what is developed with Sony.

So MS used RDNA 2 that was developed by Sony too?
 

ap_puff

Member

Bojji

Member
I don't even know why you're discussing this with Bojji

They can't even tell the difference between 30fps and 60fps without a frame counter. Lol.


I can tell you for sure that this is not 60fps locked. A framerate between 31 and 59 looks like shit, and without a framerate counter you won't know what it is.

Yeah... "Allergic to spreading bullshit", no wonder the dude's got a stick up his ass; he's sniffing his own farts and getting hives

Creating a thread named "Monster Hunter Wilds Stage Demo - 60 FPS on PS5" without any proof of that is not spreading bullshit at all...
 
Last edited:

Shin-Ra

Junior Member
I really hope you're in the minority here. Having more and better-performing choices across the board is always a good thing and should never be frowned upon. Securing 4K-like image quality at 60fps is the ultimate purpose of the Pro (which is what Cerny was talking about). But there will inevitably be gamers, like myself, who will be curious about what the new hardware can achieve at 30fps from a pure fidelity/VFX standpoint. Don't forget, while Sony discovered ~75% of players preferred performance mode, that leaves 25% who favored highest fidelity at the expense of framerate. 25% is a significant population and more than enough to justify developers continuing to pursue the limits of 30fps visuals in addition to the higher performance modes.
There should always be a maximised 30fps mode and an option to remove the 30fps cap, so that PS6 and beyond can play these games at 60fps+ with VRR.

It’s clear not every developer will (or can) manually update their game years later for the latest hardware so more flexibility needs to be built in from the start.
 

PaintTinJr

Member
It seems to me that this can be true whether or not the capacity of the CPU to generate more draw calls is increased.

My question was specifically whether the CPU workload of generating a draw call is reduced at lower resolutions. If not, then lowering the resolution will not reduce the time the GPU spends waiting for the CPU in "CPU limited" scenarios.

That can be true, and it can also be true that reducing a given GPU workload does not reduce the corresponding CPU workload.
It feels like you are arguing semantics when the result is that, on console - particularly the PS5 Pro, which is the context where concern about being CPU bound was DF's and their acolytes' narrative - the Pro's near identical CPU setup can double locked PS5 game frame-rates with higher fidelity at lower or equal native resolutions. Alleviating the GPU bottleneck increases efficiency and headroom on the Pro's main CPU core, because like all CPU primary cores and their slave GPUs, they need to run in lock-step for key parts of rendering.
 
Last edited:

FireFly

Member
It feels like you are arguing semantics when the result is that, on console - particularly the PS5 Pro, which is the context where concern about being CPU bound was DF's and their acolytes' narrative - the Pro's near identical CPU setup can double locked PS5 game frame-rates with higher fidelity at lower or equal native resolutions. Alleviating the GPU bottleneck increases efficiency and headroom on the Pro's main CPU core, because like all CPU primary cores and their slave GPUs, they need to run in lock-step for key parts of rendering.
Your original claim, as I understand it, was that no finished title made by competent developers that runs at 30 FPS on the base PS5 would be CPU limited on the Pro.

And the mechanism you provided for this was PSSR and its capability to 'dial down' CPU bottlenecking.

But now it seems that what you mean by this is that the frame rate of already GPU limited titles can be doubled on the Pro, as the CPU works more efficiently. That's fine, but leaves open the question of what happens to titles that are 30 FPS (or under 60 locked), because at faster frame rates the GPU would be waiting on the CPU.

If PSSR doesn't help in this case, presumably you would have to say that no competent developer would make such a title? (To be clear, the vast majority of PS5 titles are already capable of running at 60 FPS, so I am not claiming this is a major issue for the Pro.)
 

PaintTinJr

Member
Well, yes, but doesn’t that describe alleviating a GPU bottleneck by reducing the resolution?

What we’re asking is, if the CPU cannot issue enough draw calls to keep the GPU busy (CPU limitation), how will reducing the resolution of those draws help? Sure, the GPU will complete them faster, but the CPU won’t issue them any faster, no?

This. I’m probably missing something, but PaintTinJr seems to always describe a scenario where the GPU is the limiting factor and the CPU is waiting on it to finish before issuing more draw calls.
As a PCMR member, I was sure you would understand why some people drop £1000 or more between motherboard, CPU and memory, and how that setup is still different in single-core workloads from some budget Pentium Gold setup below £200 that is overclocked to the same or a higher frequency.

CPU terminology used to use the terms wimpy and brawny, although convergence has made the former of those terms somewhat redundant, because almost all cores are brawny in today's wimpier CPUs.

So the last part, where you are 'missing something', is that all CPU main cores in all systems wait on the GPU to finish the active task, but the amount of waiting becomes more critical as the CPUs get wimpier, because they don't have the massive caches with the lowest-latency esram/edram, the most efficiently designed northbridge strategies like a ringbus, the best pre-empting, cache prediction, etc., or a memory controller with the most bandwidth and channels to replenish caches - all the things that let the whole hierarchy service any given (sustained) random or volume task in the fewest clock cycles and with the most bandwidth.
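The cache part of that argument is easy to demonstrate in isolation. A rough illustrative microbenchmark (not rigorous; the buffer size is arbitrary, chosen to exceed a typical last-level cache): the same number of loads costs far more when most of them miss cache.

```cpp
// Sequential vs random access over a buffer larger than the last-level cache.
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
    const size_t n = 1 << 24;                  // 16M ints (~64 MiB)
    std::vector<int> data(n, 1);
    std::vector<size_t> idx(n);
    std::iota(idx.begin(), idx.end(), 0);

    auto run = [&](const char* label) {
        auto t0 = std::chrono::steady_clock::now();
        long long sum = 0;
        for (size_t i : idx) sum += data[i];
        double ms = std::chrono::duration<double, std::milli>(
                        std::chrono::steady_clock::now() - t0).count();
        std::printf("%s: %7.1f ms (sum=%lld)\n", label, ms, sum);
    };

    run("sequential");                          // prefetcher-friendly, mostly hits
    std::shuffle(idx.begin(), idx.end(), std::mt19937{42});
    run("random    ");                          // mostly cache misses
}
```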
 

PaintTinJr

Member
Your original claim, as I understand it, was that no finished title made by competent developers that runs at 30 FPS on the base PS5 would be CPU limited on the Pro.

And the mechanism you provided for this was PSSR and its capability to 'dial down' CPU bottlenecking.

But now it seems that what you mean by this is that the frame rate of already GPU limited titles can be doubled on the Pro, as the CPU works more efficiently. That's fine, but leaves open the question of what happens to titles that are 30 FPS (or under 60 locked), because at faster frame rates the GPU would be waiting on the CPU.

If PSSR doesn't help in this case, presumably you would have to say that no competent developer would make such a title? (To be clear, the vast majority of PS5 titles are already capable of running at 60 FPS, so I am not claiming this is a major issue for the Pro.)
I was originally replying to Zathalus' dismissive comment on the Pro's CPU.

And yes, with so much extra headroom on the Pro's GPU - even more when lowering resolution and using PSSR - the base PS5 code would need to be failing to hit a consistent 30fps for a Pro version not to scale to 60fps. But even then, if the CPU usage on a base PS5 was eating its full bandwidth quota, the Pro would still have it covered by having more bandwidth and making more effective use of it - if I'm remembering the leaked specs correctly.
 

FireFly

Member
I was originally replying to Zathalus' dismissive comment on the Pro's CPU.

And yes, with so much extra headroom on the Pro's GPU - even more when lowering resolution and using PSSR - the base PS5 code would need to be failing to hit a consistent 30fps for a Pro version not to scale to 60fps. But even then, if the CPU usage on a base PS5 was eating its full bandwidth quota, the Pro would still have it covered by having more bandwidth and making more effective use of it - if I'm remembering the leaked specs correctly.
Zathalus was talking about games like BG3 where GPU is waiting on the CPU at higher frame rates. What should we expect from the Pro in these cases?
 

Zathalus

Member
Zathalus was talking about games like BG3 where GPU is waiting on the CPU at higher frame rates. What should we expect from the Pro in these cases?
If Dragon's Dogma 2 is anything to go by, not much of a difference. In the example tested with the Pro, the outer city area drops to a low of 53fps; testing on an XSX with the latest patch (and graphics turned down) shows the game hitting the high 40s in the same area. A 3600 goes up to 47fps in a slightly more demanding area (a bit further into the main city hub).

Thankfully, such scenarios are few and far between, and certainly not the norm.
 

PaintTinJr

Member
Zathalus was talking about games like BG3 where GPU is waiting on the CPU at higher frame rates. What should we expect from the Pro in these cases?
Can you point me to the info where that is proven to be the case, and not just assumed - as I haven't been paying any attention to that game or its performance.
 

FireFly

Member
Can you point me to the info where that is proven to be the case, and not just assumed - as I haven't been paying any attention to that game or its performance.
Well, at launch NX Gamer found in his Act 3 benchmark that a 6800 was averaging 40 FPS when equipped with a 5600X at 1440p, with low GPU utilization. That compares with 31 FPS on the PS5 over the same run. Performance increased a further 50% when a 5800X3D was used, albeit with a 3080 instead. Alex has done apples to apples GPU testing in the city and found the same 4090 performed around half as fast on a 3600 system compared with a 12900K one, though both were showing drops below 60.

Subsequent patches have improved CPU performance however.

 
Last edited:

PaintTinJr

Member
Well, at launch NX Gamer found in his Act 3 benchmark that a 6800 was averaging 40 FPS when equipped with a 5600X at 1440p, with low GPU utilization. That compares with 31 FPS on the PS5 over the same run. Performance increased a further 50% when a 5800X3D was used, albeit with a 3080 instead. Alex has done apples to apples GPU testing in the city and found the same 4090 performed around half as fast on a 3600 system compared with a 12900K one, though both were showing drops below 60.

Subsequent patches have improved CPU performance however.


The low GPU utilisation doesn't tell us much, other than that the batches complete quickly.

IIRC the same is true of games that are GPU cache bound, like VR titles that use forward rendering or forward-plus rendering solutions. And it would be the same if someone was using an old rendering API that was draw-call bound to a single CPU core, such as DX10 or OpenGL 3.1 or earlier.

It would be interesting to see if lowering the resolution to 720p increased performance, and how well the game worked using Linux and Proton on identical hardware, because Proton would alleviate a lot of old-API inefficiency in the remapping to Vulkan, if that was an issue.
 
Last edited:

Bojji

Member
The low GPU utilisation doesn't tell us much, other than that the batches complete quickly.

IIRC the same is true of games that are GPU cache bound, like VR titles that use forward rendering or forward-plus rendering solutions. And it would be the same if someone was using an old rendering API that was draw-call bound to a single CPU core, such as DX10 or OpenGL 3.1 or earlier.

It would be interesting to see if lowering the resolution to 720p increased performance, and how well the game worked using Linux and Proton on identical hardware, because Proton would alleviate a lot of old-API inefficiency in the remapping to Vulkan, if that was an issue.

The game is obviously CPU limited, yet you still deny it. SMH...
 

FireFly

Member
The low GPU utilisation doesn't tell us much, other than that the batches complete quickly.

IIRC the same is true of games that are GPU cache bound, like VR titles that use forward rendering or forward-plus rendering solutions. And it would be the same if someone was using an old rendering API that was draw-call bound to a single CPU core, such as DX10 or OpenGL 3.1 or earlier.

It would be interesting to see if lowering the resolution to 720p increased performance, and how well the game worked using Linux and Proton on identical hardware, because Proton would alleviate a lot of old-API inefficiency in the remapping to Vulkan, if that was an issue.
The reason for thinking the Act 3 city sections are CPU limited is that performance scales with the CPU rather than the GPU, such that the 4090/12900K and 3080/5800X3D systems both perform at ~60 FPS in the referenced benchmarks, while elsewhere in the game the 4090 is much faster.
 

PaintTinJr

Member
The reason for thinking the Act 3 city sections are CPU limited is that performance scales with the CPU rather than the GPU, such that the 4090/12900K and 3080/5800X3D systems both perform at ~60 FPS in the referenced benchmarks, while elsewhere in the game the 4090 is much faster.
But the GPU cache sizes and specs are quite similar between those two cards, and a big step up from an RX 6800, as is the VRAM spec at GDDR6(X).
 
Last edited:

Bojji

Member
At what resolution? If they drop the resolution to 720p and the left-hand system's frame-rate increases, is that CPU limited? Or draw-call limited/API limited, in your opinion?

I think it already is at a low resolution to keep the GPU out of the equation. Here is a detailed video about this:

 

Gaiff

SBI’s Resident Gaslighter
At what resolution? If they drop the resolution to 720p and the left-hand system's frame-rate increases, is that CPU limited? Or draw-call limited/API limited, in your opinion?
Your argument is that PSSR will help alleviate CPU bottlenecks. If you need to drop to 720p, which obviously won’t happen on the Pro, how will it be of any help?
 

PaintTinJr

Member
I think it already is at a low resolution to keep the GPU out of the equation. Here is a detailed video about this:


At 6:58 in the video you have the left-side weak-CPU system matching and temporarily outperforming the right side, which in a CPU-limited game would and should never happen. That just makes it look like something is really amiss with the code: either a garbage collector for RAM or VRAM is kicking in sporadically and crippling performance, or something else, like rendering something without all textures loaded, so the system silently raises an error state for every frame rendered with that asset and silently tanks GPU performance.

But those 3 seconds at 6:58 in your DF video show it can't be a CPU bottleneck.
 

Gaiff

SBI’s Resident Gaslighter
At 6:58 in the video you have the left-side weak-CPU system matching and temporarily outperforming the right side, which in a CPU-limited game would and should never happen. That just makes it look like something is really amiss with the code: either a garbage collector for RAM or VRAM is kicking in sporadically and crippling performance, or something else, like rendering something without all textures loaded, so the system silently raises an error state for every frame rendered with that asset and silently tanks GPU performance.

But those 3 seconds at 6:58 in your DF video show it can't be a CPU bottleneck.
They’re both 12900Ks at 6:58, mate. It’s just that one side is using more summons to further stress the CPU.
 

PaintTinJr

Member
They’re both 12900Ks at 6:58, mate. It’s just that one side is using more summons to further stress the CPU.
Then the many random/unexpected summons having that percentage impact on framerate would suggest that the issue is all in the draw calls being issued. It looks very much like the draw-call list is being constructed single-threaded on the primary CPU core, and that's also backed up by the 3600 vs 12900K graphs having a constant displacement from each other while following the same shape; if multi-core draw-list construction were in play, those graph shapes would differ and diverge, because of how much stronger the 12900K is as a multicore CPU compared to the 3600.
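A sketch of the hypothesis being described (toy workload; encodeDraw is a hypothetical stand-in for encoding one draw): draw-list construction on one core versus sliced across worker threads into separate command buffers, which is roughly how D3D12/Vulkan-era engines parallelise this step.

```cpp
// Single-threaded vs multi-threaded command-list construction (toy version).
#include <cstdio>
#include <thread>
#include <vector>

// Stand-in for encoding one draw into a command buffer.
static void encodeDraw(std::vector<int>& cmdBuf, int drawId) {
    cmdBuf.push_back(drawId);
}

int main() {
    const int draws = 100000;

    // Single-threaded: one core builds the whole list, as hypothesised above.
    std::vector<int> single;
    for (int i = 0; i < draws; ++i) encodeDraw(single, i);

    // Multi-threaded: each worker encodes a slice into its own buffer, which a
    // real renderer would then submit as separate command lists.
    const int workers = 4;
    std::vector<std::vector<int>> perThread(workers);
    std::vector<std::thread> pool;
    for (int w = 0; w < workers; ++w)
        pool.emplace_back([&perThread, w, draws, workers] {
            for (int i = w; i < draws; i += workers)
                encodeDraw(perThread[w], i);
        });
    for (auto& t : pool) t.join();

    size_t parallelTotal = 0;
    for (const auto& b : perThread) parallelTotal += b.size();
    std::printf("single=%zu draws, parallel=%zu draws across %d buffers\n",
                single.size(), parallelTotal, workers);
}
```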
 
Last edited:

Crayon

Member
I think I get what PaintTin is saying and he should please correct me if I'm on the wrong track.

Lowering the GPU workload does expose CPU bottlenecks, but those are being somewhat mitigated in a way that's not that obvious, because the CPU is getting more "turns" as the GPU completes frames faster and hands control back to the CPU more times a second.

The CPU gets more actions per second in absolute terms, even if its ratio of turns with the GPU stays the same, and a CPU bottleneck will still surface.

How much that mitigates bottlenecking, idk. Might be a minuscule amount, and that's why we never heard of it. But I can see it working that way. Again, if I'm getting the idea.
 

FireFly

Member
At what resolution? If they drop the resolution to 720p and the left-hand system's frame-rate increases, is that CPU limited? Or draw-call limited/API limited, in your opinion?
This video shows the Act 3 city running on a 3070/3600 at DLSS Performance on low settings (720p internal) versus DLAA on high. So I believe that meets your criteria? And the result is that going from 1440p to 720p increases performance by 19%.



Edit: Though I assume the fixed cost of DLAA is higher due to the higher output resolution.
 
Last edited:

PaintTinJr

Member
This video shows the Act 3 city running on a 3070/3600 at DLSS Performance on low settings (720p internal) versus DLAA on high. So I believe that meets your criteria? And the result is that going from 1440p to 720p increases performance by 19%.



Edit: Though I assume the fixed cost of DLAA is higher due to the higher output resolution.

No, that showed that a 4090 with a Ryzen 3600, in another video by DF at 1080p (?), was at under 35fps, while the same CPU paired with just a 3070 and using DLSS gets closer to 60fps with patch 3, and 45fps with release code, showing how the CPU does perform better with a lighter workload.
 

FireFly

Member
No, that showed that a 4090 with a Ryzen 3600, in another video by DF at 1080p (?), was at under 35fps, while the same CPU paired with just a 3070 and using DLSS gets closer to 60fps with patch 3, and 45fps with release code, showing how the CPU does perform better with a lighter workload.
Alex was stress testing the game, so it makes sense that his FPS was lower. However, the point is that lowering the resolution even to 720p, while decreasing settings to their minimum and likely lowering the DLSS cost, increased performance by only 19%.

What more evidence do you need to conclude that the game is largely CPU limited in the Act 3 city? Moreover, think about the hypothetical case where no performance patches had been released. What do you think the Pro's chances of hitting a locked 60 would have been then?
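For what it's worth, the 19% figure is consistent with the max(cpu, gpu) model from earlier in the thread if the CPU term was already close to the GPU term. A back-of-envelope check with made-up splits:

```cpp
// Hypothetical split: CPU ~22 ms/frame, GPU ~26 ms at 1440p, ~7.4 ms at 720p.
#include <algorithm>
#include <cstdio>

int main() {
    const double cpuMs   = 22.0;        // assumed per-frame CPU cost in the city
    const double gpu1440 = 26.0;        // assumed GPU cost at 1440p
    const double gpu720  = 26.0 / 3.5;  // ~4x fewer pixels, assume sublinear scaling
    double f1 = 1000.0 / std::max(cpuMs, gpu1440);
    double f2 = 1000.0 / std::max(cpuMs, gpu720);
    std::printf("1440p: %.1f fps, 720p: %.1f fps (+%.0f%%)\n",
                f1, f2, 100.0 * (f2 / f1 - 1.0));
}
```

With those made-up numbers, the huge resolution drop buys only ~18%, because the frame time bottoms out at the CPU term.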
 

ap_puff

Member
You clearly don't get how CPU bottlenecks work. I recommend this video:


Imagine using a benchmark video that doesn't actually talk about what is bottlenecking a CPU and telling someone else that they don't know how CPU bottlenecks work. The actual workload determines what causes CPU bottlenecks; the CPU can bottleneck on a variety of things, ranging from cache misses, a lack of compute throughput, and idle time waiting for other processes to complete, to branch prediction misses, etc. Waiting on I/O is a huge reason for CPU bottlenecking, which is why the 5800X3D is ~20% faster than the 5800X in games despite clocking lower. And as I said, when optimizing for fixed-spec hardware it's possible to mask I/O latency, since you can profile your program and hit the problem spots easily.
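The latency-masking idea in that last sentence can be sketched like this (readChunk is a hypothetical stand-in for a blocking asset read): kick off the next read before processing the current chunk, so compute overlaps I/O instead of stalling on it.

```cpp
// Overlap I/O with compute: read chunk N+1 while processing chunk N.
#include <cstdio>
#include <future>
#include <numeric>
#include <vector>

// Stand-in for a blocking asset read.
static std::vector<int> readChunk(int id) {
    return std::vector<int>(1 << 20, id);
}

static long long process(const std::vector<int>& chunk) {
    return std::accumulate(chunk.begin(), chunk.end(), 0LL);
}

int main() {
    const int chunks = 4;
    long long total = 0;
    auto pending = std::async(std::launch::async, readChunk, 0);
    for (int id = 0; id < chunks; ++id) {
        std::vector<int> chunk = pending.get();   // wait for chunk `id`
        if (id + 1 < chunks)                      // prefetch the next one
            pending = std::async(std::launch::async, readChunk, id + 1);
        total += process(chunk);                  // compute overlaps the next read
    }
    std::printf("total=%lld\n", total);
}
```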
 
Last edited: