
[Digital Foundry] PlayStation 5 Pro Hands-On: 11 Games Tested, PSSR and RT Upgrades Revealed, Developers Interviewed!

Any proof of the console outperforming a 3600? In theory everything you said is true, but in practice we rarely see any of it in action.

0WAZEtj.jpeg
fdflk3y.jpeg


This is a completely CPU-limited scenario ^



I think I prefer the softer fur of PSSR, but yeah, the AF difference is apparent as well.

It fucks me up. I've had 16x AF forced in the driver control panel since at least 2009, yet here we are in 2024, on a console with almost 600GB/s of memory bandwidth, and AF is still a problem...
The Ryzen 3600 is certainly not enough to match PS5 CPU performance. It has fewer cores, and it also has to work a lot harder to make up for the lack of a decompression chip. If a game is built around the PS5's decompression hardware (like the TLOU Part I remake), the 3600 cannot run such a port well.



Dips below 60fps, with almost 100% CPU usage due to decompression, resulting in stuttering.

I cannot understand why Digital Foundry keeps using the 3600 in their PS5 comparisons. They mislead people, because that CPU is not up to the job.
 

Little Chicken

Gold Member
Alex doesn't get it, and neither do some of the PC owners here who say the same type of thing he just said. Console enthusiasts, and especially console casuals, 90% of the time just care about their consoles, and they're already entrenched in that ecosystem. They understand your reasons for not liking it, but they just don't care.

Constantly telling them to abandon everything and go to PC is like a door-to-door salesman handing out a pamphlet.

When Alex says this:

"If you own a PS5 and want more, you should buy a PC, because the PS5 Pro still has the same CPU"

He should instead simply say this:

"If you own a PS5 and want more, you should just wait on the Playstation 6, because the PS5 Pro still has the same CPU"

And they will instantly understand his perspective.
Why should anyone have to edit themselves for fear of annoying some old Freegunner?

Perhaps the Freegunner should just grow up and use common sense instead.
 

PandaOk

Member
You can't replicate 90% of the stuff a PC can do on the Pro, so this comparison is a failure from the start; PC offers much more value.

You NEED Dual Sense to play PS5 games.

You NEED K&M to play PC games

You DON'T NEED controller on PC, it's optional if you like it.
I honestly don't know, but the thought of playing AAA console games with a keyboard and mouse sounds a bit cumbersome, to put it mildly. I remember trying God of War on PC and a 3D Sonic game with KB+M. Most console gamers would be looking to play the games that play best with a controller.

Plus, the economic/value proposition between console and PC is a bit overblown, isn't it? 95% of PC exclusives run just fine on extremely low-end specs; heck, they're targeted at that. Why does someone need to spend 2-4x the price of a console to build a PC that can play those AAA games, sometimes years later, or with shoddy ports by comparison? They can keep a very low-end PC to play exclusives on just fine.
 

Rudius

Member
Oh god, not crappy PC builds again. Please don't recommend that people buy a 3600 in 2024, for the love of god, unless you really hate PC gaming and want to give people terrible advice on purpose.
Or worse: a 3070 that is "just like a Pro", until you need more than 8GB of VRAM...
 

Bojji

Member
Everyone knows you're salty, this isn't new information.

:messenger_heart:

Maybe once you upgrade your GPU you'll start to feel more comfortable.

Oh, I will when there's any need for it. So far the GPU I own is fine for the vast majority of games.

The Ryzen 3600 is certainly not enough to match PS5 CPU performance. It has fewer cores, and it also has to work a lot harder to make up for the lack of a decompression chip. If a game is built around the PS5's decompression hardware (like the TLOU Part I remake), the 3600 cannot run such a port well.



Dips below 60fps, with almost 100% CPU usage due to decompression, resulting in stuttering.

I cannot understand why Digital Foundry keeps using the 3600 in their PS5 comparisons. They mislead people, because that CPU is not up to the job.


The Last of Us was (and in many ways still is) a terrible port that shouldn't have been released in that state.

It was patched several times; I think the performance of the 3600 isn't too bad now (the 3600X is only about 2% faster)? A stable 60fps is achieved.

cSVm9op.jpeg
 
:messenger_heart:



Oh, I will when there's any need for it. So far the GPU I own is fine for the vast majority of games.



The Last of Us was (and in many ways still is) a terrible port that shouldn't have been released in that state.

It was patched several times; I think the performance of the 3600 isn't too bad now (the 3600X is only about 2% faster)? A stable 60fps is achieved.

cSVm9op.jpeg
Dude, that Russian site is worthless. Do you really think they tested 30 CPUs in combination with 30 GPUs :p? They are probably calculating/estimating CPU performance.

The TLOU remake runs very well on my PC, but I have an 8-core Ryzen 7800X3D, which is powerful enough to make up for the lack of the decompression chip the PS5 uses. SW Outlaws and Warhammer are also CPU-intensive, and the Ryzen 3600 cannot deliver 60fps in these modern PS5 ports that take full advantage of the PS5 hardware.
 

Bojji

Member
Dude, that Russian site is worthless. Do you really think they tested 25 CPUs in combination with 25 GPUs :p?

The TLOU remake runs very well on my PC, but I have an 8-core Ryzen 7800X3D, which is powerful enough to make up for the lack of the decompression chip the PS5 uses. SW Outlaws and Warhammer are also CPU-intensive, and the Ryzen 3600 cannot deliver 60fps in these games.



tkrA2fB.jpeg


The PS5 drops to 30-something fps in WH

MXPnnNa.jpeg
 
:messenger_heart:



Oh, I will when there's any need for it. So far the GPU I own is fine for the vast majority of games.



The Last of Us was (and in many ways still is) a terrible port that shouldn't have been released in that state.

It was patched several times; I think the performance of the 3600 isn't too bad now (the 3600X is only about 2% faster)? A stable 60fps is achieved.

cSVm9op.jpeg
With a 4090 at 1080p?
 

CloudShiner

Member
DF are a joke. Early this year 'teeth' Leadbetter was still refusing to admit the Pro was even coming, and as recently as a few months ago they were taking any opportunity and any angle to dump on it. Whether it was the price, the specs, the performance, or the need for it, they were there doing their best. Now they're experiencing what it's like to own a bicycle that only works in reverse, and to have feathers stuck in their throat.

I'm enjoying it, because teeth Leadbetter has been even more anti this hardware than Battaglia, and now he has to introduce features, reviews and analysis that don't fit his internal preconceptions.
 

Sw0pDiller

Banned
The PS5 Pro GPU is comparable to the Radeon RX 7700 XT, but with the higher memory bandwidth of the 7900 XT. Those cards run fairly modern games at 1080p at between 80 and 90 fps. The PS5 Pro should be able to push those games easily above 60, right? But that kind of power isn't visible yet? Is there still a big performance reserve?
 

Fafalada

Fafracer forever
Steam has native DualSense support, and it will handle everything for you provided the game has any form of native controller support.
Steam does - and GeForce Now does (amazingly well too) - but nothing else supports it natively.
Actually, on Windows the worst thing isn't even controller compatibility (those extra apps aren't so hard to install), it's the abysmally broken device handling. If you have a 'built-in' controller (like the Deck or other handhelds), a lot of games will literally refuse to use the other connected controllers.
The best comedy of errors here is Game Pass. I'll launch a first-party Microsoft title, using a Microsoft Xbox One controller, and 9 times out of 10 it just doesn't work. It then takes another app that hacks the driver registry to 'convince' the system that the one true controller is the one I'm holding in my hands and not the other connected devices - and no, the OS provides zero facility to 'select' an active controller, and most software (especially basically all MS first-party titles) doesn't know what to do with multiple controllers at all.

There are worse things than the above, too - certain Need for Speed titles decide that you're using a wheel and are completely broken out of the box if you connect more than one input device (any input device)...
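For what it's worth, the 'pick your active controller' facility being asked for is trivial for any game or launcher to offer itself; here's a minimal sketch using pygame's joystick API (purely illustrative, not how any of the titles above actually handle input):

```python
# Minimal sketch: enumerate every connected controller and let the user choose which
# one the game should treat as "active". Purely illustrative; a real title would also
# need to route input accordingly, which is what the registry-hacking tools work around.
import pygame

pygame.init()
pygame.joystick.init()

pads = [pygame.joystick.Joystick(i) for i in range(pygame.joystick.get_count())]
for i, pad in enumerate(pads):
    print(f"[{i}] {pad.get_name()}")

if pads:
    choice = int(input("Use which controller? "))
    active = pads[choice]
    print("Active controller:", active.get_name())
else:
    print("No controllers detected.")
```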
 

sachos

Member
The regular PS5's 40Hz mode gets around 35fps at between 1800p and 2160p, using settings comparable to Ultra. I think with a 45% boost to the GPU and using PSSR, it can get close to 60. It might have to do something like 1620p upscaled to 4K.
I think 1620p is too high, since the 7700 XT has 1% lows of 57fps at 1440p native Ultra. You need about 68fps of raw performance (1/(0.0166 - 0.002)) to get a locked 60 once you account for PSSR's cost, so they would either need to drop settings or resolution; 1080p seems enough. It would be more than enough for a perfectly locked 40Hz mode, though.
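For reference, the arithmetic behind that 68fps figure, as a quick sketch (the ~2ms PSSR cost is the assumption baked into the 1/(0.0166 - 0.002) above, not a measured number):

```python
# How much raw (pre-upscale) performance is needed for a locked target framerate once
# a fixed upscaler cost is carved out of the frame budget. The 2.0 ms PSSR cost is an
# assumption, matching the figure used in the post above.
def required_internal_fps(target_fps: float, upscaler_cost_ms: float) -> float:
    budget_ms = 1000.0 / target_fps                   # e.g. ~16.7 ms at 60 fps
    render_budget_ms = budget_ms - upscaler_cost_ms   # time left for actual rendering
    return 1000.0 / render_budget_ms                  # fps the GPU must sustain pre-upscale

print(round(required_internal_fps(60, 2.0), 1))  # ~68.2 fps needed for a locked 60
print(round(required_internal_fps(40, 2.0), 1))  # ~43.5 fps needed for a locked 40Hz mode
```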
 

BbMajor7th

Member
When the best-looking game is still probably something that released in 2018... we just need to admit graphics are not going to be the biggest improvement going forward.

Even when the PS6 releases, we're getting longer cross-gen periods as well. Those fully exclusive games are done, even if they don't release on PC.

I'll be buying a PS5 Pro, but you can bet your ass I'm not upgrading to a PS6 day one if it releases at close to €800 here in Europe.
At 300% zoom, sure, I can see where the extra money is going; sitting three meters away from my TV? As with last gen's pro consoles, much of the value hinges on whether developers will go the extra mile for a minority of the total player base. I think the PS4 Pro reached 14% at a much lower price point; the PS5 Pro will likely account for less than 10%, possibly closer to 5%, of overall uptake. I got a PS4 Pro, but I'm in no hurry for this mid-gen refresh, not when the only game I really think could benefit from it, Dragon's Dogma 2, is still struggling to hit 60.
 

Bojji

Member
Dips below 60fps and a poor frame-time graph. I believe this game is also VRAM limited (the 3070 only has 8GB of VRAM). The PS5 holds a solid 60fps in this game, with no stutters.

It's a poor GPU for this comparison because of the VRAM amount. The game could be VRAM or CPU bound in some places.

Other than that, the massive difference (up to 50%) after some patches only proves that the port was a fucking mess at launch.
 

PaintTinJr

Member
No, you claimed that the DF narrative was that the Pro won't be able to hit 60fps in games. The actual narrative is that nothing about the Pro would allow a game to hit 60fps if the title is limited to 30fps on the base PS5 by the CPU. BG3 in Act 3, for example, will struggle to hit 40fps, even on the Pro.

Should any game come out that is heavily CPU limited for some reason (be it poor coding or actually using the CPU to the max), then the Pro won't magically make it a 60fps game. That is factually true. It is also factually true that these CPU-intensive games are a tiny minority of games. GTA6 might be one of those, but we can't really say with certainty.
Your goalpost-shifting premise is just more narrative concern, IMHO, because they've already shown Spidey 2, with its 30fps-mode CPU usage, running at the same density with better graphics on a Pro, looking far better and at 60fps.

Short of incompetent development, no game that can hit 30fps and is suitable for mastering on the OG PS5 - i.e. has gone gold - will be CPU limited on the Pro, when PSSR can quite literally dial down CPU bottlenecking from draw calls by lowering native fidelity and pushing beyond with GPU-only PSSR.
 

Zathalus

Member
Your goalpost-shifting premise is just more narrative concern, IMHO, because they've already shown Spidey 2, with its 30fps-mode CPU usage, running at the same density with better graphics on a Pro, looking far better and at 60fps.

Short of incompetent development, no game that can hit 30fps and is suitable for mastering on the OG PS5 - i.e. has gone gold - will be CPU limited on the Pro, when PSSR can quite literally dial down CPU bottlenecking from draw calls by lowering native fidelity and pushing beyond with GPU-only PSSR.
Spider-Man 2 has a mode called Performance Pro, which is the regular Performance mode with PSSR applied on top of it. It's not the quality mode running at 60fps. On the CPU question, Dragon's Dogma 2 has seemingly eliminated all the GPU-related drops on the Pro, but it is still dropping well under a stable 60 in the CPU-heavy main city hub.

PSSR is great, but it won’t magically double your available CPU power.
 

Gaiff

SBI’s Resident Gaslighter
PSSR can quite literally dial down CPU bottlenecking from draw calls by lowering native fidelity and pushing beyond with GPU-only PSSR
PSSR can diminish CPU bottlenecks? How will it reduce draw calls? I thought the impact was strictly on resolution, which usually has nothing to do with CPU performance.
 

Ogbert

Member
My fundamental issue is that all these games already look great and run brilliantly.

I just don’t see the compelling difference.
 

ap_puff

Member
I don't think people comparing the console CPU to desktop CPUs are doing so very accurately. Anyone who has used a Ryzen CPU knows they scale extremely well with memory bandwidth and latency, and the consoles have much, much more bandwidth than desktops because they use GDDR; and because a console is a fixed spec, there are plenty of ways to hide latency by prefetching data. So devs who bother to optimize for consoles probably won't run into meaningful CPU bottlenecks that hold them back from 60fps.
 

PaintTinJr

Member
PSSR can diminish CPU bottlenecks? How will it reduce draw calls? I thought the impact was strictly on the resolution which usually has nothing to do with CPU performance.
The lower the resolution, the sooner draw calls complete, and they are all issued by the primary CPU core to the GPU - even though multithreading is typically used to assemble the draw-call list that gets sent.

With the primary core stuck in that blocking transfer for less time, it has longer between each frame's draw-list submissions to do more coordinated work with the other CPU cores and more work with the GPU.

It is a network-flows and bin-packing problem: smaller durations for critical-path workloads, and smaller workloads, allow for better batching opportunities, which impacts latencies. And just lowering the native resolution massively reduces contention for unified memory, which in turn reduces CPU and GPU cache misses, with the gains being multi-factor, because - as has been observed from the attempts to raise rendering resolution in games to 4K instead of 1080p - it isn't a simple linear multiplier in the resources required.

As a simple example, by lowering the native resolution for fidelity mode on the PS5 Pro, you eliminate fillrate as a rendering bottleneck, meaning that the shift from 36 CUs to 52 CUs actually lets workloads complete in fewer GPU clock cycles, which in turn lowers the number of clock cycles a draw call is active for, which is the same as lowering draw calls.
 

Gaiff

SBI’s Resident Gaslighter
The lower the resolution, the sooner draw calls complete, and they are all issued by the primary CPU core to the GPU - even though multithreading is typically used to assemble the draw-call list that gets sent.

With the primary core stuck in that blocking transfer for less time, it has longer between each frame's draw-list submissions to do more coordinated work with the other CPU cores and more work with the GPU.

It is a network-flows and bin-packing problem: smaller durations for critical-path workloads, and smaller workloads, allow for better batching opportunities, which impacts latencies. And just lowering the native resolution massively reduces contention for unified memory, which in turn reduces CPU and GPU cache misses, with the gains being multi-factor, because - as has been observed from the attempts to raise rendering resolution in games to 4K instead of 1080p - it isn't a simple linear multiplier in the resources required.

As a simple example, by lowering the native resolution for fidelity mode on the PS5 Pro, you eliminate fillrate as a rendering bottleneck, meaning that the shift from 36 CUs to 52 CUs actually lets workloads complete in fewer GPU clock cycles, which in turn lowers the number of clock cycles a draw call is active for, which is the same as lowering draw calls.
Is that unique to consoles? Because changing resolution on PC has almost no impact on CPU bottlenecks. It was my understanding that draw calls are purely about the number of objects, and that changing the resolution has no impact on that.
 

PaintTinJr

Member
Is that unique to consoles? Because changing resolution on PC has almost no impact on CPU bottlenecks. It was my understanding that draw calls are purely about the number of objects, and that changing the resolution has no impact on that.
It isn't unique to consoles, but the typical discrete memory pools for CPU and GPU on a PC place the bottleneck by default at the PCIe bus, which is serviced best by a faster primary CPU core with massive L1, L2 and L3 caches, reacting with the least latency when the PCIe bus becomes clear to send the next set of GPU work. At bigger resolution reductions even PC games see increased frame rates, but those are probably reductions PCMR wouldn't try, because at 720p-900p native, down from 1440p and then needing DLSS, most PCMR with more than a xx60 would consider the game trash on their hardware if that's what it cost to get a locked 60fps, 90 or 120, and would sooner replace the CPU, motherboard and memory of their PC than go below 1080p native.

On PC, disabling the GPU's ability to pre-compute frames would give a better baseline for performance under a bottleneck, so that when reducing resolution you'd easily see where the resolution change affected the bottleneck, rather than the bottleneck being hidden by pre-computed frames that aren't real, unique frame workloads directly under the control of the CPU.
 

Gaiff

SBI’s Resident Gaslighter
It isn't unique to consoles, but the typical discrete memory pools for CPU and GPU on a PC place the bottleneck by default at the PCIe bus, which is serviced best by a faster primary CPU core with massive L1, L2 and L3 caches, reacting with the least latency when the PCIe bus becomes clear to send the next set of GPU work. At bigger resolution reductions even PC games see increased frame rates, but those are probably reductions PCMR wouldn't try, because at 720p-900p native, down from 1440p and then needing DLSS, most PCMR with more than a xx60 would consider the game trash on their hardware if that's what it cost to get a locked 60fps, 90 or 120, and would sooner replace the CPU, motherboard and memory of their PC than go below 1080p native.

On PC, disabling the GPU's ability to pre-compute frames would give a better baseline for performance under a bottleneck, so that when reducing resolution you'd easily see where the resolution change affected the bottleneck, rather than the bottleneck being hidden by pre-computed frames that aren't real, unique frame workloads directly under the control of the CPU.
Yeah, so if you need to drop the resolution to those levels to see a difference, how will PSSR help the CPU in any capacity? It won't be upscaling from 720p, that's for sure.
 

FireFly

Member
The lower the resolution, the sooner draw calls complete, and they are all issued by the primary CPU core to the GPU - even though multithreading is typically used to assemble the draw-call list that gets sent.
Do you mean that draw calls require less CPU-time to generate at lower resolutions? If so, can you provide a source for this?
 

Bojji

Member
I don't think people comparing the console CPU to desktop CPUs are doing so very accurately. Anyone who has used a Ryzen CPU knows they scale extremely well with memory bandwidth and latency, and the consoles have much, much more bandwidth than desktops because they use GDDR; and because a console is a fixed spec, there are plenty of ways to hide latency by prefetching data. So devs who bother to optimize for consoles probably won't run into meaningful CPU bottlenecks that hold them back from 60fps.

I don't know if it works that way. We have the 4800S (the Xbox CPU) on PC, which has a large GDDR6 pool, and it seems that CPUs prefer the low latency of DDR over the massive bandwidth of GDDR6 (which can have some advantages at times).

qwIv2wK.jpeg
xNP2npG.jpeg
SeskvOF.jpeg

UF0m5rk.jpeg
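As a rough back-of-the-envelope on why latency matters so much to a CPU (the latency figures below are assumed ballpark values for illustration, not numbers taken from those charts):

```python
# Every nanosecond of memory latency is a pile of clock cycles a stalled core can't use.
# 80 ns and 130 ns are assumed, illustrative latencies for a DDR4-style vs a
# higher-latency GDDR6-style memory system; they are not measured values.
def stall_cycles(latency_ns: float, core_clock_ghz: float) -> float:
    return latency_ns * core_clock_ghz

print(stall_cycles(80, 3.5))    # ~280 cycles lost per miss at 80 ns
print(stall_cycles(130, 3.5))   # ~455 cycles lost per miss at 130 ns
```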


 

winjer

Gold Member
The lower the resolution, the sooner draw calls complete, and they are all issued by the primary CPU core to the GPU - even though multithreading is typically used to assemble the draw-call list that gets sent.

With the primary core stuck in that blocking transfer for less time, it has longer between each frame's draw-list submissions to do more coordinated work with the other CPU cores and more work with the GPU.

It is a network-flows and bin-packing problem: smaller durations for critical-path workloads, and smaller workloads, allow for better batching opportunities, which impacts latencies. And just lowering the native resolution massively reduces contention for unified memory, which in turn reduces CPU and GPU cache misses, with the gains being multi-factor, because - as has been observed from the attempts to raise rendering resolution in games to 4K instead of 1080p - it isn't a simple linear multiplier in the resources required.

As a simple example, by lowering the native resolution for fidelity mode on the PS5 Pro, you eliminate fillrate as a rendering bottleneck, meaning that the shift from 36 CUs to 52 CUs actually lets workloads complete in fewer GPU clock cycles, which in turn lowers the number of clock cycles a draw call is active for, which is the same as lowering draw calls.

How did you figure that out?
As I understand it, draw calls are commands the CPU sends to the GPU to draw objects.
But these are not limited by screen resolution. Even if there are more pixels to shade and rasterize, the number of draw calls remains the same.
The only way this could be true is if a game has an LOD system that is tied to screen resolution, so that higher resolutions call in higher-detail LODs, which have more objects and more complexity.
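To make the two positions in this exchange concrete, here's a deliberately crude toy model (all numbers invented) of a pipelined frame where CPU and GPU work overlap; lowering the render resolution only shrinks the GPU term:

```python
# Toy model: in a pipelined renderer the CPU prepares frame N+1 while the GPU draws
# frame N, so frame time is roughly max(cpu_ms, gpu_ms). Resolution scaling is applied
# to the GPU cost only; whether CPU-side submission also gets cheaper is the point
# under debate above, so it is deliberately left untouched here.
def frame_time_ms(cpu_ms: float, gpu_ms_at_native: float, pixel_scale: float) -> float:
    gpu_ms = gpu_ms_at_native * pixel_scale   # crude: GPU cost proportional to pixel count
    return max(cpu_ms, gpu_ms)

# GPU-bound case: halving the pixel count helps a lot.
print(frame_time_ms(cpu_ms=10.0, gpu_ms_at_native=20.0, pixel_scale=0.5))  # 10.0 ms
# CPU-bound case: halving the pixel count changes nothing.
print(frame_time_ms(cpu_ms=20.0, gpu_ms_at_native=10.0, pixel_scale=0.5))  # 20.0 ms
```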
 

Cakeboxer

Member
That moment when you realize this is the new sales pitch between hardware iterations.

FF7R-2.jpg
The thing is, when you have to compare two (frozen) pictures next to each other, how will you notice anything while playing (in motion), without a direct comparison? You can't; the PS2-->PS3 days are over.
 

PaintTinJr

Member
Yeah, so if you need to drop the resolution to those levels to see a difference, how will PSSR help the CPU in any capacity? It won't be upscaling from 720p, that's for sure.
I was talking about PC there, at 720p-900p resolution, to answer your question.

On consoles with unified memory, like the 360, X1, PS4 and PS5, designed for the specific purpose of high-end gaming, we see the improvements much more gradually; and going by the history of DRS, which was pioneered in games like WipEout on PS3, we even see the impact much sooner on console, even with discrete memory.
 

Gaiff

SBI’s Resident Gaslighter
How did you figure that out?
As I understand it, draw calls are commands the CPU sends to the GPU to draw objects.
But these are not limited by screen resolution. Even if there are more pixels to shade and rasterize, the number of draw calls remains the same.
The only way this could be true is if a game has an LOD system that is tied to screen resolution, so that higher resolutions call in higher-detail LODs, which have more objects and more complexity.
This was my understanding as well.
 

Dunker99

Member
Bojji Any chance you could move your PC chat to another thread so this one can be used to discuss, you know, the PS5 Pro and PSSR? We get it, you love your PC and don't want to buy a PS5 Pro. Why keep posting in here and derailing everything? It's frustrating for us console peasants to come into Pro discussion threads, only for much of the discussion to be about PC benchmarks, builds, etc.
 

winjer

Gold Member
This was my understanding as well.

From my knowledge of hardware, from reading the UE and Unity documentation, and even from asking ChatGPT, Gemini and Brave AI, they all say the same thing: the number of draw calls depends on the number of unique objects the CPU commands the GPU to draw, not on the render resolution.
 

PaintTinJr

Member
Do you mean that draw calls require less CPU-time to generate at lower resolutions? If so, can you provide a source for this?
No, we are talking about a two-way conversation between the CPU and GPU, in which the CPU doesn't regain control until it gets the equivalent of a GPU ACK message for the workload it supplied.

The less time it takes the GPU to return control - because of a lighter workload - the more efficiently the CPU can facilitate more coordinated work.
 

Bojji

Member
Bojji Any chance you could move your PC chat to another thread so this one can be used to discuss, you know, the PS5 Pro and PSSR? We get it, you love your PC and don't want to buy a PS5 Pro. Why keep posting in here and derailing everything? It's frustrating for us console peasants to come into Pro discussion threads, only for much of the discussion to be about PC benchmarks, builds, etc.

You can ignore my posts.

The Pro (just like the PS4, PS5 and Xbox Series S/X) is built from almost-retail PC parts. And the Pro can only be compared to the PS5/Series X, or to something more powerful (a PC), to see where it lands in performance and image quality.

And I was interested in the Pro before Sony fucked up the pricing in Europe; I like to have the best possible experience, even on consoles. Believe it or not, I'm not a PS5 Pro hater, I just want to bring some logic, because some PS fans are making this thing out to be much more powerful than it really is.

PC is my platform of choice, but I still play about 50/50 on PS5 as well, thanks to physical versions of games.
 

PaintTinJr

Member
From my knowledge of hardware, from reading the UE and Unity documentation, and even from asking ChatGPT, Gemini and Brave AI, they all say the same thing: the number of draw calls depends on the number of unique objects the CPU commands the GPU to draw, not on the render resolution.
That's true, but the same draw call is still active until it completes. So finishing sooner is semantically the same as issuing less work, which is what I said in my first follow-up, though in hindsight I wouldn't have described it that way in my first message if I hadn't been rushing when I wrote it :)
 

winjer

Gold Member
That's true, but the same draw call is still active until it completes. So finishing sooner is semantically the same as issuing less work, which is what I said in my first follow-up, though in hindsight I wouldn't have described it that way in my first message if I hadn't been rushing when I wrote it :)

Another point to make about draw calls is that with combined materials and meshes for static objects, object instancing, batched commands, command lists, and low-level APIs such as DX12 and Vulkan, draw calls are rarely a bottleneck for CPU performance.
It takes a lot of screw-ups from a dev team for draw calls to be a CPU bottleneck in 2024.
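A tiny sketch of that instancing/batching point, with the draw call reduced to pure bookkeeping (no real graphics API involved):

```python
# Group objects that share a mesh + material and issue one instanced "draw" per group
# instead of one per object. The counters stand in for real API submissions.
from collections import defaultdict

objects = (
    [{"mesh": "rock", "material": "stone"} for _ in range(1000)]
    + [{"mesh": "tree", "material": "bark"} for _ in range(500)]
)

naive_draw_calls = len(objects)  # one call per object: 1500

batches = defaultdict(int)
for obj in objects:
    batches[(obj["mesh"], obj["material"])] += 1  # count instances per (mesh, material)

instanced_draw_calls = len(batches)  # one instanced call per group: 2

print(naive_draw_calls, instanced_draw_calls)
```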
 

FireFly

Member
No, we are talking about a two-way conversation between the CPU and GPU, in which the CPU doesn't regain control until it gets the equivalent of a GPU ACK message for the workload it supplied.

The less time it takes the GPU to return control - because of a lighter workload - the more efficiently the CPU can facilitate more coordinated work.
That's the situation where the CPU is waiting on the GPU. But what about the situation where the GPU is waiting on the CPU for a new draw call? Is the CPU able to issue that call any faster (and reduce the "waiting time") at lower resolutions?
 

winjer

Gold Member
That's the situation where the CPU is waiting on the GPU. But what about the situation where the GPU is waiting on the CPU for a new draw call? Is the CPU able to issue that call any faster (and reduce the "waiting time") at lower resolutions?

The GPU can't draw an object if it does not have the command describing that object, including its meshes and materials.
So if the GPU has not received a new batch of draw calls, it just waits.
 

FireFly

Member
The GPU can't draw an object if it does not have the command describing that object, including its meshes and materials.
So if the GPU has not received a new batch of draw calls, it just waits.
I understand that. My question was about whether the waiting time is less at lower resolutions. Because if not, then lowering the resolution does not change the nature of the bottleneck.
 

winjer

Gold Member
I understand that. My question was about whether the waiting time is less at lower resolutions. Because if not, then lowering the resolution does not change the nature of the bottleneck.

If the GPU finishes rendering a frame sooner, it can request and receive the commands for the next frame sooner.
 

FireFly

Member
If the GPU finishes rendering a frame sooner, it can request and receive the commands for the next frame sooner.
It can request the information required for a new frame sooner, but that doesn't mean the CPU is ready to deliver that information. The question was whether the rate at which the CPU is able to generate the required information increases at lower rendering resolutions.
 