
[DF] Ratchet and Clank: Rift Apart - PC Review - Cutting Edge Tech But Lacking In Polish

PaintTinJr

Member
Are you saying PC developers will have to figure out a way to take advantage of all this cutting-edge hardware because just telling people to buy more expensive stuff isn't gonna work anymore? Imagine that.

I'm glad. SSDs have been a thing for over a decade and it took fucking consoles to give PC developers a kick in the ass to start using what they've had for a long time.
No. With Resizable BAR only recently becoming a necessity for games to win performance, and titles like Star Citizen having train sections where asset streaming tanks the frame-rate while the CPU and GPU sit half stressed, I think this ends in either an unwanted hardware solution - like APUs - or a software compromise where texture streaming is capped and supplemented with procedural textures, accepting the reality that a £500 console yields superior texture quality to PC. And because texturing is so much of the signal on screen, the overall IQ of consoles couldn't be surpassed in that case.

Personally, I think the APU hardware solution will win out, to keep the rainbow-chasing hardware market going.
 

RoadHazard

Gold Member
It's not a drastically faster SSD though; the 2230 range only goes about as high as 3,000 MB/s, and the bandwidth limits it to almost SD card levels.

Well, the PS4 HDD does around 50MB/s. Could this game run from it? Yes, with very low settings (like what we're seeing on the Deck) and with very long portal "load screens". Which takes away much of the spectacle of the game and makes it a much worse experience.
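Some napkin math puts the gap in perspective. The 2 GB asset budget for a portal transition is a made-up figure purely for illustration, as are the nominal drive speeds:

```python
# Back-of-envelope load times for a hypothetical 2 GB portal transition.
# The 2 GB asset figure and the drive speeds are illustrative assumptions.
def load_time_seconds(asset_bytes, drive_bytes_per_sec):
    """Idealised sequential-read time; ignores seeks and decompression."""
    return asset_bytes / drive_bytes_per_sec

GB = 1024 ** 3
MB = 1024 ** 2

assets = 2 * GB
for name, speed in [
    ("PS4 HDD (~50 MB/s)", 50 * MB),
    ("SATA SSD (~500 MB/s)", 500 * MB),
    ("PS5 SSD (~5.5 GB/s raw)", 5.5 * GB),
]:
    print(f"{name}: {load_time_seconds(assets, speed):.2f} s")
```

At HDD speeds the same transition that takes a fraction of a second on a fast NVMe drive stretches to roughly 40 seconds of sequential reading, before decompression even starts.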
 

Zathalus

Member
A game that doesn't even come close to fully utilizing the PS5 SSD and IO. We'll see!
It doesn't fully utilise PC SSDs either, so what's your point? There is benchmarking software out there that stresses the SSD via DirectStorage far more than this game does.

People are claiming PC is in trouble based on a single buggy port that was ported in about 6 months using a brand new API. That comes off as agenda driven to me.
 

PaintTinJr

Member
It doesn't fully utilise PC SSDs either, so what's your point? There is benchmarking software out there that stresses the SSD via DirectStorage far more than this game does.

People are claiming PC is in trouble based on a single buggy port that was ported in about 6 months using a brand new API. That comes off as agenda driven to me.
It completely overburdens the PC's ability to check in data and stream it to the GPU ready to render JIT, unlike the PS5's IO complex, which will only get stressed and optimised further. And the PC version has had more work, and not to a strict launch timeline, so it has no excuses other than the hardware and OS solutions like DirectStorage currently failing to measure up.

edit:
To your agenda comment: it has been obvious for a long time that unified memory is a big win for software design, heterogeneous compute, data redundancy and computational latency. At some point PCs will ultimately hit a wall with the discrete CPU and GPU and have to claim those wins too. Maybe PC has another decade to put it off, but Resizable BAR suggests that PCIe improvements are slowing, and IMHO that indicates unified RAM on high-end PCs is less than 5 years away.
 
 

Zathalus

Member
It completely overburdens the PC's ability to check in data and stream it to the GPU ready to render JIT, unlike the PS5's IO complex, which will only get stressed and optimised further. And the PC version has had more work, and not to a strict launch timeline, so it has no excuses other than the hardware and OS solutions like DirectStorage currently failing to measure up.
Nixxes only finished porting Miles Morales in November last year and DirectStorage 1.1 only got released in November as well. Ratchet and Clank actually uses 1.2 and that only got released in April. So very little time to port and work with the new API.

This game doesn't even break 1GB/s of transfer speed when loading either; that overburdens nothing. You can stress test DirectStorage with a far higher data throughput via the following:

 

jroc74

Phone reception is more important to me than human rights
All I will say is Cerny was right to put more emphasis on the I/O when designing the console.

I would not be shocked if for the PS5 Pro they don't even touch the SSD. Not the size, but the speed. Going from 5.5 to 7GB/s doesn't even seem worth it or necessary.


Textures play a huge part in the visuals of a game on any GPU, and with open worlds and larger data being the trend, the ability to stream faster than any other device because of check-in, and then dereference in unified RAM to eliminate RAM-to-VRAM transfers, is going to be such a big problem for PC - IMO far more than optimising a very capable CPU and GPU with to-the-metal software solutions, especially if a launch title is already making the PC brute-force solution inferior in some aspects.
I remember this coming up in next gen speculation threads....and it was a searing hot topic. One of many....lol.
 

ChiefDada

Gold Member
Nixxes only finished porting Miles Morales in November last year and DirectStorage 1.1 only got released in November as well. Ratchet and Clank actually uses 1.2 and that only got released in April. So very little time to port and work with the new API.

This game doesn't even break 1GB/s of transfer speed when loading either; that overburdens nothing. You can stress test DirectStorage with a far higher data throughput via the following:


I thought all the tech analysis from DF and others made it clear that drive speed isn't the issue?
 

PaintTinJr

Member
Nixxes only finished porting Miles Morales in November last year and DirectStorage 1.1 only got released in November as well. Ratchet and Clank actually uses 1.2 and that only got released in April. So very little time to port and work with the new API.

This game doesn't even break 1GB/s of transfer speed when loading either; that overburdens nothing. You can stress test DirectStorage with a far higher data throughput via the following:

You are conflating transfer speeds with check-in. Grabbing one large file and throwing it into RAM at a high continuous transfer speed isn't typical of game data, which comprises lots of medium to tiny assets as small as kilobytes.

The ability to use DirectStorage at a basic level to effectively randomly load/stream resources, CRC test them, byte-align them or pass them through stream readers to check them in, and further prepare them for GPU check-in, isn't going to have as many options to get wrong and improve as you are suggesting. The problem is simply that the PC process uses the CPU, whereas the IO complex solution bypasses CPU latency and DMAs direct from the IO complex ESRAM into unified RAM.
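A toy cost model makes the small-file point concrete. The 50 microsecond per-request overhead, the 7 GB/s bandwidth and the asset sizes below are all invented for illustration, not measured DirectStorage or NVMe figures:

```python
# Toy cost model: time = per-request overhead + total bytes / raw bandwidth.
# The overhead, bandwidth and chunk sizes are illustrative assumptions only.
def total_load_time(n_requests, bytes_per_request, overhead_s, bandwidth_bps):
    return n_requests * overhead_s + (n_requests * bytes_per_request) / bandwidth_bps

GB = 1024 ** 3
KB = 1024
bandwidth = 7 * GB   # nominal 7 GB/s NVMe drive (assumption)
overhead = 50e-6     # assumed fixed cost per request (CPU, driver, check-in)

one_big = total_load_time(1, GB, overhead, bandwidth)              # one 1 GiB file
many_small = total_load_time(65536, 16 * KB, overhead, bandwidth)  # same data, 16 KiB chunks
print(f"one 1 GiB file:       {one_big * 1000:.0f} ms")
print(f"65536 x 16 KiB files: {many_small * 1000:.0f} ms")
```

With per-request overhead dominating, the same gigabyte takes over twenty times longer as 16 KiB chunks than as one contiguous file, which is why request batching and check-in cost matter more than headline bandwidth.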
 

SmokSmog

Member
I can confirm that the 10GB 3080 is running out of VRAM at very high textures. Underutilized GPU and terrible framerate.

Very high textures = at least a 16GB GPU
High = at least 10GB
8GB GPUs should aim for medium textures 😂 (brand new 4060/Ti users 🤣).
 

Kataploom

Gold Member
No. With Resizable BAR only recently becoming a necessity for games to win performance, and titles like Star Citizen having train sections where asset streaming tanks the frame-rate while the CPU and GPU sit half stressed, I think this ends in either an unwanted hardware solution - like APUs - or a software compromise where texture streaming is capped and supplemented with procedural textures, accepting the reality that a £500 console yields superior texture quality to PC. And because texturing is so much of the signal on screen, the overall IQ of consoles couldn't be surpassed in that case.

Personally, I think the APU hardware solution will win out, to keep the rainbow-chasing hardware market going.
Do we even have power for that?

Afaik that's way too heavy on the GPU side, and we already have games going way low on internal resolution.
 

b0uncyfr0

Member
I don't know how Alex tested VRAM usage, but others are confirming what I believe to be true: 8GB of VRAM isn't enough for high textures.

 

SmokSmog

Member
I don't know how Alex tested VRAM usage, but others are confirming what I believe to be true: 8GB of VRAM isn't enough for high textures.


You need more than 12GB of VRAM for everything pimped to the max.
The 10GB 3080 can do 4K DLSS Quality with textures at high (not very high), max settings + no RT.
 

Zathalus

Member
You are conflating transfer speeds with check-in. Grabbing one large file and throwing it into RAM at a high continuous transfer speed isn't typical of game data, which comprises lots of medium to tiny assets as small as kilobytes.

The ability to use DirectStorage at a basic level to effectively randomly load/stream resources, CRC test them, byte-align them or pass them through stream readers to check them in, and further prepare them for GPU check-in, isn't going to have as many options to get wrong and improve as you are suggesting. The problem is simply that the PC process uses the CPU, whereas the IO complex solution bypasses CPU latency and DMAs direct from the IO complex ESRAM into unified RAM.
That test doesn't just do one large file. It is 512 models and 2560 textures of different sizes that get transferred. It amounts to almost 9GB getting transferred in a third of a second.

[Image: BulkLoadDemo-timeline.png]


You can also read the API documentation yourself; it is far more complex than just simple decompression:


Getting to grips with a new API like this while porting a game not initially designed for PC in around 6 months is not easy. It took years for developers to start feeling comfortable with DX12 and Vulkan.

You are taking a 3% loading difference on the very first game utilising this technology and assuming this is the best a PC with DirectStorage can do. A game that is so buggy it frequently crashes, lacks basic features such as properly implemented screen space reflections, and had to be patched in the first week to implement something as incredibly basic as anisotropic filtering.
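Taking the figures in this post at face value ("almost 9GB" in "a third of a second", both rounded), the implied effective throughput works out as:

```python
# Effective (post-decompression) throughput implied by the quoted demo run.
# Both inputs are the rounded figures from the post above, not new measurements.
GB = 1024 ** 3
transferred_bytes = 9 * GB
duration_s = 1 / 3
effective = transferred_bytes / duration_s / GB
print(f"~{effective:.0f} GB/s effective throughput")
```

That is well above any raw NVMe read speed, which is only possible because the bytes expand during GPU decompression; the figure measures delivered data, not drive speed.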
 
What they have been (purposefully?) missing since the first day of Cerny's speech is that it's not only about bandwidth. It's also about the low latency of the whole I/O pipeline. The PS5 can load assets while a frame is being rendered, so with latency easily under 16.6ms from the moment an asset is requested to the moment it's fully ready (decompressed into the right format) in VRAM.

Cerny has been patiently explaining all that, but those PCMR people just don't want to understand that their $5K machine is totally outmatched in that area versus the PS5, because the PC has so many bottlenecks due to its long history.
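The 16.6ms frame-budget claim is easy to sanity-check with arithmetic. The bandwidth figures below are nominal (Sony's published 5.5 GB/s raw number, plus an assumed ~9 GB/s typical compressed figure, which is commonly cited but not guaranteed):

```python
# Data budget per frame at a given effective bandwidth.
# 5.5 GB/s is the published raw figure; 9 GB/s compressed is an assumption.
def mb_per_frame(bandwidth_bytes_per_sec, fps):
    return bandwidth_bytes_per_sec / fps / (1024 ** 2)

GB = 1024 ** 3
print(f"5.5 GB/s raw @ 60fps:      ~{mb_per_frame(5.5 * GB, 60):.0f} MB per frame")
print(f"9 GB/s compressed @ 60fps: ~{mb_per_frame(9 * GB, 60):.0f} MB per frame")
```

So at 60fps the drive can, in the ideal case, deliver on the order of 100-150MB within a single 16.6ms frame, which is what makes per-frame asset swaps plausible at all.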
 

Zathalus

Member
What they have been (purposefully?) missing since the first day of Cerny's speech is that it's not only about bandwidth. It's also about the low latency of the whole I/O pipeline. The PS5 can load assets while a frame is being rendered, so with latency easily under 16.6ms from the moment an asset is requested to the moment it's fully ready (decompressed into the right format) in VRAM.
So does DirectStorage with GPU decompression. You don't have to believe me, just refer to AMD's presentation at GDC this year on the matter:


Specifically the PIX capture on pages 64-68. Everything is done in milliseconds.

Or if you have time, you can watch the presentation:

 

Fess

Member
So the usual story with a PC port... wait 6 months after launch for the inevitable patches required to bring them up to par.
It’s not like Ratchet is unpatched on PS5. Give Nixxes some credit, they already got an important patch out while DF was making this video - a day 2 patch, I guess - which fixed the blocky RT shadows among other things. I bet they’ll have more patches out within a week.
Just chill for a bit. I’m ready to jump in for a replay.
 

Zathalus

Member
It’s not like Ratchet is unpatched on PS5. Give Nixxes some credit, they already got an important patch out while DF was making this video - a day 2 patch, I guess - which fixed the blocky RT shadows among other things. I bet they’ll have more patches out within a week.
Just chill for a bit. I’m ready to jump in for a replay.
Same, I'm looking forward to playing it again, but I can wait a bit.

It's going to be a long wait though, as BG3, Starfield, and Spider-Man 2 are going to hold my attention for a while.
 

Fess

Member
Same, I'm looking forward to playing it again, but I can wait a bit.

It's going to be a long wait though, as BG3, Starfield, and Spider-Man 2 are going to hold my attention for a while.
I plan to do a replay when I see that I’ll get an upgrade without any obvious downgrades.

The 1 second longer load times in rifts don’t bother me; I wouldn’t have noticed them without DF’s side-by-side clip.
But I would like to see the static RT shadows fixed.
 

PaintTinJr

Member
That test doesn't just do one large file. It is 512 models and 2560 textures of different sizes that get transferred. It amounts to almost 9GB getting transferred in a third of a second.

BulkLoadDemo-timeline.png


You can also read the API documentation yourself; it is far more complex than just simple decompression:


Getting to grips with a new API like this while porting a game not initially designed for PC in around 6 months is not easy. It took years for developers to start feeling comfortable with DX12 and Vulkan.

You are taking a 3% loading difference on the very first game utilising this technology and assuming this is the best a PC with DirectStorage can do. A game that is so buggy it frequently crashes, lacks basic features such as properly implemented screen space reflections, and had to be patched in the first week to implement something as incredibly basic as anisotropic filtering.
Your graph and your numbers still sound like you are focused on the 9GB part, when the question is how long it takes to do the equivalent of Nanite on PS5 - Epic's numbers were 2ms to supply 100 or 200MB continuously, IIRC.

Judging by your graph, the DirectStorage overhead is 8.05ms just to reach the beginning of a transfer, and despite the large data transfer/check-in over 1/3 sec, all that white space looks like critical-path inefficiency, unless I'm mistaken.

My assertion is that the critical path on PC - with the current OS fixes like DirectStorage, and discrete CPU and GPU memory thrashing the PCIe bus during in-game random streaming - will have real problems matching the IO complex's critical-path efficiency.
 

Zathalus

Member
Your graph and your numbers still sound like you are focused on the 9GB part, when the question is how long it takes to do the equivalent of Nanite on PS5 - Epic's numbers were 2ms to supply 100 or 200MB continuously, IIRC.

Judging by your graph, the DirectStorage overhead is 8.05ms just to reach the beginning of a transfer, and despite the large data transfer/check-in over 1/3 sec, all that white space looks like critical-path inefficiency, unless I'm mistaken.

My assertion is that the critical path on PC - with the current OS fixes like DirectStorage, and discrete CPU and GPU memory thrashing the PCIe bus during in-game random streaming - will have real problems matching the IO complex's critical-path efficiency.

Refer to AMD's presentation at GDC this year on the matter:


Specifically the PIX capture on pages 64-68. Everything is done in milliseconds.

Or if you have time, you can watch the presentation:

 

Gojiira

Member
So it's not a port disaster (cough, The Last of Us, cough); it had a few bugs that were already fixed in a patch one day after release, and likely more to come in a month. So, I mean, Nixxes are not too bad - not gods of ports, but they're putting in the work. This must not have been an easy port for them.

Will be interesting to revisit performance in a month or so.
Tbh I'm not sure why PC players expect perfect ports. These games are made only with the PS5 in mind, so there are always going to be issues when porting to hardware as varied as PC, especially when porting is only a recent thing for Sony. At LEAST they do patch them though, and like you say, put the work in, but yeah.
 

Senua

Gold Member
Tbh I'm not sure why PC players expect perfect ports. These games are made only with the PS5 in mind, so there are always going to be issues when porting to hardware as varied as PC, especially when porting is only a recent thing for Sony. At LEAST they do patch them though, and like you say, put the work in, but yeah.
They just need to stop releasing them before they are ready, but then that goes for more than just Sony's pc ports.
 

Ivan

Member
It doesn't fully utilise PC SSDs either, so what's your point? There is benchmarking software out there that stresses the SSD via DirectStorage far more than this game does.

People are claiming PC is in trouble based on a single buggy port that was ported in about 6 months using a brand new API. That comes off as agenda driven to me.
That's why real-world scenarios are important and not synthetic benchmarks, like Tim Sweeney said back then. It CAN'T be fully utilized on PC because the complete architecture is full of bottlenecks, and Cerny's PS5 presentation explained that really well. Just like the speed of copying one big file has nothing to do with game loading performance, especially now.

If drive speed were the only problem, PC would fly through this, but as you can see there's much more to it, even with the DirectStorage we just got. And this is a PS5 launch title - imagine a truly next-gen game like a new Cyberpunk, GTA 6 or anything really complex that would use the PS5's I/O close to maximum non-stop.

Honestly, I think the PS5 I/O system is MADE for games like Star Citizen; it would make their lives much easier.

I wonder how expensive it would be to implement a PS5-like I/O controller, with everything needed, on PC motherboards... could it work that way in the future? I guess it's a wider problem in the PC hardware industry and not that easy to solve.
 

Zathalus

Member
That's why real-world scenarios are important and not synthetic benchmarks, like Tim Sweeney said back then. It CAN'T be fully utilized on PC because the complete architecture is full of bottlenecks, and Cerny's PS5 presentation explained that really well. Just like the speed of copying one big file has nothing to do with game loading performance, especially now.

If drive speed were the only problem, PC would fly through this, but as you can see there's much more to it, even with the DirectStorage we just got. And this is a PS5 launch title - imagine a truly next-gen game like a new Cyberpunk, GTA 6 or anything really complex that would use the PS5's I/O close to maximum non-stop.

Honestly, I think the PS5 I/O system is MADE for games like Star Citizen; it would make their lives much easier.
I've already addressed this repeatedly, but DirectStorage does exactly what the PS5 does. It eliminates CPU overhead, streamlines file access, and offloads decompression onto hardware. There is an entire GDC presentation by AMD about it.

Ratchet and Clank is a title that came out within the first year of the PS5, but the developers had years to work with and optimize for the PS5 hardware; in comparison, Nixxes has had half a year to port the game and get used to working with DirectStorage. It shows as well: the game released with broken AF, of all things. Why assume everything is bug-free and working perfectly?
 
I find it rather funny how DF are so quick to point out the shortcomings of the PC version (I wonder why); there are plenty of bugs in the PS5 version too.
 

PaintTinJr

Member
Refer to AMDs presentation at GDC this year on the matter:


Specifically the Pix capture on pages 64-68. Everything is done in milliseconds.

Or if you have time, you can watch the presentation:


From start to finish of random loading, is there a graph - like your last one - that shows small 100MB transfers at just 2ms? And what mainstream PC hardware achieves that, in the scenario Alex's high-end PC would handle, even if niche by cost?

That is all that really matters in this discussion for you to prove me wrong, because if the real check-in latency is 5x greater than the IO complex's when changing data every frame, or every few frames, then that is exactly the problem I'm talking about that PC's current brute-force strategy isn't fixing.
 

Ivan

Member
I've already addressed this repeatedly, but DirectStorage does exactly what the PS5 does. It eliminates CPU overhead, streamlines file access, and offloads decompression onto hardware. There is an entire GDC presentation by AMD about it.

Ratchet and Clank is a title that came out within the first year of the PS5, but the developers had years to work with and optimize for the PS5 hardware; in comparison, Nixxes has had half a year to port the game and get used to working with DirectStorage. It shows as well: the game released with broken AF, of all things. Why assume everything is bug-free and working perfectly?
It is far from the same thing. The idea might be the same, and they're working with what they have on PC, but it is far from the same solution.
 

Bojji

Member
I've already addressed this repeatedly, but DirectStorage does exactly what the PS5 does. It eliminates CPU overhead, streamlines file access, and offloads decompression onto hardware. There is an entire GDC presentation by AMD about it.

Ratchet and Clank is a title that came out within the first year of the PS5, but the developers had years to work with and optimize for the PS5 hardware; in comparison, Nixxes has had half a year to port the game and get used to working with DirectStorage. It shows as well: the game released with broken AF, of all things. Why assume everything is bug-free and working perfectly?

DirectStorage isn't magic; it takes away GPU resources. It won't be as good as hardware made for this task. In Ratchet it hits GPUs hard, and in this game there is a PERFORMANCE difference between high and very high textures, while in most games there isn't (as long as there is enough VRAM).
 
The point of bashing such a port is always the same. In the end PS5 outperforms PC because the PC version... is less polished and not because of a hardware deficiency. Classic DF.
It's funny how DF went from saying the PC is the best to now trying its utmost to make it look worse. There are some shocking bugs in the PS5 version.
 

Zathalus

Member
From start to finish of random loading, is there a graph - like your last one - that shows small 100MB transfers at just 2ms? And what mainstream PC hardware achieves that, in the scenario Alex's high-end PC would handle, even if niche by cost?

That is all that really matters in this discussion for you to prove me wrong, because if the real check-in latency is 5x greater than the IO complex's when changing data every frame, or every few frames, then that is exactly the problem I'm talking about that PC's current brute-force strategy isn't fixing.
Pages 54-62

ioTime - from the first DirectStorage request for an asset until decompression completes on the GPU.

X1 asset is roughly 350MB in size and the ioTime is 17ms.

Commandmodule asset is roughly 3.5GB in size and the ioTime is 160ms.

Some other objects have times as low as 7ms in the video but no size is given.

So roughly 4-5ms to go from requesting the asset to completing the decompression process for a 100MB file.

Why did you settle on 2ms for 100MB? I don't recall anything from the PS5 deep dive that mentions that, but I could be wrong.

That includes the entire process and not just the check-in time, so that would obviously be lower.

Specs are:

7900X
RX 7900 XTX
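The 4-5ms estimate above is just linear scaling of the quoted ioTime figures; a quick check (pure proportionality, ignoring any fixed per-request latency, so treat it as a rough consistency check rather than a measurement):

```python
# Linearly scale the quoted ioTime data points down to a 100 MB asset.
# Inputs are the figures quoted from the presentation; the scaling model
# itself (ignoring fixed latency) is an assumption.
def scaled_io_time_ms(known_mb, known_ms, target_mb):
    return known_ms * target_mb / known_mb

print(f"from 350 MB @ 17 ms:   {scaled_io_time_ms(350, 17, 100):.1f} ms per 100 MB")
print(f"from 3500 MB @ 160 ms: {scaled_io_time_ms(3500, 160, 100):.1f} ms per 100 MB")
```

Both quoted data points land in the 4.5-4.9ms range for 100 MB, so the two figures are at least consistent with each other.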
 

Skifi28

Member
convenient that, but that's DF for you...
The video clearly focused on big and reproducible issues; a rendering bug you might see once every few dozen hours was neither within the scope of such an analysis nor important. Did you want them to play the PS5 version for 100 hours so they might catch a couple of bugs most people will never see? What would that achieve when testing the PC version of the game?

For the record, I also didn't encounter this bug within 20 hours of my playtime.
 