
Will Sony ever launch PSSR on their PC ports so they can optimize and update it the way DLSS is, given that both Nvidia and AMD support AI features on their GPUs?

Lysandros

Member
It depends on when the custom aspect is technologically timestamped, IMO. If it is custom from prior to RDNA3/4, which I believe it is, then dual issue would be custom. If it is more than that, then I agree it presents a performance barrier that the OG PS5 probably can't bridge with lower native and lower output resolution alone.
I honestly believe there is about a 0% chance that the custom ML hardware Sony is talking about is simply RDNA3's dual-issue feature. They didn't call the PS4 Pro's RPM (rapid packed math) from Vega "custom" at the time, for good reason. Those are standard features of AMD's respective architectures.
 

kevboard

Member
You should DM Mark Cerny, let him know he wasted his time and Sony's R&D money.

I don't have to; he knows that. Because unlike you, he knows how image reconstruction works in games.

TAA is image reconstruction. How do you think TAA smooths edges and inner model details? By gathering information over multiple frames, accumulating additional detail, and then applying that detail to smooth the image.

Now guess what PSSR, TSR, FSR, DLSS and XeSS do 🤔
They do the exact same thing, but instead of using the additional detail only to smooth edges, they use it to target a higher output resolution.

The only real difference between them is exactly how they gather and choose the detail they add. The principle is the exact same for all of them, and the inputs they need from the engine are also the same.
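To make the shared mechanics concrete, here is a minimal sketch of the temporal-accumulation core all of these methods build on (illustrative Python/NumPy; the function name and the nearest-neighbour reprojection are my simplifications, not any vendor's implementation):

```python
import numpy as np

def temporal_accumulate(history, current, motion_vectors, alpha=0.1):
    """One step of the temporal accumulation that TAA, TSR, FSR, DLSS
    and PSSR all build on: reproject last frame's result with motion
    vectors, then blend in this frame's new jittered sample.

    history:        (H, W, 3) image accumulated over previous frames
    current:        (H, W, 3) this frame's jittered render
    motion_vectors: (H, W, 2) per-pixel offsets back to the prior frame
    """
    h, w = current.shape[:2]
    ys, xs = np.indices((h, w))

    # Reproject: fetch where each pixel was last frame.
    prev_y = np.clip((ys - motion_vectors[..., 1]).astype(int), 0, h - 1)
    prev_x = np.clip((xs - motion_vectors[..., 0]).astype(int), 0, w - 1)
    reprojected = history[prev_y, prev_x]

    # Where the methods actually differ: deciding which history to keep.
    # Plain TAA clamps history to the local colour neighbourhood; the ML
    # upscalers learn that rejection step instead (omitted here).
    return (1.0 - alpha) * reprojected + alpha * current
```

The upscalers run this same loop while writing to a higher-resolution output, which is the point being made above.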
 

ZehDon

Member
... [words] ...
Why are you wasting time with me? You know how to make buffered TAA competitive with trained image-reconstruction techniques without dedicated hardware, something not even AMD's top engineers can do. Go sell your miracle to AMD and collect your millions!
 

Gaiff

SBI’s Resident Gaslighter
But that 52 TOPS is completely misrepresented: as it exists, it is accessed over a slower interconnect and has to work isochronously, with just the time slice between the shaders finishing the native frame and the v-sync deadline for displaying it. So if a frame takes 14ms to render natively, the RTX 2060's AI module sits idle for 14ms of the 16.67ms, i.e. it is effectively only delivering 8-9 TOPS, as it physically can't start inferencing a frame that hasn't been rasterized yet.

edit:

If we then factor in my earlier estimate that the Pro (which is on par with an RTX DLSS solution for resources) is 4x more powerful, to infer 4x more pixels than the OG PS5, then that 8-9 TOPS becomes 2-2.25 TOPS. Ignoring the difference between the units of TOPS and half FLOPS, doing it with FP16 instead would take 2.25 HFLOPS out of the OG PS5's 20.46 HFLOPS, which is just over 10%. Even if that TOPS-to-HFLOPS conversion is 100% out and 20% was needed, 20% of a 16.67ms frame time allocated to 720p -> 1200p PSSR is only 3.3ms, still leaving 13ms to render natively at 720p. So it still sounds possible IMO.
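Sanity-checking the arithmetic quoted above (a back-of-envelope sketch using only the post's own figures):

```python
# Back-of-envelope check of the figures quoted above.
frame_time_ms = 16.67      # 60fps frame budget
native_render_ms = 14.0    # time the shaders spend on the native frame
peak_tops = 52.0           # RTX 2060 tensor peak, per the post

upscale_window_ms = frame_time_ms - native_render_ms             # ~2.67ms
effective_tops = peak_tops * upscale_window_ms / frame_time_ms
print(f"upscale window: {upscale_window_ms:.2f} ms")             # 2.67 ms
print(f"effective rate: {effective_tops:.1f} TOPS")              # ~8.3 TOPS

# The post then divides by 4 (the Pro inferring 4x the pixels of the
# OG PS5) and compares against the PS5's 20.46 TFLOPS of FP16:
ps5_share = (effective_tops / 4) / 20.46
print(f"share of PS5 FP16 throughput: {ps5_share:.1%}")          # ~10.2%
```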
Then, there's no reason AMD wouldn't have developed a solution after 4 years if something as good as PSSR had been possible on PS5-tier hardware all along.

Their upcoming solution will be AI-based and unlikely to work with RDNA2.
 

kevboard

Member
Why are you wasting time with me? You know how to make buffered TAA competitive with trained image-reconstruction techniques without dedicated hardware, something not even AMD's top engineers can do. Go sell your miracle to AMD and collect your millions!

AMD could easily do it, but they didn't want to spend die space on ML acceleration hardware. They thought that was the way to go, and they were wrong.
The same thing happened when they refused to add dedicated RT hardware and haphazardly bolted the minimum necessary RT hardware onto their compute units.

And none of this changes the fact that all of these reconstruction methods are just slightly more aggressive TAA, nor that they are ALL easily interchangeable. Again, it takes 2 files in many PC games to add DLSS where it isn't supported but another reconstruction method is... because they all work 90% identically, with the last 10% being the difference in how each decides which information to discard to avoid artifacts.
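That "same inputs" point is what makes the two-file swap possible. A hypothetical sketch of the shared contract (all names here are mine for illustration, not any vendor's actual API):

```python
from dataclasses import dataclass

# Hypothetical common contract: every temporal upscaler (DLSS, FSR 2+,
# XeSS, TSR) consumes roughly this same set of per-frame engine inputs,
# which is why wrappers can swap one method for another.
@dataclass
class UpscalerInputs:
    color: "Texture"           # jittered low-res render
    depth: "Texture"           # scene depth buffer
    motion_vectors: "Texture"  # per-pixel motion back to the prior frame
    jitter_offset: tuple       # this frame's sub-pixel camera jitter
    exposure: float            # keeps history blending tone-consistent

def upscale(inputs: UpscalerInputs, output_size: tuple) -> "Texture":
    """Each method implements this differently (hand-tuned heuristics
    vs. a trained network), but the signature is effectively shared."""
    ...
```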
 

PaintTinJr

Member
Then, there's no reason AMD wouldn't have developed a solution after 4 years if something as good as PSSR had been possible on PS5-tier hardware all along.

Their upcoming solution will be AI-based and unlikely to work with RDNA2.
PS5 hardware is still not publicly documented. Or is there a PS5 Instruction Set Architecture document we can read that shows the limits and confirms your claim?

As for AMD, where was the motivation until now? Look at the synergy of FSR with the market segment they sell most of their products in. Only now are the low-end products they sell in volume powerful enough to utilise an ML AI upscaler and justify the cost of training a model.
 

Gaiff

SBI’s Resident Gaslighter
PS5 hardware is still not publicly documented. Or is there a PS5 Instruction Set Architecture document we can read that shows the limits and confirms your claim?
The burden of proof is on you.
As for AMD, where was the motivation until now? Look at the synergy of FSR with the market segment they sell most of their products in. Only now are the low-end products they sell in volume powerful enough to utilise an ML AI upscaler and justify the cost of training a model.
Except for the part where it would have been a great marketing ploy for their RDNA2/3 cards. DLSS is often cited as one of the biggest reasons to choose NVIDIA over AMD. You’re telling me they might have been sitting on a PSSR-like solution for their products for years, but decided to just let NVIDIA beat them to the punch and will only respond 5 years later with FSR4? Not buying it.

Everyone but NVIDIA had a vested interest in a non-ML or low-ML upscaler that could have given PSSR-like results, yet nobody did it.
 

PaintTinJr

Member
I honestly believe there is about a 0% chance that the custom ML hardware Sony is talking about is simply RDNA3's dual-issue feature. They didn't call the PS4 Pro's RPM (rapid packed math) from Vega "custom" at the time, for good reason. Those are standard features of AMD's respective architectures.
Given what I said in the post about the RTX 2060 only using 17% of its 52 TOPS for DLSS, with a 14ms native render and a 2.667ms ML AI upscale, what likelihood would you place on PlayStation finding space for a 300 TOPS NPU in the Pro and then only meaningfully utilising 17% (~50 TOPS) of it with the isochronous native frame data?

IMO PlayStation and AMD have always been about allocating die space to functionality that can be kept busy all the time, which pretty much rules out the Pro having a big 300 TOPS NPU in addition to the shader compute. The exception would be if the custom feature is there to facilitate tiled ML AI in newly designed games, so that deferred or forward+ renderers could complete tiles of a frame ad hoc and supply them to the NPU early, inferring them to a larger size throughout the native render rather than waiting until the native frame is complete. But realistically that feels more like a PS6 paradigm-shift solution IMO.
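Purely to illustrate that tiled idea, a hypothetical scheduling sketch (my illustration of the concept, not a claim about any shipped hardware or SDK):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def render_tile(tile_id, size=180):
    # Stand-in for rasterizing one screen tile of the native frame.
    return np.full((size, size, 3), tile_id, dtype=np.float32)

def infer_tile(tile, scale=2):
    # Stand-in for ML-upscaling one finished tile (nearest-neighbour
    # here, just to keep the sketch runnable).
    return tile.repeat(scale, axis=0).repeat(scale, axis=1)

def frame(n_tiles=16):
    # Hand each tile to the upscaler the moment it finishes, so
    # inference overlaps the rest of the render instead of idling
    # until the whole native frame is done.
    with ThreadPoolExecutor() as pool:
        futures = []
        for tile_id in range(n_tiles):
            tile = render_tile(tile_id)                    # tile completes...
            futures.append(pool.submit(infer_tile, tile))  # ...infer it now
        return [f.result() for f in futures]               # assemble output
```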
 

PaintTinJr

Member
The burden of proof is on you.
And I keep citing the Ragnarok paper, which does a version of the very thing I've described, and it seems it is either ignored or not understood to be a proof of concept.
Except for the part where it would have been a great marketing ploy for their RDNA2/3 cards. DLSS is often cited as one of the biggest reasons to choose NVIDIA over AMD. You’re telling me they might have been sitting on a PSSR-like solution for their products for years, but decided to just let NVIDIA beat them to the punch and will only respond 5 years later with FSR4? Not buying it.

Everyone but NVIDIA had a vested interest in a non-ML or low-ML upscaler that could have given PSSR-like results, yet nobody did it.
They weren't sitting on it; they just had no scenario where lots of people would use it, regardless of Nvidia's marketing shilling everyone into "DLSS is better, so just buy Nvidia".

Now, even the cheapest of their products, the ones they sell the most of, could find 50 TOPS from compute to get the benefit of an FSR4.
 

TrebleShot

Member
I don't have to; he knows that. Because unlike you, he knows how image reconstruction works in games.

TAA is image reconstruction. How do you think TAA smooths edges and inner model details? By gathering information over multiple frames, accumulating additional detail, and then applying that detail to smooth the image.

Now guess what PSSR, TSR, FSR, DLSS and XeSS do 🤔
They do the exact same thing, but instead of using the additional detail only to smooth edges, they use it to target a higher output resolution.

The only real difference between them is exactly how they gather and choose the detail they add. The principle is the exact same for all of them, and the inputs they need from the engine are also the same.
His bank balance and PS5 sales numbers beg to differ.
 

Gaiff

SBI’s Resident Gaslighter
And I keep citing the Ragnarok paper, which does a version of the very thing I've described, and it seems it is either ignored or not understood to be a proof of concept.
This isn't proof that the PS5 can do something like PSSR, unless you got to view the papers detailing PSSR and cross-referenced them.
They weren't sitting on it; they just had no scenario where lots of people would use it, regardless of Nvidia's marketing shilling everyone into "DLSS is better, so just buy Nvidia".

Now, even the cheapest of their products, the ones they sell the most of, could find 50 TOPS from compute to get the benefit of an FSR4.
No scenario to have it used by most of their RDNA2 and RDNA3 product stack? Yeah, this doesn't make sense. A much better PSSR-like solution would have done wonders for RDNA2/3 over the past few years.
 

PaintTinJr

Member
This isn't proof that the PS5 can do something like PSSR, unless you got to view the papers detailing PSSR and cross-referenced them.
Read the paper. It uses the same aspects of the ISA that are necessary, and it does inferencing of 2K to 4K textures in 8ms, so it is doing nearly 10x the number of pixels I was suggesting in just 4x the time slice. Meaning that even if PSSR is 3x the complexity (3x more matrix calculations), it would still produce enough inferred pixels within the 2-3ms window I suggested.
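Putting rough numbers on that ratio (my own guesses for the resolutions involved: the paper's 4K texture taken as 4096x4096 and "1200p" as 1920x1200, both assumptions):

```python
# Rough check of the scaling claim, using assumed resolutions.
texture_out_px = 4096 * 4096   # ~16.8M pixels inferred in the GOWR paper
frame_out_px = 1920 * 1200     # ~2.3M pixels for the suggested output
paper_time_ms = 8.0            # paper: 2K -> 4K texture inference in 8ms
window_ms = 2.0                # low end of the suggested 2-3ms window

print(f"pixel ratio: {texture_out_px / frame_out_px:.1f}x")  # ~7.3x
print(f"time ratio: {paper_time_ms / window_ms:.1f}x")       # 4.0x
```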
No scenario to have it used by most of their RDNA2 and RDNA3 product stack? Yeah, this doesn't make sense. A much better PSSR-like solution would have done wonders for RDNA2/3 over the past few years.
Check the Steam hardware survey and tell me how many users would have benefitted even 3 years ago had they sunk millions into training an FSR4 that wouldn't have changed their market position, but potentially damaged sales of their low-end FSR2-class products.
 

Gaiff

SBI’s Resident Gaslighter
Read the paper. It uses the same aspects of the ISA that are necessary, and it does inferencing of 2K to 4K textures in 8ms, so it is doing nearly 10x the number of pixels I was suggesting in just 4x the time slice. Meaning that even if PSSR is 3x the complexity (3x more matrix calculations), it would still produce enough inferred pixels within the 2-3ms window I suggested.
I read the GOWR paper, not the PSSR one. Do you have a link to the PSSR paper?
Check the Steam hardware survey and tell me how many users would have benefitted even 3 years ago had they sunk millions into training an FSR4 that wouldn't have changed their market position, but potentially damaged sales of their low-end FSR2-class products.
Again, this makes no sense. NVIDIA developed DLSS and RTX when 0% of the users would have benefitted from them. The objective of those technologies is to drive adoption, not simply benefit people who are already customers. A PSSR-like solution would have been a reason to buy RDNA2/3, which is why they developed those solutions in the first place...to get new buyers.
 

Lysandros

Member
Given what I said in the post about the RTX 2060 only using 17% of its 52 TOPS for DLSS, with a 14ms native render and a 2.667ms ML AI upscale, what likelihood would you place on PlayStation finding space for a 300 TOPS NPU in the Pro and then only meaningfully utilising 17% (~50 TOPS) of it with the isochronous native frame data?

IMO PlayStation and AMD have always been about allocating die space to functionality that can be kept busy all the time, which pretty much rules out the Pro having a big 300 TOPS NPU in addition to the shader compute. The exception would be if the custom feature is there to facilitate tiled ML AI in newly designed games, so that deferred or forward+ renderers could complete tiles of a frame ad hoc and supply them to the NPU early, inferring them to a larger size throughout the native render rather than waiting until the native frame is complete. But realistically that feels more like a PS6 paradigm-shift solution IMO.
I get it, but I didn't claim that the PS5 Pro has a 300 TOPS "NPU" in itself, without any CU-based ML capabilities. What I am saying is that it has "a (additional) custom/dedicated ML hardware/silicon" (sticking to Sony's words) as an efficiency-focused helper block, which brings the "total" ML figure up to 300 TOPS, while dismissing the idea that all of this comes down to, or consists solely of, RDNA3's standard dual-issue feature.
 

JimboJones

Member
I read the GOWR paper, not the PSSR one. Do you have a link to the PSSR paper?

Again, this makes no sense. NVIDIA developed DLSS and RTX when 0% of the users would have benefitted from them. The objective of those technologies is to drive adoption, not simply benefit people who are already customers. A PSSR-like solution would have been a reason to buy RDNA2/3, which is why they developed those solutions in the first place...to get new buyers.
And now AMD has to contend with years of DLSS integration already in games.
They either have to start integrating FSR4 from zero or somehow retrofit it into their years of FSR2 games. They basically just made the situation worse for themselves by pussyfooting around.
 

PaintTinJr

Member
I get it, but I didn't claim that the PS5 Pro has a 300 TOPS "NPU" in itself, without any CU-based ML capabilities. What I am saying is that it has "a (additional) custom/dedicated ML hardware/silicon" (sticking to Sony's words) as an efficiency-focused helper block, which brings the "total" ML figure up to 300 TOPS, while dismissing the idea that all of this comes down to, or consists solely of, RDNA3's standard dual-issue feature.
I'll be in full agreement when we see the hardware in x-rays showing something more than just an evolution of the OG PS5's custom RDNA. But with Cerny seemingly gagged by Hulst and co. from sating our technology appetites, from the info we have I still lean towards the OG PS5 hardware being able to infer an extra 1.5M pixels with PSSR.
 

PaintTinJr

Member
I read the GOWR paper, not the PSSR one. Do you have a link to the PSSR paper?
And yet I'm pretty sure you've liked posts that made similar assertions about PSSR working exactly like DLSS and XeSS without further knowledge of PSSR, so you are just cherry-picking when it suits you to demand actual info rather than follow the evidence we do have.
Again, this makes no sense. NVIDIA developed DLSS and RTX when 0% of the users would have benefitted from them. The objective of those technologies is to drive adoption, not simply benefit people who are already customers. A PSSR-like solution would have been a reason to buy RDNA2/3, which is why they developed those solutions in the first place...to get new buyers.
Nvidia are, and have been, the market leader forever. AMD playing the long game, betting that eventually all GPUs (even integrated ones, with NPUs and so on) will be good enough, and focusing on that long-term market bet to ultimately win share, is pretty sound.

Let's face it: the top AMD card runs competitive FPS games faster than Nvidia thanks to faster raster graphics, yet that makes zero difference to press coverage or to AMD's RX market share (despite a much cheaper flagship) among the vast majority of younger gamers seriously interested in those competitive FPS games.

So why would rushing to compete with Nvidia on ML AI upscaling 4 years earlier have grown their market share from the top-end gamer any more than faster raster graphics has, when they can wait for their segment of the market to come to them?
 

Zathalus

Member
The only evidence we have for how PSSR works is from the Sony leaks.

- It’s an ML-enhanced version of TAAU
- Inputs are similar to DLSS and FSR
- 2ms time for 1080p to 4K
- Supports DRS & HDR
- No per title training needed
- Only supports up to 4K, 8K support in a future SDK.

That’s it; anything else is speculation. Although based on that and the results, it sure seems similar to DLSS.
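For scale, the leaked 2ms figure implies roughly this output rate (simple arithmetic, assuming "4K" means 3840x2160):

```python
# Implied inference rate from the leaked 2ms 1080p -> 4K figure.
out_px = 3840 * 2160   # 4K output pixels (~8.3M)
time_s = 2e-3          # leaked PSSR upscale time
print(f"{out_px / time_s / 1e9:.1f} Gpixels/s output rate")  # ~4.1
```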
 