
Does PSSR need to change to a transformer model?

Hudo

Gold Member
Nvidia has once again jumped way ahead of the competition with DLSS 4. DLSS 4 is using a transformer-based model for image upscaling compared to the CNN-based model that was used for DLSS 2 & 3.

So where does this leave PSSR? PSSR is using a CNN-based model and you have to wonder whether this is ultimately a dead end. Should Sony reassess their decision to go with a CNN-based model? Perhaps they seriously need to consider building a transformer model in conjunction with improving their current CNN model?

Maybe the future should be a transformer-based PSSR for PlayStation 6? What do you think?
It entirely depends on their training data and what they want to achieve. A transformer model only begins to make sense if you have a lot of training data. The biggest advantage of a transformer model lies in its QKV self-attention, which enables the model to relate information across "big" distances without being hindered by "direction", so to speak. Another advantage is that, mathematically, it has unlimited context length, which RNNs or LSTMs do not have. But realistically, you are still limited by your hardware. It's just that you don't have to worry about exploding or vanishing gradients to the same extent. Transformers are also nice if you want to merge multi-modal data into one embedding space. And maybe that's the reason why Nvidia actually switched architectures: transformers are much easier to scale with hardware, since you "just" add more tokens/patches. So they are more easily scaled in width and depth. A CNN is much easier to scale in depth, but not that easy (sometimes not even feasible) to scale in width.
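(For reference, a minimal NumPy sketch of the scaled dot-product (QKV) self-attention described above; the shapes and names are made up for illustration and have nothing to do with PSSR or DLSS internals.)

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over a token sequence.

    x: (n_tokens, d_model) input embeddings (e.g. image patches)
    w_q, w_k, w_v: (d_model, d_head) projection matrices
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v             # project tokens to queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (n_tokens, n_tokens) pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: every token attends to every other
    return weights @ v

# toy example: 64 patch tokens with 32-dim embeddings
rng = np.random.default_rng(0)
x = rng.standard_normal((64, 32))
w_q, w_k, w_v = (rng.standard_normal((32, 32)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)   # (64, 32); no locality or "direction" constraint
```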
A CNN is by no means "outdated". They are still very actively researched and very much in use, especially if you don't have a lot of data, because CNNs are intrinsically suited for visual data/visual problems due to the nature of how hierarchical convolutions work. It's much easier to extract features from a CNN and analyze them. It's much easier to extend and modify them. They are much more efficient than transformers for visual problems (I know there are optimizations like flash attention and linear attention etc., but these come with trade-offs). A transformer model is also a lot more heavyweight. We don't know enough about PSSR. We don't know, for example, whether they also use attention blocks or residual blocks (most likely), etc.
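(To put a rough number on the efficiency point, here's a back-of-envelope sketch; the feature-map size and channel count are assumptions picked purely for illustration, not measurements of any real upscaler.)

```python
# Rough multiply-add counts: a 3x3 conv vs. global self-attention on the same feature map.
# Purely illustrative numbers, not a profile of PSSR, DLSS, or any real upscaler.

H, W, C = 135, 240, 64          # e.g. a 1080p frame downsampled 8x, with 64 channels
n, d = H * W, C                 # attention: one token per spatial position

conv_macs = H * W * C * C * 9                 # 3x3 convolution, C -> C channels
attn_macs = 2 * n * n * d + 4 * n * d * d     # QK^T + attn*V, plus Q/K/V/output projections

print(f"conv:      {conv_macs / 1e9:.2f} GMACs")   # ~1.2 GMACs
print(f"attention: {attn_macs / 1e9:.2f} GMACs")   # ~135 GMACs; the n^2 term explodes with resolution
```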
In fact, SOTA models for segmentation like SAM2 or Faster R-CNN/Mask R-CNN-based models use convolutions, but often together with transformer blocks/modules, or at least attention blocks, which are, for example, used in the encoder-decoder of diffusion models. I mean, you can even solve all the problems that transformers or CNNs can solve with MLPs (Universal Approximation Theorem), but you will find that you need a lot more of them, and they won't be as good as other models (e.g. MLP-Mixers). Or what about state-space models like Mamba?

TL;DR: It depends on what your goals, your training data, and your hardware are. CNNs are not obsolete and certainly not a dead end. Hell, the U-Net, which is 10 years old at this point (it was introduced at MICCAI 2015), is still a popular architecture that gets new variations every year. Transformers are not suited for every problem, or are at least less efficient at solving some of them; the same goes for CNNs. Although, it is remarkable how prevalent transformers have become, considering that they were originally intended to solve problems of LSTMs in natural language processing.
 

Gaiff

SBI’s Resident Gaslighter
I thought it was DLSS4?
 
Hur dur dur

Why come in here with shitposts?

At 3 TFLOPs, 4GB VRAM and an older driver:

1080p output (720p internal, Transformer/CNN): 40.38 / 47.29 fps (~ +3.6 ms of cost)
1440p output (720p internal, Transformer/CNN): 32.59 / 41.13 fps (~ +6.3 ms of cost)
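(Quick sanity check on where those millisecond figures come from, assuming the paired numbers above are average frame rates:)

```python
# Convert the quoted fps pairs (transformer / CNN) into per-frame cost deltas.
# Assumes the numbers above are average frame rates.
def extra_cost_ms(fps_transformer, fps_cnn):
    return 1000 / fps_transformer - 1000 / fps_cnn

print(extra_cost_ms(40.38, 47.29))  # ~3.6 ms at 1080p output
print(extra_cost_ms(32.59, 41.13))  # ~6.4 ms at 1440p output (the ~6.3 ms quoted above)
```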


This is not a video from the guy with the stats above, but it goes into DLSS 4 settings on a 3050.



Plenty of performance to be gained at lower preset.

So yea, arguments that low TOPs RTX cards cannot run the model evaporated

But that's a high cost actually. PSSR already costs 2ms.

Also the way they (DF?) compare the performance cost (96%... of what? How many ms, how many frames) is confusing and only there to minimize the real cost.
 
Last edited:

Cyborg

Member
It's crazy to think that anyone can compete with Nvidia at this moment in time. I think they have a 3-5 year lead.
 

nemiroff

Gold Member
I thought it was DLSS4?
DLSS 3: Comes with the game
DLSS 4: Add it through the Nvidia app

(Super Resolution)


the way they (DF?) compare the performance cost (96%... of what? How many ms, how many frames) is confusing and only there to minimize the real cost.
"DLSS4" yields 96% of the performance (on a 5090) compared to DLSS3 on the same hardware and internal resolution.

The reason we still consider it faster in the end is that "DLSS4" looks comparable or better at 1080 internal res than DLSS3 at 1440 internal res. Thus you can use the lower resolution for better performance without losing image quality.
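(To make the arithmetic concrete, a toy example; the frame rates below are made up, only the 96% ratio is taken from the comparison:)

```python
# Toy numbers to illustrate the claim; only the 96% ratio comes from the comparison above.
dlss3_fps_1440p_internal = 100.0                            # hypothetical DLSS3 fps, 1440p internal
dlss4_fps_1440p_internal = dlss3_fps_1440p_internal * 0.96  # ~4% slower at the SAME internal res

dlss3_fps_1080p_internal = 130.0                            # hypothetical fps at the lower internal res
dlss4_fps_1080p_internal = dlss3_fps_1080p_internal * 0.96  # still ~4% slower than DLSS3 here...

# ...but if DLSS4 at 1080p internal looks as good as DLSS3 at 1440p internal,
# the like-for-like comparison becomes:
print(dlss4_fps_1080p_internal, "vs", dlss3_fps_1440p_internal)  # 124.8 vs 100.0 -> net gain
```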
 
Last edited:
DLSS 3: Comes with the game
DLSS 4: Add it through the Nvidia app

(Super Resolution)



"DLSS4" yields 96% of the performance (on a 5090) compared to DLSS3 on the same hardware and internal resolution.

The reason we still consider it faster in the end is that "DLSS4" looks comparable or better at 1080 internal res than DLSS3 at 1440 internal res. Thus you can use the lower resolution for better performance without losing image quality.
How many people are owing a 5090? How many frames are they going to lose on a 2080 or a 3080 with comparable IQ? This is what people want to know.

Those percentages are totally useless and only there to minimize the cost (96% from 100% seems super low, obviously).
 

Zathalus

Member
How many people are owing a 5090? How many frames are they going to lose on a 2080 or a 3080 with comparable IQ? This is what people want to know.

Those percentages are totally useless and only there to minimize the cost (96% from 100% seems super low, obviously).
No need to speculate on the cost. Nvidia lays it out quite neatly in the new DLSS programming guide, page 5-7:

 

V1LÆM

Gold Member
It's crazy to think that anyone can compete with Nvidia at this moment in time. I think they have a 3-5 year lead.
AMD can't so consoles have no chance. Well, Nintendo got the right idea going with Nvidia.

Maybe Sony/Microsoft should move to Nvidia too or ARM at least.
 

Hudo

Gold Member
Maybe Sony/Microsoft should move to Nvidia too or ARM at least.
Isn't one of the reasons why Sony and Microsoft didn't go for Nvidia for most of the console gens that Nvidia are pretty shitty to deal with as a business partner? Not sure if that changed during the time the deal was made to supply the Switch with Tegra chips or if Nvidia was happy to get rid of their stock and Nintendo jumped on the opportunity. There were rumors that for Switch 2, Nvidia suddenly weren't so friendly anymore. But who knows if that's true.
 

nemiroff

Gold Member
How many people are owing a 5090? How many frames are they going to lose on a 2080 or a 3080 with comparable IQ? This is what people want to know.

Those percentages are totally useless and only there to minimize the cost (96% from 100% seems super low, obviously).

"How many owes 5090"? :lollipop_confused:

In most real-world scenarios you'll gain frames (on any supported GPU), not lose them, due to the lower internal resolution needed for comparable image quality.

Am I missing something? I'm not sure I understand what you're saying... 🤷‍♂️
 
Last edited:
No need to speculate on the cost. Nvidia lays it out quite neatly in the new DLSS programming guide, page 5-7:

About 2 to 3 times more expensive from 1080p to 4K on just about all the GPUs tested! There is no DLSS4 miracle there. This is why their "96% of the performance" framing is completely misleading when it actually costs twice (or more) what the DLSS3 rendering does!

The added IQ has a high cost, which is obviously much lower on bigger GPUs, and it totally makes sense to have that latest version there.

[Image: DLSS execution time table]
 
Last edited:

nemiroff

Gold Member
About 2 to 3 times more expensive from 1080p to 4K on just about all the GPUs tested! There is no DLSS4 miracle there. This is why their "96% of the performance" framing is completely misleading when it actually costs twice (or more) what the DLSS3 rendering does!

The added IQ has an incredibly high cost, which is obviously much lower on bigger GPUs, and it totally makes sense to have that latest version there.

[Image: DLSS execution time table]
These are pure execution times from a command prompt test without a renderer. What is this discussion... :messenger_weary:
 
Last edited:

Zathalus

Member
About 2 to 3 times more expensive from 1080p to 4K on just about all the GPUs tested! There is no DLSS4 miracle there. This is why their "96% of the performance" framing is completely misleading when it actually costs twice (or more) what the DLSS3 rendering does!

The added IQ has a high cost, which is obviously much lower on bigger GPUs, and it totally makes sense to have that latest version there.

[Image: DLSS execution time table]
It’s just the execution time of DLSS, but the increase is still more than fine for all the cards and the resolutions they are meant for. 3060ti is best used at 1080p for example and 0.79ms is really not much. Even a 4080 at 4K is 1.5ms which is still very low.
 

Silver Wattle

Gold Member
Consoles have never needed the latest when it comes to tech, they just need "good enough", so having a proper DLSS2/3 alternative is good enough.
PS6 is when you can start expecting a better version, but even that is still not a "need".
 
It’s just the execution time of DLSS, but the increase is still more than fine for all the cards and the resolutions they are meant for. 3060ti is best used at 1080p for example and 0.79ms is really not much. Even a 4080 at 4K is 1.5ms which is still very low.
1080p is the output resolution, from native 540p. How is that acceptable on a 3060ti? This is Switch level of resolution here. On a Desktop GPU the output resolution should be min 4K, not 1080p!
 

Panajev2001a

GAF's Pleasant Genius
It’s just the execution time of DLSS, but the increase is still more than fine for all the cards and the resolutions they are meant for. 3060ti is best used at 1080p for example and 0.79ms is really not much. Even a 4080 at 4K is 1.5ms which is still very low.
For 120+ Hz, 0.79-1.5 ms is still 9.87-18.75% of your total frame time (against an ~8 ms budget), or higher. Some of the logic processing your frame is pretty fixed (input handling does not become much cheaper at 120 Hz, to give one example), and if you exclude that, the percentage cost is even higher.
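(It looks like those percentages were computed against an 8 ms budget, i.e. 125 Hz; a quick check:)

```python
# Share of the frame budget eaten by the upscaling pass alone.
def budget_share(cost_ms, refresh_hz):
    frame_ms = 1000 / refresh_hz
    return 100 * cost_ms / frame_ms

print(budget_share(0.79, 125), budget_share(1.5, 125))  # 9.875% and 18.75% at an 8 ms frame
print(budget_share(0.79, 120), budget_share(1.5, 120))  # ~9.5% and ~18% at an 8.33 ms frame
```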
 
Last edited:

yamaci17

Member
1080p is the output resolution, from native 540p. How is that acceptable on a 3060ti? This is Switch level of resolution here. On a Desktop GPU the output resolution should be min 4K, not 1080p!
1080p dlss 4 performance looks better than native 1080p taa and dlss 3 quality

also 3060ti has 8 GB VRAM. you can't output to 4K in many new games with that VRAM anymore. horizon forbidden west tanks performance in settlements even with 4k dlss ultra performance on a 3060ti but it will run 60+ FPS with 1440p dlss quality. indiana jones will tank to single digits even with low texture poolsize at 4K and you can barely get away using medium texture pool at 1440p output

8 GB VRAM going forward is meant for 1080p output resolution. and regardless, even dlss 4 performance there now achieves better image quality than dlss 3 quality

devs won't even bother making their games run properly on a 1440p buffer with 8 GB VRAM. you should consider yourself lucky if you don't get potato textures or tanking performance at a 1080p buffer at this point. regardless of how powerful your 8 GB GPU is, it will be a 1080p card going forward. sure, the 3070/3060ti has the power for 1440p with upscaling, but it doesn't matter much anymore. it is the gtx 770 2 GB situation where the GPU had power for 1080p but had to play at 720p just because of VRAM. it happens. this was to be expected.

also 0.79 ms is for dlss quality which uses 720p resolution at 1080p. but it doesn't matter much anymore anyways

avowed, final fantasy 7 rebirth, spiderman 2. all these new games are nearly unplayable at 4K buffer with 8 GB GPUs. and even at 1080p buffer you have to reduce textures (to get smooth performance in spiderman 2 at 1080p output, I had to use medium textures).
 
Last edited:

nemiroff

Gold Member
1080p is the output resolution, from native 540p. How is that acceptable on a 3060ti? This is Switch level of resolution here. On a Desktop GPU the output resolution should be min 4K, not 1080p!
Again: it's not about "acceptable" or not. What you're seeing here is a ballpark illustration for engineers of DLSS execution times. It's not meant to be used for frame comparisons in a game.

If you want specific performance comparisons in specific scenarios, here's one example comparing DLSS3 at Quality and Performance vs DLSS 4 at Performance. If that doesn't clear up your confusion, then I don't know what will.

 

dgrdsv

Member
Isn't one of the reasons why Sony and Microsoft didn't go for Nvidia for most of the console gens that Nvidia are pretty shitty to deal with as a business partner?
Yes which is why this company is now hanging up there in the top-3 biggest companies on the planet - because "it's pretty shitty to deal with as a business partner" /s

Not sure if that changed
It never was the case.
This is a perpetuated conjecture stemming from the issues MS had with the original Xbox production - for which they can only blame themselves and nobody else.
(They basically asked Nvidia to take a hit on the production costs of Xbox h/w, which there was no reason for Nvidia to do, as they don't get any royalties from s/w sales on the consoles. So they declined, as any company in their place would do.)

during the time the deal was made to supply the Switch with Tegra chips or if Nvidia was happy to get rid of their stock and Nintendo jumped on the opportunity. There were rumors that for Switch 2, Nvidia suddenly weren't so friendly anymore.
These rumors are very obviously false as well or otherwise Nintendo would've found a different partner for Switch 2.
Nvidia isn't very happy with how technically unambitious Nintendo is, since they are literally launching a new console in 2025 on tech from 2018. This in turn makes Nvidia tech look worse than it could. This is a reasonable stance on their part, but it doesn't affect Nintendo's business decisions.

But who knows if that's true.
The main reason why both Sony and MS went with AMD was costs - it's basically always the main reason for any console h/w choice you observe like ever.
The secondary reason is the ability of AMD to provide both an in-house CPU and GPU - which in turn also translates into costs advantage as you don't need to license these separately (and Arm CPU requires licensing fees which you're paying to Arm one way or the other).
All the rest is pretty much irrelevant, and the cost difference also explains why console h/w is so far behind Nvidia's now - you can't expect to pay less for something that will be as advanced or advance at the same pace.
 
Last edited:

Zathalus

Member
1080p is the output resolution, from native 540p. How is that acceptable on a 3060ti?
You would use quality at 1080p, which before this new model didn’t look great but is perfectly acceptable with it now. The execution cost shouldn’t differ that significantly as the output resolution is still 1080p.

This is Switch level of resolution here. On a Desktop GPU the output resolution should be min 4K, not 1080p!
Most people using a 60-class GPU from 4 years ago should probably stick to 1080p. Even a 4070 Super, which is faster than a PS5 Pro, is best served at 1440p. Resolution, frame rate, settings, and RT all need to be balanced to achieve the best gameplay experience.
 

Bieren

Member
why are we so insistent on chasing the most amazing settings and fps ever....on a console. enjoy it for what it is. if you want to spend more time chasing tweaks and settings and mods instead of playing the game, there is an option for you.
 

Panajev2001a

GAF's Pleasant Genius
You would use quality at 1080p, which before this new model didn’t look great but is perfectly acceptable with it now. The execution cost shouldn’t differ that significantly as the output resolution is still 1080p.
I will admit, I do not know for certain, but it would make sense that a great deal of the cost is dependent on the number of input pixels and not only the target resolution.

Most people using a 60-class GPU from 4 years ago should probably stick to 1080p. Even a 4070 Super, which is faster than a PS5 Pro, is best served at 1440p. Resolution, frame rate, settings, and RT all need to be balanced to achieve the best gameplay experience.
 

Zathalus

Member
I will admit, I do not know for certain, but it would make sense that a great deal of the cost is dependent on the number of input pixels and not only the target resolution.
Considering it only goes up from 0.79 to 1.38 ms when jumping from 1080p to 1440p output, just changing the input resolution would probably cost even less than that.
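(Rough consistency check, assuming the cost scales mostly with output pixel count, which those two numbers roughly support:)

```python
# Does the quoted cost increase track the output pixel count? Rough check only.
pixels_1080p = 1920 * 1080
pixels_1440p = 2560 * 1440

print(pixels_1440p / pixels_1080p)  # ~1.78x more output pixels
print(1.38 / 0.79)                  # ~1.75x higher quoted execution time
# The ratios roughly line up, so changing only the input resolution (Quality vs Performance)
# at a fixed 1080p output should move the cost much less than that.
```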
 

octos

Member
The model is only part of the equation; you also need good training data. A CNN can do just fine if trained properly. It has advantages too; it's not like one model is better than the other. They're different.
 

Hudo

Gold Member
Yes which is why this company is now hanging up there in the top-3 biggest companies on the planet - because "it's pretty shitty to deal with as a business partner" /s


It never was the case.
This is a perpetuated conjecture stemming from the issues MS had with the original Xbox production - for which they can only blame themselves and nobody else.
(They basically asked Nvidia to take a hit on the production costs of Xbox h/w, which there was no reason for Nvidia to do, as they don't get any royalties from s/w sales on the consoles. So they declined, as any company in their place would do.)


These rumors are very obviously false as well or otherwise Nintendo would've found a different partner for Switch 2.
Nvidia isn't very happy with how technically unambitious Nintendo is, since they are literally launching a new console in 2025 on tech from 2018. This in turn makes Nvidia tech look worse than it could. This is a reasonable stance on their part, but it doesn't affect Nintendo's business decisions.


The main reason why both Sony and MS went with AMD was costs - it's basically always the main reason for any console h/w choice you observe like ever.
The secondary reason is the ability of AMD to provide both an in-house CPU and GPU - which in turn also translates into costs advantage as you don't need to license these separately (and Arm CPU requires licensing fees which you're paying to Arm one way or the other).
All the rest is pretty much irrelevant, and the cost difference also explains why console h/w is so far behind Nvidia's now - you can't expect to pay less for something that will be as advanced or advance at the same pace.
Thanks for the clarification but calm the fuck down, dude.
 

analog_future

Resident Crybaby
As far as major upscalers go, I think it's fair to rank them as such right now:

  1. DLSS 4
  2. DLSS 3
  3. XeSS
  4. PSSR
  5. FSR 3

FSR 4 hands on previews so far also give me the impression that it's going to make the leap over PSSR as well.


So at this point it just begs the question... Why did Sony bother with their own proprietary upscaling solution when AMD has a solution that's going to be similar or superior anyway? Seems like a waste of time and resources.
 
Last edited:

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
AMD can't so consoles have no chance. Well, Nintendo got the right idea going with Nvidia.

Maybe Sony/Microsoft should move to Nvidia too or ARM at least.

Nintendo's hardware is waaaay worse though. And that matters A LOT!
 

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
As far as major upscalers go, I think it's fair to rank them as such right now:

  1. DLSS 4
  2. DLSS 3
  3. XeSS
  4. PSSR
  5. FSR 3

FSR 4 hands on previews so far also give me the impression that it's going to make the leap over PSSR as well.


So at this point it just begs the question... Why did Sony bother with their own proprietary upscaling solution when AMD has a solution that's going to be similar or superior anyway? Seems like a waste of time and resources.

Because it's theirs. People who don't understand play the short game. Nobody is beating DLSS, so go and remove those two.

No reason for Sony to rely on others, when they have the power, money, and engineering intelligence to have their own solution.
 

Lysandros

Member
I don't have much familiarity with XeSS. Where is this notion of XeSS being superior to PSSR's latest iteration coming from? Is this related to DF/Oliver constantly emphasizing PSSR's performance in still images as opposed to its clarity in motion?
 
Last edited:

daninthemix

Member
Nvidia has once again jumped way ahead of the competition with DLSS 4. DLSS 4 is using a transformer-based model for image upscaling compared to the CNN-based model that was used for DLSS 2 & 3.

So where does this leave PSSR? PSSR is using a CNN-based model and you have to wonder whether this is ultimately a dead end. Should Sony reassess their decision to go with a CNN-based model? Perhaps they seriously need to consider building a transformer model in conjunction with improving their current CNN model?

Maybe the future should be a transformer-based PSSR for PlayStation 6? What do you think?
So you're saying that by PS6, Sony will have caught up with Nvidia's techniques circa 2025?

Do you see the pattern here? Nvidia's next thing will leave the PS6 for dust.

I'm sorry but this is quite a moronic thread.
 

Zathalus

Member
I don't have much familiarity with XeSS. Where is this notion of XeSS being superior to PSSR's latest iteration coming from? Is this related to DF/Oliver constantly emphasizing PSSR's performance in still images as opposed to its clarity in motion?
Fewer issues with temporal stability, denoising, and ghosting, I'd imagine. But newer versions of PSSR have been having fewer problems on that front, although temporal stability is still an issue with it.

The latest version of XeSS on Intel GPUs (as it uses the best version) is similar in quality to DLSS 3.5.
 
I don't care. First they have to fix Silent Hill.
This one doesn't get talked about, but Dragon's Dogma 2's PSSR is really bad. The foliage and shadows are broken now and the game looks so much noisier than it ever used to on base PS5.

Digital Foundry also just examined Hogwarts, which is also more unstable than it used to be.

I'll add SW Outlaws too, which has really disgusting-looking reflections on Pro.
 

Panajev2001a

GAF's Pleasant Genius
Yes which is why this company is now hanging up there in the top-3 biggest companies on the planet - because "it's pretty shitty to deal with as a business partner" /s


It never was the case.
This is a perpetuated conjecture stemming from the issues MS had with the original Xbox production - for which they can only blame themselves and nobody else.
(They basically asked Nvidia to take a hit on the production costs of Xbox h/w, which there was no reason for Nvidia to do, as they don't get any royalties from s/w sales on the consoles. So they declined, as any company in their place would do.)


These rumors are very obviously false as well or otherwise Nintendo would've found a different partner for Switch 2.
Nvidia isn't very happy with how technically unambitious Nintendo is, since they are literally launching a new console in 2025 on tech from 2018. This in turn makes Nvidia tech look worse than it could. This is a reasonable stance on their part, but it doesn't affect Nintendo's business decisions.


The main reason why both Sony and MS went with AMD was costs - it's basically always the main reason for any console h/w choice you observe like ever.
The secondary reason is the ability of AMD to provide both an in-house CPU and GPU - which in turn also translates into costs advantage as you don't need to license these separately (and Arm CPU requires licensing fees which you're paying to Arm one way or the other).
All the rest is pretty much irrelevant, and the cost difference also explains why console h/w is so far behind Nvidia's now - you can't expect to pay less for something that will be as advanced or advance at the same pace.
They did give Sony a bugged old chip (FlexIO was not operating as expected and there were some other puzzling performance bugs with vertex processing IIRC), but then again Sony went to nVIDIA at the last possible minute as their RS solution was over budget (cost and power consumption, I think).
 

Justin9mm

Member
PlayStation's Super Shit Resolution, am I right!? :pie_diana: :pie_roffles:

Seriously though.. PSSR just came out, I think you need to give it time!
 
Last edited:

dgrdsv

Member
They did give Sony a bugged old chip (FlexIO was not operating as expected and there were some other puzzling performance bugs with vertex processing IIRC), but then again Sony went to nVIDIA at the last possible minute as their RS solution was over budget (cost and power consumption, I think).
Sony got exactly what Sony asked for. They've been contracted for the GPU h/w in PS3 very late into its development cycle and had no time to provide anything better than RSX at that point.
This is also completely on Sony - who depending on what source you believe either planned to use Cell for rendering or had a contract for the GPU with Toshiba which failed to deliver leaving them without a GPU at all.
 
Last edited:

Panajev2001a

GAF's Pleasant Genius
Sony got exactly what Sony asked for. They've been contracted for the GPU h/w in PS3 very late into its development cycle and had no time to provide anything better than RSX at that point.
Debatable: some parts were broken (FlexIO was essentially unidirectional and not up to spec) and some HW bugs on the GPU side caused a lot of pain (hence the reliance on SPUs to work around them)… it is debatable whether nVIDIA could not or would not modify the architecture they were working on for PC (unified shaders) because they did not want to compete with themselves.

This is also completely on Sony - who depending on what source you believe either planned to use Cell for rendering or had a contract for the GPU with Toshiba which failed to deliver leaving them without a GPU at all.
To be fair, I do not disagree that Sony was late, but the RS from Toshiba had working samples; it was not just an idea. Apparently Toshiba engineers were shocked when the nVIDIA partnership was announced.
 

RafterXL

Member
It’s just the execution time of DLSS, but the increase is still more than fine for all the cards and the resolutions they are meant for. 3060ti is best used at 1080p for example and 0.79ms is really not much. Even a 4080 at 4K is 1.5ms which is still very low.
It also ignores the fact that DLSS4 looks significantly better, to the point that dropping down a notch (Quality > Balanced) actually improves both image quality and execution time. You literally lose nothing with the new model, so comparing apples to apples in terms of resolution makes no sense.
 

simpatico

Member
E3 2026

Sony is presenting

Kutaragi slowly walks to the stage

"People like to ask us about PSSR. I've finally got something to tell them: We're going native, PSSR is canceled"

PS6 and RDNA5 banners drop all around the stage

Kutaragi walks off

Nvidia stock plummets
 
But people have downclocked a 3050 to 3 TFLOPs (guess for what comparison :messenger_beaming: ) and managed to run the model in Cyberpunk 2077. The image quality is so good that the performance hit does not matter if DLSS Balanced/Performance looks as good as CNN Quality. So it is a net gain in performance and image quality. The model somehow handles super low internal resolutions way, way better than the old model.

If a 3 TFLOPs downclocked 3050 can, a PS5 & Xbox Series can definitely.

[Image: DLSS 4 (transformer model) vs DLSS 3 (CNN) comparison]


Shit even ultra performance is legit with this model.
So, I guess this means DLSS4 + Transformer is a given for Switch 2... pretty decent image quality for a mobile SoC.

Now I understand why AMD had no chance to win Nintendo's contract...
 
devs won't even bother making their games run properly on a 1440p buffer with 8 GB VRAM.
It should be possible with the DirectStorage API and NVMe SSDs (PCIe 3.0/4.0).

I'm not sure why they don't do it...

PCIe 4.0 mobos/SSDs are dirt cheap these days.
 
Last edited:
This is a perpetuated conjecture stemming from the issues MS had with the original Xbox production - for which they can only blame themselves and nobody else.
(They basically asked Nvidia to take a hit on the production costs of Xbox h/w, which there was no reason for Nvidia to do, as they don't get any royalties from s/w sales on the consoles. So they declined, as any company in their place would do.)
You didn't clarify if nVidia utilized a die shrink or not.

A die shrink justifies a price reduction... especially 20-25 years ago, when wafers were dirt cheap compared to today.

I'm not sure if OG XBOX had die shrinks, that's why I'm asking.
 

analog_future

Resident Crybaby
E3 2026

Sony is presenting

Kutaragi slowly walks to the stage

"People like to ask us about PSSR. I've finally got something to tell them: We're going native, PSSR is canceled"

PS6 and RDNA5 banners drop all around the stage

Kutaragi walks off

Nvidia stock plummets

[Reaction GIF: The Good Place]
 