
RTX 5090 Review Thread

rm082e

Member
So, I was double-checking the requirements for the RTX 5090 and it seems my Corsair RM850x is not enough, so I should be looking for a 1000W+ PSU, right?

After spending almost $3k on a new GPU, I realized I need to spend another $200+ on a new PSU, goddammit.

Is the Seasonic Vertex GX-1200 (ATX 3.0) good and reliable? I googled for reviews but could only find two, and they left me kinda lukewarm, since I'd thought Seasonic had been one of the best PSU manufacturers for quite some time.

I wouldn't hesitate to buy any PSU from Seasonic, Super Flower, or EVGA. I bought a 1000W Super Flower last year and have had no issues. I'll be buying another one for my son's PC later this year.
 
So, I was double-checking the requirements for the RTX 5090 and it seems my Corsair RM850x is not enough, so I should be looking for a 1000W+ PSU, right?

*snip*

I had to upgrade to a 1000W PSU to get a 4080 Super to work properly (I have a few things plugged into USB for power, plus some RGB). A 5090 needs way more power than that, so I would have thought you'd want 1200W or 1500W.
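For a rough back-of-envelope on sizing (my own sketch: the GPU figure is Nvidia's 575W spec, but the CPU/peripheral numbers and the headroom factor are assumptions, check your own parts):

```cpp
// Rough PSU sizing sketch (assumed figures, not a definitive guide).
#include <cstdio>

int main() {
    const double gpu_w     = 575.0; // RTX 5090 board power (Nvidia spec)
    const double cpu_w     = 150.0; // assumed high-end CPU under load
    const double rest_w    = 75.0;  // assumed fans, drives, USB, RGB, etc.
    const double headroom  = 1.3;   // assumed margin for transient spikes

    double sustained   = gpu_w + cpu_w + rest_w;
    double recommended = sustained * headroom;
    std::printf("Sustained load: %.0f W, recommended PSU: %.0f W+\n",
                sustained, recommended);
    // Prints ~800 W sustained, ~1040 W recommended, which is roughly
    // why 1000-1200 W units keep coming up in this thread.
    return 0;
}
```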

Also, as much as we may bemoan 5090 buyers: if you bought a 4090 for £1,600 in 2023, you were a fucking genius. Two years of top-of-the-range graphics and a card you can sell for £2.5k+ on eBay now. The 5090 is already worth £4k and for all we know will be £5k in 2027 lol
 

Ulysses 31

Member
Wonder how much the 5090 Astral Dhahab Edition will go for :messenger_winking_tongue:



Releases only in the Middle East!
 

Buggy Loop

Member
Nvidia's neural texture compression SDK is on GitHub


The paper on it

Compression ratio comparison:


Bundle Compression    Disk Size    PCI-E Traffic    VRAM Size
Raw Image             32.00 MB     32.00 MB         32.00 MB
BCn Compressed        10.00 MB     10.00 MB         10.00 MB
NTC-on-Load*           1.52 MB      1.52 MB         10.00 MB
NTC-on-Sample          1.52 MB      1.52 MB          1.52 MB

*: Assumes transcoding to equivalent BCn formats at decompression time.

The requirements are very good for NTC-on-load: it decompresses the NTC file once at load time and transcodes it to BCn, so disk size and PCI-E traffic shrink while VRAM usage stays at the normal BCn footprint.
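Conceptually, the on-load path looks something like this. This is my own sketch with made-up names, not the actual SDK API; stubs stand in for the real GPU work:

```cpp
// Conceptual sketch of the NTC-on-load path. All names are hypothetical
// stand-ins, NOT the real libntc API.
#include <cstdint>
#include <string>
#include <vector>

struct NtcBlob    { std::vector<uint8_t> bytes; }; // ~1.52 MB on disk
struct GpuTexture { uint32_t id = 0; };            // handle to a BCn texture

// Stub: read the compressed NTC file (disk and PCI-E both see ~1.52 MB).
NtcBlob load_file(const std::string&) { return {}; }

// Stub: one GPU pass that runs the decompression network and transcodes
// the result to BC7, leaving a normal ~10 MB BCn texture in VRAM.
GpuTexture ntc_decompress_to_bc7(const NtcBlob&) { return {}; }

int main() {
    NtcBlob blob   = load_file("material_albedo.ntc");
    GpuTexture tex = ntc_decompress_to_bc7(blob);
    // From here on the engine samples tex like any other BCn texture:
    // normal filtering, normal shaders, no neural work per sample.
    (void)tex;
    return 0;
}
```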

GPU for NTC decompression on load and transcoding to BCn:
  • Minimum: Anything compatible with Shader Model 6
  • Recommended: NVIDIA Turing (RTX 2000 series) and newer.
  • The oldest GPUs that the NTC SDK functionality has been validated on are the NVIDIA GTX 1000 series, AMD Radeon RX 6000 series, and Intel Arc A series.

GPU for NTC inference on sample:
  • Minimum: Anything compatible with Shader Model 6 (will be functional but very slow)
  • Recommended: NVIDIA Ada (RTX 4000 series) and newer.

So all NTC modes come in at roughly 15% of BCn's disk size and PCI-E traffic, and on-sample (which requires a lot of neural computation, hence the Ada-and-newer recommendation) keeps it at 15% of the size in VRAM as well.

So a lot of bandwidth and VRAM suddenly liberated.

This GitHub SDK is a pre-release for testing. It uses a custom Nvidia DirectX shader compiler, so it can't be used to ship games, but once DX12 supports cooperative vectors later in the year it'll be usable.

Honestly, it's about time we see a drastic paradigm shift in on-disk and VRAM usage.

The explanation of NTC-on-Sample is this:

NTC is designed to support decompression of individual texels in the texture set, which means it can be efficiently used to decompress only the texels needed to render a specific view. In this case, the decompression logic is executed directly in the pixel or ray tracing shader where material textures would normally be sampled. This mode is called Inference on Sample.

Compared to regular texture sampling, decompressing texels from NTC is a relatively expensive operation in terms of computation, and it only returns one unfiltered texel with all material channels at a time. This has two important consequences:

  1. Inference on Sample should only be used on high-performance GPUs that support Cooperative Vector extensions. We provide a fallback implementation that uses DP4a for decompression instead of CoopVec, but that is significantly slower and should only be used for functional validation.
  2. Simulating regular trilinear or anisotropic texture filtering with NTC would be prohibitively expensive (although functionally possible), so Inference on Sample should be used in combination with Stochastic Texture Filtering (STF) instead, and filtered by a denoiser or DLSS after shading.
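As a mental model for point 2, here's a rough sketch of the shader-side flow. Names are hypothetical and it's C++ standing in for what is really an HLSL shader using cooperative vectors:

```cpp
// Conceptual sketch of inference-on-sample with stochastic filtering.
// Hypothetical names; the real path is an HLSL shader using CoopVec.
struct Texel    { float rgba[4]; };    // one texel, all channels at once
struct Material { int latents_stub; }; // stands in for NTC latents + weights

// Stub for the expensive part: ONE network evaluation per texel fetched,
// returning a single unfiltered texel.
Texel ntc_infer_texel(const Material&, int, int, int) {
    return Texel{{0, 0, 0, 1}};
}

// Stochastic Texture Filtering: instead of 8 fetches for trilinear
// filtering (8 network evaluations!), jitter the coordinate by a random
// sub-texel offset and fetch a single texel. Averaged over samples and
// cleaned by a denoiser or DLSS after shading, this approximates the
// filtered result.
Texel shade_sample(const Material& m, float u, float v, int mip,
                   float jitter_x, float jitter_y, int w, int h) {
    int x = static_cast<int>(u * w + jitter_x) % w;
    int y = static_cast<int>(v * h + jitter_y) % h;
    return ntc_infer_texel(m, x, y, mip);
}
```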
Also from the paper: BCn is not equivalent in quality to neural textures at the same size

[quality comparison images from the paper]


Crazy how close it matches a reference 171MB uncompressed texture for only 3.8 MB.

Yes please, this technology ASAP.
 

Yerd

Member
Nvidia's neural texture compression SDK is on GitHub

*snip*

Seems cool but:

Compared to regular texture sampling, decompressing texels from NTC is a relatively expensive operation in terms of computation,
 

Buggy Loop

Member
Seems cool but: Compared to regular texture sampling, decompressing texels from NTC is a relatively expensive operation in terms of computation,

Well yes, hence the Ada-and-above recommendation (though it seems they let any RTX card try it). It takes a lot of neural TOPS to run inference texel by texel locally in real time with low latency.
 

Buggy Loop

Member


At over a thousand fps, even a ~0.2ms difference has a huge impact, but otherwise it's total insanity that it's compressing that well. From a 272MB reference to 11.4MB
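For perspective on why 0.2ms matters at those framerates (just arithmetic):

```cpp
// Why ~0.2 ms matters at 1000+ fps (simple frame-time arithmetic).
#include <cstdio>

int main() {
    const double base_ms = 1.0;  // 1000 fps = 1 ms per frame
    const double cost_ms = 0.2;  // decompression overhead
    double fps_before = 1000.0 / base_ms;             // 1000 fps
    double fps_after  = 1000.0 / (base_ms + cost_ms); // ~833 fps
    std::printf("%.0f -> %.0f fps (a %.0f%% drop from 0.2 ms)\n",
                fps_before, fps_after,
                100.0 * (1.0 - fps_after / fps_before));
    // The same 0.2 ms on a 60 fps frame (16.7 ms) costs only ~1%.
    return 0;
}
```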

 

Buggy Loop

Member


The 5090 is the GOD of VR

There's as much of a jump from 4090 → 5090 as there was from 3090 → 4090, if not more, in some of the more modern VR games like Metro Awakening.

High-resolution VR loves bandwidth. For those with high-end, high-resolution VR headsets it's a no-brainer pick. No other card comes close; it wipes the floor with the 4090.

Guy has annoying voice though



The most average result (but still good): Riven at 1.54x at 300% resolution. At 400% it still has a higher framerate than the 4090 at 300% (the 4090 had crap performance at 400%, results not included).




Outer Wilds: 1.64x at 400% resolution



1.73x at 400% for Project Wingman



Now this is where it gets ridiculous

2.5x performance at 400% for Red Matter 2



Metro Awakening: 3.36x at 300%; even at 400% the performance is still 2.74x that of the 4090 at 300%.


There's probably more than just the 1 → 1.79 TB/s bandwidth upgrade behind some of these results; it clearly has more pixel output too.
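For scale, using the public memory specs (1008 GB/s on the 4090, 1792 GB/s on the 5090), bandwidth alone only buys ~1.78x, so the bigger results have to come from something else too:

```cpp
// Bandwidth ratio vs. observed VR speedups (public specs, simple math).
#include <cstdio>

int main() {
    const double bw_4090 = 1.008; // TB/s, GDDR6X
    const double bw_5090 = 1.792; // TB/s, GDDR7
    double bw_ratio = bw_5090 / bw_4090; // ~1.78x

    const double observed[] = {1.54, 1.64, 1.73, 2.50, 3.36};
    std::printf("Bandwidth ratio: %.2fx\n", bw_ratio);
    for (double x : observed)
        std::printf("observed %.2fx -> %s bandwidth alone\n", x,
                    x <= bw_ratio ? "within" : "beyond");
    return 0;
}
```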
 

Yerd

Member



I'm not sure if I still want a 5090. I may try ONE more time whenever the next Best Buy restock happens. If it's a no-go then, I'm waiting for a 5080 Ti or Super or whatever refresh comes. Then I'll wait for the 60 series. My 3090 is still plugging along at 4K, enough to get by.
 

rm082e

Member
Nvidia's neural texture compression SDK is on GitHub


The paper on it

*snip*

I don't understand the underlying technology well enough to answer this question myself: Is this something that:
  • Game developers will have to incorporate into their games?
  • Modders/enthusiasts will be able to use for unofficial per-game patches, like we saw with DLSS?
  • Users can turn on at the driver or Nvidia control panel level for all games?
My impression has been that this is future tech that will have to be incorporated by developers as they build each game. It sounds like it's on the horizon, but not really something that's going to practically reduce VRAM requirements for players today.
 

MikeM

Member
I dunno if it's buggy drivers or a buggy MSI BIOS, but after playing on this 5090 quite a bit I gotta say I wish my 4090 hadn't died
Day 1 beta tester. Congrats?

Which model do you have specifically? Any others having issues with the same one?
 
Yep, this has been out for a while; I'm already on this one :messenger_sunglasses:


What are you experiencing?
Just some random crashes and subpar performance at times. I think it's a heat issue, as the fans seem to really ramp up, even at odd times

Day 1 beta tester. Congrats?

Which model do you have specifically? Any others having issues with the same one?
It's just a random MSI with a 3-fan cooler; looking at MSI's site I think it's the Ventus 3X
 

//DEVIL//

Banned
Just some random crashes and subpar performance at times. I think it's a heat issue, as the fans seem to really ramp up, even at odd times


It's just a random MSI with a 3-fan cooler; looking at MSI's site I think it's the Ventus 3X
The Ventus sucks in general in terms of noise/heat; the lowest I would go with is the MSI Trio.

However, I'm sure it's not to the point it would throttle. It could be noisy, yes, but it shouldn't throttle.

As for the BIOS, MSI BIOSes are actually quite good, same as Gigabyte and Asus. It's the Zotac and PNY BIOSes that are weird sometimes (like fans never stopping once you finish gaming, or ramping up for a few seconds and then idling again for no reason even when not gaming, etc.).

But MSI BIOS? They're actually good. I'm wondering if you're having a driver issue with the crashes.

Also, if you have the Lian Li L-Connect software, close it. In general, with random crashes I close all the apps and test; if there are no crashes, I start enabling one app at a time.

I really had random crashes mainly with Call of Duty. In my case, it was the L-Connect software.
 

TheTurboFD

Member
Just some random crashes and subpar performance at times. I think it's a heat issue, as the fans seem to really ramp up, even at odd times


It's just a random MSI with a 3-fan cooler; looking at MSI's site I think it's the Ventus 3X
Funny you say this, I just received my MSI 5090 Gaming Trio OC yesterday and thought I'd received a bad card. I had crashes and massive temp issues where the card was sitting at 90°C in any game. I was about to put in an RMA, but I found out that it does not like my brand-new Thermaltake Tower 600, which has the card sitting vertically, and the card cannot cool that way no matter what. I turned the case on its side and my temps immediately dropped to 55-60°C under load. I still get the crashes/black-screen issues, but that's due to terrible drivers.
 

Ulysses 31

Member
Funny you say this, I just received my MSI 5090 Gaming Trio OC yesterday and thought I'd received a bad card. *snip*
Have you tried the hotfix drivers and setting the PCI-E slot speed in BIOS?
 

xenosys

Member
It's amusing watching all the Nvidia fanboys try to blame these overheating/melting issues on the end user when this has been a known problem since the 4090 launched.

Sorry, you don't get to deny that the trillion dollar corporation is at fault when they've got a history of having these sorts of issues.
 
So I had some crashes again last night and just shut it down, tired of dicking with this PC, but I turned it on earlier and now I have vertical lines on my OLED screen.

I tried changing cables and even changed over to another monitor and still had them.

I updated the Nvidia drivers and they were still there.

Went into a game, and as soon as the game started the monitor went dark, telling me there was no signal; even after restarting my PC, my monitor still isn't getting a signal.

Did my 5090 just die?

After the 4090 dying and now this shit, it's getting a little old.

[photo of the vertical lines]
 

OverHeat

« generous god »
So I had some crashes again last night and just shut it down, tired of dicking with this PC, but I turned it on earlier and now I have vertical lines on my OLED screen.

*snip*
Sorry for your loss… 😭😭
 

Bojji

Member
So I had some crashes again last night and just shut it down, tired of dicking with this PC, but I turned it on earlier and now I have vertical lines on my OLED screen.

*snip*

It doesn't look good, for sure...

At least the monitor is OK? Any problems using a different source?
 
It doesn't look good, for sure...

At least the monitor is OK? Any problems using a different source?
The monitors seem fine. I have 2 different displays here on my desk; I hooked up the other display and got the exact same issues with the lines, which do not show up with the PS5 or Xbox, only the PC.

I have tried different cables and different ports on the GPU

Interestingly enough, since I get my audio over HDMI and then out to my speakers, everything now sounds like a robot voice.

At least it came back on, so I have a signal now and can start the return on this thing.

 

Bojji

Member
The monitors seem fine. I have 2 different displays here on my desk; I hooked up the other display and got the exact same issues with the lines, which do not show up with the PS5 or Xbox, only the PC.

*snip*

"The best GPU on the planet". Looks like they really rushed this release for whatever reason and many units are destined to die. Your issues are far more than "driver issues", card is fucked in one way or the other.

This reminds me that at some point I thought my R9 290 was dying, I was seeing colorful artifacts on screen - turns out, it was monitor, I "fixed" it by rubbing LCD panel with my finger. Technology was more reliable in ~2013!
 
"The best GPU on the planet". Looks like they really rushed this release for whatever reason and many units are destined to die. Your issues are far more than "driver issues", card is fucked in one way or the other.

This reminds me that at some point I thought my R9 290 was dying, I was seeing colorful artifacts on screen - turns out, it was monitor, I "fixed" it by rubbing LCD panel with my finger. Technology was more reliable in ~2013!
Yeah, this GPU hasn't been quite right since I got it.

I pulled it out and stuck the 4080 Super back in and everything was fine. I reseated the 5090, and the lines were still there and the robot voice was still there, so it's 100% the card.

It's fucked
 

Hohenheim

Member
So I had some crashes again last night and just shut it down, tired of dicking with this PC, but I turned it on earlier and now I have vertical lines on my OLED screen.

*snip*
Damn, that sucks!
Which model is the GPU?

I've had no issues with my MSI Gaming Trio 5090 so far. The only thing I've done is set the power limit to 90% in Afterburner before doing anything with it. (I don't think that would change anything for issues like this, though.)
Did you run DDU before installing the card?
 

Newari

Member
Oh fuck, I got a Corsair cable.
From the Corsair subreddit, the official response is that every Corsair cable that has melted had other additional damage. So if you check your cable and it looks undamaged, you are fine.


Hey guys, glad to answer questions about the cables and 12V-2x6 directly. I want to clear the air to ensure people have confidence and understand the actual spec we're talking about.
First things first: we have yet to see an undamaged Corsair 12V-2x6 cable fail, and we sell millions of power supplies, no exaggeration. I'm not saying it hasn't happened, but we have only seen a handful of failures of our PSUs on these cables, and 100% of those involved substantial damage to the cable. I realize, of course, that since I work for Corsair I am biased and many of you won't believe me, but nonetheless, these are accurate facts to my (and JonnyGuru's, head of our PSU team) knowledge right now.
(Editing this here per Jon's request to let you know that he is just a guy who used to run a bike shop and has a pretty bizarre fixation with PC power supplies, and that this is also the opinion of four highly experienced electrical engineers we have on staff who actually have degrees in this, have chosen it as a career, and have more than a human lifetime of combined experience in the field)
Also - the official spec of the depth allowance for the 12V-2x6 is +/- .44mm. This is intentional. The connectors are DESIGNED to have the pins have a bit of give and take so that when the plastic housings align they can then align the pins. If the pins on both the cable and the GPU were 100% rigid, then aligning all pins perfectly would be extremely difficult, and even if done it may not allow the connector to be fully seated.

Even with this +/- on the spec, once the connector is fully seated and snapped together without gap, the pins will have full contact. That .44mm variance will mostly disappear once the connectors are fully engaged. It's there simply to help with alignment.
Jon talks about it a bit here in a video from a couple months ago he put together.

Aris's video yesterday is an excellent summary of the situation right now. Aris, for those of you who may not know, is one of the most experienced PSU reviewers and testers in the world, and has an excellent technical breakdown of some of the recent concerns.

Today's update:
We also did a test in the lab on this. We used an HX1200i with a 2x 8pin to 12V-2x6 cable as a test with an RTX 5090 FE. Under Furmark2 stress tests we saw the card hit 575W load, we saw 6A to 8A power loads on the pins, with temps maxing out around 64°C on the GPU side and 46°C on the PSU side.
Through various tests we intentionally damaged one wire, then two, then three. Even in the final test, where we used only 3x 12V and 3x ground wires in total, the temperatures did not exceed 70°C on the PSU side or 80°C on the GPU side. Those three wires each ran at 16-17A, well over their 8.5A spec, and none of them melted or showed permanent signs of damage.
For home users: please do NOT try and replicate the "damaged cable" test we did at home, as it is not safe and could have caused substantial damage to the GPU, PSU cable, PSU, or even the environment where the test was performed. Just sharing the data we found.
Our internal testing shows multiple things here, in summary:
  1. There is a lot of concern around the design of the 12V-2x6 connector and cables. This confusion has been present since the 12VHPWR was first announced.
  2. There ARE examples of melted cables and connectors showing up online. As of yet we have not seen a Corsair cable that has melted without also being damaged, and even the damaged connectors/cables are typically not melting under full nightmare-load scenarios from what we have seen.
  3. The current corsair 12V-2x6 connectors and cables are built to the spec and withstanding all torture tests we have performed.
  4. There are multiple YouTubers, TikTok tech guys (TechTok?), and other reviewers and journalists all weighing in, some of whom are highly experienced and performing rigorous tests; others, not so much. Most of them are somewhere in the middle.
The intention everyone has here is good; nobody wants to see somebody damage their $2500 GPU. From our internal testing, we are 100% confident that every 12V-2x6 (and, formerly, 12VHPWR) labeled cable and PSU will fully support an RTX 5090, even when overclocked and running torture tests, without concern when used as designed.
At this point for those with any card that draws 500W+, we advise not using cable extensions or adapters, but a well-designed cable direct from the PSU to the GPU.
And, as always, we stand behind our warranties and can be reached here or through our customer service portals if anybody has any questions or comments.
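For what it's worth, their per-pin numbers check out. Quick arithmetic on the figures from their own test:

```cpp
// Sanity check on Corsair's per-pin figures (simple arithmetic on the
// numbers quoted above, nothing measured by me).
#include <cstdio>

int main() {
    const double load_w = 575.0; // card load in their Furmark2 test
    const double volts  = 12.0;  // 12V rail
    const int    pins   = 6;     // 12V-2x6: six current-carrying 12V pins

    double total_a   = load_w / volts; // ~47.9 A total
    double per_pin_a = total_a / pins; // ~8.0 A per pin
    std::printf("total %.1f A, %.1f A per pin (spec: 8.5 A/pin)\n",
                total_a, per_pin_a);
    // Matches the 6-8 A they report; with only 3 intact wires the same
    // current forces ~16 A per pin, which is the 16-17 A they measured.
    return 0;
}
```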
 
Damn, that sucks!
Which model is the GPU?

*snip*
I believe it was the Ventus 3 that was put in a CLX prebuilt.

I remember seeing a post about MSI not having cards at launch for retail, but interestingly enough, when I returned that CLX to Best Buy they had a returned open-box Cyberpower 9800X3D 5080 prebuilt for $2,200, so I brought that home since I needed a second PC, and that one also has an MSI GPU.

I think I already found out why this PC may have been returned: its fans are loud and run at higher speeds a lot for no good reason.

Something is off, as the 9800X3D 4080 Super prebuilt I have on my wife's desk is whisper quiet and doesn't get nearly as loud, even under load.
 