
RX 7900XTX and 7900XT review thread

GHG

Member
Doesn't it matter that the one I found is made by PowerColor, one of the best AMD third parties?!

I'll probably upset a few people here by saying this but they clearly still haven't got their drivers sorted either. They are better than they once were but when things don't work they really don't work.


They continue to shit the bed with VR performance and support (if you care or will care at any point):


One major issue although affecting relatively few gamers is poor VR RX 7900 XTX performance compared with the RTX 4080. It’s going to need some attention from AMD’s driver team before we can recommend the RX 7900 XTX for the best VR gaming.

Overall AMD are just not where they need to be in order to offer real competition to Nvidia, and that's why Nvidia can get away with pricing their products the way they do now.

If you want to "support the underdog" (which is an Internet obsession for some reason which never translates into the real world) and are willing to put up with the potential pitfalls and worse RT performance/upscaling support for the sake of 30 euros then go for it. But as far as I'm concerned when I'm spending that amount of money I just want something that works and consistently delivers the advertised and expected performance without any issues.

Just my 2 cents.
 

GymWolf

Gold Member
I'll probably upset a few people here by saying this but they clearly still haven't got their drivers sorted either. They are better than they once were but when things don't work they really don't work.


They continue to shit the bed with VR performance and support (if you care or will care at any point):




Overall AMD are just not where they need to be in order to offer real competition to Nvidia, and that's why Nvidia can get away with pricing their products the way they do now.

If you want to "support the underdog" (which is an Internet obsession for some reason which never translates into the real world) and are willing to put up with the potential pitfalls and worse RT performance/upscaling support for the sake of 30 euros then go for it. But as far as I'm concerned when I'm spending that amount of money I just want something that works and consistently delivers the advertised and expected performance without any issues.

Just my 2 cents.
No, I don't give a fuck about the underdog or 30 euros, I'm just trying to weigh both GPUs' pros and cons as unbiased as I can be.
 

winjer

Gold Member
I'll probably upset a few people here by saying this but they clearly still haven't got their drivers sorted either. They are better than they once were but when things don't work they really don't work.


Seriously. You found a thread with a handful of people having problems with AMD drivers on one game, and you think that is proof AMD's drivers are the worst.
Go to the Nvidia forums, guru3d or other tech sites where Nvidia employees hang out, and you'll find plenty of people reporting bugs, crashes and issues with Nvidia drivers or GeForce Experience.
For example, some users are reporting much more stutter and lower performance with 528.02 WHQL. And some users are reporting HDR switching issues.
And sometimes you'll find that it's not even Nvidia's or AMD's fault, but some issue with the user's setup.
 

Buggy Loop

Member


Holy shit, what happened to his card at 9:15? Caps exploded. He says he doesn't think it's related to thermals, so another failure mode? When a card goes kaput it typically goes silently, not like that.
 

winjer

Gold Member
No, I don't give a fuck about the underdog or 30 euros, I'm just trying to weigh both GPUs' pros and cons as unbiased as I can be.

If the difference in cost is just 30 euros, just get the 4080.
Same raster performance, much better RT performance. And better quality with DLSS2 over FSR2.
For the 7900XTX to be worth it, it would have to be 200-300 euros cheaper.
 

TuFaN

Member
I don't want to sound negative, but I think I won't be purchasing any GPU anytime soon. It is just unfair how Nvidia and AMD are dealing with their customers. Nvidia releases cards with extremely high prices, and all AMD does is slide in and sit at the same level with them price-wise.
I am still sitting on a 1080ti, never thought I'd hold on to this card for this long, got it at release.

GPU's I considered upgrading to:
2080ti - ordered one the day it got released, tested it and didn't like the small performance increase compared with my 1080ti, so I sent it back.
3090 - tried getting one at MSRP for far too long, was not able to because of scalpers and miners.
4090 - is still above MSRP in Germany, you gotta pay 2200€ if you want one. Too expensive for my taste.
7900XTX - I sincerely considered getting an AMD GPU for the first time ever (owning a 5950x CPU and loving everything it offers), I basically wanted to go full AMD, but the price... not competitive at all unfortunately. And I also don't like how AMD is not competing with the 4090 but rather with the 4080.

Oh well :) I hope my 1080ti won't break anytime soon.
 

GHG

Member
Seriously. You found a thread with a handful of people having problems with AMD drivers on one game, and you think that is proof AMD's drivers are the worst.
Go to the Nvidia forums, guru3d or other tech sites where Nvidia employees hang out, and you'll find plenty of people reporting bugs, crashes and issues with Nvidia drivers or GeForce Experience.
For example, some users are reporting much more stutter and lower performance with 528.02 WHQL. And some users are reporting HDR switching issues.
And sometimes you'll find that it's not even Nvidia's or AMD's fault, but some issue with the user's setup.

It's not just the one game though. In the VR review I linked there are some games that straight up won't boot on these cards in VR.
 

winjer

Gold Member
It's not just the one game though. In the VR review I linked there are some games that straight up won't boot on these cards in VR.

I didn't contest your results with VR. Babeltech is a reputable source.
I was talking about normal games. And in these, both AMD and Nvidia have their fair share of bugs and issues. But at least not as many as Intel.
As someone who recently switched from Nvidia to AMD, I can say from personal experience that AMD's drivers are as good as Nvidia's.

But you're also forgetting that devs have responsibilities. Releasing a game that is completely broken on one set of GPUs is not a sign of competency.
Sometimes it's the devs that have to fix their games. Here are a few examples.
40K: Darktide had an issue for several weeks, on both Nvidia and AMD cards, where it would show random artifacts. Fatshark asked for help from Nvidia and AMD to solve this. And they did. But it wasn't fixed by new drivers from Nvidia or AMD, it was fixed with an update from Fatshark to their game.
Recently there have been users with Nvidia cards having issues with flickering while playing Destiny 2. Nvidia users complained to Nvidia, but the issue is not with the drivers, it's with the game. So Nvidia notified Bungie about the bug in their game.
When the Witcher 3 remaster was released, RT with AMD cards had terrible frame pacing. It wasn't solved with drivers, but with a game update.

And remember that with low-level APIs, Nvidia, Intel and AMD have much less control over solving bugs and issues through their drivers.
 

GymWolf

Gold Member
7900xtx pros and cons

+more vram (noticeably more)
+overall better raster performance (but not by a noticeable margin)
+"fine wine" driver factor (pretty cringe, not gonna lie)
+the one I found is the PowerColor version (vs a mid-series Gigabyte for the 4080)
+acceptable rt performance, probably enough for a non-rt believer like me
+fsr3 could be as good as dlss2 IQ-wise (I don't care about the frame generation in dlss3)
+it doesn't need that 3x8-pin adapter like the 4080 (one less thing to care about)
-more ram but a slower type
-worse rt perf than the 4080
-dlss3 could still be better than fsr3 (most probably)
-runs hotter and eats more power
-small chance of having a dud gpu because amd fucked up something else that hasn't been discovered yet
-reverse "fine wine" factor, nvidia is probably gonna have faster and better driver releases on a game-by-game basis

?? we still don't know which gpu brand is gonna fare better with ue5 games, are devs gonna optimize for consoles with amd hardware inside? is nvidia's rt advantage gonna help with lumen?

This and more in the next episode of "gymwolf can't decide what gpu to buy to save his life"

The 4080 has the advantage for now, I guess that was a good summary of the 2 cards.
 

GHG

Member
I didn't contest your results with VR. Babeltech is a reputable source.
I was talking about normal games. And in these, both AMD and Nvidia have their fair share of bugs and issues. But at least not as many as Intel.
As someone who recently switched from Nvidia to AMD, I can say from personal experience that AMD's drivers are as good as Nvidia's.

But you're also forgetting that devs have responsibilities. Releasing a game that is completely broken on one set of GPUs is not a sign of competency.
Sometimes it's the devs that have to fix their games. Here are a few examples.
40K: Darktide had an issue for several weeks, on both Nvidia and AMD cards, where it would show random artifacts. Fatshark asked for help from Nvidia and AMD to solve this. And they did. But it wasn't fixed by new drivers from Nvidia or AMD, it was fixed with an update from Fatshark to their game.
Recently there have been users with Nvidia cards having issues with flickering while playing Destiny 2. Nvidia users complained to Nvidia, but the issue is not with the drivers, it's with the game. So Nvidia notified Bungie about the bug in their game.
When the Witcher 3 remaster was released, RT with AMD cards had terrible frame pacing. It wasn't solved with drivers, but with a game update.

And remember that with low-level APIs, Nvidia, Intel and AMD have much less control over solving bugs and issues through their drivers.

Fair enough. I guess in that case it's up to the individual based on what games they play or want to play.
 

MikeM

Member
Quick update on my 7900xt- still no issues besides some toasty vram. No throttling so don’t care really.

Downloaded Doom Eternal and it's running maxed out with RT at 4k, between 100-120fps.

Witcher 3 Ultra+ settings at 4k (ultra FSR) at 100+ fps.

COD MW2 runs at 140+ fps at ultra settings (ultra quality fsr).

Overall- pretty happy with this thing. Drives my LG C1 perfectly.
 

twilo99

Member
I'll probably upset a few people here by saying this but they clearly still haven't got their drivers sorted either. They are better than they once were but when things don't work they really don't work.

There might be driver issues with RDNA3 but my RDNA2 card has been running smoothly for a year now, and I even use the beta drivers whenever they are available.
 

FingerBang

Member
7900xtx pros and cons

+more vram (noticeably more)
+overall better raster performance (but not by a noticeable margin)
+"fine wine" driver factor (pretty cringe, not gonna lie)
+the one I found is the PowerColor version (vs a mid-series Gigabyte for the 4080)
+acceptable rt performance, probably enough for a non-rt believer like me
+fsr3 could be as good as dlss2 IQ-wise (I don't care about the frame generation in dlss3)
+it doesn't need that 3x8-pin adapter like the 4080 (one less thing to care about)
-more ram but a slower type
-worse rt perf than the 4080
-dlss3 could still be better than fsr3 (most probably)
-runs hotter and eats more power
-small chance of having a dud gpu because amd fucked up something else that hasn't been discovered yet
-reverse "fine wine" factor, nvidia is probably gonna have faster and better driver releases on a game-by-game basis

?? we still don't know which gpu brand is gonna fare better with ue5 games, are devs gonna optimize for consoles with amd hardware inside? is nvidia's rt advantage gonna help with lumen?

This and more in the next episode of "gymwolf can't decide what gpu to buy to save his life"

The 4080 has the advantage for now, I guess that was a good summary of the 2 cards.
If you need to buy now, buy based on what you get NOW.

There is a chance AMD hasn't got their shit together yet and future drivers might bump performance up to a level maybe half a tier above the 4080, but you don't get that now.
There is a chance FSR is as good as DLSS2.0 but... it's a chance.
Also, assuming an amazing bump in performance: is a 10-15% advantage important for the resolution and framerate you play at? Ignore the fanboys on either side that make a 5-10% advantage sound like a huge difference.
100 fps vs 110fps is pretty much the same experience.
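To put that in frame-time terms, here's a trivial back-of-the-envelope in C++ (just the two framerates from this post, nothing card-specific):

```cpp
// Back-of-the-envelope: what a ~10% fps advantage buys you per frame.
#include <cstdio>

int main() {
    const double fpsA = 100.0, fpsB = 110.0;   // the two framerates from this post
    const double msA = 1000.0 / fpsA;          // 10.00 ms per frame
    const double msB = 1000.0 / fpsB;          // ~9.09 ms per frame
    std::printf("%.0f fps = %.2f ms/frame, %.0f fps = %.2f ms/frame, delta = %.2f ms\n",
                fpsA, msA, fpsB, msB, msA - msB);
    return 0;
}
```

That works out to less than a millisecond per frame, which is why the difference is hard to feel without an fps counter on screen.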

If you want to take the chance on AMD, then wait a month or two. Prices might go down, especially for the 4080. Performance might go up, especially for team red.
If you need to buy now, go Nvidia and you know what you're getting, even at that awful price.
 

rnlval

Member
I don't think that's the case. Given their public statements as of late, I think Nvidia is trying to creep up the prices significantly on their whole range and have their revenue take a much larger part of the PC gamers' disposable income.
Jensen & Co. saw how PC gamers eventually bent the knee and bought GPUs at ridiculous prices during the COVID lockdowns, and he's trying to hold on to that inflated value perception to see if it sticks indefinitely. We're also living in an era where FOMO gets a hold of many young people with too much of daddy's money, and honestly that's probably a big factor in all those 4090s being sold.
AMD is just trying to ride the same winds, but given their currently abysmal dGPU marketshare I wonder if this move isn't going to result in their GPU division ending the PC discrete card business as a whole. They won't survive without sales.


If they showed a chart with only RT games to show "AMD bEiNg dEstRoYeD" then they'd also need to show a rasterization-only chart showing the 4080 bEiNg dEstRoYeD, and then the exact same people would complain about this second graph existing at all.

Truth is the 7900XT/X are competent enough on almost all RT enabled games and score just some 17-20% below the 4080.
Save for these last-minute patches and demos, like the Cyberpunk 2077 Uber mode and Portal's let's-just-bounce-all-the-rays-a-gazillion-times RTX, all of which released just in time for the RTX 4090 reviews (and were undoubtedly part of the review "guidelines"... the ones that will put reviewers on a blacklist if they're not followed).
7900XT/X is about the Ampere RT performance level with a raster performance level between RTX 4080 and RTX 4090.

GA102 threw higher RT core and TMU counts at the problem.

As an RTX 3080 Ti owner, my upgrade path is an RTX 4080 Ti, but I'm aware that DisplayPort 2.0 and PCIe 5.0 are missing on the current Ada generation, hence I may skip Ada for Blackwell (?).
 

PaintTinJr

Member
1. Your open-source argument that degrades Linux's memory protection and multi-user design is meaningless for the majority of the desktop Linux use cases.

Linux followed Unix's supervisor and userland apps model and MMU usage is mandatory for the mainstream Linux kernel.

Linux has its own design issues

For SteamOS, Valve's graphics API preference is DirectX API on top of Vulkan via Proton-DXVK. SteamOS effectively killed native Linux games for cloned Windows Direct3D-based APIs. Gabe Newell was the project manager for Windows 1.x to 3.x and Windows 9x's DirectX's DirectDraw. Proton-DXVK effectively stabilized Linux's userland APIs.

On PS4 and PS5, game programmers operate within the userland environment on a thin graphics API layer and a Direct3D-like layer. 3rd-party PS4/PS5 game programmers don't have x86's Ring 0 (kernel-level) access.

AMD GCN and RDNA support hardware virtualization.
You are completely moving the goalposts.
The use of Linux in this scenario isn't to run games without the same userland protection to gain performance - once debugged they'll run normally through the userland protection - it is just a means of locating performance throttling in the HAL/kernel while debugging a GPU API. But the point is moot as Vulkan, OpenGL, etc. on Linux don't have the performance discrepancy that Nvidia's DX gets over OpenGL/Vulkan on Windows in, say, running the Dolphin emulator. So Linux being transparent versus Windows and DX being a closed, opaque shop of proprietary closed-source software - going back to my first point that DX only serves Microsoft/Nvidia and potentially throttles all GPU APIs living on top of that DX HAL - is still a valid criticism.
2. Facts: Radeon HD 3870 has the full MSAA hardware.

NVIDIA's DLSS is powered by fixed-function Tensor cores that are separate from shader-based CUDA cores.

For the Ryzen 7040 APU, AMD added "Ryzen AI" based on Xilinx's XDNA architecture, which includes Xilinx's FPGA fabric IP. Expect AMD's future SKUs to include the XDNA architecture from Xilinx (part of AMD), e.g. AMD also teased that it will use the AI engine in future Epyc CPUs, and future Zen 5 SKUs would include XDNA for desktop Ryzens, since all Ryzen 7000 series parts are APUs (Ref 1). AMD ported Zen 4, RDNA 3, and XDNA IP blocks onto TSMC's 4 nm process node for the mobile Ryzen 7040 APU.

Ref 1. https://www.msn.com/en-us/news/tech...er-ai-ambitions-in-cpu-gpu-roadmap/ar-AAYhYv6
It also teased the next-generation Zen 5 architecture that will arrive in 2024 with integrated AI, and machine learning optimizations along with enhanced performance and efficiency.

For AMD's AI-related silicon, AMD has multiple teams from CDNA's (WMMA, Wave Matrix Multiply-Accumulate hardware), Xilinx's XDNA architecture, and Zen 4's CPU AVX-512's VNNI extensions. AMD promised that it will unify previously disparate software stacks for CPUs, GPUs, and adaptive chips from Xilinx into one, with the goal of giving developers a single interface to program across different kinds of chips. The effort will be called the Unified AI Stack, and the first version will bring together AMD's ROCm software for GPU programming (GCN/RDNA/CDNA-WMMA), its CPU software (AVX/AVX2/AVX-512), and Xilinx's Vitis AI software.

Like dedicated MSAA hardware, dedicated AI and RT core hardware exists to reduce the workload on the GPU's shader cores.
We are talking specifically about fixed-function silicon being wasted in graphics cards as a "design fault", which you raised as a discussion point. Unlike replacing a socketed CPU, you can't throw a new £200 GPU into a graphics card slot because that old GCN one lacks features of an RDNA3 one, or because some old fixed-path thinking didn't pan out.

I wasn't making a general case against ASICs/accelerators, but about emerging and future trends in GPUs for generalised compute and laying those generalised-compute foundations as a priority - like AMD have been doing since GCN - and why AMD focusing on shader/async compute with RPM throughput is a superior design, before getting tied to dedicated cores with async-lite limitations.
3. For the given generation, the Async Compute-heavy Doom Eternal shows Ampere and Ada RTX performance leadership.



Hint: the Async Compute shader path extensively uses TMU read/write IO instead of ROPS read/write IO. The RTX 4090's (512 TMUs) superiority gap reflects its ~33% TMU superiority over the RX 7900 XTX (384 TMUs). The fully enabled AD102 (e.g. a future GeForce RTX 4090 Ti) has 576 TMUs.

For GA102, 1 RT core: 4 TMU ratio.
For AD102, 1 RT core: 4 TMU ratio.

For NAVI 21, 1 RT core (missing traversal): 4 TMU ratio.
For NAVI 31, 1 RT core: 4 TMU ratio.


Like Doom Eternal, the RTX 4090's (128 RT cores, 512 TMUs) superiority gap reflects its ~33% RT superiority over the RX 7900 XTX (96 RT cores, 384 TMUs).
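For what it's worth, here's a trivial C++ check of the ratio arithmetic behind those percentages, using only the unit counts quoted above:

```cpp
// Quick check of the unit-count ratios quoted above (RTX 4090 vs RX 7900 XTX).
#include <cstdio>

int main() {
    const int tmu4090 = 512, tmuXTX = 384;   // TMU counts from this post
    const int rt4090  = 128, rtXTX  = 96;    // RT core counts from this post
    std::printf("TMUs:     %d vs %d -> %.0f%% more\n",
                tmu4090, tmuXTX, 100.0 * tmu4090 / tmuXTX - 100.0);
    std::printf("RT cores: %d vs %d -> %.0f%% more\n",
                rt4090, rtXTX, 100.0 * rt4090 / rtXTX - 100.0);
    return 0;
}
```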

From https://www.kitguru.net/components/graphic-cards/dominic-moass/amd-rx-7900-xtx-review/all/1/
Doom Eternal is a last-gen game with nothing close to the micro-polygon, texturing, or shaders-per-object levels we will see once next-gen gets beyond the last-gen Jedi: Fallen Order/HZFW/Miles Morales/Death Stranding/Ghost of Tsushima looks - which are still excellent, but inherently last-gen workloads for the Pro/One X - so using games as a benchmark shows nothing about the technical discussion of the physical hardware we were having, and the hardware's capability with software optimised to it.

Furthermore, the revelations from MSFT's internal documents about the ABK buyout indicate that Zenimax games on PlayStation are a result of PC first with DX, Xbox next, and a working PS port last, explaining a lot about how poorly their catalogue of software runs on PlayStation - with Deathloop on both consoles looking like an up-rezzed version of XIII from the GameCube running on a PC with a GTX 760.
4. Prove it. PS5's RT results are within the RDNA 2 RT power rankings e.g. Xbox Series X's Doom Eternal RT performance results are superior when compared to PS5's.

PS5's RT core is the same as any other RDNA 2 RT core, i.e. missing the traversal feature. Prove me wrong!

PS5's RT cores did NOT deliver RDNA 3's RT core results.

One of the RDNA 2 RT optimizations (for XSX, PS5, and PC RDNA 2) is to keep the traversal dataset chunks small.
Road to PS5 transcript:

Mark Cerny: "There's a specific set of formats you can use their variations on the same BVH concept. Then in your shader program you use a new instruction that asks the intersection engine to check array against the BVH.

While the Intersection Engine is processing the requested ray triangle or ray box intersections the shaders are free to do other work."


Here on GAF I thought the general consensus was that system-architect-level people are to be believed without more than speculative contradictions, no? Yet you're implying he is lying merely on the basis of your "RDNA2 cards released have these features, so it can't be different".

When he also stated:

Mark Cerny: "First we have a custom AMD GPU based on there 'RDNA2' technology what does that mean AMD is continuously improving and revising their tech for RDNA2 their goals were roughly speaking to reduce power of consumption by rhe architecting the GPU to put data close to where it's needed to optimize the GPU for performance and to adding new more advanced feature set.

But that feature set is malleable which is to say that we have our own needs for PlayStation and that can factor into what the AMD roadmap becomes.

So collaboration is born.

If we bring concepts to AMD that are felt to be widely useful then they can be adopted into RDNA - and used broadly including in PC GPUs.

If the ideas are sufficiently specific to what we're trying to accomplish like the GPU cache scrubbers I was talking about then they end up being just for us.

If you see a similar discrete GPU available as a PC card at roughly the same time as we release our console that means our collaboration with AMD succeeded.
"

Until you have PS5 specific information that contradicts what he has said, around here most people will take his words as proof.
5. Pure GPU pixel rate is useless without the memory bandwidth factor. Read the GDC 2014 lecture on the ROPS alternative via the TMU path.
More goalpost moving. The official technical specs for pixel/texture rate are as I said, and I made no reference beyond that...
However, the Slug text benchmark I linked shows it outperforming what the direct raw memory bandwidth comparison in your assertion claims it can't.

Fundamentally I would agree with your wider point of specs meaning nothing if the hardware is bottlenecked elsewhere, if we weren't specifically talking about the best engineering solution AMD have shown from their RDNA2 timeline - namely the PS5 - which has a cache scrubber feature to achieve those pixel/texture rates in real terms.

Mark Cerny:" So we've implemented a gentler way of doing things where the coherency engines inform the GPU of the overwritten address ranges and custom scrubbers in several dozen GPU caches do pinpoint evictions of just those address ranges."

The GPU cache scrubbers are exactly why the complex slug text benchmark works more efficiently on the RDNA2 of PS5, because as the depth of the problem increases - AAA games even in simpler Forward+ rendering are complicated compared to basic slug text - the scrubbers yield more bandwidth saved - both from latency of not waiting for a transfer request/delivery, and from the reduced volume of data being processed by eliminating lots of data transfer redundancy.

As for this side discussion about the merits of RDNA's design, this is my last reply on the topic, as I feel I've made the case for my original comment you attacked and have nothing to add in response to your next post, which you'll likely fill with last-gen game benchmarks as proof of something. And I highly doubt you'll accept this argument any better than my previous ones, or impress me with a simple response like "Fair enough, interesting chat."
 

rnlval

Member

PaintTinJr

The use of Linux in this scenario isn't to run games without the same userland protection to gain performance - once debugged they'll run normally through the userland protection - it is just a means of locating performance throttling in the HAL/kernel while debugging a GPU API. But the point is moot as Vulkan, OpenGL, etc. on Linux don't have the performance discrepancy that Nvidia's DX gets over OpenGL/Vulkan on Windows in, say, running the Dolphin emulator. So Linux being transparent versus Windows and DX being a closed, opaque shop of proprietary closed-source software - going back to my first point that DX only serves Microsoft/Nvidia and potentially throttles all GPU APIs living on top of that DX HAL - is still a valid criticism.


That's irrelevant to the hardware issue.

Examples

1. The RTX 4090 has ~33% more TMUs and RT cores than the RX 7900 XTX, hence the RX 7900 XTX is one SKU lower when compared to the RTX 4090.

Both RX 7900 XTX and RTX 4090 have 192 ROPS and 384-bit bus.

Within the RDNA3 DCU, AMD doubled the stream processor count without scaling the TMU count!


2. Before Vega, AMD didn't link the ROPS to the L2 cache, hence AMD was behind in memory bandwidth conservation when compared to the GTX 980 Ti.

VEGA competed against NVIDIA's Pascal and later Volta generation.


Xbox One X's GPU's ROPS has a 2MB render cache that didn't exist on the baseline Polaris IP.

Xbox One X's GPU (modified 44 CU Hawaii with Polaris and semi-custom enhancements) has 2MB L2 cache for Geo/TMU and 2 MB render cache for ROPS while VEGA 56/64 has 4MB L2 cache for Geo/TMU/ROPS.

3. Under "Mr TFLOPS" Raja Koduri's leadership, AMD ROPS was stuck at 64 ROPS from R9-290X/R9-390X to R9 Fury to Vega 64 to Vega II to RX 5700 XT. This is why AMD was pushing hard for Async Compute's compute shader/TMU IO path as a workaround for the ROPS IO bottleneck.

4. AMD didn't properly scale the geometry engine with CU count while NVIDIA scaled polymorph engines with SM count.

For the mesh shader era, AMD doesn't have compute shader TFLOPS high ground for NAVI21 vs GA102 and NAVI31 vs AD102.

-------
PC has Direct3D profiling tools such as
https://developer.nvidia.com/conten...t3d-11-nvidia-nsight-visual-studio-edition-40
https://learn.microsoft.com/en-us/windows/win32/direct2d/profiling-directx-applications
https://gpuopen.com/rgp/

When designing Xbox One X, Microsoft identified graphics pipeline bottlenecks for AMD.


--------------

PaintTinJr

Mark Cerny: "There's a specific set of formats you can use their variations on the same BVH concept. Then in your shader program you use a new instruction that asks the intersection engine to check array against the BVH.

While the Intersection Engine is processing the requested ray triangle or ray box intersections the shaders are free to do other work."


BVH RT has three functions, i.e. BVH traversal, box intersection check, and triangle intersection check. Your statement doesn't show BVH traversal hardware.
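To make that three-way split concrete, here's a minimal CPU-side C++ sketch of the same decomposition (purely illustrative; the node layout and function names are made up, not AMD's actual BVH format or ISA). The traversal loop and stack handling are the part that runs as shader code on RDNA 2/3, while the ray-box and ray-triangle tests stand in for what the fixed-function Intersection Engine accelerates:

```cpp
// Illustrative CPU-side sketch of the BVH split discussed above (not AMD's real
// node format or ISA). The traversal loop is the "shader code" part on RDNA 2/3;
// rayBoxHit() and rayTriHit() stand in for the ray/box and ray/triangle tests
// that a fixed-function intersection engine accelerates.
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Ray  { Vec3 o, d; };                 // origin, direction
struct Tri  { Vec3 v0, v1, v2; };
struct Node { Vec3 bmin, bmax; int left, right, tri; };  // tri >= 0 means leaf

// "Intersection engine" part 1: ray vs axis-aligned box (slab test).
static bool rayBoxHit(const Ray& r, Vec3 bmin, Vec3 bmax) {
    float tmin = 0.0f, tmax = 1e30f;
    const float o[3] = {r.o.x, r.o.y, r.o.z}, d[3] = {r.d.x, r.d.y, r.d.z};
    const float lo[3] = {bmin.x, bmin.y, bmin.z}, hi[3] = {bmax.x, bmax.y, bmax.z};
    for (int a = 0; a < 3; ++a) {
        float inv = 1.0f / d[a], t0 = (lo[a] - o[a]) * inv, t1 = (hi[a] - o[a]) * inv;
        if (inv < 0.0f) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmax >= tmin;
}

// "Intersection engine" part 2: ray vs triangle (Moller-Trumbore).
static bool rayTriHit(const Ray& r, const Tri& t, float& tOut) {
    Vec3 e1 = sub(t.v1, t.v0), e2 = sub(t.v2, t.v0), p = cross(r.d, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-7f) return false;
    float inv = 1.0f / det;
    Vec3 s = sub(r.o, t.v0), q = cross(s, e1);
    float u = dot(s, p) * inv, v = dot(r.d, q) * inv;
    if (u < 0.0f || v < 0.0f || u + v > 1.0f) return false;
    tOut = dot(e2, q) * inv;
    return tOut > 1e-7f;
}

// The traversal loop: stack handling and node ordering, i.e. the part that runs
// as shader code on RDNA 2/3 while intersection queries are being serviced.
static bool traceClosest(const Ray& r, const std::vector<Node>& nodes,
                         const std::vector<Tri>& tris, float& tHit) {
    bool hit = false;
    tHit = 1e30f;
    std::array<int, 64> stack{};
    int sp = 0;
    stack[sp++] = 0;                                  // start at the root node
    while (sp > 0) {
        const Node& n = nodes[stack[--sp]];
        if (!rayBoxHit(r, n.bmin, n.bmax)) continue;  // box test ("hardware" step)
        if (n.tri >= 0) {                             // leaf: triangle test
            float t;
            if (rayTriHit(r, tris[n.tri], t) && t < tHit) { tHit = t; hit = true; }
        } else {                                      // internal node: push children
            stack[sp++] = n.left;
            stack[sp++] = n.right;
        }
    }
    return hit;
}

int main() {
    std::vector<Tri>  tris  = {{{-1, -1, 5}, {1, -1, 5}, {0, 1, 5}}};
    std::vector<Node> nodes = {{{-1, -1, 5}, {1, 1, 5}, -1, -1, 0}};  // single leaf
    Ray r = {{0, 0, 0}, {0, 0, 1}};
    float t;
    if (traceClosest(r, nodes, tris, t)) std::printf("hit at t = %.2f\n", t);
    return 0;
}
```

On RDNA 2/3 the box/triangle tests map to the hardware BVH intersection instructions, while a full traversal unit (as on NVIDIA's RT cores) would also take over the loop itself.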



Mark Cerny confirmed Intersection Engine hardware for PS5 GPU!

With NAVI 21's 80 RT cores being close in count to NAVI 31's 96, NAVI 31's enhanced RT cores delivered nearly twice the RT performance of NAVI 21's.

PS5's RT results are within the RDNA 2 power rankings, NOT Ampere GA104 RT class e.g. 256 bit external bus RTX 3070 and RTX 3070 Ti SKUs.

PS5's Doom Eternal RT results are inferior to XSX's and RTX 3070 beats both consoles.




Prove PS5 has Ampere-level RT cores. Show PS5 beating the RTX 3070 in RT!


PaintTinJr


Mark Cerny: "First we have a custom AMD GPU based on there 'RDNA2' technology what does that mean AMD is continuously improving and revising their tech for RDNA2 their goals were roughly speaking to reduce power of consumption by rhe architecting the GPU to put data close to where it's needed to optimize the GPU for performance and to adding new more advanced feature set.

But that feature set is malleable which is to say that we have our own needs for PlayStation and that can factor into what the AMD roadmap becomes.

So collaboration is born.

If we bring concepts to AMD that are felt to be widely useful then they can be adopted into RDNA - and used broadly including in PC GPUs.
Mark Cerny:" So we've implemented a gentler way of doing things where the coherency engines inform the GPU of the overwritten address ranges and custom scrubbers in several dozen GPU caches do pinpoint evictions of just those address ranges."
The GPU cache scrubbers are exactly why the complex slug text benchmark works more efficiently on the RDNA2 of PS5, because as the depth of the problem increases - AAA games even in simpler Forward+ rendering are complicated compared to basic slug text - the scrubbers yield more bandwidth saved - both from latency of not waiting for a transfer request/delivery, and from the reduced volume of data being processed by eliminating lots of data transfer redundancy.

PS5 GPU doesn't have PC RDNA 2's Infinity Cache design.

In a new-vs-new comparison, the RDNA 2 generation competes against NVIDIA's Ampere generation, and so does the PS5.

The RTX 3070 is NVIDIA's 256-bit GDDR6-14000 SKU, like the PS5. The PS5's 448 GB/s of memory bandwidth is shared between the CPU and GPU.

Comparing the PS5 against the Turing RTX 2080 (TU104) is a lower-bar comparison, since Turing competed against the Vega II and RX 5700 series generation!


PS: I have an MSI RTX 3080 Ti Gaming X Trio OC (gaming room, faster than an RTX 3090 FE) and an MSI RTX 3070 Ti Suprim X (for the living room PC instead of game consoles).
 

Leonidas

AMD's Dogma: ARyzen (No Intel inside)
HUB did a 50 game benchmark and they found the XT tied with 4070 Ti at 1440p, slightly faster at 4K



Glad the market adjusted the price of this card down to a 4070 Ti-like $799-$849.

Hopefully it drops another $100.
 

64bitmodels

Reverse groomer.
bring it down to 700 bucks & then it'll start selling units. at least for me anyways, need a new AMD card for this Linux thing i'm doing
 

StereoVsn

Gold Member
HUB did a 50 game benchmark and they found the XT tied with 4070 Ti at 1440p, slightly faster at 4K



Glad the market adjusted the price of this card down to a 4070 Ti-like $799-$849.

Hopefully it drops another $100.

At $800 it's basically on par with the 4070 Ti, and even beats it in some games.

The question is whether being on par is sufficient with worse RT and no DLSS (FSR 2.1 is not as widespread), plus ongoing driver shenanigans.

It's certainly a lot closer to a decent (very relatively speaking) purchase, plus has a bit more future proofing with more VRAM.
 
Was at my local PC shop that builds my PCs, and they had a 7950x/7900XTX build there that had been returned (they give 30-day returns), so I bought it to play with to see how it does while I wait on my build.

Coming from a 5950x/3090 PC, this new PC seems to be really thumping the 3090 as long as I don't care about RT, which I don't.

I feel anyone who bought this card should feel pretty damn good about their purchase
 

YeulEmeralda

Linux User
I play a lot of older games, and whenever I look them up on pcgamingwiki there's often a note about AMD GPUs not working properly.
That's what is keeping me from considering their cards.
 
So back to the shop the 7900xtx system went, as once I started really pushing it the system would crash trying to play games like Sons of the Forest at 4k 144hz ultra settings.

They put it through some tests and decided it's a faulty power supply, but I just returned it as my 4090 rig is supposed to be done this week.
 

MikeM

Member
So back to the shop the 7900xtx system went, as once I started really pushing it the system would crash trying to play games like Sons of the Forest at 4k 144hz ultra settings.

They put it through some tests and decided it's a faulty power supply, but I just returned it as my 4090 rig is supposed to be done this week.
First guess would be psu. What was it?
 

AGRacing

Member
Since this is bumped.

I've been running my 5800x3D / Reference XTX build since the end of the year.

Curve optimizer for the X3D (-30)
1105 mV undervolt for the XTX (some games are capable of a further undervolt, but this is the all-game-stable number).

Reference card tested in vertical and horizontal orientation to ensure it's not a vapor chamber defect.

The system is excellent. Runs cool and performs well above spec average according to 3Dmark. And I just wanted to state that since there's a lot of "I heard someone said this was a problem etc etc".

If you're coming over to AMD from Nvidia you're going to need to give yourself time to adjust to how the thing works. As long as you approach it with that mindset, you'll enjoy your time with it.
 

Crayon

Member
bring it down to 700 bucks & then it'll start selling units. at least for me anyways, need a new AMD card for this Linux thing i'm doing

When I read the news about the open source drivers, I got in the car, drove to Best Buy, and grabbed a Polaris card for like 180 bucks or something. An 8gb 570. Seemed legit. I wasn't in a picky mood.

Went home, tore out the nvidia, slammed in the amd, did a fresh install because I wanted everything clean and I had been wanting to use a different distro in the living room for a minute. AND IT WAS GLORIOUS. At the time you still had to add the latest mesa drivers, but whatever I was gaming on an open stack it was dope. Later, all those mesa drivers came downstream and now you don't have to touch a thing. It's all rolled in and smoooooth. I forgot drivers even existed.

It's ironic how on Linux you actually have a really good reason to go AMD, and it's that the drivers are great.
 

64bitmodels

Reverse groomer.
When I read the news about the open source drivers, I got in the car, drove to Best Buy, and grabbed a Polaris card for like 180 bucks or something. An 8gb 570. Seemed legit. I wasn't in a picky mood.

Went home, tore out the nvidia, slammed in the amd, did a fresh install because I wanted everything clean and I had been wanting to use a different distro in the living room for a minute. AND IT WAS GLORIOUS. At the time you still had to add the latest mesa drivers, but whatever I was gaming on an open stack it was dope. Later, all those mesa drivers came downstream and now you don't have to touch a thing. It's all rolled in and smoooooth. I forgot drivers even existed.

It's ironic how on Linux you actually have a really good reason to go AMD, and it's that the drivers are great.
nvidia is missing out on free cash by not just opensourcing their stuff... would make RT on linux on par with windows too
 

Crayon

Member
nvidia is missing out on free cash by not just opensourcing their stuff... would make RT on linux on par with windows too

At least RT on Nvidia works at all on Linux. For some reason I can play Quake II RTX just fine, but I don't get any ray tracing functionality at all in other stuff. The proprietary driver is starting to get it and they're working on it for the open source driver. I'm in no rush. The card I have now can only do a sprinkle of ray tracing at best so I wouldn't bother with it.
 