
RX 7900XTX and 7900XT review thread

Buggy Loop

Member
Tweet ignores the fact that 225 mm^2 of N31 is on 6nm chiplets and Nvidia's entire chip is on 5nm. Anyway all AMD needed to do was hit their original 3GHz clock target. If they can do this with a respin, they will be fine.

Ok then, let's remove the MCDs for fun, both on the 5nm process:

7900 XTX GCD: 306 mm² of pure graphical workforce, with hybrid RT and ML in the pipeline to optimize area towards more rasterization.

4080: 379 mm² with an estimated 20~25% of the silicon dedicated to RT and ML, plus the memory controllers, which as you know take a good chunk out.

6nm to 5nm isn't a huge area saving. How much do you want to take out for the memory controllers? 150~190 mm²?

Doesn't change what the tweet said. This is Nvidia schooling AMD engineers hard. AMD has to look at what's going on and just redo their homework. All this stubbornness about not having dedicated RT and ML, for this result? Yup, Nvidia is in a league of its own. Not to mention that the whole point of chiplets is to save money, right? How much margin do you think Nvidia is making selling this 379 mm² monolithic die compared to 533 mm² with interconnection outside lithography? Nvidia has a ton of breathing room for a price drop.

In 2017 they were both part of the consortium for DXR. Although the whole thing is based on Nvidia's papers done prior, AMD knows the direction of RT, but gives off the vibes of a student who fumbles around, fails to listen in class and hands in half-finished homework. Nvidia had even shown the solution on the board beforehand. Intel enters class like 2.5 periods later but shows it understood.

I’m flabbergasted at the situation. As much as I admire Nvidia’s engineering team, this is not the competition I want to see

Edit: interesting results when you expand to 50 games rather than the typical 10-16 game list we often see. And this is with an AIB card.

https://babeltechreviews.com/hellhound-rx-7900-xtx-vs-rtx-4080-50-games-vr/4/

At 4K there are 15 games out of 50 where AMD has the advantage:

2/7 DX11, 5/13 DX12 (2018-2020), 5/16 DX12 (2021-22) and 3/6 Vulkan.

Would be curious to see the average across all of that.
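For what it's worth, here's a quick back-of-envelope tally of that breakdown (a rough sketch in Python; note the per-API counts as quoted sum to 42 of the 50 games, so a few titles presumably aren't broken out):

# Tally of the quoted per-API results: (AMD wins, games tested)
results = {
    "DX11": (2, 7),
    "DX12 2018-2020": (5, 13),
    "DX12 2021-22": (5, 16),
    "Vulkan": (3, 6),
}
for api, (wins, games) in results.items():
    print(f"{api}: {wins}/{games} = {wins / games:.0%}")
total_wins = sum(w for w, _ in results.values())
total_games = sum(g for _, g in results.values())
print(f"Overall (as broken out): {total_wins}/{total_games} = {total_wins / total_games:.0%}")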
 
Last edited:

GreatnessRD

Member
Why did AMD lie just a month ago with their presentation? That's a lot of goodwill gone out of the window. Even MLID and Daniel Owen felt that slap from AMD. The 5000 and 6000 series presentations were legitimately close to retail performance. Strangely, you typically doubt Nvidia's claims, but this time it seems even the 4000 series presentation kind of undersold them. This is upside-down world.
Like a lot of people contend, I'll side with the view that something went wrong in the hardware department. Their 6000 series top card went toe to toe with Nvidia's top card, and now their 7000 series top card goes neck and neck with Nvidia's second-tier card? Something had to go wrong in the AMD war room for the GPU. Could it be the chiplets? Who knows, but something had to have happened for it to turn out like this. Because ain't no way, lol

You can also get that feeling since some tech tubers and OC'ers are reporting the card getting close to or surpassing 3 GHz. Then there are the power spikes being all over the place. Something crazy had to happen in development and AMD tried to gloss over it, since everyone was hoping they would continue to take the battle to Nvidia because of the price of the 4080 and the whole fake 4080 ordeal. But as you can see, we live in wild times!
 

PaintTinJr

Member
Tweet ignores the fact that 225 mm^2 of N31 is on 6nm chiplets and Nvidia's entire chip is on 5nm. Anyway all AMD needed to do was hit their original 3GHz clock target. If they can do this with a respin, they will be fine.
I agree, but even setting aside that Nvidia needs that fabrication node to fit everything in and to support the complexity of their interconnects at those frequencies and thermals, if AMD just had the same manufacturing volume as Nvidia (and therefore higher yields), the AMD design would probably take that higher clock at the current power draw and would sit comfortably between the 4080 and 4090, IMO.

As is, it is clearly good silicon in a hardware sense for it to do so well in The Callisto Protocol - which we know was optimised for the PlayStation's Vulkan-style API and is a brand new game, so the typical Nvidia DirectX home-turf advantage, as creators of Nvidia Cg for the OG Xbox and PC, isn't in play - but as a product, given the software library, it isn't doing what the market needs yet. AFAIK the card isn't even using GDDR6X to get every advantage it could in a direct comparison.

The AMD engineering interview GamersNexus did recently suggests that this architecture is very good and will iterate very quickly. In the interview the point was made that the boring aspects of GPU design that take 400 engineers had been largely factored out by design compartmentalisation over the last few AMD card iterations, so that they can quickly update what matters - reusing the memory interfaces, IIRC - and produce better cards sooner with less R&D cost.

In that tweet, the need to use the words "full fat die size" to imply that the transistor efficiency of the AMD design on a lesser node isn't far superior was a bit of a puzzler. Even if this card doesn't see major improvements from RDNA3 driver optimisation, the raw numbers and benchmarks like The Callisto Protocol suggest AMD's design is in better shape and more future-proof than Nvidia's.
 
Intel enters class like 2.5 periods later but shows it understood.
I am quite curious whether Intel might become the actual competitor to Nvidia while AMD struggles, lacking future tech and barely competing in the present. Neither perf/W nor perf/die-mm² seems great.
Unless this chiplet thing just needs one or two refreshes before taking off, removing bottlenecks that came with the new approach, and then we might finally get competition again - not prices where number 1 reigns supreme and number 2 just comfortably accepts its role and partakes in the increased price levels.

Something had to go wrong in the AMD war room for the GPU.
Whoever did Ryzen 1 is hopefully still at AMD and just working on the next-next generation or something, because both launches this year were not great. I think I read that their server stuff was better, though. And since profits per card are probably damn fine, they should not have actual trouble ahead.

I would assume talks with Sony and MS about next gen will also direct where AMD has to go in the near future. Do both want strong RT, and are they maybe already thinking about going to Intel, Intel/Nvidia, or maybe even ARM/Nvidia if AMD can't offer or promise interesting tech and also has no efficiency advantage? Maybe AMD's best people are already working on that coming iteration, and hence we get stagnating Ryzen success and unimpressive RDNA.
 
N33 might shed some light on what went on here. It's supposed to be monolithic, so it'll be interesting to see if it gets a bigger uplift from N23 than N31 did from N21. That might say something about how well this first round of MCM worked out.

Yeah, it will be interesting to see if that makes a difference. Much more in the price range I'd be looking at anyway. :messenger_tears_of_joy:
 

Zathalus

Member
Looks like the problem isn't the chip but rather being hamstrung by the poor reference cards only having 2x 8-pin. Custom cards can overclock to 3.2 GHz and then match the 4090 in Cyberpunk, for example:


Seems like a good custom card should be faster than the 4080 by a bit (obviously still lagging in RT).
 

twilo99

Member
Something is definitely wrong with the design though. Why would they not include 3x 8-pin on the reference cards? Unless they wanted to give enough headroom to the AIBs to differentiate themselves, but that's really far-fetched.
 

Zathalus

Member
Something is definitely wrong with the design though. Why would they not include 3x 8-pin on the reference cards? Unless they wanted to give enough headroom to the AIBs to differentiate themselves, but that's really far-fetched.
AMD has not really had a good track record with reference cards, the 6000 series being the exception.
 
If the OCs are consistent across most of the cards (and TechPowerUp didn't just get a lucky one), it gives AMD the best of both worlds in a way: the giant cards with higher performance from the AIBs, and the smaller drop-in replacements with the reference designs.
 

Zathalus

Member
If the OCs are consistent across most of the cards (and TechPowerUp didn't just get a lucky one), it gives AMD the best of both worlds in a way: the giant cards with higher performance from the AIBs, and the smaller drop-in replacements with the reference designs.
The XFX card they got was just under 3.2 GHz as well:

 
It would probably be bandwidth-starved unless they got very creative with the memory (a normal dual-channel DDR5-6000 setup is under 100 GB/s, and that's shared with the CPU). Maybe quad-channel DDR5-7000 or something, but it would still be working with less than a 6600.
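To put rough numbers on that (a back-of-envelope sketch of peak theoretical bandwidth, ignoring real-world overheads; the helper name is just for illustration):

# Peak DDR5 bandwidth in GB/s: transfer rate (MT/s) x 8 bytes per 64-bit channel x channel count
def peak_ddr_bandwidth_gbs(mts, channels):
    return mts * 8 * channels / 1000

print(peak_ddr_bandwidth_gbs(6000, 2))  # dual-channel DDR5-6000 -> 96 GB/s
print(peak_ddr_bandwidth_gbs(7000, 4))  # quad-channel DDR5-7000 -> 224 GB/s
# For comparison, a 128-bit GDDR6 card at 14 Gbps (RX 6600 class) is 14 * 16 = 224 GB/s,
# and on a shared-memory setup the GPU wouldn't even get all of that to itself.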

That's what Dell is looking to address with CAMM, but it's seemingly aimed at the laptop market, although it should be adaptable to small-form-factor NUC-style PC devices.

In fact, I think that's the next product type Dell will use CAMM in because it just makes complete sense; you address the bandwidth issue relative to physical profile, and I'm assuming the way CAMM is implemented helps it produce a bit less heat than traditional SO-DIMM or full DIMM modules.
 

FireFly

Member
Ok then, let's remove the MCDs for fun, both on the 5nm process:

7900 XTX GCD: 306 mm² of pure graphical workforce, with hybrid RT and ML in the pipeline to optimize area towards more rasterization.

4080: 379 mm² with an estimated 20~25% of the silicon dedicated to RT and ML, plus the memory controllers, which as you know take a good chunk out.

6nm to 5nm isn't a huge area saving. How much do you want to take out for the memory controllers? 150~190 mm²?

Doesn't change what the tweet said. This is Nvidia schooling AMD engineers hard. AMD has to look at what's going on and just redo their homework. All this stubbornness about not having dedicated RT and ML, for this result? Yup, Nvidia is in a league of its own. Not to mention that the whole point of chiplets is to save money, right? How much margin do you think Nvidia is making selling this 379 mm² monolithic die compared to 533 mm² with interconnection outside lithography? Nvidia has a ton of breathing room for a price drop.
It's 26% more transistors. Likely the 7900XTX is more expensive to produce, but not as much as the 39% difference in "die size" suggests. Anyway, the "result" is due to them aiming for 3 GHz+ clock speeds and missing completely. If they'd hit their targets the product would have been fine.
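For anyone who wants to sanity-check those percentages, here's the back-of-envelope version using the commonly cited figures (N31 roughly 57.7B transistors over a ~306 mm² GCD plus ~225 mm² of MCDs, AD103 roughly 45.9B over ~379 mm²; plug in slightly different die-size figures and the area ratio moves by a point or two):

# Rough sanity check of "26% more transistors vs ~39% more die area"
n31_transistors, ad103_transistors = 57.7e9, 45.9e9  # commonly cited totals, approximate
n31_area = 306 + 225                                 # GCD + MCDs from the post above, mm^2
ad103_area = 379                                     # mm^2
print(f"Transistor ratio: {n31_transistors / ad103_transistors:.2f}x")  # ~1.26x
print(f"Die area ratio:   {n31_area / ad103_area:.2f}x")                # ~1.40x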
 

Crayon

Member
Err, I was thinking before that the XTX should have been $900, but now I'm starting to think $800. Maybe even lower if they made it a 7800 with a little less RAM. The 4000 series prices are messing with my brain. If the 4080 should be $900, then the 7900 XTX should be $800.

...

These high overclocks are kind of weird. Maybe another hint about if/how AMD's plans went awry.
 

Kataploom

Gold Member
Haven't been paying attention, but it seems like the AIB models are basically doing exactly what the reference model failed to do, plus some more, right?
 

Crayon

Member
Haven't been paying attention, but it seems like the AIB models are basically doing exactly what the reference model failed to do, plus some more, right?

Kinda, yeah, but sucking enough power that all their efficiency claims are in the trash. Not crazy amounts, but maybe enough to throw a monkey wrench into their plans with the two 8-pins.
 
I think voicing concerns to AMD about the performance, pricing and naming of the 7900 series is a good thing. They have no time to fuck around and fumble; team blue is gaining momentum while team green is shitting on them.
 

//DEVIL//

Member


Can someone please confirm this?

Especially Forza Horizon 5 jumping to 140 frames by enabling SAM only (no FSR)?

Because if that is true, that is 4090 level for a game that was at 100 fps in reviews.

The same goes for CP2077: jumping from 40 to almost 70 frames?

If this is true, then:
1- WTF are the stupid reviewers doing????? Why did they not have SAM enabled?
2- WTFFFF is wrong with AMD and their shitty drivers/communication? You would think they would mention this to the YouTubers that are getting free cards: "oh btw, please enable SAM to get the real performance of the card."

Honestly, it seems this 7900 XTX is really way more powerful than the 4080, but AMD's software side is shit as always :/
 
Last edited:

SatansReverence

Hipster Princess
Something is definitely wrong with the design though. Why would they not include 3x 8-pin on the reference cards? Unless they wanted to give enough headroom to the AIBs to differentiate themselves, but that's really far-fetched.
It's almost like AMD didn't want a gigantic 4-slot card that doesn't fit in an absurd number of cases, which necessitated a 2x 8-pin limit to stop people frying the card.

I don't know how people thought AMD was going to bend physics and magically outperform Nvidia with half the power consumption on the same process node.

AIB cards evidently let the card breathe and perform much closer to Nvidia's flagship.
 

Buggy Loop

Member


Can someone please confirm this?

Especially Forza Horizon 5 jumping to 140 frames by enabling SAM only (no FSR)?

Because if that is true, that is 4090 level for a game that was at 100 fps in reviews.

The same goes for CP2077: jumping from 40 to almost 70 frames?

If this is true, then:
1- WTF are the stupid reviewers doing????? Why did they not have SAM enabled?
2- WTFFFF is wrong with AMD and their shitty drivers/communication? You would think they would mention this to the YouTubers that are getting free cards: "oh btw, please enable SAM to get the real performance of the card."

Honestly, it seems this 7900 XTX is really way more powerful than the 4080, but AMD's software side is shit as always :/




Check that out

It's not a set-it-and-forget-it OC. Most games scale in the 5-7% range stably, much like the average gains on the 4090 and 4080.
 

Crayon

Member


Can someone please confirm this?

Especially Forza Horizon 5 jumping to 140 frames by enabling SAM only (no FSR)?

Because if that is true, that is 4090 level for a game that was at 100 fps in reviews.

The same goes for CP2077: jumping from 40 to almost 70 frames?

If this is true, then:
1- WTF are the stupid reviewers doing????? Why did they not have SAM enabled?
2- WTFFFF is wrong with AMD and their shitty drivers/communication? You would think they would mention this to the YouTubers that are getting free cards: "oh btw, please enable SAM to get the real performance of the card."

Honestly, it seems this 7900 XTX is really way more powerful than the 4080, but AMD's software side is shit as always :/


FH5 has been one of the games where SAM/ReBAR had the most noticeable effect. It was only between like 5 and 10%, if my memory serves, though.

Arc kind of needs ReBAR on, so there's clearly something you can do in an architecture that makes it more impactful. Maybe RDNA 3 is a little more like that?

Either way, it's kind of weird for reviewers to skip turning that on. And then for AMD to not say anything about it? I don't know. Like you said, we should probably wait for more people to confirm.
 

MikeM

Member
Err, why? I mean congrats and stuff, but... you couldn't find an XTX?
The 6700 XT, while great, wasn't cutting it the way I wanted when paired with my LG C1. It'll be sufficient for what I want.
That's a tough one to swallow. Too tough for me.

But I wish you a great time with your new GPU.
Thanks. I got it for $1,223, with the cheapest 4080 being $1,699 nearby. Couldn't justify an extra $450+ for my use case. I didn't want to wait for the next XTX drop because who knows when that will be.
 

Turk1993

GAFs #1 source for car graphic comparisons
DOA!
Asking €1,500 for the XTX and €1,300 for the XT makes the 4080 the better deal, which says a lot. We are just fucked with those prices; fuck them both.
 

M1chl

Currently Gif and Meme Champion
DOA!
Asking €1,500 for the XTX and €1,300 for the XT makes the 4080 the better deal, which says a lot. We are just fucked with those prices; fuck them both.
Yeah same shitty prices here, not sure WTF is going on, and it's not VAT. The markup on AMD cards is really insane here.
 

//DEVIL//

Member
The 6700 XT, while great, wasn't cutting it the way I wanted when paired with my LG C1. It'll be sufficient for what I want.

Thanks. I got it for $1,223, with the cheapest 4080 being $1,699 nearby. Couldn't justify an extra $450+ for my use case. I didn't want to wait for the next XTX drop because who knows when that will be.
Yeah, I'm sure. But I would have really advised you to go with the XTX, that is all.
 

AGRacing

Member
The 6700 XT, while great, wasn't cutting it the way I wanted when paired with my LG C1. It'll be sufficient for what I want.

Thanks. I got it for $1,223, with the cheapest 4080 being $1,699 nearby. Couldn't justify an extra $450+ for my use case. I didn't want to wait for the next XTX drop because who knows when that will be.
I would have bought it over a 4080 as well.

But not over the XTX, if given a choice. I tried to get an XTX too and no dice for me either.

I think you'll enjoy the GPU. Looks great, that's for sure :)
 

MikeM

Member
Yeah, I'm sure. But I would have really advised you to go with the XTX, that is all.
If available I would have.
I would have bought it over a 4080 as well.

But not over the XTX, if given a choice. I tried to get an XTX too and no dice for me either.

I think you'll enjoy the GPU. Looks great, that's for sure :)
Agreed. If the XTX had been available I would have bought it. Whatever, I'll pocket the $200 CAD savings and buy some games.
 

Amiga

Member
But the trend of more games using RT is definitely consistent. Plus, Nvidia has no major say in how RT is implemented, so the idea of it being implemented "well" is irrelevant (take that up with game creators, not Jetset "leather jacket" massiveWang). The only thing we can say for sure is that first, second, and third gen RTX cards all offer better native performance (i.e. taking DLSS or FSR noise out of the equation) when RT settings are turned on compared to RDNA 1, 2, or 3.

Not enough to be a dependable category. Only a handful of games implement it well. It will only be a real category if it becomes the main lighting technique - maybe in the PS6 generation.
 
Ok then, let's remove the MCDs for fun, both on the 5nm process:

7900 XTX GCD: 306 mm² of pure graphical workforce, with hybrid RT and ML in the pipeline to optimize area towards more rasterization.

4080: 379 mm² with an estimated 20~25% of the silicon dedicated to RT and ML, plus the memory controllers, which as you know take a good chunk out.

6nm to 5nm isn't a huge area saving. How much do you want to take out for the memory controllers? 150~190 mm²?

You misunderstand 6nm vs 5nm.
6nm is a slightly optimised 7nm. 5nm is nearly twice the transistor density of 6nm. The MCDs are around 55 million transistors per mm²; the GCD is 138 million transistors per mm².
Now obviously, SRAM will not scale with new nodes as well as logic, so you are sort of right that it won't be that big of a shrink, but it would still be substantial.

But also, each of the MCDs and the GCD has to waste die area on chip-to-chip interconnects, which take up a fair bit of space.
So in actuality, if N31 were monolithic it would be around 450 mm², which is rather small - especially considering the amount of space taken up by cache.
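A rough sketch of that estimate using the densities quoted above (the effective 5nm density for the relocated cache/PHY content is my assumption, since SRAM and analogue barely shrink):

# Fold the MCD transistors (225 mm^2 * ~55 MTr/mm^2 ~= 12,375 MTr) back into a hypothetical
# monolithic 5nm die. The GCD logic hits ~138 MTr/mm^2, but cache and memory PHYs won't get
# close, so assume an effective 90-110 MTr/mm^2 for the MCD content once moved to 5nm.
mcd_transistors = 225 * 55  # in millions
for effective_density in (90, 110):  # assumed MTr/mm^2 on 5nm for cache-heavy content
    print(f"~{306 + mcd_transistors / effective_density:.0f} mm^2")
# Prints roughly 444 and 418 mm^2 - in the same ballpark as the ~450 mm^2 figure above.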

The fact that the silicon is buggy and draws way too much current at a given voltage to maintain its clock speeds is a completely separate physical-design issue. It has absolutely nothing to do with the architecture itself, which is perfectly fine.
This is evidenced by the AIB card reviews: if you add more power, you can get an extra 15-20% performance just by lifting average clock frequencies from 2600 MHz to 3200 MHz.
The potential is there. You can see what AMD was aiming for, but they fell short of their targets, which means, in simple terms, their silicon execution was not good enough.
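Just to put that clock bump in perspective (simple arithmetic on the figures above):

# 2600 -> 3200 MHz is a ~23% frequency lift, so a 15-20% performance gain is slightly
# sub-linear scaling - plausibly memory bandwidth or other limits eating into it.
clock_gain = 3200 / 2600 - 1
print(f"Clock uplift: {clock_gain:.0%}")  # ~23%
for perf_gain in (0.15, 0.20):
    print(f"Perf-per-clock scaling at +{perf_gain:.0%}: {perf_gain / clock_gain:.0%}")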

Doesn't change what the tweet said. This is Nvidia schooling AMD engineers hard. AMD has to look at what's going on and just redo their homework. All this stubbornness about not having dedicated RT and ML, for this result? Yup, Nvidia is in a league of its own. Not to mention that the whole point of chiplets is to save money, right? How much margin do you think Nvidia is making selling this 379 mm² monolithic die compared to 533 mm² with interconnection outside lithography? Nvidia has a ton of breathing room for a price drop.

I don't know about Nvidia schooling AMD engineers hard. AMD and Nvidia went for completely different strategies.
And to clear a few things up.
  1. AMD do have dedicated RT hardware; they just aren't spending as much of their transistor budget on it as Nvidia.
  2. AMD don't have fixed-function ML accelerators because it doesn't matter that much for gaming. Yes, FSR2 isn't quite as good as DLSS in image quality, but it's damn close and is hardware-agnostic. And if you think DLSS3's hit to image quality is an acceptable way to improve framerate, then you have absolutely no right to complain about FSR2's image quality.
Nvidia is indeed in a league of their own with the resources they have. Using ML to micro-optimise transistor layout to maximise performance and minimise power is exceptional stuff. However, you also need to understand that Lovelace is nothing remarkable from them as far as architecture is concerned. Every GPU architecture Nvidia has made since Volta is just a small incremental update on Volta. Turing is Volta + RT cores and a dedicated INT pipe. Ampere is Turing + double FP32. Lovelace is a die-shrunk Ampere with a jacked up RT core. If you actually look at the structure of the SM (Streaming multiprocessor) it hasn't dramatically changed since the shift from Pascal to Volta. Small incremental updates. Not unlike what AMD was doing with GCN, but just executed much more effectively with a much better starting point.

RDNA2 from AMD was successful because it was basically RDNA1 on steroids: optimised in physical design, with some hardware bugs fixed, and clocked to insanity.
RDNA3 is effectively a completely new architecture in every possible way. The CUs are completely redesigned. The individual vALUs have been completely redesigned. The front end and geometry have been redesigned. The command processor has been streamlined and shifted from a hardware scheduler to a software scheduler (IIRC), like Nvidia. On top of this they have disaggregated the last-level cache and memory controllers. Very ambitious in a number of different ways. They aimed big and they failed.
If Nvidia schooled AMD at anything, it's execution, not necessarily architecture.

But more than their hardware, their true strength is in software. Software is the reason AMD doesn't make larger chips, because they have no answer to Nvidia's software.
Let me be clear: AD102 is not designed for gamers. Nvidia absolutely do not give a shit about gamers. The fact that AD102 is blazing fast at gaming is a nice side bonus for them. AD102's true target is semi-professionals and professionals. Their RTX 6000 Lovelace is where Nvidia really make money, and that is all built on Nvidia's software: CUDA and everything that plugs into it, OptiX for rendering, etc.

In 2017 they were both part of the consortium for DXR. Although the whole thing is based on Nvidia's papers done prior, AMD knows the direction of RT, but gives off the vibes of a student who fumbles around, fails to listen in class and hands in half-finished homework. Nvidia had even shown the solution on the board beforehand. Intel enters class like 2.5 periods later but shows it understood.

AMD doesn't have Nvidia's incumbent market position and so can't dictate the way the market moves. DXR was built by Microsoft and Nvidia together, but RTX is essentially a black box. It's proprietary software. Nvidia is clever and sends software engineers to developers to help build RTX to most efficiently use their hardware. Even now, AMD could dedicate a shitload of transistors to RT like Intel does, but that is no guarantee it would reach the same level of performance. AMD at present does not have the resources to do what Nvidia does, which is why they open-source a lot of their stuff. They make up for their lack of resources by providing a huge amount of detailed documentation and information so developers can do things easily themselves. However, at the end of the day, nothing is easier than having some guy from Nvidia come and do it for you.

And to be clear: I'm fairly sure AMD could make a huge 600 mm² GPU and cram in a whole bunch of transistors for RT. They could borrow the matrix/ML cores from their CDNA products. They could toss all of that into a huge GPU. But without the software stack it would be pointless. As I said before, Nvidia can justify that massive 600 mm² AD102 GPU because they intend to sell most of it to pros in the form of the $1,600+ 4090 and the $8,000 RTX 6000. That helps them recover the money.
Now tell me, who the fuck would buy a $1,600 AMD GPU if it doesn't have a fully functional CUDA alternative or OptiX alternative?
Would you?
No, you would expect them to charge a lower price, even if it performed exactly the same as Nvidia in gaming, let alone all the pro use cases. So how can they make back the money spent on such an expensive GPU? They can't spend all that money and then sell it at a loss or on thin margins. It's not sustainable, and it won't help them compete long-term.

AMD's software is shit, which is where the real problem is. Their hardware engineers are doing just fine, barring this blip with N31. The problem with software is that it takes time to develop. ROCm is progressing, but it's still far behind CUDA. HIP exists, but it's still just shy of CUDA in rendering. HIP-RT is in development with Blender, but it's still far from release. Once that software stack is up and running and able to deliver something useful to professionals, then and only then will AMD start to actually dedicate valuable leading-edge silicon to stuff like AI and RT.

You can talk about Intel. But Intel is a massive company that's even bigger than Nvidia. In fact, they were working on hardware RT before even Nvidia, with Larrabee. And they also have a ton of very talented software engineers on the payroll building out oneAPI for their compute stack. Again, you're high if you think Arc is designed for gamers. Arc exists as a platform to build Intel's image in GPUs so they can enter that space in datacentre and pro use cases. See: Ponte Vecchio.

The market situation really is not that simple.

By the way, I'm not making excuses for AMD or the 7900 family. They're mediocre products that are only better value than the 4080 because that's a terrible product. You really should not be buying either. But if you go ahead and buy a 4080 because you're disappointed by the 7900 XTX, then you really have no right to complain about prices or the state of competition. You're just feeding the beast, and nothing will ever change so long as you keep feeding it.

I’m flabbergasted at the situation. As much as I admire Nvidia’s engineering team, this is not the competition I want to see

The market is what gamers made it. We are past the point of complaining now; we engineered these circumstances.
I know I'm guilty of being impressed with AMD/ATi GPUs in the past, congratulating them on a job well done, and then going ahead and buying an Nvidia GPU when it inevitably got discounted.
If, instead of offering empty platitudes about Radeon products being great value for money, the PC gaming community - myself included - had actually bought Radeon products, maybe AMD wouldn't have been so cash-starved during the whole of GCN and would have been able to compete better against Nvidia.
But it's too late now.
I made a mistake. Collectively, as a community, we made a mistake giving Nvidia 80% of the market. And now we have to pay.

AMD have realised that they too must make money to compete. So why would they slash prices, take less profit per unit sold, and also sell fewer units than Nvidia? That's a great way to go out of business. Now they'll sit content being second best, and just ride Nvidia's coat-tails and position themselves 10-20% below Nvidia's prices. Because why not?

The only way we can change anything is to stop being a bunch of lunatic magpies and just not buy any new GPUs.
 
Last edited:
Based on everyone's analysis of these new cards, team red came pretty close but fucked up on the execution and implementation, with a high price tag on top. I think with the reviews and feedback, AMD will be forced to fix this and will make decent RDNA3 products in Q1 2023.
 
Based on everyone's analysis of these new cards, team red came pretty close but fucked up on the execution and implementation, with a high price tag on top. I think with the reviews and feedback, AMD will be forced to fix this and will make decent RDNA3 products in Q1 2023.
It will be a lot later than Q1. You're looking at the second half of next year or later.
 

poppabk

Cheeks Spread for Digital Only Future
So I caved and bought a reference 7900 XT. I'm excited but equally ashamed, my fellow PC gamers.
Enjoy. I just bought a 6900 XT and couldn't be happier. Don't be obsessed with chasing numbers; just play some games.
 

poppabk

Cheeks Spread for Digital Only Future
All the cards are basically hitting way above the max framerate of any headset on the market. The only weird ones are NMS and Assetto Corsa, where you get exactly 50% synthesized frames, which seems a little odd. Probably they are using H.264 encoding instead of H.265.
 

supernova8

Banned
The optimist in me says that AMD gimped the XTX to sell it at $999 and still allow AIBs to sell at a few hundred dollars extra, justifying it by overclocking the shit out of it.
 

Buggy Loop

Member




Seems like we’re close to solving the puzzle.

Quoting a guy on r/amd

A0 is the first stepping (version) of any chip that’s manufactured. A company will usually find issues with the A0 stepping, so will modify the design and fab the next stepping(s), which would be A1, A2, etc. If a major change is made, the stepping moves onto B0, B1, B2, etc.

I don’t know how common it is for GPUs to be released on the A0 stepping, but it being A0 would support the claims of hardware bugs.

Unbelievable

If true, AMD skipped all the normal procedures to release a bugged product by the end of the year just for investors.
 
Last edited: