
AMD presents Zen 5

Buggy Loop

Member
It won't necessarily be a game that will force that upgrade, but rather a new GPU that causes a CPU bottleneck.

Yea of course, still don't see the point of pulling the 3080 Ti out yet.

Even with the 5090 out, what game will truly push it? Cyberpunk 2077 was the game that made me prepare for it (though the release was a bit disappointing initially). I can't think of anything for 2024-25. Anyway, it's just me rambling about not wanting to upgrade lol.
 

analog_future

Resident Crybaby
Gimme that X3D variant.

I've currently got a 7700x / RTX 4090 combo. Excited to upgrade it to a Zen 5 X3D / RTX 5090 combo soon.
 

GHG

Member
Yea of course, still don't see the point of pulling the 3080 Ti out yet.

Even with the 5090 out, what game will truly push it? Cyberpunk 2077 was the game that made me prepare for it (though the release was a bit disappointing initially). I can't think of anything for 2024-25. Anyway, it's just me rambling about not wanting to upgrade lol.

There are already some games that will bottleneck on a 4090 with a 5800X3D, even at 4k.

Cyberpunk is one of them, flight sim is another along with a bunch of sim racing titles. But even in those games, the uplift you gain from upgrading to the 7XXX3D chips isn't substantial enough to be worth it at this moment in time.

The 5090 might change that.
 

Buggy Loop

Member
There are already some games that will bottleneck on a 4090 with a 5800X3D, even at 4k.

Cyberpunk is one of them, flight sim is another along with a bunch of sim racing titles. But even in those games, the uplift you gain from upgrading to the 7XXX3D chips isn't substantial enough to be worth it at this moment in time.

The 5090 might change that.

Yes for sure. I play ultrawide 1440p and I can live with a few settings turned off.

I'm not saying there aren't situations where these CPUs or GPUs are worth it; I'm just saying I don't yet see the killer game that would justify splurging a huge amount of money on an upgrade. Kingdom Come Deliverance 2 is CryEngine 3, I believe? Should run fine I guess.
 
  • Like
Reactions: GHG

Allandor

Member
Yeah, there is a good chance Zen5 will lose to Zen4 3D. Zen4 3D seems to be faster in at least 3 of the games tested. Not enough data to say for sure though.

They hamstrung the Intel machine by using Intel Defaults, which caps performance in any all-core test. They also tested with AMD-optimized DDR5 speeds, used a 7900 XTX, and muddied the waters by enabling Smart Access Memory on both machines. I've seen results online where Raptor Lake even loses performance with Smart Access Memory enabled. They hid all of this in their endnotes.
Disabling Smart Access Memory would be a disaster for Intel's own GPUs. So it's more than fair to leave it enabled. It has been enabled by default for a few years now.
 

Leonidas

AMD's Dogma: ARyzen (No Intel inside)
Gotta give it to AMD though, they sure do know how to twist the numbers to make things look better than they actually are.

I wonder if people will even care if it turns out to actually be only 1-2% faster than the 14900K, instead of the illusory "13% 6-game average" they're getting some fools to believe with this 7900 XTX nonsense. This is a new low for AMD first-party benchmarks.

Disabling Smart Access Memory would be a disaster for Intel's own GPUs. So it's more than fair to leave it enabled. It has been enabled by default for a few years now.
The fair thing would be to use a 4090, a faster GPU. Enable ReBAR on the 4090, I don't care. The problem is that they pulled the 7900 XTX card. Cowardly move.

The 7900 XTX tanks performance in some of the cherry-picked games on Raptor Lake; AMD knew what they were doing.
 

Leonidas

AMD's Dogma: ARyzen (No Intel inside)
All results are "up to".

AMD aren't as bad as Intel when it comes to misleading performance graphs, but I'll still wait for third party benchmarks lol
AMD is actually a lot worse than Intel when it comes to performance graphs.

Intel was truthful the last time I verified their claims.

At the Raptor Lake reveal in 2022, Intel had a slide claiming an 11% average over the 7950X.

Intel delivered.

[image: meta-review results table]

10.3% over the 7950X in the meta-review. That verified Intel's 11% claim from the Raptor Lake reveal. Intel was not misleading.

AMD, on the other hand: at the Zen4 reveal AMD had graphs showing Zen4 beating Alder Lake by 9% on average, but in actuality it was closer to 2%.
And AMD quoted the 7600X as having "5% faster gaming performance" than the 12900K, while it was actually a few percent slower...

And now with the 9950X they used a 7900 XTX, which tanks Intel performance at 1080p in some of their selected games. That is slimy, and they ought to be raked over the coals, but in the end probably no one will care. AMD habitually shows misleading performance claims (see RDNA3), and somehow people either keep forgetting or keep giving AMD a pass...
 
Last edited:

winjer

Gold Member
AMD is actually a lot worse than Intel when it comes to performance graphs.

Intel was truthful the last time I verified their claims.

At the Raptor Lake reveal in 2022, Intel had a slide claiming an 11% average over the 7950X.

Intel delivered.

[image: meta-review results table]

10.3% over the 7950X in the meta-review. That verified Intel's 11% claim from the Raptor Lake reveal. Intel was not misleading.

AMD, on the other hand: at the Zen4 reveal AMD had graphs showing Zen4 beating Alder Lake by 9% on average, but in actuality it was closer to 2%.

And now with the 9950X they used a 7900 XTX, which tanks Intel performance at 1080p in some of their selected games. That is slimy, and they ought to be raked over the coals, but in the end probably no one will care, since everyone knows AMD has lied multiple times with misleading performance claims (see RDNA3), and somehow people either keep forgetting or keep giving AMD a pass...

Your table shows the 13900K beating the 7950X by 9.3%. Not by 10.3%.

And where did you see AMD claiming that 9% performance advantage in games over the 12900K?
 

winjer

Gold Member
100/90.7 is what? Oh that's right, it's 10.3%, get better at maths...

You are dividing percentages that were already calculated. wtf.
The 13900K is already set as 100%, and the 7950X is at 90.7% of that result.
That means a 9.3 percentage point difference.
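
For what it's worth, both numbers fall out of the same table depending on which chip you use as the baseline; a minimal sketch of the two calculations (only the 100 and 90.7 values from the table above are assumed):

```python
# Normalized results from the meta-review table: 13900K = 100%, 7950X = 90.7%.
intel, amd = 100.0, 90.7

# "The 13900K is X% faster": ratio measured against the slower chip.
faster_by = (intel / amd - 1) * 100   # ~10.3%

# "The 7950X is X points behind": difference measured against the faster chip.
points_behind = intel - amd           # 9.3 percentage points

print(f"{faster_by:.1f}% faster vs. {points_behind:.1f} points behind")
```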

And where is that claim of yours that AMD promised Zen4 was 9% faster in gaming than Alder Lake?
From the Zen 4 presentation I can only find comparisons to Zen3. I'll be glad to see more data.
 

Leonidas

AMD's Dogma: ARyzen (No Intel inside)
And where is that claim of yours that AMD promised that Zen4 was 9% faster in gaming than the alder lake.
From the Zen 4 presentation I can only find comparisons to Zen3. I'll be glad to see more data.
LIE1: 7600X 5% faster on average (12900K was actually faster)

[image: AMD slide, 7600X vs 12900K gaming comparison]


LIE2: 7950X 9% faster on average (take the average of the titles shown)

[image: AMD slide, 7950X gaming comparison]


And the magical 13% is a lie too. I know because I've seen more data than you; if you look hard enough you'll find it too...
 

smbu2000

Member
65W for an 8c/16t 9700X is nice, very nice.

Hopefully that means the market for those god-awful AIO liquid coolers will start to go away.

Give me a nice, aesthetically pleasing air cooler any day of the week.
Well, they already had an 8/16 part on Zen 4 with the non-X variants of their CPUs. The 7700X was a 105W part, but the non-X 7700 released later with the same 8/16 setup at only 65W. I think the max turbo speed was 100MHz lower, 5.4 vs 5.3GHz. The non-X 7700 also comes with a stock AMD cooler.

I have a 7700 in my SFF mATX PC. Works well there.
 

winjer

Gold Member
LIE1: 7600X 5% faster on average (12900K was actually faster)

[image: AMD slide, 7600X vs 12900K gaming comparison]


LIE2: 7950X 9% faster on average (take the average of the titles shown)

[image: AMD slide, 7950X gaming comparison]


And the magical 13% is a lie too. I know because I've seen more data than you; if you look hard enough you'll find it too...

I see what you did. You are pretending that AMD's 4-game average represents all games.
Then you compare those results in 4 games against the average of several reviews, each covering dozens of games.
You just managed to get even more dishonest than ever before. Ok then.

BTW, did you manage to see how you screwed up calculating the percentages?
 

Leonidas

AMD's Dogma: ARyzen (No Intel inside)
I see what you did. You are pretending that AMD's 4-game average represents all games.
AMD is the one who said the 7600X is "5% faster on average in gaming" than the 12900K.

Bold-faced lie. Make that quote make sense to me.

You can't.

Only a fool would think the 9950X comes anywhere near 13% after the BS they pulled. 3-4% at best.
 

Agent_4Seven

Tears of Nintendo
Depending on the price and availability, I will probably buy an X800 or X800E mobo and 64GB of DDR5 RAM, and wait for the 9800X3D. The 8700K can still deliver a locked 30FPS at 4K, so no rush. Plus... I won't be able to buy the new CPU as well anyway.
 
Last edited:

winjer

Gold Member
AMD is the one who said the 7600X is "5% faster on average in gaming" than the 12900K.

Bold-faced lie. Make that quote make sense to me.

You can't.

Only a fool would think the 9950X comes anywhere near 13% after the BS they pulled. 3-4% at best.

AMD showed 4 games with the respective performance percentages.
You are the one making up that they promised a 5% advantage in all games.
 

Leonidas

AMD's Dogma: ARyzen (No Intel inside)
AMD showed 4 games with the respective performance percentages.
You are the one making up that they promised a 5% advantage in all games.
5 games in the 7600X example, along with the "5% faster on average" quote. It's a bold-faced lie.

You don't put the words "5% faster on average" on a slide unless you mean it.
 

winjer

Gold Member
5 games in the 7600X example, along with the "5% faster on average" quote. It's a bold-faced lie.

You don't put the words "5% faster on average" on a slide unless you mean it.

I was wondering why you always go for those reddit tables with aggregate results.
But then I started looking at some of the sites that were in that list. And there are several red flags.

For example, Sweclockers is using different memory for Zen4 and Alder Lake.
For Zen4 they use DDR5 6000 with higher latency timings of 40-40-40-82, while on Alder Lake they use DDR5 6000 with significantly lower timings of 30-38-38-96.
So no wonder they are one of the sites that show Zen4 losing the most.

A similar thing happens with the test from ComputerBase, where they use DDR5 5600 CL32 for the 13th Gen CPUs, but DDR5 5200 CL32 for Zen4.
So once again, it's the use of slower memory on Zen4 that creates such a performance difference.

The reality is that when using identical memory for Raptor Lake and Zen4, gaming results are much closer.
Now I understand why you use such aggregate tables: you can mix reviews that use slower memory for Zen4 with reviews that do a proper test with identical memory for both systems.
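
To put those timing gaps in absolute terms, first-word CAS latency is roughly CL × 2000 / (data rate in MT/s) nanoseconds; a quick back-of-the-envelope using only the kits listed above:

```python
# Approximate CAS latency in nanoseconds: CL * 2000 / data rate (MT/s).
def cas_ns(cl: int, mts: int) -> float:
    return cl * 2000 / mts

print(f"Sweclockers, Zen4:       DDR5-6000 CL40 = {cas_ns(40, 6000):.1f} ns")  # ~13.3
print(f"Sweclockers, Alder Lake: DDR5-6000 CL30 = {cas_ns(30, 6000):.1f} ns")  # 10.0
print(f"ComputerBase, Zen4:      DDR5-5200 CL32 = {cas_ns(32, 5200):.1f} ns")  # ~12.3
print(f"ComputerBase, 13th Gen:  DDR5-5600 CL32 = {cas_ns(32, 5600):.1f} ns")  # ~11.4
```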
 

Leonidas

AMD's Dogma: ARyzen (No Intel inside)
I was wondering why you always go for those reddit tables with aggregate results.
But then I started looking at some of the sites that were in that list. And there are several red flags.

For example, Sweclockers is using different memory for Zen4 and Alder Lake.
For Zen4 they use DDR5 6000 with higher latency timings of 40-40-40-82, while on Alder Lake they use DDR5 6000 with significantly lower timings of 30-38-38-96.
So no wonder they are one of the sites that show Zen4 losing the most.

A similar thing happens with the test from ComputerBase, where they use DDR5 5600 CL32 for the 13th Gen CPUs, but DDR5 5200 CL32 for Zen4.
So once again, it's the use of slower memory on Zen4 that creates such a performance difference.

The reality is that when using identical memory for Raptor Lake and Zen4, gaming results are much closer.
Now I understand why you use such aggregate tables: you can mix reviews that use slower memory for Zen4 with reviews that do a proper test with identical memory for both systems.
Gotta love it. After being shown AMD misleading on multiple slides, now the goalposts shift to the aggregate meta-reviews making AMD look bad. You can't make this shit up.

AMD fanboy logic is insane.

Stop playing dumb, we all know some of these sites use RAM kits within the AMD/Intel spec. If AMD and Intel want the sites that limit speeds to spec to use faster RAM, they need to upgrade their DDR5 specs.

If AMD wants the same kits used as Intel, then AMD needs to match Intel's DDR5 spec.

DDR5 6000 was a bigger overclock on AMD, since AMD's spec was lower, and AMD and fanboys such as you always want to use that speed, since it's what AMD uses, and it benefits AMD and you sick fanboys.
 
Last edited:

winjer

Gold Member
Gotta love it. After being shown AMD misleading on multiple slides, now aggregate meta-reviews make AMD look bad. You can't make this shit up.

AMD fanboy logic is insane.

Stop playing dumb, we all know some of these sites use RAM kits within the AMD/Intel spec. If AMD and Intel want the sites that limit speeds to spec to use faster RAM, they need to upgrade their DDR5 specs.

If AMD wants the same kits used as Intel, then AMD needs to match Intel's DDR5 spec.

DDR5 6000 is a bigger overclock on AMD, since AMD's spec was lower.

Strange that just a few hours ago you were complaining that AMD should have used DDR5 faster than 6000 with the 14900KS, which is beyond the Intel spec.
But now you claim that sticking to the memory spec is the way to go. You are so biased, it is very impressive.
And you forget the sites that use higher-latency RAM for AMD systems, something there is no spec for.

The reality is that everyone knows that for a benchmark to be fair, systems should be configured as closely as possible.
That means using the same memory, the same SSD, the same Windows version, the same GPU, the same drivers, etc.
 

Leonidas

AMD's Dogma: ARyzen (No Intel inside)
Strange that just a few hours ago you were complaining that AMD should have used DDR5 faster than 6000 with the 14900KS, which is beyond the Intel spec.
But now you claim that sticking to the memory spec is the way to go. You are so biased, it is very impressive.
And you forget the sites that use higher-latency RAM for AMD systems, something there is no spec for.

The reality is that everyone knows that for a benchmark to be fair, systems should be configured as closely as possible.
That means using the same memory, the same SSD, the same Windows version, the same GPU, the same drivers, etc.
The main complaint I have is the use of the 7900 XTX; that is what really throws off the results in their cherry-picked titles. I really don't care what RAM they use, within reason, as that is not going to be the reason Intel falls that far behind.

But the choice of the 7900 XTX is an abysmal one, as it makes one CPU faster than the other. The 14900K would have won some of those benchmarks if a 4090 had been used, but AMD needed to fool people like you into thinking they have a shot at being somewhat faster than a 14900K... it worked. Congrats AMD, you fooled lots of AMD fanboys today :messenger_sun:
 

winjer

Gold Member
The main complaint I have is the use of the 7900 XTX; that is what really throws off the results in their cherry-picked titles. I really don't care what RAM they use, within reason, as that is not going to be the reason Intel falls that far behind.

But the choice of the 7900 XTX is an abysmal one, as it makes one CPU faster than the other. The 14900K would have won some of those benchmarks if a 4090 had been used, but AMD needed to fool people like you into thinking they have a shot at being somewhat faster than a 14900K... it worked. Congrats AMD, you fooled lots of AMD fanboys today :messenger_sun:

Do you have any proof that Radeons run faster on Zen CPUs than on Intel CPUs?
You have already made dozens of false claims on this thread alone.
And every time someone asks for proof, you either don't provide any, run dodgy math, post biased benchmarks, or lie outright.
 

Leonidas

AMD's Dogma: ARyzen (No Intel inside)
Do you have any proof that Radeons run faster on Zen CPUs than on Intel CPUs?
You have already made dozens of false claims on this thread alone.
And every time someone asks for proof, you either don't provide any, run dodgy math, post biased benchmarks, or lie outright.
I do, you'll find it if you keep looking.

I don't lie about these things. I'm not going to make up bullshit excuses, unlike some.

In one game Intel would have gained ~5%, and in another Intel would have gained ~21%. People like you don't deserve to see the data, since you'd just keep twisting the facts.

I'd be willing to put a ban bet on the line right now; that's how confident I am that Zen5 won't even be 4% faster than the 14900K on average, let alone that BS 13% they hoodwinked some of you people with.
 

OverHeat

« generous god »
The main complaint I have is the use of the 7900 XTX; that is what really throws off the results in their cherry-picked titles. I really don't care what RAM they use, within reason, as that is not going to be the reason Intel falls that far behind.

But the choice of the 7900 XTX is an abysmal one, as it makes one CPU faster than the other. The 14900K would have won some of those benchmarks if a 4090 had been used, but AMD needed to fool people like you into thinking they have a shot at being somewhat faster than a 14900K... it worked. Congrats AMD, you fooled lots of AMD fanboys today :messenger_sun:
[reaction GIF]
 

hinch7

Member
Pretty decent, if just a moderate jump. Not what I was expecting for gaming. Oh well, more reason to wait for Zen 6, which will be a node jump or two from my current CPU.
 

Bojji

Member
The main complaint I have is the use of the 7900 XTX; that is what really throws off the results in their cherry-picked titles. I really don't care what RAM they use, within reason, as that is not going to be the reason Intel falls that far behind.

But the choice of the 7900 XTX is an abysmal one, as it makes one CPU faster than the other. The 14900K would have won some of those benchmarks if a 4090 had been used, but AMD needed to fool people like you into thinking they have a shot at being somewhat faster than a 14900K... it worked. Congrats AMD, you fooled lots of AMD fanboys today :messenger_sun:

Wouldn't a faster GPU show an even bigger difference in CPU power?

I have never seen anything about Radeon cards somehow making AMD CPUs perform better. I know AMD has lower CPU overhead (vs Nvidia) in DX12/Vulkan, but that is true on Intel CPUs as well.
 
Last edited:

Leonidas

AMD's Dogma: ARyzen (No Intel inside)
Wouldn't a faster GPU show an even bigger difference in CPU power?

I have never seen anything about Radeon cards somehow making AMD CPUs perform better. I know AMD has lower CPU overhead (vs Nvidia) in DX12/Vulkan, but that is true on Intel CPUs as well.
Imagine this.

Game 1.

AMD CPU
7900 XTX 250 FPS
RTX 4090 250 FPS

Intel CPU
7900 XTX 225 FPS
RTX 4090 275 FPS

This happens in one of the games they tested, and with a huge margin like this. Yes, a huge 20%+ swing.

In another cherry-picked game a similar thing happened, but the margin was much smaller, only a ~5% swing; still enough to erase another of AMD's "wins" in their cherry pick.

That alone would have decimated AMD's fake "13% average", since they handpicked only 6 games.

Can't find enough data for the other 4 games, sadly.
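
Taking the hypothetical Game 1 numbers above at face value, the swing is easy to quantify; a quick check (these FPS values are the imagined ones from the post, not measurements):

```python
# Hypothetical Game 1 FPS from the post above (illustrative, not measured data).
amd_cpu   = {"7900 XTX": 250, "RTX 4090": 250}
intel_cpu = {"7900 XTX": 225, "RTX 4090": 275}

for gpu in amd_cpu:
    swing = (intel_cpu[gpu] / amd_cpu[gpu] - 1) * 100
    print(f"{gpu}: Intel CPU vs AMD CPU = {swing:+.1f}%")

# 7900 XTX: -10.0%, RTX 4090: +10.0% -> in this scenario the GPU choice alone
# flips the CPU comparison by roughly 20 percentage points.
```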
 
Last edited:

peish

Member
65W for an 8c/16t 9700X is nice, very nice.

Hopefully that means the market for those god-awful AIO liquid coolers will start to go away.

Give me a nice, aesthetically pleasing air cooler any day of the week.

AMD capped the all-core effective clocks to reach 65W. It's a trick!

[images: all-core effective clock screenshots]
 

winjer

Gold Member
AMD capped the all-core effective clocks to reach 65W. It's a trick!

[images: all-core effective clock screenshots]

Do you realize that all modern CPUs have base and boost clocks, be it from Intel, AMD, ARM, Apple, etc.?
The base clock is just the lower limit of the range at which they operate, for low-load applications.
The TDP only comes into play in regard to boost clocks, as that is when the CPU is pushed harder.
 

peish

Member
Do you realize that all modern CPUs have base and boost clocks, be it from Intel, AMD, ARM, Apple, etc.?
The base clock is just the lower limit of the range at which they operate, for low-load applications.
The TDP only comes into play in regard to boost clocks, as that is when the CPU is pushed harder.

This is wrong. Base clock here means under load, as per AMD's "i" tooltip.
It is possible the 9700X may not exceed the 7700X when you run a heavy app, if you keep both at their stock configs.
 

JohnnyFootball

GerAlt-Right. Ciriously.
A 9700X with a 65W TDP... that alone destroys anything from Intel.

If Intel could get their power consumption down to THAT with minimal loss of performance, that would be legit impressive.

At the moment, I will hold on to my 7800X3D for a while longer. I see no reason to go with a 9700X unless the performance is drastically better.


The one thing I wouldn't expect from the 9000-series CPUs is the same undervolting improvements we got from the 7000 series. I suspect AMD put greater effort into fine-tuning the power values.
 

winjer

Gold Member
This is wrong. Base clock here means under load, as per AMD's "i" tooltip.
It is possible the 9700X may not exceed the 7700X when you run a heavy app, if you keep both at their stock configs.

The TDP refers to the amount of heat a CPU dissipates, on average, under load over time, and this has some correlation to how much power is drawn.
So the CPU has several budgets that determine the clock it can run at any given time. The base clock is, as the name implies, the base clock for when the cores are under load.
The boost clock refers to the maximum clock speed under load.
When a core is idle, the CPU can park it to save power.
On AMD, the maximum amount of power is determined by these budgets: PPT, TDC and EDC (Package Power Tracking, Thermal Design Current and Electrical Design Current).
Another thing AMD, Intel and ARM do is have different boost clocks depending on how many cores are being used.
For example, the 5800X3D has a boost clock of 4.55GHz in single-threaded loads, but a 4.45GHz boost clock in multithreaded use.

But in practice, the base clock is rarely used when a CPU is under load.
Here is the clock profile of a 7700X, with scaling by the number of cores in use.
As you can see, despite the base clock of 4.7GHz, the CPU never drops to that speed, even with a heavy load on all cores.

[chart: 7700X boost clock analysis]


And Intel does a similar thing with their base and boost clocks.
Here is an example with a 13600K, which has a base clock of 3.5GHz but never goes near it.
[chart: 13600K boost clock analysis]
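
As a toy illustration of why the base clock is a floor rather than a typical operating point: the sketch below picks the highest clock whose estimated power fits a budget. Every constant in it (the V/F curve, the power coefficient, the 142W limit standing in for a 7700X-class PPT) is invented for illustration; this is not AMD's actual boost algorithm:

```python
# Toy boost model: choose the highest clock whose estimated package power fits
# the limit. All constants are made up; real CPUs juggle PPT/TDC/EDC,
# temperature, and per-core silicon quality, not this simple formula.
def max_boost(cores_loaded: int, power_limit_w: float,
              base_ghz: float = 4.7, boost_ghz: float = 5.4) -> float:
    f = boost_ghz
    while f > base_ghz:
        volts = 0.8 + 0.1 * f                   # crude voltage/frequency curve
        watts = 2.2 * cores_loaded * f * volts ** 2
        if watts <= power_limit_w:
            return f
        f -= 0.025                              # step down one 25MHz bin
    return base_ghz                             # the floor, rarely reached

for n in (1, 4, 8):
    print(f"{n} cores loaded: ~{max_boost(n, 142.0):.3f} GHz")
# Even with all 8 cores loaded, the model settles well above the base clock.
```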
 

peish

Member
This is the 65W 7700: a loss of 100-200MHz. Doing my expert maths, a 9700X running all cores at a 65W TDP gives us around 4.6GHz. Will the IPC make up the 700-800MHz loss against the 7700X? hehe

[chart: 7700 boost clock analysis]
 
Last edited:

winjer

Gold Member
This is the 65W 7700: a loss of 100-200MHz. Doing my expert maths, a 9700X running all cores at a 65W TDP gives us around 4.6GHz. Will the IPC make up the 700-800MHz loss against the 7700? hehe

[chart: 7700 boost clock analysis]

IPC means Instructions Per Clock.
It is a normalized value for how much work a CPU can do per core, per clock cycle.
It is not a reference to clock speeds. That is why, when reviewers do IPC tests, they lock all CPU clocks to one value.

All modern CPUs lose clock speed when they are under heavy load on all cores. That is what they are designed to do.
And as you can see, even with a TDP of 65W, the 7700 never falls to its base clock of 4.3GHz. It always stays closer to the boost clock.

And your math is wrong, mostly because Zen4 and Zen5 use different process nodes, with different power and clock characteristics.
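
peish's question has a concrete shape, though: per-core throughput is roughly IPC × clock. A quick hypothetical using peish's own worst-case clock numbers and an assumed mid-teens IPC uplift (both are assumptions, not measurements):

```python
# Per-core throughput scales roughly as IPC * clock for the same workload.
zen4_clock = 5.4    # GHz, 7700X-style all-core boost (assumed)
zen5_clock = 4.6    # GHz, peish's pessimistic 9700X estimate
ipc_uplift = 1.16   # hypothetical Zen5-over-Zen4 IPC gain

relative = (ipc_uplift * zen5_clock) / (1.0 * zen4_clock)
print(f"Zen5 @ {zen5_clock} GHz vs Zen4 @ {zen4_clock} GHz: {relative:.2f}x")
# ~0.99x: a mid-teens IPC gain roughly cancels a ~15% clock deficit, which is
# why the answer depends on the real clocks and the real IPC, not base specs.
```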
 
IPC means Instructions Per Clock.
It is a normalized value for how much work a CPU can do per core, per clock cycle.
It is not a reference to clock speeds. That is why, when reviewers do IPC tests, they lock all CPU clocks to one value.

All modern CPUs lose clock speed when they are under heavy load on all cores. That is what they are designed to do.
And as you can see, even with a TDP of 65W, the 7700 never falls to its base clock of 4.3GHz. It always stays closer to the boost clock.

And your math is wrong, mostly because Zen4 and Zen5 use different process nodes, with different power and clock characteristics.

Pretty much this. According to the box, my 5800X3D's base clock is 3.4GHz, but it never runs that low. Even when all threads are maxed and the CPU is hitting around 120W during a shader compilation, it will still sit at 4.25GHz.
 

Zathalus

Member
Gotta love it. After being shown AMD misleading on multiple slides, now the goalposts shift to the aggregate meta-reviews making AMD look bad. You can't make this shit up.

AMD fanboy logic is insane.

Stop playing dumb, we all know some of these sites use RAM kits within the AMD/Intel spec. If AMD and Intel want the sites that limit speeds to spec to use faster RAM, they need to upgrade their DDR5 specs.

If AMD wants the same kits used as Intel, then AMD needs to match Intel's DDR5 spec.

DDR5 6000 was a bigger overclock on AMD, since AMD's spec was lower, and AMD and fanboys such as you always want to use that speed, since it's what AMD uses, and it benefits AMD and you sick fanboys.
I sincerely hope you are trolling at this point, because nobody can be this oblivious to their own bias.
 

Roni

Member
If AMD's commitment to standardization and efficiency can lead to similar results in the GPU side of things, they have a bright future ahead of them.
 

YCoCg

Gold Member
Very impressive wattage-wise. Intel can only dream of getting their CPUs back to this level; hope it pushes them to do SOMETHING though.
 