> Does Alex address the horrible ghosting that is omnipresent in the first 5 minutes? Or is he too busy gushing about ray reconstruction for that?

He addresses things accordingly. It's without a doubt a one-of-a-kind visual experience, and it's easy to say nothing looks as good or pushes our HW as much as this.
> I don't get how AMD aren't ashamed of themselves at this point. Aren't they seeing all this?
> 2028 is still a ways off, but if by 2026 AMD hasn't matched Nvidia with the AI tech in their GPUs and all the stuff that brings, I really think Sony and MS should consider switching GPU vendors. As hard as that may be, it would be worth it.

It's amazing to think that Intel of all people have better AI acceleration on GPUs. As well as a better upscaler than AMD in the form of XeSS. Not to mention better RT acceleration as well.
> Does Alex address the horrible ghosting that is omnipresent in the first 5 minutes? Or is he too busy gushing about ray reconstruction for that?

Yes, later on in the video. I also noticed that the reflections on a rainy street were a bit too clear and sharp considering the mixture of pavement and rainwater, and he addresses that issue too.
fake frames, fake rays, what next? soon we will just be playing figments of our imagination thanks to neurolink
> fake frames, fake rays, what next? soon we will just be playing figments of our imagination thanks to neurolink

To me it is no different than this:
If the end result of this tech gives us the IQ that we want or close to that, it's a net positive.
As long as Nvidia is locking their software behind their latest cards, they can go fuck themselves.
As a 3000 series owner I'm stuck on DLSS 2 and I'm sure DLSS 4 will be locked behind their 5000 series.
> I don't get how AMD aren't ashamed of themselves at this point. Aren't they seeing all this?
> 2028 is still a ways off, but if by 2026 AMD hasn't matched Nvidia with the AI tech in their GPUs and all the stuff that brings, I really think Sony and MS should consider switching GPU vendors. As hard as that may be, it would be worth it.

Or they should have their engineering teams help AMD where they are lacking. Instead of doing a copy pasta, maybe do some actual R&D for a change.
> As long as Nvidia is locking their software behind their latest cards, they can go fuck themselves.
> As a 3000 series owner I'm stuck on DLSS 2 and I'm sure DLSS 4 will be locked behind their 5000 series.

You get DLSS 3 on 3000 series cards, just not frame gen.
Oh I'm not saying it's a bad thing, gotta make up for Nvidia's laziness and money-grabbing scumlordery somehow.
> wrong
> you are on DLSS 3.5 with RR, the only thing missing is frame generation

And why no frame generation?
> Or they should have their engineering teams help AMD where they are lacking. Instead of doing a copy pasta, maybe do some actual R&D for a change.

There's no way AMD wouldn't improve things on their end to match this... right?
All I heard leading up to the console reveal was how both Sony and MS had custom RT solutions. Turns out they just took what AMD gave them. Going with Nvidia won't change things because Nvidia creates massive GPUs that will be too expensive for consoles. Those dedicated RT cores and tensor cores take up space on the die.
> I don't get how AMD aren't ashamed of themselves at this point. Aren't they seeing all this?
> 2028 is still a ways off, but if by 2026 AMD hasn't matched Nvidia with the AI tech in their GPUs and all the stuff that brings, I really think Sony and MS should consider switching GPU vendors. As hard as that may be, it would be worth it.

You can't just slap in an "AI Tech" and suddenly match Nvidia. AMD were caught with their pants down regarding both resolution upscaling and ray tracing; they are playing catch-up to an Nvidia that has been investing heavily in both of those technologies.
> And why no frame generation?

Nvidia claims the Optical Flow Accelerator inside Ampere GPUs is not fast enough. I guess it would work but would deliver less of a speedup. Of course Nvidia wants you to buy their latest GPUs.
> Or they should have their engineering teams help AMD where they are lacking. Instead of doing a copy pasta, maybe do some actual R&D for a change.

Rich said in the newest DF Direct that he would hope MS would go with Nvidia for next gen. It is not impossible but would be pricey of course.
> I don't get how AMD aren't ashamed of themselves at this point. Aren't they seeing all this?
> 2028 is still a ways off, but if by 2026 AMD hasn't matched Nvidia with the AI tech in their GPUs and all the stuff that brings, I really think Sony and MS should consider switching GPU vendors. As hard as that may be, it would be worth it.

Why should they? There are only 2 current games with path tracing, and when Nvidia launched Turing, they stated it would be 2024 before ray tracing was going to be impactful.
> It really is remarkable that Intel came along with better tech on their very first try while AMD has been doing this for years and years.

Intel has been making GPUs for 20+ years in the mobile space; it's not like one day Intel made a desktop GPU for the first time.
> fake frames, fake rays, what next? soon we will just be playing figments of our imagination thanks to neurolink

No faker than shadows or coloured light bounce. It's all just magic tricks coming together in a more or less believable image.
Tried this with my 4090 and the difference is indeed HUGE, this is a big fucking deal. The game finally really looks like true 4K. The smear/blur is gone. Outstanding job from Nvidia and CDPR.
> soon we will just be playing figments of our imagination...

That's called a tabletop RPG, which Cyberpunk is actually based on.
Night and day difference with path tracing plus Ray Reconstruction versus normal RT with current denoisers. So much detail lost and regained with RR. You could say Nvidia fixed RT to how it should be with 3.5.
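For anyone wondering what "current denoisers" are doing under the hood: conventional RT denoising leans heavily on temporal accumulation, blending each noisy frame into a history buffer. A bare-bones sketch of that idea (my own toy code, not Nvidia's or CDPR's):

```python
import numpy as np

# Minimal temporal-accumulation "denoiser": blend each new noisy ray-traced
# frame into a history buffer. Hand-tuned filtering of this sort is roughly
# the class of denoiser that Ray Reconstruction replaces with a learned model
# (toy sketch only, not real code from any vendor).
rng = np.random.default_rng(0)
truth = np.full((8, 8), 0.5, dtype=np.float32)   # the "correct" lighting value
history = np.zeros_like(truth)                   # accumulated result
alpha = 0.1                                      # weight given to each new frame

for frame in range(60):
    noisy = truth + rng.normal(0.0, 0.2, truth.shape).astype(np.float32)
    history = (1.0 - alpha) * history + alpha * noisy

print("mean abs error after 60 frames:", float(np.abs(history - truth).mean()))
```

The catch is that the same history blending that smooths out noise also drags old data into the current frame, which is where ghosting and lost fine detail come from as soon as anything moves.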
Tbh, AMD has their work cut out for them in the years to come. Hopefully they can get all this sorted by the time RDNA 5 is out. But for now people can enjoy PT and ray tracing in all its glory with a moderately powerful RTX 4000 series card. Can't wait to see how Nvidia advances this tech going into next generation, and sorts out PT performance so it's actually a viable option to switch on with a mainstream card.
I wonder if we will ever get another bleeding edge leap like this from CDPR since they are switching to Unreal 5.
What even is the next game in the near future that pushes visuals this hard? Alan Wake 2 is the only thing that comes to mind.
> Very sad that they are switching to Unreal 5 after the initial backlash of Cyberpunk 2077. Game has no stutter, has cutting edge tech. I know internal tools to make it happen might have been development hell, but going to Unreal 5 is gonna suck, inevitably; what we're seeing so far is the cost of graphics ain't even worth the visuals. Cyberpunk 2077 overdrive performs better and scales better than, say, Immortals of Aveum. A freaking open world megacity full of details vs a damn linear shooter.

Immortals of Aveum was made by a team a 10th of the size, and was originally built in UE4. And we already have path tracing on UE5 with Desordre.
> You can't just slap in an "AI Tech" and suddenly match Nvidia. AMD were caught with their pants down regarding both resolution upscaling and ray tracing; they are playing catch-up to an Nvidia that has been investing heavily in both of those technologies.

This is not true. `AI tech` is basically matrix math and low-precision (FP4/6/8) operations. Once you have the hardware, you can accelerate AI operations. Just look at Intel: on their very first attempt at a dedicated GPU, they already have AI on par with DLSS, and RT too. At this point, AMD needs to shamelessly copy Nvidia and Intel and make sure they at least have hardware parity; at the very least, that is what they should target. As it stands, their GPUs are lacking proper AI acceleration and the RT cores aren't even accelerating the full RT pipeline (that's why AMD RT is so bad).
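To make the "it's basically matrix math" point concrete, here's a toy sketch (mine, not any vendor's code) of the kind of low-precision multiply-accumulate work that tensor cores, XMX units and the like are built for. One layer of an upscaler-style network boils down to a big matrix multiply at reduced precision:

```python
import numpy as np

# One layer of an upscaler-style network is essentially a matrix multiply
# (GEMM). Dedicated AI hardware exists to run exactly this, usually at
# reduced precision. Toy illustration only.
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 256)).astype(np.float32)    # features for one pixel tile
W = rng.standard_normal((256, 256)).astype(np.float32)  # layer weights

y_fp32 = x @ W  # full-precision reference

def quantize(a):
    """Crude symmetric int8 quantization, the sort of low-precision format
    these units chew through quickly."""
    scale = np.abs(a).max() / 127.0
    return np.round(a / scale).astype(np.int8), scale

xq, sx = quantize(x)
Wq, sw = quantize(W)
y_int8 = (xq.astype(np.int32) @ Wq.astype(np.int32)) * (sx * sw)

print("max abs error vs fp32:", float(np.abs(y_fp32 - y_int8).max()))
```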
They are doing a decent job with hardware not really built specifically for RT.
The consoles won't be switching to Nvidia, margins are too tight for lord Jensen.
> Why should they? There are only 2 current games with path tracing, and when Nvidia launched Turing, they stated it would be 2024 before ray tracing was going to be impactful.

There is a gross embarrassment of lacking tech features between an AMD GPU vs an Nvidia or even Intel GPU. This is just the truth. The sad thing is, outside BC complications, even an Intel GPU built for consoles would be more performant than an AMD GPU.
RDNA 3 can accelerate AI workloads; it may not be as performant as Nvidia, but how much do you need for gaming at every level of resolution?
> Is it massive?

And this is the problem: everyone who tries to defend AMD, including AMD themselves, does this. Talk up their raster performance. The reason they are winning the raster battle is because everyone else has seen that it doesn't mean shit; they are increasing die area on more meaningful GPU features instead of just more raster performance.
A 4080 with a 379 mm^2 die competes with a 7900 XTX's 306 mm^2 GCD plus 6x 37 mm^2 MCDs.
I removed the MCDs on RDNA 3, which include the cache, just to showcase how stupid this architecture is. You're left with nearly a raw 306 mm^2 GCD of pure hybrid RT/ML, designed, as per AMD's patent, to optimize the area towards more rasterization.
- With 20~25% of silicon dedicated to RT/ML
- Without taking into account the memory controllers (how much do you want to knock off for those? 150~190 mm^2? They take space too)
- Without taking into account the huge cache upgrades Ada got. How much area, who knows, but cache is typically not cheap in area.
Yet we're talking about a 2~4% RASTERIZATION performance advantage for nearly 60W more power consumption on AMD's side.
I would say that's pretty fucking amazing what they did on Ada's architecture.
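Putting the numbers quoted above side by side (379 mm^2 for the 4080, a 306 mm^2 GCD plus six 37 mm^2 MCDs for the 7900 XTX, and the 2~4% raster delta claimed in this thread):

```python
# Quick area math using the figures from this thread; the 2~4% raster delta
# and ~60 W power gap are the claims above, not my own measurements.
ad103_mm2 = 379
navi31_gcd_mm2 = 306
navi31_mcds_mm2 = 6 * 37

navi31_total = navi31_gcd_mm2 + navi31_mcds_mm2      # total Navi 31 silicon
area_ratio = navi31_total / ad103_mm2                # total area vs the 4080
raster_advantage = 1.03                              # midpoint of the claimed 2~4%

print(f"7900 XTX total silicon: {navi31_total} mm^2 ({area_ratio:.2f}x a 4080)")
print(f"Raster performance per mm^2 vs the 4080: {raster_advantage / area_ratio:.2f}x")
```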
If I was a console manufacturer and I hesitated on Nvidia because of "reasons", monetary or APU-related, then I'd go Intel. They're on the fast track to jump AMD on their next iteration.
UE5 games already perform like shit before path tracing. I can't even find benchmarks of path tracing in that indie puzzle game. But we're a far cry from a game like Cyberpunk 2077, I think we can agree on that. ReSTIR PT for the thousands of lights present in Night City would make pretty much every other path tracing engine crawl. For UE5 you would have to import Nvidia plugins anyway to match this at the very least, not the native path tracing branch.
CDPR's Cyberpunk 2077 Overdrive engine is now soooo good. I would hope they fix their dev tooling to smooth things out, but the foundation of that engine is top tier now; in fact, there's nothing like it as of now, until another dev implements ReSTIR path tracing. Alan Wake 2 is next, with the Northlight engine.
Not impressed with UE5 so far. I'll just say that.
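For context on why ReSTIR copes with that many lights: the core of it is weighted reservoir resampling, which picks one good light candidate out of a huge pool while only ever holding a single sample. A toy sketch of that update (my own illustration of the textbook algorithm, not CDPR's or Nvidia's implementation):

```python
import random

# Streaming weighted reservoir sampling: keep one candidate light while
# scanning many, each kept with probability proportional to its weight.
class Reservoir:
    def __init__(self):
        self.sample = None   # currently kept candidate (a light index here)
        self.w_sum = 0.0     # running sum of resampling weights
        self.count = 0       # number of candidates seen

    def update(self, candidate, weight):
        self.count += 1
        self.w_sum += weight
        # Replace the kept candidate with probability weight / w_sum.
        if self.w_sum > 0 and random.random() < weight / self.w_sum:
            self.sample = candidate

# Toy scene: lots of lights, each with an intensity and a distance to the shaded point.
lights = [(random.uniform(0.1, 10.0), random.uniform(1.0, 50.0)) for _ in range(5000)]

res = Reservoir()
for i, (intensity, dist) in enumerate(lights):
    target = intensity / (dist * dist)   # unshadowed contribution, the "target pdf"
    source_pdf = 1.0 / len(lights)       # candidates drawn uniformly here
    res.update(i, target / source_pdf)   # standard RIS weight

print("chosen light:", res.sample, "out of", res.count)
```

The full technique then reuses these reservoirs across neighbouring pixels and previous frames, which is what makes thousands of lights tractable every frame.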
Let's not sugarcoat this... there have been two defining hardware features in GPUs in the last 6 years: AI and RT acceleration. That's it; those two hardware components are the difference between a last-gen GPU and a current-gen GPU. They represent a clear technological shift between everything that came after 2018 and everything that came before it. They are the single biggest advancements made in GPU tech since we started getting programmable shaders in the 2000s.
And yet, somehow... almost 6 years from their first appearance on the market, AMD doesn't even have full, or at least comparable, hardware for them? AMD is still fighting a raster/FP battle with who? Like we are still looking at a Vega 64. It's honestly embarrassing. There is absolutely no reason why a 6-year-old GPU from their rivals (the 2080 Ti) should perform better than a just-released 7800 XT in a current-gen game with modern graphical features. NO REASON that should be happening. And that just goes to show how far AMD is letting themselves lag behind.
"Good enough" or "not that bad" is not okay anymore.
The key things that define any current-gen game are things that, when fully utilized, would make the best AMD GPUs perform worse than 5-year-old GPUs.
Raster...smh, AMD is like a damn one-trick pony right now.
RDNA 2 → RDNA 3 = 1.5~1.6x
Turing → Ampere = 2.1~2.4x
Ampere → Ada = 1.8~1.9x (not including frame gen)
To catch up, AMD has no choice but to throw their hybrid RT pipeline in the garbage.
Even if RDNA 3 → 4 brings a 1.5x jump (not negligible) and RDNA 4 → 5 another 1.5x, it wouldn't catch up to the 4090 of today in this game. And they have a rasterization advantage as a baseline in Cyberpunk 2077, before anyone comes in screaming that it's Nvidia-biased; it's one of the better-performing AMD titles before enabling RT.
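Rough back-of-envelope on that catch-up claim. The generational jumps are the ones listed above; the current 4090-to-7900 XTX gap in this game's path tracing mode is my own assumption (roughly 3x at native), so plug in your own number:

```python
# Back-of-envelope check on the catch-up claim. The ~3x gap figure below is
# an assumption for illustration, not a measured result.
assumed_gap_today = 3.0        # 4090 / 7900 XTX, path tracing, no upscaling (assumption)
rdna4_jump = 1.5               # hypothetical RDNA 3 -> 4 uplift
rdna5_jump = 1.5               # hypothetical RDNA 4 -> 5 uplift

amd_after_two_gens = rdna4_jump * rdna5_jump   # compounded uplift over a 7900 XTX
print(f"Two 1.5x jumps: {amd_after_two_gens:.2f}x  vs  gap to today's 4090: {assumed_gap_today:.1f}x")
print("Catches today's 4090:", amd_after_two_gens >= assumed_gap_today)
```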
They're 2 gens behind for path tracing. A damn 2080 Ti matches the 7900 XTX flagship.
A Turing 2080 Ti which had:
- no concurrent RT & graphic workload
- way lower frequencies, 1545MHz vs 2499MHz clocks
- 18.6B vs 57.7B transistors
- 28% of the pixel rate, 43% of the texture rate, 22% of the TFLOPS (rough spec math below)
- 68 RT cores vs 96
- Virtually negligible cache compared to RDNA 3
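Rough spec math behind those percentages, using boost-clock spec-sheet throughput figures as I recall them (approximate, treat as assumptions):

```python
# Approximate spec-sheet throughput at boost clocks: (2080 Ti, 7900 XTX).
# These absolute figures are my assumptions; the ratios are what the list
# above quotes.
specs = {
    "pixel rate (GPixel/s)":   (136.0, 480.0),
    "texture rate (GTexel/s)": (420.0, 960.0),
    "FP32 (TFLOPS)":           (13.45, 61.4),
}
for name, (turing, rdna3) in specs.items():
    print(f"2080 Ti has {turing / rdna3:.0%} of the 7900 XTX's {name}")
```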
Let's not even get into ML to match DLSS 2, DLSS 3 frame gen and now 3.5 with Ray Reconstruction. They haven't even touched ML yet. Scary. I'm assuming the current pipeline already has its hands full juggling RT and graphics work; adding ML into the mix would choke it even further. Thus, I really think AMD needs to rethink the whole architecture. Do they swallow their pride and change, or do they dig their heels in and risk Intel coming along with a 2nd iteration that puts them in danger because it already has better RT & ML?
> It's amazing to think that Intel of all people have better AI acceleration on GPUs. As well as a better upscaler than AMD in the form of XeSS. Not to mention better RT acceleration as well.

Intel has some of the best engineers on their team but they lack better leadership and vision. They were stuck on 4-core CPUs for 5-6 years and only changed when AMD came up with a better solution.
Very, very impressive, it's crazy we have this kind of lighting in real time now. Someone gimme a 4090.
> The problem is that these modern movie scenes DO look fake. It's extremely impressive on a technical level but also mind-numbing. As soon as you see a trillion things going on at the same time you know it's CG and then my brain gets overstimulated and I lose all interest.

Unless you're watching recent Marvel movies or Netflix films, I bet you'd be genuinely surprised at the amount of non-blockbuster, down-to-earth films that use background CGI (static buildings, grass, skies, etc.) that you would not notice at all. It's way more than you think.
> Unless you're watching recent Marvel movies or Netflix films, I bet you'd be genuinely surprised at the amount of non-blockbuster, down-to-earth films that use background CGI (static buildings, grass, skies, etc.) that you would not notice at all. It's way more than you think.

Trust me, I know. I've stopped watching almost all new movies for a few years now. The only ones I can stand are low budget/indie movies and some international stuff.
> The only ones I can stand are low budget/indie movies and some international stuff.

Those are the ones I'm talking about.