
Are "generational" leaps becoming a thing of the past? AI upscaling and frame generation are the future

Shaki12345

Member
Devs are focusing on the wrong things and are trying to sell games based on how they look. The top 10 charts show that the most played games are mostly older games where graphics are secondary to gameplay.

Games look pretty enough, but mission design and AI haven't progressed at all. Put some power into that.

I don't care if my gf is hot as F when she is dead on the inside.
 
Last edited:

Azelover

Titanic was called the Ship of Dreams, and it was. It really was.
You forgot to mention the big one, which is diminishing returns.

Even if we do figure out a way to keep Moore's law going in some fashion, it will quickly become irrelevant because you won't be able to see the improvements with your eyes.

I think we need to make peace with the fact that we need something other than graphics to carry the industry forward. The graphics race is quickly coming to an end. We need to accept it and look to other things that can improve the experience.
 

Danjin44

The nicest person on this forum


I personally care faaaaaaaar more for good art direction rather than raw graphics.
 
I think developers need to use better, less resource-intensive engines and start developing their own again. We've seen some amazing detail out of the Source engine, especially in VR. I'm not sure why more devs don't utilize it.
 
Last edited:

Myths

Member
How many times are we going to have the same topic on this matter? It was already made obvious in various threads that efficiency is what's prioritized. If optimization yields diminishing returns for the business relative to the time they could take to get a product out the door, then it will be offloaded to AI accordingly. Likewise, if engineering semiconductors currently comes with more complex challenges (a cost-benefit analysis), then the obstacles therein will be better addressed by leveraging more intelligent design theory; this is where AI steps in.

It's highly accurate and precise based on the data it is fed, and the whining tends to be about perceptual issues rather than statistical ones. The only people who think otherwise are stuck in primitive, technologically conservative paradigms where they believe a bigger stone club is better.
 
Last edited:
Call me loony, but I think something happened when they introduced ray tracing. Up until that point, better hardware went hand in hand with huge leaps. Suddenly Nvidia came around the corner with this, let's call it, effect that demanded an INSANE amount of hardware for no obvious reason, and ever since, hardware demands have gone up through the roof without the games even looking that much better... and every time you raise an eyebrow it's like... but... but... but... RAY TRACING!!!!

Biggest fluke in gaming technology! There, I said it... Come at me!
Yes, you're kinda close. What happened is that Moore's law and Dennard scaling died, so it became unsustainable to shrink transistors and achieve the same power and cost reductions we used to get. Now every next-generation node is drastically more expensive than the last. Take a look at this image showing TSMC's wafer prices:

[image: TSMC wafer prices by node]


We were seeing the effects of this before, but it really started to become unsustainable at 7nm, which is what the PS5, Xbox Series, and AMD RDNA 1 and 2 use. What's happening now is that each node is becoming more expensive to make chips on, so technology is getting more expensive instead of cheaper, as it used to. This is why the PS5 has not dropped in price, when at this time last gen we already had the PS4 Slim. 2nm is expected to cost $30k+ per wafer; that's pure insanity. GPUs and consoles made on 2nm will be faster, but they will be very expensive.
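To put those wafer prices in perspective, here's a rough back-of-the-envelope sketch of why per-chip cost explodes with node price. The die size, yield, and older-node price are illustrative assumptions (only the $30k 2nm figure comes from the post above); the dies-per-wafer formula is the classic area-minus-edge-loss approximation:

```python
import math

def gross_dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Approximate dies per wafer: usable wafer area divided by die area,
    minus a correction for partial dies lost at the wafer edge."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(wafer_price_usd: float, wafer_diameter_mm: float,
                      die_area_mm2: float, yield_rate: float) -> float:
    """Wafer price spread over the dies that actually work."""
    good_dies = gross_dies_per_wafer(wafer_diameter_mm, die_area_mm2) * yield_rate
    return wafer_price_usd / good_dies

# Hypothetical 100 mm^2 GPU die on a standard 300 mm wafer, 80% yield:
new_node = cost_per_good_die(30_000, 300, 100, 0.8)  # $30k 2nm-class wafer
old_node = cost_per_good_die(10_000, 300, 100, 0.8)  # made-up cheaper older node
```

Even with identical yield and die size, tripling the wafer price triples the silicon cost per chip, which is the core of the "faster but not cheaper" problem described above.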
 

E-Cat

Member
For rasterization, yes. I think it's going to be all ray tracing leaps and AI for the rest of the life of silicon-based transistors. We could live to see a change past that, but it's too early to tell.
Whatever the next paradigm beyond silicon transistors turns out to be, it will probably be designed by AI, too.
 
Yes, you're kinda close. What happened is that Moore's law and Dennard scaling died, so it became unsustainable to shrink transistors and achieve the same power and cost reductions we used to get. Now every next-generation node is drastically more expensive than the last. Take a look at this image showing TSMC's wafer prices:

[image: TSMC wafer prices by node]


We were seeing the effects of this before, but it really started to become unsustainable at 7nm, which is what the PS5, Xbox Series, and AMD RDNA 1 and 2 use. What's happening now is that each node is becoming more expensive to make chips on, so technology is getting more expensive instead of cheaper, as it used to. This is why the PS5 has not dropped in price, when at this time last gen we already had the PS4 Slim. 2nm is expected to cost $30k+ per wafer; that's pure insanity. GPUs and consoles made on 2nm will be faster, but they will be very expensive.
That doesn't explain, though, why the hardware is so much better today than, say, 10 years ago, while the quality difference is anything but obvious.
 

yurinka

Member
Nah, generational leaps will still happen. New generational leaps will be needed to make major improvements in RT/RTGI and real-time AI applied to games, not only for upscaling or extra frame generation, which will also improve with generational leaps.

Stuff like tweaking/improving textures, models, animations, lighting, shadows, reflections, NPC conversations, etc. in real time via AI will also be introduced and improved over time across different generational leaps.
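As a toy illustration of what "extra frame generation" means at its most basic: the sketch below just linearly blends two rendered frames to fake an in-between one (all names and the 2x2 "frames" are hypothetical). Real frame generation in DLSS 3 or FSR 3 instead warps pixels along motion vectors with learned models, precisely because a naive blend ghosts on motion:

```python
import numpy as np

def naive_interpolated_frame(frame_a: np.ndarray, frame_b: np.ndarray,
                             t: float = 0.5) -> np.ndarray:
    """Generate a synthetic in-between frame by linear blending.
    At t=0.5 this is a plain 50/50 average of the two input frames."""
    return (1.0 - t) * frame_a + t * frame_b

# Two tiny 2x2 grayscale "frames": all-black, then all-100.
a = np.zeros((2, 2))
b = np.full((2, 2), 100.0)
mid = naive_interpolated_frame(a, b)  # every pixel ends up at 50.0
```

The gap between this blend and production-quality generated frames is exactly where the per-generation hardware (optical flow units, tensor cores) comes in.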
 
Last edited:

Durin

Member
We've definitely hit diminishing returns on rasterization improvements, and on how much more detail the average person's eyes will even notice.

I don't think generational leaps have to go away, but it's going to be some other kind of technology that improves fidelity in areas beyond just higher-res textures, or maybe stuff like caustics and fluid simulation.

Either way, the two benefits are that it forces more inventive presentation than just banking on muh realism, and the diminishing returns benefit devices like the Switch 2 and the inevitable next Steam Deck, which will have PS4-to-PS4 Pro tier visuals on the go; that will be enough for most people.
 

Gamer79

Predicts the worst decade for Sony starting 2022
I think developers need to use better, less resource-intensive engines and start developing their own again. We've seen some amazing detail out of the Source engine, especially in VR. I'm not sure why more devs don't utilize it.
I agree with that, but it's about money. Developing a custom game engine costs tens of millions of dollars. They find it cheaper to just license something like Unreal 4 or 5.
 
I agree with that, but it's about money. Developing a custom game engine costs tens of millions of dollars. They find it cheaper to just license something like Unreal 4 or 5.
Which is why there should be more ambitious indie devs. There are some, but there need to be more of them. The industry was founded on innovation and optimization, and over the years greed has gotten in the way. It's sad. Unity and UE need to be used less, or used properly, which is also an issue.
 
Last edited:
You forgot to mention the big one, which is diminishing returns.

Even if we do figure out a way to keep Moore's law going in some fashion, it will quickly become irrelevant because you won't be able to see the improvements with your eyes.

I think we need to make peace with the fact that we need something other than graphics to carry the industry forward. The graphics race is quickly coming to an end. We need to accept it and look to other things that can improve the experience.
Until full RT is available on consoles, I'd never presume that the "graphics race is coming to an end"... we have a ways to go.
 