
I want a next-gen 3D "pre-rendered" background game à la old RE.

Buggy Loop

Member
Or more precisely, with NeRF-like variants such as 3D Gaussian Splatting.

It's basically a new graphics pipeline that branches off the traditional pipeline tree all the way back at the first rasterization step, before shadows, cube maps, lighting techniques, etc.

The beauty of it is that it's not 2D: by taking videos of real life, this tech basically builds a 3D scene out of them. I doubt it would be functional for a huge game or a very complex map, which is why the thread title points back to the pre-rendered background games we know: small scenes that would still be 3D, but with the visual quality of real life.

This is a good explanation



Now, it's equivalent to a "cloud" of splats, so it currently breaks down when you don't have enough data, or if you zoom in or out too much. Hence the requirement for a pre-rendered-background type of game à la Resident Evil.

The tech is advancing so rapidly that training these scenes in the video above went from 48 hours with Mip-NeRF 360 a year ago to 6.1 minutes with the "Ours-7K" configuration a few months ago, cutting training time by a factor of roughly 470.
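Quick sanity check on that number (my arithmetic, not a figure from the paper):

$$ \frac{48\ \text{h}}{6.1\ \text{min}} = \frac{2880\ \text{min}}{6.1\ \text{min}} \approx 472 $$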

You can also get some crazy effects, like a scene burning out of and back into existence (Silent Hill :pie_thinking:)



Or your next Waifu

[images]


But it's static! Well, for now: there are already papers on making these scenes dynamic with neural networks. It's only a question of time before AI manipulates a cloud of splats the way it would animate a game character's skeleton.

Those clouds can keep shrinking as new algorithms speed things up even further, to the point where accuracy approaches microscopic detail, and AI can help push that further still.

Point is, it's a new branch of rendering and it's evolving super rapidly. It might reach photorealistic games faster than raw rendering techniques like path tracing and so on. Or maybe not 🤷‍♂️ It's part of the AI revolution, and it's very hard to predict where it'll be in just a few years.

Be it this tech (eventually) or photogrammetry, I can't be the only one who would love a modern RE with lifelike backgrounds? An indie dev will surely think of it; it's a matter of time.
 

mortal

Banned
So you want a photo-realistic game with fixed camera angles, essentially.

I wouldn't mind a new game with that sort of aesthetic if the premise were truly interesting. Although, the gameplay itself would need to offer something fresh.
Just being a graphical showcase and the novelty of looking like older games wouldn't be enough for me.
 

Buggy Loop

Member
So you want a photo-realistic game with fixed camera angles, essentially.

I wouldn't mind a new game with that sort of aesthetic if the premise were truly interesting. Although, the gameplay itself would need to offer something fresh.
Just being a graphical showcase and the novelty of looking like older games wouldn't be enough for me.

But games like Signalis are super well received even with PSX-like graphics and "old" gameplay.



I think a photo-realistic version of that would be amazing? Yea I think I want that.

It wouldn't be totally fixed camera angles either, as these 3D gaussian scenes are still 3D, but it would need enough data points and limits on how close you can zoom in, etc. It would basically have to be more restrained, like the old pre-rendered fixed-camera games, as I don't think the tech is ready for a complex map or a full-fledged open world, for now.
 
I've seen posts about that technique before (maybe by you), and it's super impressive.

I don't think big AAA studios are going to be the first to put out a game using it. They're too ingrained in the current processes they have. And too many jobs might become unnecessary if this new technique can streamline things.

But I think an upstart is going to use this tech, blow everyone away, and be a "disruptor". Might take a few years, but yeah.
 
It's absolutely insane but not all that surprising that Mark Zuckerberg is the one pioneering this facial capture technology. Somehow it looks even more convincing than Avatar 2.

Plus it runs in real time. I really hope video game faces start improving like this across the industry at some point. It's essentially the future version of the tech from L.A. Noire.
 

rofif

Can’t Git Gud
There will never be a day I understand wtf blockchain and bitcoin really are, and there will never be a day I understand what gaussian splatting is. I've done reading on it and nothing actually fucking explains it.
 

consoul

Member
If you want photo-realistic fixed angles, there's a technique called photography that does it well.

Seriously though, if your character models and lighting were up to scratch, you could just use photos instead of static pre-rendered backgrounds.
 

Buggy Loop

Member
There will never be a day I understand wtf blockchain and bitcoin really are, and there will never be a day I understand what gaussian splatting is. I've done reading on it and nothing actually fucking explains it.

You know triangles and how they're rasterized.

A 2D gaussian can replace the triangle

[image]


A 3D gaussian can replace an accumulation of triangles that would form a shape

Attaching a "clump" of 3D gaussians to a cloud point will define your scene (and optimize where to put the gaussians. This clump is a "splat", like a splat of water drops.

At that point it looks like a deep-field image of galaxies, unusable and very strange compared to the polygonal worlds we're used to. Then the motherfucking magic enters: an iterative process determines what those gaussians should look like depending on camera angle and their properties (color, alpha, etc.).

Then the rasterization is equivalent: just like you rasterize polygons into a 2D projection, the same happens for that cloud of gaussian splats.
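If it helps, here's a toy Python sketch of that compositing step for a single pixel (my own simplification for illustration; the real implementation is a tile-based CUDA rasterizer):

```python
import numpy as np

def gaussian_weight(px, mean, inv_cov):
    """Unnormalized 2D Gaussian falloff at pixel position px."""
    d = px - mean
    return np.exp(-0.5 * d @ inv_cov @ d)

def composite_pixel(px, splats):
    """Front-to-back alpha compositing of projected splats at one pixel.

    `splats` is a list of (mean_2d, inv_cov_2d, rgb, opacity) tuples,
    already sorted nearest-to-farthest.
    """
    color = np.zeros(3)
    transmittance = 1.0  # how much light still gets through
    for mean, inv_cov, rgb, opacity in splats:
        alpha = opacity * gaussian_weight(px, mean, inv_cov)
        color += transmittance * alpha * np.asarray(rgb)
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:  # pixel effectively opaque, stop early
            break
    return color
```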

Check this short vid that showcases every stage of it



The 3D gaussian splats go from this:

[image]

To this:

[image]



[GIF: Blow Your Mind Wow]


Fucking witchcraft
 

rofif

Can’t Git Gud
You know triangles and how they're rasterized.

A 2D gaussian can replace the triangle

[image]

A 3D gaussian can replace an accumulation of triangles that would form a shape

Attaching a "clump" of 3D gaussians to a point cloud defines your scene (and the training optimizes where to put the gaussians). This clump is a "splat", like a splat of water drops.

At that point it looks like a deep-field image of galaxies, unusable and very strange compared to the polygonal worlds we're used to. Then the motherfucking magic enters: an iterative process determines what those gaussians should look like depending on camera angle and their properties (color, alpha, etc.).

Then the rasterization is equivalent: just like you rasterize polygons into a 2D projection, the same happens for that cloud of gaussian splats.

Check this short vid that showcases every stage of it



The 3D gaussian splats go from this:

[image]

To this:

[image]



[GIF: Blow Your Mind Wow]


Fucking witchcraft

wtf. So like voxels, but round and diffused?
 

cyberheater

PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 Xbone PS4 PS4
Very promising tech. Looking forward to seeing how this plays out. We could be talking absolutely hyper-realistic games in real time. Not sure how the lighting would work.
 

RoadHazard

Gold Member
Isn't this pretty much how Dreams is rendered? Pretty sure it uses "splats".

Either way, I don't really see the point. If a dev wanted to make a game like that they could do it using traditional rendering and photogrammetry, and it could look pretty damn photorealistic these days if the environments are confined and the camera angles tightly controlled (without having to be completely static like in the old pre-rendered days).
 
Reminds me a bit of the "infinite detail" tech that was "coming soon" like 20 years ago, where basically the more you zoom in, the more geometry is procedurally generated, so everything looks high-poly at all levels.

AI's gonna change the rendering pipeline big time, and stuff like this may be implemented as a side effect.
 

Robb

Gold Member
Not sure if I’d be interested in a game like that, but I’d be willing to give it a shot.
 

E-Cat

Member
Too dumb to understand what Gaussian Splatting is, but I, too, want static pre-rendered backgrounds back.

With today's resolution, the offline-rendered backgrounds would look absolutely spectacular. What's more, with all the processing power left free to drive the character models, they could be made at truly movie-like quality; you could also run some path-tracing algos to make them blend dynamically with the environment lighting.

An old-school art style RE or FF utilizing such technology would be a sight to behold. Alas...
 

nkarafo

Member
I don't get it.

You show videos of some real-time 3D techniques, but you want a 2D pre-rendered-background game?

I mean, I also want one, but there's no need for some fancy new rendering tech. They can make 2D HD assets just fine either way.
 

Buggy Loop

Member
Isn't this pretty much how Dreams is rendered? Pretty sure it uses "splats".

Either way, I don't really see the point. If a dev wanted to make a game like that they could do it using traditional rendering and photogrammetry, and it could look pretty damn photorealistic these days if the environments are confined and the camera angles tightly controlled (without having to be completely static like in the old pre-rendered days).

Honestly, I'll take either technique if one damn dev can make the most lifelike-looking game possible. I don't care how we get there, but I'm also presenting interesting alternatives to what we already know (photogrammetry).

Photogrammetry works nicely at small scale but kind of breaks down on large, complex scenes and on certain materials, as it does not handle water/glass/shiny surfaces well.

The radiance field and AI behind these new techniques mean you can create new views beyond the dataset you gave it: the AI determines what a spot should look like even where it has absolutely no data, from any angle. Very important distinction: you'll get a result even without photographing every angle. Say you photograph a church and there's a spot above you that you couldn't capture; this solution will still give you something, whereas with photogrammetry you're limited to the data you accumulated, which then takes work to edit. It's a totally different ballpark of work.
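The underlying idea, straight from the original NeRF paper, is volume rendering: the color along a camera ray is accumulated from the density and view-dependent color the network predicts at each point, which is why it can answer for angles it never saw:

$$ C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t),\mathbf{d})\,dt, \qquad T(t) = \exp\!\left(-\int_{t_n}^{t}\sigma(\mathbf{r}(s))\,ds\right) $$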

Example:





NeRF itself is already old news, but the video explains the pros/cons between them.

The advantage of photogrammetry right now is that it produces a mesh, which fits our existing understanding of how to make games; engines as of now aren't really built for this new method, of course, but I don't think that advantage will last long.

The thing about all this is that AI is behind it. What's the next step? Modifying the cloud points like particles and making everything dynamic, with an AI interpreting everything? Isn't this the kind of future the DLSS guy from Nvidia imagines? This has the potential to become totally insane and skip all the laborious steps of meshing/texturing/lighting.

Very promising tech. Looking forward to seeing how this plays out. We could be talking absolutely hyper-realistic games in real time. Not sure how the lighting would work.

Since what we see is all interpreted by AI, if you feed it enough information, or it eventually gets trained well enough to determine what an image should look like, the lighting will change with the camera angle instead of being baked like photogrammetry. You end up with stuff like this:



(old NeRF from 2 years ago again; things have improved)
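For what it's worth, in the 3D Gaussian Splatting paper that view dependence comes from storing spherical harmonic coefficients per splat instead of a flat RGB color. A rough degree-1 sketch in Python (my simplification of the idea):

```python
import numpy as np

# Degree-0/1 real spherical harmonic constants.
SH_C0 = 0.28209479177387814   # 1 / (2 * sqrt(pi))
SH_C1 = 0.4886025119029199    # sqrt(3 / (4 * pi))

def sh_to_rgb(sh, view_dir):
    """Evaluate per-splat SH coefficients (shape (4, 3)) for a unit
    viewing direction, giving a view-dependent RGB color."""
    x, y, z = view_dir
    rgb = (SH_C0 * sh[0]
           - SH_C1 * y * sh[1]
           + SH_C1 * z * sh[2]
           - SH_C1 * x * sh[3])
    return np.clip(rgb + 0.5, 0.0, 1.0)
```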

Want a new art style without redoing all the work? AI will do it; that's what it's good for.


Eventually, if it works in a game, you could go from hyper-realistic to cartoony, whatever you want; telling the AI to change styles will just alter everything.

Our minds are not ready for this; it's too many decades of games being made a certain way.

Holy fuck...Zuckerbot is almost a REAL BOY now!

He looks more real as an avatar than in real life :messenger_tears_of_joy:

I don't get it.

You show videos of some real-time 3D techniques, but you want a 2D pre-rendered-background game?

I mean, I also want one, but there's no need for some fancy new rendering tech. They can make 2D HD assets just fine either way.

What I'm saying is that the tech, as of now, breaks down at a certain distance (from my understanding), so you can't just make a GTA 6 with this.

3D backgrounds, with a smaller scope more similar to the 2D pre-rendered backgrounds of the old days: scene by scene, with limits on what the player can do with the camera and zooming in/out. I hope this is clearer. It could be a room, a photorealistic room; from my understanding, you have to put dampeners on user control, or it could become really ugly when it runs out of dataset.

The tech might evolve beyond those constraints but as of right now, I think this would be the best use case.
 

CGNoire

Member
Unless I'm missing something, 3D Gaussian Splatting is a far faster but less accurate form of photogrammetry, so I'm not sure how its development is going to help players at all, since it seems to be mostly a speed-up of a previous workflow. It doesn't even produce any proper dynamic surface material information; everything is baked, no? In a world where everything static is becoming dynamic, this is a major step back, not forward. I understand why devs are excited, but gamers?

I feel like the real reason this tech keeps being brought up here is all the clickbait headlines and manufactured enthusiasm generated on YT for clicks.

Maybe I'm wrong about it and you can enlighten me on what you're seeing that I'm not. Even that fire shader effect could easily be applied to the point cloud rendering in PS4's Dreams. It's not like point cloud rendering is new, and the reason it hasn't replaced polygons has been well documented and shouldn't be any surprise.
 

Buggy Loop

Member
Unless I'm missing something, 3D Gaussian Splatting is a far faster but less accurate form of photogrammetry, so I'm not sure how its development is going to help players at all, since it seems to be mostly a speed-up of a previous workflow. It doesn't even produce any proper dynamic surface material information; everything is baked, no? In a world where everything static is becoming dynamic, this is a major step back, not forward. I understand why devs are excited, but gamers?

I feel like the real reason this tech keeps being brought up here is all the clickbait headlines and manufactured enthusiasm generated on YT for clicks.

Maybe I'm wrong about it and you can enlighten me on what you're seeing that I'm not. Even that fire shader effect could easily be applied to the point cloud rendering in PS4's Dreams. It's not like point cloud rendering is new, and the reason it hasn't replaced polygons has been well documented and shouldn't be any surprise.

Post above pretty much explains all I can think of 🤷‍♂️
 

Alexios

Cores, shaders and BIOS oh my!
Zero reason not to go fully real-time: restricted camera angles and gameplay of that sort already allow crazy detailing if they wanna go that route, plus more freedom to set/alter the stage without having to re-pre-render everything, to move the camera to good effect at various points, etc.
 

Buggy Loop

Member
Zero reason not to go fully real-time nowadays: restricted camera angles and gameplay of that sort already allow crazy detailing if they wanna go that route, plus more freedom to set/alter the stage without having to re-pre-render everything, to move the camera to good effect at various points, etc.

[GIF: Mr Bean waiting]


I guess there's no market for it
 

CGNoire

Member
[GIF: Mr Bean waiting]


I guess there's no market for it
No, I think you are onto something with using it for pre-rendered scenes. It would allow some small lateral motion of the camera (a gentle sway, for instance) that would create a lot of subtle parallax, and I love the idea of pre-rendered backgrounds making a comeback.

I think The Matrix Awakens has given us a glimpse of the character quality we could expect from such an approach with its construct scene, which is basically a static background with real-time characters and props rendered on top. Imagine that fidelity with voxel-based fluid dynamics effects. I would kill for something like that nowadays.
 

CGNoire

Member
Or more precisely, with NeRF-like variants such as 3D Gaussian Splatting.

It's basically a new graphics pipeline that branches off the traditional pipeline tree all the way back at the first rasterization step, before shadows, cube maps, lighting techniques, etc.

The beauty of it is that it's not 2D: by taking videos of real life, this tech basically builds a 3D scene out of them. I doubt it would be functional for a huge game or a very complex map, which is why the thread title points back to the pre-rendered background games we know: small scenes that would still be 3D, but with the visual quality of real life.

This is a good explanation



Now, it's equivalent to a "cloud" of splats, so it currently breaks down when you don't have enough data, or if you zoom in or out too much. Hence the requirement for a pre-rendered-background type of game à la Resident Evil.

The tech is advancing so rapidly that training these scenes in the video above went from 48 hours with Mip-NeRF 360 a year ago to 6.1 minutes with the "Ours-7K" configuration a few months ago, cutting training time by a factor of roughly 470.

You can also get some crazy effects, like a scene burning out of and back into existence (Silent Hill :pie_thinking:)



Or your next Waifu

[images]

But it's static! Well, for now: there are already papers on making these scenes dynamic with neural networks. It's only a question of time before AI manipulates a cloud of splats the way it would animate a game character's skeleton.

Those clouds can keep shrinking as new algorithms speed things up even further, to the point where accuracy approaches microscopic detail, and AI can help push that further still.

Point is, it's a new branch of rendering and it's evolving super rapidly. It might reach photorealistic games faster than raw rendering techniques like path tracing and so on. Or maybe not 🤷‍♂️ It's part of the AI revolution, and it's very hard to predict where it'll be in just a few years.

Be it this tech (eventually) or photogrammetry, I can't be the only one who would love a modern RE with lifelike backgrounds? An indie dev will surely think of it; it's a matter of time.

It's weird: I read the OP right before bed this morning but didn't reread it upon waking and replying. You seem to have already covered all my concerns, making my reply redundant. My bad.
 

CGNoire

Member
Ahhh so it’s always kinda 2d illusion?
Yes but no more of an illusion than rendering via textures "wrapped" aorund polygons.. I mean its splats or points are populated around the surface of the geometry. While voxels "render" only the surface via point cloud splats (which is similar) the voxels themselves are used to represent its entire volume reguardless of whether there rendered or not.
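A quick back-of-the-envelope for why surface splats are cheaper than a full volume (my own illustrative numbers): a solid voxel grid grows with the cube of the resolution, while a surface representation grows roughly with the square:

$$ \text{voxels} \sim n^3 \quad \text{vs.} \quad \text{surface splats} \sim n^2, \qquad n = 1024:\ \ \sim\!10^9\ \text{vs.}\ \sim\!10^6 $$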
 

nkarafo

Member
Zero reason not to go fully real-time: restricted camera angles and gameplay of that sort already allow crazy detailing if they wanna go that route, plus more freedom to set/alter the stage without having to re-pre-render everything, to move the camera to good effect at various points, etc.
There is one reason they could go with RE-style pre-rendered backgrounds:

Ultra-detailed 3D models of the characters and monsters.

When REmake was released on the GameCube, the one thing that impressed me most was the detail of the character models. Obviously, all the polygon budget could be spent on them, and they looked amazing at the time.

Modern games have way more detail, to the point where both the 3D models and the environments have crazy geometry. But what if you could spend all of that on the characters only? What kind of detail could you achieve on a PS5? Maybe something that's actually photorealistic and not CGI-looking? I'd like to see that.
 

Alexios

Cores, shaders and BIOS oh my!
There is one reason they could go with RE-style pre-rendered backgrounds:

Ultra-detailed 3D models of the characters and monsters.

When REmake was released on the GameCube, the one thing that impressed me most was the detail of the character models. Obviously, all the polygon budget could be spent on them, and they looked amazing at the time.

Modern games have way more detail, to the point where both the 3D models and the environments have crazy geometry. But what if you could spend all of that on the characters only? What kind of detail could you achieve on a PS5? Maybe something that's actually photorealistic and not CGI-looking? I'd like to see that.
3D fighting games can afford that, and I can't say they look that much better, unless we're comparing to some open-world game NPC or something. I think the restricted environments and camera angles, and the gameplay itself usually not being some crazy vs-hordes-of-enemies mode in such survival horror games, would afford more than enough polygons for the characters, coupled with normal maps, tessellation and whatever else can be used nowadays to enhance them even further than in the likes of Callisto Protocol, which has similar benefits, but being free third person, much less so.

Probably more an issue of budget/skill than tech nowadays (and I don't think they'd save much there, since the pre-rendered backgrounds would still have to be lavishly created anyway; they may save on performance resources but not on dev time, and maybe even lose on that nowadays).
 
You're not alone. Resident Evil Zero still looks amazing 21 years later; it's one of the best-looking games of all time.

I'd love to see that pre-rendered Resident Evil style return; I assume we could already achieve photorealism with it.
 

Danjin44

The nicest person on this forum
Photo-realistic means we get boring and ugly character designs for the sake of "realism"... no thanks.

I don't care for photo-realistic graphics in my games because they're fucking boring.
 

Three

Member
Photo-realistic means we get boring and ugly character designs for the sake of "realism"... no thanks.

I don't care for photo-realistic graphics in my games because they're fucking boring.
It doesn't have to be photo-realistic. Dreams by Media Molecule actually uses a splat-based renderer already; it's one of the only games I know of that uses it (there may be more).

In it you can make something that looks photo-realistic, like this:


or completely gamey stuff like these:

 

Danjin44

The nicest person on this forum
It doesn't have to be photo-realistic. Dreams by Media Molecule actually uses a splat-based renderer already; it's one of the only games I know of that uses it (there may be more).

In it you can make something that looks photo-realistic, like this:
I actually like the idea of games with pre-rendered or hand-painted backgrounds like this:
[image]


Edit: this game kind of does this:

[image]

[image]


I just don't like the second part about putting in photo-realistic models... that's boring in my opinion.
 

Buggy Loop

Member
Welp, it advanced way faster than I thought it would





- The simulator instantiates two exquisite 3D assets: pirate ships with different decorations. Sora has to solve text-to-3D implicitly in its latent space.
- The 3D objects are consistently animated as they sail and avoid each other's paths.
- Fluid dynamics of the coffee, even the foams that form around the ships. Fluid simulation is an entire sub-field of computer graphics, which traditionally requires very complex algorithms and equations.
- Photorealism, almost like rendering with raytracing.
- The simulator takes into account the small size of the cup compared to oceans, and applies tilt-shift photography to give a "minuscule" vibe.
- The semantics of the scene does not exist in the real world, but the engine still implements the correct physical rules that we expect.

Next up: add more modalities and conditioning, then we have a full data-driven UE that will replace all the hand-engineered graphics pipelines.

Holy shit
 

Buggy Loop

Member
An even better contemporary way to capture the spirit of pre-rendered backgrounds, but with a cutting-edge spin on the concept, would be to pre-render as neural radiance fields using AI diffusion-based models.

This would mean essentially pre-rendering more than a single vantage point, allowing a smooth transition of angles in the scene, or even the ability to alter lighting dynamically and do other effects.

Examples: https://www.matthewtancik.com/nerf

Sora just kind of blew the lid off all this last night.

The paper is a must-read.


"We do this by arranging patches of Gaussian noise in a spatial grid with a temporal extent of one frame. The model can generate images of variable sizes—up to 2048x2048 resolution."

" Emerging simulation capabilities We find that video models exhibit a number of interesting emergent capabilities when trained at scale. These capabilities enable Sora to simulate some aspects of people, animals and environments from the physical world. These properties emerge without any explicit inductive biases for 3D, objects, etc.—they are purely phenomena of scale."

"Sora serves as a foundation for models that can understand and simulate the real world, a capability we believe will be an important milestone for achieving AGI."

I'm stealing a quote from someone on the OpenAI reddit, but he nailed it:

I think a lot of people are looking at this the wrong way. Everyone is thinking... oh cool, it's a video Gen tool.

That's not the main story though.

The real story is the fact that this thing can model the future and past and project it into pixel space using an internal world model and do it very well.

Humans have something like that too. It's called imagination. When you walk around absorbing the data from your eyeballs, you are constantly thinking of what could happen next. When you close your eyes you can imagine it.

Now we have a system that does this quite well. And this is also a key part of making things like truly autonomous cars and robotics a reality. It really is a only a matter of time and getting the right hardware
 

ResurrectedContrarian

Suffers with mild autism
Sora just kind of blew the lid off all this last night.

The paper is a must-read.


"We do this by arranging patches of Gaussian noise in a spatial grid with a temporal extent of one frame. The model can generate images of variable sizes—up to 2048x2048 resolution."

" Emerging simulation capabilities We find that video models exhibit a number of interesting emergent capabilities when trained at scale. These capabilities enable Sora to simulate some aspects of people, animals and environments from the physical world. These properties emerge without any explicit inductive biases for 3D, objects, etc.—they are purely phenomena of scale."

"Sora serves as a foundation for models that can understand and simulate the real world, a capability we believe will be an important milestone for achieving AGI."

I'm stealing a quote from someone on the OpenAI reddit, but he nailed it:
Exactly... I think that, eventually, the world of gaming and CGI film effects will be totally upended by the model-sampling approach, instead of manually constructing the familiar pipelines of polygons, textures, etc. The entire stack of expertise that runs the current gaming world has a likely endpoint on the horizon, or at least a total transformation.
 
I maintain that pre-rendered backgrounds are still the way to go for fixed-camera survival horror games. I want the REAL RE2 remake, not that god-awful casual third-person shooter we got. The visuals in the RE1 remake blow RE2R out of the water. Hell, I'd argue the environment detail in the original RE2 from 1998 is still superior to what we got in the remake. Just compare the sheer amount of unique assets present in any given scene from the original to the remake's equivalent and it's night and day. The basketball court comes to mind: the remake's version is the most bland, detail-starved scene in any modern videogame I can think of. It's pathetic. I want my pre-rendered modern RE2, god damn it, and I want it now.
 

Phase

Member
[GIF: betting all in]


I was just talking with my brother about this. The level of detail we could achieve today would make for an amazing experience, especially because you could make it either super accurate to real environments or super detailed in whatever artistic style you wanted to create unique worlds. The obvious example would be a new RE or Onimusha (which imo would be scary as fuck with hyper-realistic environments).

Also, what if they were to make VR games with ultra-detailed pre-rendered backgrounds? That would be nuts.
 