
NVIDIA RTX 50 Series silently removed 32-bit PhysX support

hinch7

Member
No offense to the ones bothered by this, but I remember PhysX back then as being pretty widely disliked for killing fps, with effects that often didn't look all that good to begin with (those Borderlands 2 liquids looked awful). Whenever I saw people online talking about it at the time, the vast majority turned the option off and recommended others do the same. Honestly, I don't feel the loss all that much.
Looking at some of the videos, I'm inclined to agree. Not worth kicking up a fuss about.

It sucks that the 5000 series doesn't support it, but PhysX is largely outdated anyway.
 
No offense to the ones bothered by this, but I remember PhysX back then as being pretty widely disliked for killing fps, with effects that often didn't look all that good to begin with (those Borderlands 2 liquids looked awful). Whenever I saw people online talking about it at the time, the vast majority turned the option off and recommended others do the same. Honestly, I don't feel the loss all that much.

No worries, Jensen, I got you. You just keep working on pumping up that stock price.
 
Looking at some of the videos, I'm inclined to agree. Not worth kicking up a fuss about.

It sucks that the 5000 series doesn't support it, but PhysX is largely outdated anyway.
The concept of GPGPU-accelerated physics was never outdated; it's just that the execution (optimization) was subpar in PhysX and Microsoft didn't push for standardization (a DirectPhysics API).
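
To make the GPGPU point concrete, here's a toy sketch of my own (nothing from the PhysX SDK): a particle update where every element is independent, which is exactly the shape of work a GPU compute kernel is built for. C++17 parallel algorithms stand in for the kernel here.

Code:
#include <algorithm>
#include <execution>
#include <vector>
#include <cstdio>

struct Particle {
    float px, py, pz;  // position
    float vx, vy, vz;  // velocity
};

// Integrate one particle under gravity for a fixed timestep.
// No particle reads another particle's state, so the loop is embarrassingly parallel.
void integrate(Particle& p, float dt) {
    const float g = -9.81f;
    p.vz += g * dt;
    p.px += p.vx * dt;
    p.py += p.vy * dt;
    p.pz += p.vz * dt;
}

int main() {
    std::vector<Particle> debris(100000, Particle{0.f, 0.f, 10.f, 1.f, 0.f, 0.f});
    const float dt = 1.0f / 60.0f;

    // One simulation step. par_unseq tells the runtime the iterations are independent,
    // which is the same property a GPU kernel exploits with one thread per particle.
    std::for_each(std::execution::par_unseq, debris.begin(), debris.end(),
                  [dt](Particle& p) { integrate(p, dt); });

    std::printf("first particle z after one step: %f\n", debris.front().pz);
    return 0;
}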

15+ years ago gaming was better than today...

I still remember Uncharted 2 using Havok to offer Cell SPU-accelerated physics (destruction, exploding cars, particle effects, RPG/grenade/propane-tank explosions, etc.).

It was a good era for both PC and console gamers.

I don't like static worlds with no physics interaction. I like experimenting with game objects (sometimes you see cool bugs).

What's the point of having 100 Teraflop GPUs (RTX 5090)? Just for graphics (and a little bit of AI)?

Imagine if Nintendo had 100 Teraflops on Switch to develop Zelda BoTW... they would put them to good use gameplay-wise.
 

hinch7

Member
The concept of GPGPU-accelerated physics was never outdated; it's just that the execution (optimization) was subpar in PhysX and Microsoft didn't push for standardization (a DirectPhysics API).

15+ years ago gaming was better than today...

I still remember Uncharted 2 using Havok to offer Cell SPU-accelerated physics (destruction, exploding cars, particle effects, RPG/grenade/propane-tank explosions, etc.).

It was a good era for both PC and console gamers.

I don't like static worlds with no physics interaction. I like experimenting with game objects (sometimes you see cool bugs).

What's the point of having 100 Teraflop GPUs (RTX 5090)? Just for graphics (and a little bit of AI)?

Imagine if Nintendo had 100 Teraflops on Switch to develop Zelda BoTW... they would put them to good use gameplay-wise.

Havok and its iterations exist and are pretty much standardized, all without completely tanking performance or needing an external GPU to handle the calculations.

We had all of this in Half-Life 2 and Battlefield: Bad Company 2 well over a decade ago, but sadly not enough developers care about more advanced physics beyond superfluous effects like hair. PhysX is just legacy tech, along the lines of HairWorks.
 

dgrdsv

Member
nVidia can earn a bit of goodwill if they open-source older PhysX SDK versions (CPU/Ageia PPU).
That could be a lot more complex than one would think if the code wasn't meant to be open source from the start.
But if that doesn't happen, let's hope people will just reverse engineer the calls and figure something out.

Or you could just buy a GTX 1630 to handle PhysX in the five or so games where it even matters.
 
Havok and its iterations exist and are pretty much standardized, all without completely tanking performance or needing an external GPU to handle the calculations.

We had all of this in Half-Life 2 and Battlefield: Bad Company 2 well over a decade ago, but sadly not enough developers care about more advanced physics beyond superfluous effects like hair. PhysX is just legacy tech, along the lines of HairWorks.
Sure, we had CPU physics, just like we had software rasterization back in the 90s.

But the GPU is a better fit for parallel processing... whether it's nVidia, AMD, or even Intel.

Or you could just buy a GTX 1630 to handle PhysX in the five or so games where it even matters.
Some people have Micro-ATX motherboards with only one PCIe slot (personally I hate them, but it's a trend in the PC industry these days, along with small, console-like cases). There's also the increased idle power consumption (maybe not much, but every watt counts in some countries). And last but not least, nVidia could easily remove PhysX altogether for all GPUs and/or stop supporting Maxwell/Turing cards (just like they did with Kepler).

Our best bet is reverse-engineering PhysX *.dll files.
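
FWIW, the usual first step for that kind of reverse engineering is a proxy DLL: a same-named library that loads the original (renamed) DLL, forwards each call, and logs what the game actually invokes, so the call surface can be mapped before reimplementing anything. Very rough sketch below; the export name and the "PhysXCore_orig.dll" rename are placeholders I made up, not confirmed PhysX symbols.

Code:
#include <windows.h>
#include <cstdio>

static HMODULE g_real = nullptr;

// Load the renamed original library on first use and log which symbol the game asked for.
static FARPROC resolve(const char* name) {
    if (!g_real) {
        g_real = LoadLibraryA("PhysXCore_orig.dll");  // assumed rename of the real 32-bit DLL
    }
    std::printf("game called: %s\n", name);           // minimal call logging
    return g_real ? GetProcAddress(g_real, name) : nullptr;
}

// Placeholder forwarded export. The real export list would come from dumping the original
// DLL (e.g. dumpbin /exports) and generating one thunk like this per symbol, with the
// correct signature and calling convention for each.
extern "C" __declspec(dllexport) void* NxCreatePhysicsSDK_stub() {
    using Fn = void* (*)();
    Fn fn = reinterpret_cast<Fn>(resolve("NxCreatePhysicsSDK_stub"));
    return fn ? fn() : nullptr;
}

BOOL WINAPI DllMain(HINSTANCE, DWORD, LPVOID) {
    return TRUE;  // nothing to set up eagerly
}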
 