The FLOPS difference does not always translate into an exact difference in frame rate, but this metric does give an idea of GPU shading performance, especially if the GPU architecture is the same.
The PS4 GPU is based on the GCN 1.1 architecture and has 1.84TF (specs somewhere between a Radeon 7870 and 7850). The RTX3060 chip is based on a totally different architecture (Ampere) and has 12.74TF, which is a 6.9x FLOPS difference. According to the TechPowerUp benchmark of The Witcher 3, the RTX3060 pushed around 315 million pixels per second (1440p at 86fps), which is 5x more than the PS4's 62 million pixels per second (1080p at 30fps). Taking into account that the RTX3060 was also running The Witcher 3 at max settings while the PS4 was running at medium/high settings, I would say the real performance difference was much closer to the theoretical 6.9x FLOPS difference.
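To make the math explicit, here's the back-of-envelope arithmetic I'm using as a quick Python sketch (my own assumptions: 2560x1440 for "1440p", 1920x1080 for "1080p", and rounded fps numbers):

```python
# Rough pixel-throughput comparison: pixels drawn per second = resolution * fps.
# Assumes 2560x1440 for "1440p" and 1920x1080 for "1080p"; fps values are rounded.

def pixels_per_second(width: int, height: int, fps: float) -> float:
    return width * height * fps

rtx3060_tw3 = pixels_per_second(2560, 1440, 86)  # ~317 Mpx/s -- the ~315M figure above, give or take rounding
ps4_tw3     = pixels_per_second(1920, 1080, 30)  # ~62 Mpx/s at 1080p 30fps

print(f"RTX3060: {rtx3060_tw3 / 1e6:.0f} Mpx/s, PS4: {ps4_tw3 / 1e6:.0f} Mpx/s")
print(f"Throughput ratio: {rtx3060_tw3 / ps4_tw3:.1f}x")  # ~5.1x
print(f"FLOPS ratio: {12.74 / 1.84:.1f}x")                # ~6.9x
```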
In the screenshot you posted, the RTX3060 gets 75fps at 1080p with optimized settings. I doubt these "optimized" settings are comparable to the base PS4 (I saw missing details even at maxed out settings compared to my PS4 version), but let's assume that's the case. The PS4 version probably averages around 35-40fps without a framecap (if the game holds 30fps 99.9% of the time, it has to be capable of running well above a locked 30fps), so we are looking at a 2x difference in framerate between the PS4 GPU's 35-40fps and the RTX3060's 75fps. I don't think that's good scaling when there's a 6.9x difference in shading power between the PS4 and the RTX3060.
As for my RTX4080S, it has around 60TF (depending on the GPU clocks). That's a 32x difference in shading power compared to the PS4's 1.84TF, but Ada Lovelace is a completely different architecture, and even compared with Ampere, shading performance does not scale linearly (Ada Lovelace shader cores are slower than Ampere's, but way more power efficient). That being said, in The Witcher 3 I see around 21x PS4 scaling at 1080p (without DLSS), and the settings I used (high draw distance) were even higher than the PS4's, because you can't match PS4 settings exactly without ini tweaks.
So based on this comparison, let's assume my RTX4080S is only 21x faster than the PS4 (instead of the theoretical 32x).
My RTX4080S can run TLOU2 at 1440p 120fps. That translates into 440 million pixels per second, so we're looking at a 7x difference compared to the PS4 (62 million), meaning in this particular game my RTX4080S performs like a 12.8TF GCN 1.1 GPU.
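Same arithmetic for TLOU2 on my card (again a rough sketch with rounded numbers; the "effective TF" figure just scales the PS4's 1.84TF by the throughput ratio):

```python
# TLOU2 throughput on the RTX4080S vs the PS4, using the same pixels-per-second metric.
rtx4080s_tlou2 = 2560 * 1440 * 120  # ~442 Mpx/s at 1440p 120fps
ps4_tlou2      = 1920 * 1080 * 30   # ~62 Mpx/s at 1080p 30fps

ratio = rtx4080s_tlou2 / ps4_tlou2  # ~7.1x
effective_tf = 1.84 * ratio         # ~13 TF of GCN 1.1-equivalent shading power (the ~12.8TF above, rounding aside)

print(f"Throughput ratio: {ratio:.1f}x, effective GCN-equivalent: {effective_tf:.1f} TF")
```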

You're right, there's nothing wrong with this port; we should find an excuse for everything and just enjoy playing this PS4 game at 60fps.
Why 60fps? My CPU (7800X3D) is sometimes decompressing data in the background and the framerate can dip to around 80fps, so to avoid the stuttering I need to lock the framerate at 60fps. Terra Ware noticed the same problem in his video on a 9800X3D CPU and RTX4090, so he also locked the framerate to 60fps.
Sony once said that you need an 8TF GPU to run PS4 games at native 4K, but games like TLOU2 suggest it actually takes 60TF to do that. I get around 60fps at 4K native in this game.
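Rough math again (my own back-of-envelope, assuming 3840x2160 for 4K and the same pixels-per-second metric; the "naive GCN-equivalent" number is just the PS4's 1.84TF scaled by throughput):

```python
# 4K60 pixel throughput vs the PS4's 1080p30, and what naive scaling would imply in GCN terms.
ps4    = 1920 * 1080 * 30   # ~62 Mpx/s
fourk  = 3840 * 2160 * 60   # ~498 Mpx/s

ratio = fourk / ps4         # 8x
print(f"4K60 is {ratio:.0f}x the PS4's pixel throughput")
print(f"Naive GCN-equivalent: {1.84 * ratio:.0f} TF")  # ~15 TF, yet here it takes a ~60TF card
```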
I don't know why you guys are defending this port so much, but I'm not happy with the performance. I know my PC can run PS4-era games a lot better; for example, RE3 Remake runs at 150-200fps at 4K native with maxed out settings (including RT) on my PC.