Hard to get a good fix on MS and their strategy, but as a rough estimate using a theoretical PS6 in 2028*: I think ~45TF makes sense, but the architecture (RDNA6?) will be >2x as efficient vs RDNA2. Pair that with a much stronger CPU, twice the RAM, a little over twice the bandwidth and at least twice the storage + I/O speed, backed by even better decompression blocks.
The main thing is whether they [by then, with AMD(?)] can take a 1080p-1440p base image and AI-upscale it to a result perceptually equal to or better than a well anti-aliased native 4K+ image; and whether they can take a 40fps native framerate and consistently frame-gen it up to 120fps with imperceptible latency and artefacting. All with minimal load on the wider hardware.
That way you can spend the equivalent -- I know this is a shaky comparison -- of ~90TF of RDNA2 on a 1080p-1440p 40fps image and get a pristine 4K 120fps result, allowing the bulk of that juice to go towards making the best quality assets and fx.
In addition, layer on heavily accelerated Ray Tracing and even some Path Tracing.
I understand, of course, that different parts of the chip work together in these operations, so you can't just say one block does this and another does that with all resources to itself; and of course the upscaling/frame-gen aren't free (though I expect they'll be better accelerated and more resource-efficient by then). But, as a rough estimate...I think we'll still see a 10x uptick in terms of what the system can do. It's just that the majority of the pixels will be generated after the initial render.
I think by the time next-gen is in full swing we might only have something like 1/8 or 1/12 of the pixels on screen (spatial+temporal) being "real". And not only that, but the acceleration will be much cheaper in terms of die space/energy and the inferred result will be superior to native.
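As a quick sanity check on that fraction, here's a minimal back-of-envelope sketch (the resolutions and framerates are just the figures from above; 1/12 falls out of a 1080p 40fps render presented as 4K 120fps):

```python
def real_pixel_fraction(base_w, base_h, base_fps, out_w, out_h, out_fps):
    """Fraction of displayed pixels (spatial x temporal) that were
    actually rendered, rather than upscaled/frame-generated."""
    rendered = base_w * base_h * base_fps
    displayed = out_w * out_h * out_fps
    return rendered / displayed

# 1080p 40fps -> 4K 120fps: 1/4 spatial x 1/3 temporal = 1/12 "real"
print(real_pixel_fraction(1920, 1080, 40, 3840, 2160, 120))  # 0.0833... = 1/12

# 1440p 40fps -> 4K 120fps: 1/2.25 spatial x 1/3 temporal ~= 1/6.75
print(real_pixel_fraction(2560, 1440, 40, 3840, 2160, 120))  # ~0.148
```

So the 1/12 end of the range corresponds to a 1080p base; a 1440p base lands nearer 1/7, and a less aggressive frame-gen ratio would give the ~1/8 figure.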
* That said, I think MS are probably gunning for 2027.