Well, this settles it. Transformer-based models are clearly the future of upscaling, which means they're probably the future of graphics full stop, since node shrinks are delivering diminishing price/performance returns.
And with AMD finally joining the "we can do AI-based upscaling too!" club, and Intel Arc having their hardware-based XeSS, they would both be VERY smart to shift over to a transformer-based approach quickly, which means they needed to start training their respective hypothetical transformer models like, years ago?
Nvidia's head start with all of this means they've simply won. They already have the market penetration to support transformer upscaling on hardware going all the way back to the 20 series.
This is brutally one-sided now, even more than it was previously.