When MicroLED and QDEL displays eventually arrive, and assuming manufacturers take a modular approach (joining many fine-tolerance, bezel-less mini panels under a monolithic flexible glass substrate), I could see them using just a few different mini-panel ["wafer"] sizes for better production efficiency, with larger displays simply defaulting to 8K as a consequence. All displays could accept all inputs up to 8K and apply high-quality scaling where applicable.
For example:
5.2" 216p Wafers
26" | 1080p via 5 x 5 @ 384x216p 5.2"
52" | 2160p via 10 x 10 @ 384x216p 5.2"
104" | 4320p via 20 x 20 @ 384x216p 5.2"
6.4" 216p Wafers
32" | 1080p via 5 x 5 @ 384x216p 6.4"
64" | 2160p via 10 x 10 @ 384x216p 6.4"
128" | 4320p via 20 x 20 @ 384x216p 6.4"
8.0" 432p Wafers
40" | 2160p via 5 x 5 @ 768x432p 8.0"
80" | 4320p via 10 x 10 @ 768x432p 8.0"
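The pattern in the list above is simple arithmetic: since every wafer in a grid shares the same aspect ratio, an N x N tiling multiplies both the diagonal and the pixel resolution by N. A minimal sketch (wafer names and the helper function are my own, for illustration):

```python
# Hypothetical wafer catalog from the list above:
# (diagonal in inches, horizontal pixels, vertical pixels)
WAFERS = {
    "5.2in 216p": (5.2, 384, 216),
    "6.4in 216p": (6.4, 384, 216),
    "8.0in 432p": (8.0, 768, 432),
}

def tiled_display(wafer_diag_in, wafer_w, wafer_h, n):
    """Size and resolution of an n x n grid of identical same-aspect wafers.

    Both diagonal and resolution scale linearly with n because the
    tiled display keeps the wafer's aspect ratio.
    """
    return (wafer_diag_in * n, wafer_w * n, wafer_h * n)

for name, (d, w, h) in WAFERS.items():
    for n in (5, 10, 20):
        diag, width, height = tiled_display(d, w, h, n)
        print(f'{name}: {n} x {n} -> {diag:g}" at {width}x{height}')
```

Running this reproduces the table, e.g. the 8.0" wafer at 10 x 10 gives an 80" display at 7680x4320.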
That way the cost per inch wouldn't rise as steeply (though the chassis, additional electronics, shipping, and packaging would still add a small premium), and pixel count would scale more linearly with screen size. Plus, producing just 2-3 smaller "wafers" would be far more economical than a bunch of different large screen sizes with different pixel densities. If your image scaling is good enough, you could theoretically reduce it down to just 1 or 2 wafer sizes/densities and use intermediate resolutions.
...
Outside traditional displays, VR/AR will probably need to go beyond 12Kx12K per eye in the long term for an effectively perfect image.
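A rough sanity check on that per-eye figure is pixels per degree (PPD) versus the ~60 PPD commonly cited as the 20/20 foveal-acuity threshold. The 110-degree per-eye field of view below is my assumption, not from the post, and the average ignores lens distortion:

```python
def mean_ppd(pixels_across, fov_degrees):
    """Average pixels per degree across the field of view.

    Assumes a uniform angular pixel distribution (real HMD optics
    concentrate resolution toward the center, so this is a rough average).
    """
    return pixels_across / fov_degrees

# 12,000 pixels over an assumed 110-degree per-eye FOV:
print(f"{mean_ppd(12000, 110):.0f} PPD")  # comfortably above the ~60 PPD acuity benchmark
```

At roughly 109 PPD on average, 12K per eye would leave headroom above acuity even after optics eat into the effective resolution, which is consistent with calling it "effectively perfect."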