winjer
Member
An Nvidia supercomputer has been improving DLSS non-stop for the past six years
www.techspot.com
It's been more than six years since Nvidia introduced the world to its image enhancement and upscaling tech – deep learning super sampling, or DLSS for short. The latest implementation, DLSS 4, was announced earlier this month at CES and promises to be dramatically better than what we first saw with the GeForce 20 Series, but have you ever stopped to ponder exactly how we got to this point? As it turns out, a massive supercomputer has been involved in the process since the very beginning.
While discussing the tech at the consumer electronics show, Nvidia's VP of applied deep learning research, Bryan Catanzaro, said improving DLSS has been a continuous, six-year learning process. According to Catanzaro, a supercomputer at Nvidia loaded with thousands of the latest and greatest GPUs runs 24/7, 365 days a year – and its sole focus is on improving DLSS.
The training process largely involves analyzing failures, Catanzaro said. When a DLSS model fails, the failure shows up in-game as ghosting, flickering, or blurriness. When such failures are detected, Nvidia tries to figure out what caused the model to make the wrong choice.
Analyzing errors helps Nvidia figure out how to improve its training data. The model is then retrained on the newer data and tested across hundreds of games. Rinse, repeat. "So, that's the process," Catanzaro concluded.