Digital art has historically been denigrated as something anyone could make with a few clicks of a mouse, unlike forealsies real art (which takes angst and crushed beetles). Now, thanks to NVIDIA and its deep learning model GauGAN, the few-clicks part is actually true.
According to NVIDIA, "The tool leverages generative adversarial networks, or GANs, to convert segmentation maps into lifelike images." The name is a nod to Paul Gauguin, the post-impressionist painter; hence GauGAN. In practice, you make a rough sketch and the tool fills it in with realistic textures, depth, and light. Bryan Catanzaro, vice president of applied deep learning research at NVIDIA, says:
It’s like a coloring book picture that describes where a tree is, where the sun is, where the sky is. And then the neural network is able to fill in all of the detail and texture, and the reflections, shadows and colors, based on what it has learned about real images.
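To make the "coloring book" analogy concrete, here is a minimal sketch of the input side of a GauGAN-style pipeline: the user's rough drawing is an integer label map, which gets one-hot encoded before a conditional generator fills in the detail. The label names and the 4x4 sketch are hypothetical; GauGAN's actual class palette and input resolution differ.

```python
import numpy as np

# Hypothetical label IDs for illustration (GauGAN's real palette differs).
SKY, TREE, WATER = 0, 1, 2

def one_hot_segmentation(label_map: np.ndarray, num_classes: int) -> np.ndarray:
    """Convert an (H, W) integer label map -- the 'coloring book' --
    into the (num_classes, H, W) one-hot tensor that a GauGAN-style
    conditional generator would take as input."""
    h, w = label_map.shape
    onehot = np.zeros((num_classes, h, w), dtype=np.float32)
    # Advanced indexing: set onehot[label_map[i, j], i, j] = 1 for every pixel.
    onehot[label_map, np.arange(h)[:, None], np.arange(w)] = 1.0
    return onehot

# A tiny 4x4 "sketch": sky on top, water below, one tree in the middle.
sketch = np.array([
    [SKY,   SKY,  SKY,   SKY],
    [SKY,   TREE, SKY,   SKY],
    [WATER, TREE, WATER, WATER],
    [WATER, WATER, WATER, WATER],
])
maps = one_hot_segmentation(sketch, num_classes=3)
print(maps.shape)  # (3, 4, 4): one binary channel per class
```

The neural network then learns a mapping from this per-pixel class encoding to photorealistic texture, lighting, and reflections; the encoding step above is just the standard way such a sketch is presented to the model.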
I’ve always been rubbish at backgrounds, so I can see the appeal, but this would be genuinely useful for architects, urban planners, game developers, and the like, or anyone in need of rapid brainstorming with natural scenery. Check out how it works below.
The GauGAN app focuses on natural elements, but NVIDIA has demonstrated that the neural network is entirely capable of generating buildings, roads, and people.