
Deep Learning AI Generates Realistic Game Graphics by Learning from Videos

Jan 10th, 2019 11:53am
Images: Nvidia & M.I.T.

The video game design industry has evolved immensely since its early, pixellated days: nowadays, games often feature high-end graphics, underpinning immersive worlds that are populated with non-player characters that one can interact with. Not surprisingly, creating these engaging game environments often requires a sizeable complement of human writers, artists and developers using a variety of software tools like game engines to graphically render these complex worlds.

But what if some of that work could be automated instead, using artificial intelligence? A team from the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology and Nvidia (the company that invented the graphics processing unit, or GPU) recently demonstrated how it is possible to generate synthetic 3D gaming environments using a neural network that has been trained on real videos of cityscapes. Such technology could have big implications for the game and film industries, as well as the development of virtual reality platforms. You can see for yourself what the results look like:

Video-to-Video Synthesis

As the team notes in their research paper, their hybrid approach uses deep learning artificial intelligence along with a traditional game engine to generate visuals synthesized from video footage of the real thing. This process, called video-to-video synthesis, involves training the AI model to “learn” how best to translate an input source video, such as a sequence of semantic segmentation maps, into output video that looks as photorealistic as real footage.
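
To make the pipeline concrete, here is a minimal sketch of what frame-by-frame video-to-video translation looks like at inference time. The single-layer `netG` below is only a placeholder for a trained translation network, and the channel counts are assumptions for illustration, not the authors’ published model.

```python
import torch
import torch.nn as nn

# Placeholder generator for illustration only; in the real system this would be
# a trained video-to-video network, not a single untrained layer.
netG = nn.Sequential(nn.Conv2d(8, 3, 3, padding=1), nn.Tanh())

def synthesize_video(netG, segmentation_frames):
    """Translate a list of (8, H, W) segmentation tensors into (3, H, W) RGB frames."""
    rgb_frames = []
    with torch.no_grad():                      # inference only, no gradients
        for seg in segmentation_frames:
            rgb = netG(seg.unsqueeze(0))       # add a batch dimension
            rgb_frames.append(rgb.squeeze(0))  # back to (3, H, W)
    return rgb_frames

# Example: three random "segmentation" frames of size 64x64 as stand-in input.
frames = synthesize_video(netG, [torch.rand(8, 64, 64) for _ in range(3)])
```

Translating each frame independently like this says nothing about temporal coherence, which is exactly the problem the short-term memory described further down is meant to address.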

“Nvidia has been inventing new ways to generate interactive graphics for 25 years, and this is the first time we can do so with a neural network,” said Bryan Catanzaro, who led the team and is also vice president of Nvidia’s deep learning research arm. “Neural networks — specifically generative models — will change how graphics are created. This will enable developers to create new scenes at a fraction of the traditional cost.”

To achieve this, the team based their approach on previous work like Pix2Pix, an open-source image-to-image translation tool that uses neural networks. In addition, the researchers utilized a particular type of unsupervised deep learning algorithm called generative adversarial networks (GANs), which designates one neural network as a “generator” and another neural network as a “discriminator.” These two networks play a zero-sum game: the generator aims to produce synthesized video that the discriminator cannot reliably distinguish from real footage.
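
The adversarial setup can be sketched in a few lines of PyTorch. The tiny convolutional networks, channel counts and loss below are illustrative assumptions meant to show the generator/discriminator game, not the architecture or training procedure from the paper.

```python
import torch
import torch.nn as nn

# Generator: maps a one-hot segmentation map (8 assumed classes) to an RGB frame.
netG = nn.Sequential(
    nn.Conv2d(8, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
)

# Discriminator: scores (segmentation, frame) pairs as real or fake.
netD = nn.Sequential(
    nn.Conv2d(8 + 3, 64, 3, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 3, padding=1),
)

optG = torch.optim.Adam(netG.parameters(), lr=2e-4)
optD = torch.optim.Adam(netD.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(seg, real_frame):
    """One adversarial update on a batch of segmentation maps and real frames."""
    fake_frame = netG(seg)

    # Discriminator: push real pairs toward 1 and generated pairs toward 0.
    optD.zero_grad()
    d_real = netD(torch.cat([seg, real_frame], dim=1))
    d_fake = netD(torch.cat([seg, fake_frame.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    loss_d.backward()
    optD.step()

    # Generator: try to make the discriminator label its output as real.
    optG.zero_grad()
    d_fake = netD(torch.cat([seg, fake_frame], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake))
    loss_g.backward()
    optG.step()
    return loss_d.item(), loss_g.item()

# Example: one training step on a random batch (stand-ins for real data).
seg = torch.rand(2, 8, 64, 64)
real = torch.rand(2, 3, 64, 64)
print(train_step(seg, real))
```

Showing the discriminator the segmentation map together with the frame is a common conditional-GAN choice: it pushes the generator to produce images that actually match the requested layout, not just plausible-looking pixels.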

Training data was taken from videos of driving sequences, culled from autonomous-vehicle research data gathered in various cities, and segmented into categories such as buildings, cars and trees. The GAN was then fed these segmented data so that it could synthesize a variety of fresh iterations of these objects, eliminating any perceived sense of déjà vu.
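
As a rough illustration of how such segmented data might be represented before it reaches the network, the snippet below converts a per-pixel class-label map into a one-hot tensor. The class count and label IDs are assumptions for illustration, not the dataset’s actual label scheme.

```python
import numpy as np

NUM_CLASSES = 8  # assumed number of semantic categories

def one_hot_segmentation(label_map):
    """Convert an (H, W) integer label map into a (NUM_CLASSES, H, W) array."""
    h, w = label_map.shape
    one_hot = np.zeros((NUM_CLASSES, h, w), dtype=np.float32)
    for c in range(NUM_CLASSES):
        one_hot[c] = (label_map == c).astype(np.float32)
    return one_hot

# Example: a tiny 2x3 label map with road (0), building (1) and car (2) pixels.
labels = np.array([[0, 1, 1],
                   [0, 2, 1]])
print(one_hot_segmentation(labels).shape)  # (8, 2, 3)
```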

The team then used a conventional game engine to produce a virtual urban environment, using the GAN to generate and overlay the synthesized images in real time. Moreover, to prevent the system from producing video in which objects completely change appearance from one frame to the next, the team had to incorporate a kind of short-term memory that enables the model to consistently remember the attributes of objects.
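
The sketch below extends the earlier per-frame example with a crude stand-in for that short-term memory: a placeholder generator is fed its own two previous outputs alongside the current segmentation map, so it can keep object appearance consistent across frames. The window size and input layout are assumptions for illustration, not the mechanism used in the paper.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 8   # assumed number of segmentation classes
NUM_PREV = 2      # how many previously generated frames the generator gets to see

# Placeholder generator: input is the current segmentation map stacked with the
# two previously generated RGB frames along the channel dimension.
netG = nn.Sequential(
    nn.Conv2d(NUM_CLASSES + 3 * NUM_PREV, 3, 3, padding=1), nn.Tanh()
)

def synthesize_sequence(netG, seg_frames):
    """seg_frames: list of (NUM_CLASSES, H, W) tensors; returns (3, H, W) frames."""
    _, h, w = seg_frames[0].shape
    prev = [torch.zeros(3, h, w) for _ in range(NUM_PREV)]   # blank memory at start
    outputs = []
    with torch.no_grad():
        for seg in seg_frames:
            cond = torch.cat([seg] + prev, dim=0).unsqueeze(0)  # stack along channels
            frame = netG(cond).squeeze(0)
            outputs.append(frame)
            prev = prev[1:] + [frame]      # slide the short-term memory window
    return outputs

# Example: a short sequence of random segmentation frames as stand-in input.
clip = synthesize_sequence(netG, [torch.rand(NUM_CLASSES, 64, 64) for _ in range(4)])
```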

Comparison of AI-synthesized video: segmentation map (top left); pix2pixHD (top right); COVST (bottom left); Nvidia’s model (bottom right).

Granted, the researchers admit that the end result isn’t large in scale: it resembles a simple driving simulator that only lets the player drive around for a few blocks, without the possibility of leaving the vehicle to interact with other characters. There is some tell-tale smearing in the generated video that hints at its artificiality, but what’s notable is that the whole experiment was run on a single GPU.

The team’s method is also more flexible than prior research, as it permits users to easily swap out objects, such as inserting a long row of trees in place of the buildings shown in the original video, a feature that could be applied to images of people as well. For instance, the researchers were able to take the dance moves of someone in a video and transfer those movements onto an artificially generated model of a completely different person.
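
Because the output is driven by the segmentation map, swapping objects can be as simple as relabeling pixels before synthesis. The snippet below sketches that idea with assumed numeric class IDs; the actual editing workflow used by the researchers may well differ.

```python
import numpy as np

BUILDING, TREE = 1, 3   # assumed label IDs for illustration

def swap_class(label_map, src=BUILDING, dst=TREE):
    """Return a copy of the label map with every `src` pixel relabeled as `dst`."""
    edited = label_map.copy()
    edited[edited == src] = dst
    return edited

# The edited map would then be fed to the generator in place of the original,
# so the rendered street shows trees where the source video had buildings.
```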

Of course, there are a lot of potential advantages, as well as disadvantages, to such technology. Similar technology has been used by researchers to demonstrate how AI could be used to generate faked videos that look quite convincing. Nevertheless, the team’s goal now is to further improve the system’s consistency and performance for other uses.

“The capability to model and recreate the dynamics of our visual world is essential to building intelligent agents,” said the researchers. “Apart from purely scientific interests, learning to synthesize continuous visual experiences has a wide range of applications in computer vision, robotics, and computer graphics.”
