
Nvidia’s AI-generated graphics hint at an automatically generated gaming future

This month, Nvidia showcased one of the first proof-of-concept demos for the real-time creation of video game graphics. The tech showcase centres on a real-time driving simulation covering a few city blocks, with AI generating the in-game graphics based on real-world footage.

The demo is purely focused on graphics, as there is no way to step out of the car or interact with the generated world. However, the results shown are impressive enough on their own to merit further research. By scanning several city blocks and using Unreal Engine 4 to build a foundation for the simulation, Nvidia has given us a glimpse at what could be the future of graphical content generation.
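
To make that division of labour concrete: in a setup like this, the conventional engine supplies a rough per-pixel layout of the scene each frame, and a trained neural network paints the final pixels. The sketch below is a minimal, hedged illustration of that conditional-rendering idea in Python; the `Generator` class is an untrained toy stand-in, not Nvidia's actual network, and the shapes and class count are assumptions.

```python
# Minimal sketch: engine supplies a semantic layout, a generator paints it.
# The Generator here is an untrained toy stand-in for a real trained model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Toy stand-in for a trained semantic-map-to-RGB generator."""
    def __init__(self, num_classes: int = 20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, seg_onehot: torch.Tensor) -> torch.Tensor:
        return self.net(seg_onehot)

def render_frame(gen: Generator, seg_map: torch.Tensor,
                 num_classes: int = 20) -> torch.Tensor:
    """Turn an engine-supplied map of per-pixel class ids (H, W)
    into an RGB frame (3, H, W) with values in [0, 1]."""
    onehot = F.one_hot(seg_map, num_classes)               # (H, W, C)
    onehot = onehot.permute(2, 0, 1).float().unsqueeze(0)  # (1, C, H, W)
    with torch.no_grad():
        rgb = gen(onehot)                                  # Tanh output, [-1, 1]
    return (rgb.squeeze(0) + 1) / 2

# Each frame, the engine hands over a fresh layout and the network renders it.
generator = Generator()
layout = torch.randint(0, 20, (256, 256))  # hypothetical engine output
frame = render_frame(generator, layout)
print(frame.shape)  # torch.Size([3, 256, 256])
```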

Nvidia's focus has shifted towards machine learning in recent years, and the company has lately been showing off how it can bring those benefits to gamers. The RTX GPU launch brought ray-tracing, DLSS and Adaptive Shading, all of which utilise dedicated cores found within the Turing architecture. Now, Nvidia is leveraging AI again, this time to generate in-game graphics based on real-world cities.

While the technology shown off has plenty of appeal, it's still very much in its infancy. There's enough visual smearing to show the telltale mark of an imperfect AI at work, and the game built to showcase the technology is more a proof of concept than a fun or engaging experience. Nor is the process simple once you account for how much outside data, in the form of photos and videos, has to be collected to give the AI elements to work with. Teaching this system to create visuals with fantasy or unrealistic elements may be beyond what is currently feasible.

The current setup runs on a single Titan V rather than a traditional network of AI computers. The Titan V is a $3,000 graphics card, but the fact that this is technically feasible on a single machine at all is an impressive feat.

One of the most promising future applications for this new tech might be augmented reality. Many apps already take advantage of the high-megapixel cameras found on most consumer smartphones by overlaying digitally created graphics on a real-world image to create the illusion of a game or service coming to life. In most current applications there's a hard disconnect between the real world and the digital AR side, largely because those graphics fail to mimic the subtle nuances of the reality they pop up within.

For example, almost any AR game you can play on your phone right now feels distinctly unreal. That's not just a matter of phone power; the human mind flags things that seem out of the ordinary, and disbelief sets in when a game, a film or even a piece of art doesn't match up well to reality. By taking in sensor data through a phone camera, much as Nvidia's driving demo was trained on locale data, AR processes could fine-tune their graphics to better fit the lighting, shadows and atmosphere of the world they enhance.
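
As a rough illustration of that idea, the hedged sketch below estimates two simple lighting cues, overall brightness and colour cast, from a camera frame and uses them to relight an AR overlay. The functions and arrays here are illustrative assumptions, not any real AR framework's API.

```python
# A hedged sketch, not any vendor's AR API: estimate overall brightness and
# colour cast from a camera frame, then relight a flat AR overlay to match.
import numpy as np

def estimate_light(frame: np.ndarray) -> tuple[float, np.ndarray]:
    """Return (brightness, colour cast) for an RGB frame with values in [0, 1]."""
    brightness = float(frame.mean())
    cast = frame.reshape(-1, 3).mean(axis=0)    # average RGB of the scene
    cast = cast / max(float(cast.max()), 1e-6)  # normalise: strongest channel = 1
    return brightness, cast

def relight_overlay(overlay: np.ndarray, frame: np.ndarray) -> np.ndarray:
    """Scale an overlay's colours toward the real scene's lighting."""
    brightness, cast = estimate_light(frame)
    return np.clip(overlay * cast * brightness, 0.0, 1.0)

camera = np.random.rand(480, 640, 3)  # stand-in for a live phone camera frame
sprite = np.ones((64, 64, 3))         # a flat white AR element
lit = relight_overlay(sprite, camera) # now tinted and dimmed to fit the scene
```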

At the very least, it would give AR apps that rely on real-world data, like Facebook's ventures into gaming, a better set of tools for virtually modelling the faces of people we know in real life, helping to avoid that uncanny valley feeling.

Nvidia has been working towards various AI solutions for image and video manipulation, as shown just a few short weeks ago. We've previously seen randomly generated faces meant to mimic human celebrities without copying any distinct individual, and we're coming closer to a world where photorealistic computer-generated faces are more feasible than ever. Machine learning approaches to generating content that humans actually want to consume might follow close behind.
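
The recipe behind those generated faces is simple to state: sample a random latent vector, then decode it into an image with a trained generator network. The toy sketch below shows that sampling loop; the miniature untrained decoder is a stand-in assumption, not Nvidia's model.

```python
# Toy sketch of "sample a latent, decode an image" behind GAN face generation;
# this tiny untrained decoder stands in for a real trained generator.
import torch
import torch.nn as nn

latent_dim = 128
decoder = nn.Sequential(               # stand-in generator network
    nn.Linear(latent_dim, 32 * 32 * 3),
    nn.Tanh(),                         # pixel values in [-1, 1]
)

z = torch.randn(1, latent_dim)         # every fresh z decodes to a new image
with torch.no_grad():
    image = decoder(z).view(3, 32, 32)
print(image.shape)  # torch.Size([3, 32, 32])
```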

Take ‘Killer Bounce’ and ‘Death Walls’ as an interesting example. Created by a student and assistant professor team at Georgia Tech’s School of Interactive Computing, these two games were produced by showing an AI a series of classic games and letting it create its own titles with those classics as inspiration. They bear a striking resemblance to the kind of game that would come out of a basic programming tutorial, yet they required little to no human intervention to become a reality. They're fairly basic and their lasting appeal is slight, but they observe many of the basic rules a video game follows, without years of gaming culture to assist in their creation.

Something as simple as how a jump looks and feels, how controls should respond, or the idea that a platformer isn't complete without jumping on an enemy's head to defeat it all comes from years of trial and error. Consuming games media is one of the quickest ways to get a feel for what makes a game fun versus what makes one frustrating. In a way, this process cuts out part of the human element and offloads that work onto a machine. Current applications are mostly novelties, but in the future these AI systems could help designers develop portions of their games based on past successes in similar genres.

We're still quite a few years away from machines creating endless entertainment for humans, but Nvidia's strides towards blurring the line between man-made and auto-generated content seem to be steps in the right direction. Even if machine-created graphics never make it into triple-A titles, the ability to reduce the need for human eyes on smaller projects could prove invaluable.

