AI Might Soon Make it Easy to Put Yourself In Your Favorite Video Game

In a rare move by Facebook that doesn’t have the world shaking its head in disbelief, the company’s AI researchers have developed a way to easily turn real people into playable game characters by simply analyzing videos of them going through specific motions. My dreams of finally being an unlockable character in NBA Jam just got one step closer.

Using footage of real people to help create a video game is far from a new idea. In the ‘90s, Sega’s Time Traveler, billed as the first holographic video game, pieced together a gameplay experience by playing pre-recorded clips based on the player’s choices. The earliest versions of Mortal Kombat were also created by filming costumed actors on a sound stage, but the footage was then converted into animated sprites to ensure the game played smoothly. Today, most video game characters are realized as fully three-dimensional models, and while players can spend hours customizing a character’s appearance to reflect their own, even mapping their own faces onto it, the character’s movements are still based on stock animations.

This recently published research from Facebook’s AI Research division could change all that. Two different neural networks were trained on footage, just five to eight minutes in length, of someone performing a specific action, like playing tennis. The first network, Pose2Pose, analyzes the footage and extracts the person going through the motions. The second network, Pose2Frame, takes the extracted person, along with the shadows and reflections they create, and overlays them onto a new background, which could be a rendered video game locale.
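
To make the two-stage idea concrete, here is a minimal PyTorch sketch of how a pipeline like this could be wired together. The module names mirror the paper’s Pose2Pose and Pose2Frame split, but the layer sizes, tensor shapes, and control-signal format are illustrative assumptions, not Facebook’s actual architecture.

```python
# Hypothetical sketch of a two-stage "pose then render" pipeline.
import torch
import torch.nn as nn

class Pose2Pose(nn.Module):
    """Predicts the character's next pose from the current pose and a control signal (assumed interface)."""
    def __init__(self, pose_dim=34, control_dim=2, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim + control_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, pose, control):
        # pose: (batch, pose_dim) keypoint coordinates; control: (batch, control_dim), e.g. a joystick direction
        return self.net(torch.cat([pose, control], dim=-1))

class Pose2Frame(nn.Module):
    """Renders the posed character plus a blending mask over a chosen background (assumed interface)."""
    def __init__(self, pose_dim=34, frame_size=64):
        super().__init__()
        self.frame_size = frame_size
        # 4 output channels: RGB for the character, plus an alpha mask for shadows/reflections
        self.net = nn.Sequential(
            nn.Linear(pose_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 4 * frame_size * frame_size),
        )

    def forward(self, pose, background):
        out = self.net(pose).view(-1, 4, self.frame_size, self.frame_size)
        rgb, alpha = out[:, :3], torch.sigmoid(out[:, 3:4])
        # Composite the generated character onto the new background
        return alpha * rgb + (1 - alpha) * background

# Wiring the two stages together for one step of "gameplay":
pose2pose, pose2frame = Pose2Pose(), Pose2Frame()
pose = torch.zeros(1, 34)                  # current pose (17 keypoints x 2 coords)
control = torch.tensor([[1.0, 0.0]])       # e.g. "move right"
background = torch.rand(1, 3, 64, 64)      # any rendered game locale
next_pose = pose2pose(pose, control)
frame = pose2frame(next_pose, background)  # (1, 3, 64, 64) composited frame
```

In the real system the networks are trained on the extracted footage so the rendered character moves, and casts shadows, the way the filmed person does; the sketch above only shows how the pose-prediction and frame-synthesis stages hand data to each other.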

The results aren’t quite as smooth or fluid as the detailed 3D video game characters modern consoles can generate, but they are completely controllable. As this research evolves, the results will undoubtedly improve, but a hybrid approach might be even better: the AI could extract the characteristics of someone in a video, including the nuances of how they move, and automatically apply them to a custom 3D character, saving players from painstakingly making hundreds of tweaks themselves. It won’t be useful just for video games, either. As the world moves toward more virtual reality experiences (remember, Facebook owns Oculus), it would make creating believable avatars of ourselves much easier. Your friend could shoot a smartphone video of you dancing for a few seconds, and a few minutes later you’d appear just as awkward in a virtual world.

[Vid2Game: Controllable Characters Extracted from Real-World Videos (PDF) via VentureBeat]