Photorealistic faces in video games have been with us for a while, but they are still a long way from what you would call realistic. Even though we have the polygons and textures almost perfected, facial animations remain stuck deep in the uncanny valley.
This is not a criticism of animators and modelers – mimicking the nuances of human facial expression is clearly very hard work, and teams have come a long way from GoldenEye through Half-Life 2 to Yakuza. But it is what it is. Still, progress is being made, and while we’ve all gotten used to accepting that video game characters can seem fairly real until they open their mouths or try to look scared, one day we’ll reach the point where the uncanny valley is bridged and they simply seem real.
This is one of the ways we could get there. You’re looking at the work of visual effects studio Ziva Dynamics, whose Unreal Engine-based examples here show off facial animation technology capable of some really outstanding things. As an example, look at the model on the left, generated from the real-time capture on the right:
This real-time material looks good, even excellent, but not much beyond what we are used to at the moment. Now look at what happens when you run the same model through a few terabytes of reference data and some machine learning:
Oh my. While Ziva works across the entertainment industry, from television to movies, this technology has the video game industry in particular in mind, as it would allow developers to convey a wide range of facial emotions that are difficult, if not impossible, to express with current techniques alone.
For a slightly more technical example, here’s an industry-focused teaser from the company, which ultimately shows one of the most promising side effects of the approach: it can work across different ethnicities and even (fictitious) species: