Nvidia introduces AI for generating video conference talking heads from 2D images
Nvidia AI researchers have introduced a model that generates talking heads for video conferences from a single 2D image. The team says the model is capable of a wide range of manipulations, from rotating and moving a person's head to motion transfer and video reconstruction. The AI treats the first frame of a video as a 2D reference photo and uses an unsupervised learning method to extract 3D keypoints from the video. In addition to outperforming other approaches in tests on benchmark datasets, the AI achieves H.264-quality video using one-tenth of the bandwidth previously required. Nvidia research scientists Ting-Chun Wang, Arun Mallya, and Ming-Yu Liu published a paper about the model on Monday.
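The bandwidth savings come from transmitting only compact per-frame keypoint data instead of full encoded frames. A rough back-of-envelope sketch of that idea follows; the keypoint count, float sizes, and H.264 bitrate here are illustrative assumptions, not figures from Nvidia's paper:

```python
# Back-of-envelope comparison of per-frame payload sizes for a
# keypoint-based talking-head codec vs. a conventional H.264 stream.
# All numeric parameters are illustrative assumptions.

def keypoint_payload_bytes(num_keypoints=15, floats_per_keypoint=3,
                           bytes_per_float=4):
    """Bytes per frame if only 3D keypoints are transmitted."""
    return num_keypoints * floats_per_keypoint * bytes_per_float

def h264_payload_bytes(bitrate_kbps=500, fps=30):
    """Average bytes per frame for an H.264 stream at a given bitrate."""
    return bitrate_kbps * 1000 / 8 / fps

kp = keypoint_payload_bytes()    # 180 bytes/frame
h264 = h264_payload_bytes()      # ~2083 bytes/frame
print(f"keypoints: {kp} B/frame, H.264: {h264:.0f} B/frame, "
      f"ratio: {h264 / kp:.1f}x")
```

Under these assumed numbers the keypoint stream is roughly an order of magnitude smaller per frame, which is consistent in spirit with the one-tenth bandwidth claim; the real system also transmits the single reference image once at the start of the call.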
Dec-11-2020, 18:06:11 GMT
- Country:
- North America > United States (0.18)
- Industry:
- Information Technology > Hardware (0.91)
- Technology:
- Information Technology
- Artificial Intelligence
- Machine Learning (0.74)
- Natural Language > Chatbot (0.62)
- Communications
- Collaboration (0.76)
- Networks (0.62)
- Social Media (0.80)