Virtual Stage. How was it possible?
Background Matting is based on a brand-new technique from the University of Washington. Due to the lack of labeled training data portraying standing humans, the original AI was trained on 512×512 square images and videos cropped at hip or knee length, which results in poor quality when matting full-HD videos of standing humans.

To get a high-quality foreground in zones like hair, hands, or feet, we have made two major contributions to the original method. First, we have replaced the original segmentation step with the AI models of the Azure Body Tracking SDK, obtaining a segmentation that is more tolerant of color similarities and ambiguous zones of the image. Second, we split the body into two square images with a small overlap and process them separately. This allows the model to "see" better in difficult zones, like the shadow between the feet, without losing precision in hair or hands.

To download the code, test it, or get more technical details, please check GitHub. If you want to know more about our technical marketing services, visit this page! For more information about the technology we have developed in collaboration with Microsoft Corp, Contact Us!
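To illustrate the second contribution, here is a minimal sketch (not the actual production code) of how a tall portrait frame can be split into two overlapping square crops and how the two resulting alpha mattes can be recombined. The function names, the linear blend across the overlap band, and the NumPy representation are our assumptions for illustration only:

```python
import numpy as np

def split_with_overlap(frame):
    """Split a portrait frame (H > W) into two W x W square crops
    that share a vertical overlap band of 2*W - H rows."""
    h, w, _ = frame.shape
    assert w < h <= 2 * w, "expects a portrait frame with overlapping crops"
    top = frame[:w]          # upper square: head, hair, hands
    bottom = frame[h - w:]   # lower square: legs, feet, shadows
    return top, bottom

def merge_with_overlap(alpha_top, alpha_bottom, full_height):
    """Recombine two W x W alpha mattes into one full-height matte,
    linearly blending the predictions inside the shared band."""
    w = alpha_top.shape[1]
    overlap = 2 * w - full_height          # rows covered by both crops
    out = np.zeros((full_height, w), dtype=np.float32)
    out[:w - overlap] = alpha_top[:w - overlap]          # top-only rows
    out[w:] = alpha_bottom[overlap:]                     # bottom-only rows
    # linear ramp across the shared band avoids a visible seam
    ramp = np.linspace(0.0, 1.0, overlap, dtype=np.float32)[:, None]
    out[w - overlap:w] = ((1.0 - ramp) * alpha_top[w - overlap:]
                          + ramp * alpha_bottom[:overlap])
    return out
```

For a 1080×1920 full-HD portrait frame, this yields two 1080×1080 crops sharing a 240-row band, so each half can be matted at the model's native square resolution before the mattes are stitched back together.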
Oct-24-2020, 19:15:13 GMT