Novel techniques extract more accurate data from images degraded by environmental factors
Computer vision technology is increasingly used in areas such as automatic surveillance systems, self-driving cars, facial recognition, healthcare, and social distancing tools. Users need accurate and reliable visual information to fully harness the benefits of video analytics applications, but the quality of video data is often degraded by environmental factors such as rain, night-time conditions, or crowds (where multiple people overlap in a scene).

Using computer vision and deep learning, a team of researchers led by Yale-NUS College Associate Professor of Science (Computer Science) Robby Tan, who is also from the National University of Singapore's (NUS) Faculty of Engineering, has developed novel approaches that address low-level vision problems in videos caused by rain and night-time conditions, and that improve the accuracy of 3D human pose estimation in videos. The research was presented at the 2021 Conference on Computer Vision and Pattern Recognition (CVPR), a top-ranked computer science conference.

Night-time images are affected by low light and man-made light effects such as glare, glow, and floodlights, while rain images are affected by rain streaks or rain accumulation (also known as the rain veiling effect).
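To make the rain degradation concrete, the deraining literature commonly models an observed rainy image as a clean background plus additive rain streaks, attenuated by a transmission map that produces the veiling (rain accumulation) effect. The sketch below is a minimal illustration of that generic model, not the authors' actual method; the function name and toy values are invented for demonstration.

```python
import numpy as np

def degrade_with_rain(clean, streaks, transmission, atmospheric_light=1.0):
    """Toy rain degradation model (common in deraining literature):
        observed = (clean + streaks) * t + A * (1 - t)
    where `streaks` are additive rain streaks and the transmission map
    `t` produces the veiling (rain accumulation) effect."""
    observed = (clean + streaks) * transmission \
        + atmospheric_light * (1.0 - transmission)
    return np.clip(observed, 0.0, 1.0)  # keep intensities in [0, 1]

# Toy example: 4x4 grayscale image with one vertical rain streak.
rng = np.random.default_rng(0)
clean = rng.uniform(0.2, 0.8, size=(4, 4))   # synthetic clean background
streaks = np.zeros((4, 4))
streaks[:, 1] = 0.5                          # a single bright streak column
t = np.full((4, 4), 0.7)                     # uniform veiling strength
observed = degrade_with_rain(clean, streaks, t)
```

A deraining network learns the inverse mapping: given `observed`, recover `clean` by removing both the streaks and the veiling haze.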
Jul-20-2021, 07:55:19 GMT