Posture Recognition using Kinect, Azure IoT, ML and WebVR
With the recent success of depth cameras such as Kinect, gesture and posture recognition has become much easier. Using depth sensors, the 3D locations of body joints can be reliably extracted and fed into any machine learning framework, so specific gestures or postures can be modelled and inferred. Real-world applications in Virtual Reality include Yoga, ballet training, golf, and anything else related to activity recognition and proper posture. I also see applications in the Architecture, Engineering, Construction and Manufacturing industries, where depth-sensor data could be sent to the cloud to verify correct configurations. This is a proof of concept that detects the poses "Y", "M", "C" and "A" and streams the result back to the browser. This video explains a little about how it is hooked up together.
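To illustrate the idea, here is a minimal sketch of how extracted joint coordinates could be turned into a pose label. The joint names, the `(x, y, z)` layout, and the angle thresholds are all assumptions for illustration; they are not Kinect SDK output, and in the actual proof of concept a trained ML model would replace these hand-written rules.

```python
import math

def arm_angle(shoulder, wrist):
    """Angle of the shoulder->wrist vector above the horizontal, in degrees.

    Joints are hypothetical (x, y, z) tuples in meters, with y pointing up,
    roughly matching Kinect's camera-space convention.
    """
    dx = wrist[0] - shoulder[0]
    dy = wrist[1] - shoulder[1]
    return math.degrees(math.atan2(dy, abs(dx)))

def classify_pose(joints):
    """Very rough heuristic classifier for two of the YMCA poses.

    `joints` maps illustrative names like 'shoulder_left' to (x, y, z)
    tuples; the 30-degree thresholds are arbitrary stand-ins for what a
    trained classifier would learn from data.
    """
    left = arm_angle(joints["shoulder_left"], joints["wrist_left"])
    right = arm_angle(joints["shoulder_right"], joints["wrist_right"])
    if left > 30 and right > 30:
        return "Y"  # both arms raised diagonally upward
    if left < -30 and right < -30:
        return "M"  # both wrists dropped below the shoulders
    return "unknown"

# Example frame: both wrists above and outside the shoulders -> "Y"
pose = classify_pose({
    "shoulder_left": (-0.2, 1.4, 2.0), "wrist_left": (-0.6, 1.8, 2.0),
    "shoulder_right": (0.2, 1.4, 2.0), "wrist_right": (0.6, 1.8, 2.0),
})
print(pose)
```

In the real pipeline this classification step would run in the cloud (Azure IoT plus an ML service), with the resulting label streamed back to the WebVR client in the browser.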
Sep-20-2016, 21:10:46 GMT