Assessing AI system performance: thinking beyond models to deployment contexts - Microsoft Research

#artificialintelligence

AI systems are becoming increasingly complex as we move from visionary research to deployable technologies such as self-driving cars, clinical predictive models, and novel accessibility devices. Compared with single AI models, it is harder to assess whether these more complex AI systems are performing consistently and as intended to realize human benefit. How do we know when these more advanced systems are 'good enough' for their intended use? When assessing the performance of AI models, we often rely on aggregate performance metrics like percentage of accuracy. But this ignores the many, often human, elements that make up an AI system. Our research on what it takes to build forward-looking, inclusive AI experiences has demonstrated that getting to 'good enough' requires multiple performance assessment approaches at different stages of the development lifecycle, based upon realistic data and key user needs (figure 1).


Microsoft's Newest AI Technology, "PeopleLens," is Helping Blind People See

#artificialintelligence

Microsoft debuted a slew of new AI technologies at its annual Ignite conference. One of the most interesting is an AI system called "PeopleLens." PeopleLens is a platform that uses computer vision algorithms to help blind people engage with their social surroundings. The system is designed to identify and interpret objects in the user's environment and relay those details back to the user in a way they can understand. This opens a world of possibilities for blind people, who until now have been largely cut off from such social interaction. With PeopleLens, they can participate in conversations, navigate their surroundings, and generally experience the world in a way that was once impossible.