Human-Centered Machine Learning - Summary
As more and more experiences are built with ML, UXers still have a lot to learn about how to make users feel in control of the technology, and not the other way around. In the Google UX community, we've started an effort called "human-centered machine learning" (HCML) to help focus and guide that conversation. Born out of our work with UX and AI teams at Google (and a healthy dose of trial and error), these points will help you put the user first, iterate quickly, and understand the unique opportunities ML creates. When doing user research with early mockups, have participants bring in some of their own data. People may form "conspiracy theories" -- incorrect or incomplete mental models of a system -- and run into problems trying to manipulate its outputs according to these imaginary rules.
Human-Centered Machine Learning
Machine learning (ML) is the science of helping computers discover patterns and relationships in data instead of being manually programmed. It's a powerful tool for creating personalized and dynamic experiences, and it's already driving everything from Netflix recommendations to autonomous cars. But as more and more experiences are built with ML, it's clear that UXers still have a lot to learn about how to make users feel in control of the technology, and not the other way round. As was the case with the mobile revolution, and the web before that, ML will cause us to rethink, restructure, displace, and consider new possibilities for virtually every experience we build. In the Google UX community, we've started an effort called "human-centered machine learning" (HCML) to help focus and guide that conversation.
Towards Human-Centered Machine Learning
Please join us this evening, October 29th, to discuss interpretable machine learning and the techniques behind building white-box models. Machine learning systems are used today to make life-altering decisions about employment, bail, parole, and lending, and the scope of decisions delegated to these systems seems likely only to expand. Unfortunately, such systems raise serious concerns about discrimination, privacy, and even accuracy. Many researchers and practitioners are tackling disparate impact, inaccuracy, privacy violations, and security vulnerabilities with a number of brilliant, but often siloed, approaches.
Human-Centered Machine Learning
Our April meetup features a presentation, Human-Centered Machine Learning, by Patrick Hall of H2O.ai. After a half-hour of networking and refreshments courtesy of meetup sponsor Allegis Group, our program starts at 6:30 pm. Patrick's presentation illustrates how to combine innovations from several sub-disciplines of machine learning research -- fair models, directly interpretable Bayesian or constrained machine learning models, and post-hoc explanations -- to train transparent, observationally fair, trustworthy, and accurate predictive modeling systems. Additional techniques from fairness research can be used to check model predictions for disparate impact, and to preprocess data and post-process predictions to ensure demographic parity.
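As a minimal sketch of what such a disparate impact check can look like (not the presenter's code; the function and data here are hypothetical, and the 0.8 threshold is the common "four-fifths rule" of thumb from US employment guidelines):

```python
# Checking a model's predictions for disparate impact: compare the rate of
# favorable outcomes received by a protected group against a reference group.
# Ratios below 0.8 (the "four-fifths rule") are commonly flagged for review.

def disparate_impact_ratio(outcomes, groups, protected, reference):
    """Ratio of positive-outcome rates: protected group vs. reference group."""
    def positive_rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)
    return positive_rate(protected) / positive_rate(reference)

# Toy data: 1 = favorable prediction (e.g. loan approved), 0 = unfavorable.
outcomes = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0]
groups   = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

ratio = disparate_impact_ratio(outcomes, groups, protected="a", reference="b")
print(f"disparate impact ratio: {ratio:.2f}")
flagged = ratio < 0.8  # four-fifths rule: flag if below 0.8
```

In practice this kind of check is run per protected attribute and per favorable-outcome definition; post-processing for demographic parity then adjusts predictions (for example, per-group decision thresholds) until the positive-outcome rates are comparable across groups.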