An entropy-optimal path to humble AI
Bassetti, Davide, Pospíšil, Lukáš, Groom, Michael, O'Kane, Terence J., Horenko, Illia
Progress in AI has led to the creation of very successful, but by no means humble, models and tools, especially regarding (i) the huge and still exploding costs and resources they demand, and (ii) the over-confidence of these tools in the answers they provide. Here we introduce a novel mathematical framework for a non-equilibrium entropy-optimizing reformulation of Boltzmann machines based on the exact law of total probability. It results in a highly performant, but much cheaper, gradient-descent-free learning framework with mathematically justified existence and uniqueness criteria and measures of answer confidence/reliability. Comparisons to state-of-the-art AI tools in terms of performance, cost, and model descriptor length on a set of synthetic problems of varying complexity reveal that the proposed method yields more performant and slimmer models, with descriptor lengths very close to the intrinsic complexity-scaling bounds of the underlying problems. Applying this framework to historical climate data results in models with systematically higher prediction skill for the onsets of the La Niña and El Niño climate phenomena, requiring just a few years of climate data for training - a small fraction of what contemporary climate prediction tools need.
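The core construction described in the abstract above can be sketched in generic notation. The symbols below (latent states $k$, features $x$, outcomes $y$) are illustrative assumptions for exposition, not the paper's actual formulation:

```latex
% Exact law of total probability over K latent states:
P(y \mid x) \;=\; \sum_{k=1}^{K} P(y \mid k)\, P(k \mid x),
\qquad \sum_{k=1}^{K} P(k \mid x) \;=\; 1,\quad P(k \mid x) \ge 0 .

% Entropy-optimal fit (schematic): among all latent assignments
% consistent with the data, prefer the maximum-entropy one,
\max_{\{P(k \mid x)\}} \;\; -\sum_{k=1}^{K} P(k \mid x)\,\log P(k \mid x)
\quad \text{s.t. data-fit constraints on } P(y \mid x),
% which yields a convex problem solvable without gradient descent.
```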
Humble AI in the real-world: the case of algorithmic hiring
Nair, Rahul, Vejsbjerg, Inge, Daly, Elizabeth, Varytimidis, Christos, Knowles, Bran
Humble AI (Knowles et al., 2023) argues for caution in AI development and deployment through scepticism (accounting for the limitations of statistical learning), curiosity (accounting for unexpected outcomes), and commitment (accounting for multifaceted values beyond performance). We present a real-world case study of humble AI in the domain of algorithmic hiring. Specifically, we evaluate virtual screening algorithms in a widely used hiring platform that matches candidates to job openings. Such contexts pose several challenges of misrecognition and stereotyping that are difficult to assess through standard fairness and trust frameworks; e.g., someone with a non-traditional background is less likely to rank highly. We demonstrate the technical feasibility of translating humble AI principles into practice through uncertainty quantification of ranks, entropy estimates, and a user experience that highlights algorithmic unknowns. We describe preliminary discussions with focus groups made up of recruiters. Future user studies will evaluate whether the higher cognitive load of a humble AI system fosters a climate of trust in its outcomes.
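The entropy estimates mentioned in the abstract above can be illustrated with a minimal sketch: treat normalized screening scores as a distribution over candidates and report its Shannon entropy relative to the uniform maximum. The function names and score values here are hypothetical, not the platform's actual API:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def rank_uncertainty(scores):
    """Normalize screening scores into a distribution over candidates
    and return its entropy divided by the maximum (uniform) entropy.
    Values near 1.0 flag a ranking the system should be humble about."""
    total = sum(scores)
    probs = [s / total for s in scores]
    return shannon_entropy(probs) / math.log2(len(scores))

# One clear front-runner vs. a near-tie among candidates
confident = rank_uncertainty([0.9, 0.05, 0.03, 0.02])
uncertain = rank_uncertainty([0.26, 0.25, 0.25, 0.24])
```

A user experience could surface this ratio directly, e.g. withholding a hard ranking and showing "these candidates are statistically indistinguishable" whenever the value exceeds some threshold.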
Humble AI
One of the central uses of artificial intelligence (AI) is to make predictions. The ability to learn statistical relationships within enormous datasets enables AI, given a set of current conditions or features, to predict future outcomes, often with exceptional accuracy. Increasingly, AI is being used to make predictions about individual human behavior in the form of risk assessments. Algorithms are used to estimate the likelihood that an individual will fully repay a loan, appear at a bail hearing, or safeguard children. These predictions are used to guide decisions about whether vital opportunities (to access credit, to await trial at home rather than while incarcerated, or to retain custody) are extended or withdrawn. An adverse decision--for instance, a denial of credit based on a prediction of probable loan default--has negative consequences for the decision subject, both in the near term and into the quite distant future (see the sidebar on credit scoring for an example).
The Problem With Biased AIs (and How To Make AI Better)
AI has the potential to deliver enormous business value for organizations, and its adoption has been sped up by the data-related challenges of the pandemic. Forrester estimates that almost 100% of organizations will be using AI by 2025, and the artificial intelligence software market will reach $37 billion by the same year. But there is growing concern around AI bias -- situations where AI makes decisions that are systematically unfair to particular groups of people. Researchers have found that AI bias has the potential to cause real harm. I recently had the chance to speak with Ted Kwartler, VP of Trusted AI at DataRobot, to get his thoughts on how AI bias occurs and what companies can do to make sure their models are fair.
Why Your AI Must Be Humble
The past year has truly shown manufacturers that they need to transform, and at a much faster rate than before. AI provides speed by increasing your rate of learning and augmenting employees. But while we've put to rest the argument over whether AI will be transformational for business, many manufacturers are still trying to figure out how AI will be most useful, and where to start. "The big lie about AI is that AI alone will save you," said Colin Parris, Senior Vice President and Chief Technology Officer at GE Digital. "But AI only works when you embed it inside a business process."
Act Now to Prevent Regulatory Derailment of the AI Boom - RTInsights
If businesses do not adopt their own best practices in addressing AI transparency and bias issues, they may not have a choice in the future. Continuous intelligence (CI) relies on the use of artificial intelligence (AI) and machine learning (ML) to derive actionable information in milliseconds to minutes from streaming data. Adoption is booming, but one obstacle could derail industry efforts if not addressed immediately. That obstacle is regulatory oversight and interference. The looming problem relates to the way AI and ML are used in CI applications today.
How GE uses a 'Humble AI' approach to manufacturing
Colin Parris has a challenging job. As the vice president of software and analytics research at General Electric, Parris must evaluate new technology and applications that can benefit the manufacturing giant. All of which must work within the framework that GE employs when assessing safety and efficiency known as Humble AI. But even after his own rigorous evaluation and approval process there is no easy way to get buy-in from the rest of the company for new ways of doing business. With huge investments in aviation systems, energy and healthcare, GE is always looking for ways to use technology to improve operations, deliver products faster and better anticipate problems along the way -- all areas where AI can potentially be of use.