explainability


Top Responsible AI (Artificial Intelligence) Tools in 2022

#artificialintelligence

A governance paradigm called "responsible AI" describes how a particular organization handles the ethical and legal issues around artificial intelligence (AI). Responsible AI initiatives are primarily motivated by the need to clarify who is accountable if something goes wrong. The data scientists and software engineers who create and implement an organization's AI algorithmic models are responsible for developing appropriate, reliable AI standards, which means each organization's procedures for preventing bias and ensuring transparency will differ. Supporters of responsible AI believe that a widely accepted governance framework of AI best practices, much as ITIL provided a common framework for delivering IT services, would make it simpler for organizations worldwide to ensure that their AI programming is human-centered, interpretable, and explainable.


Intuit Director of Data Science Provides Inside Look at Company

#artificialintelligence

When Diane Chang began working at Intuit, maker of TurboTax and QuickBooks, more than a decade ago in 2009 as a data scientist, there were only a few other people performing that role at the company. Being a data scientist back then reminded her of her time at a consulting firm. "We had to convince people that they should work with us, and that they'd want to work with us, and that we could help provide value. There was a lot of selling initially and explaining and describing what we could do," she says. "There's more demand than there is supply for data scientists." Chang, now director of data science at the company, provided an inside view into how Intuit is leveraging data science today. The company says it has evolved into an AI-driven platform company. Chang offered insights into the business trends that have impacted the company and the data science trends that are getting the attention of the C-suite. One of the major business trends that has impacted Intuit's ...


How ML Model Explainability Accelerates the AI Adoption Journey for Financial Services - KDnuggets

#artificialintelligence

Financial services firms are increasingly employing artificial intelligence to improve not just their operational processes but also business-related tasks, including assigning credit scores, identifying fraud, optimizing investment portfolios, and supporting innovation. AI improves the speed, precision, and efficacy of human efforts in these operations, and it can automate data management chores that are currently done manually. However, as AI advances, new challenges arise. The real issue is transparency: when few or no people understand the reasoning behind an AI model, its algorithms may inadvertently bake in bias or fail. This has accelerated the need for explainability in ML models across industries.
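In practice, explainability work often starts with per-feature attributions. Below is a minimal sketch, not tied to any firm's actual pipeline, using scikit-learn's permutation importance on a synthetic stand-in for a credit-risk model; the feature names are illustrative assumptions.

```python
# Minimal post-hoc explainability sketch for a credit-risk style model.
# The dataset and feature names are synthetic stand-ins, not real credit data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["utilization", "delinquencies", "inquiries", "age_of_file", "balance"]
X, y = make_classification(n_samples=2000, n_features=5, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: drop in held-out accuracy when each feature is shuffled,
# a simple global indication of which inputs drive the model's decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```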


Can fairness be automated with AI? A deeper look at an essential debate

#artificialintelligence

In part one ("Can we measure fairness?"), I examined some noted ethicists' opinions about fairness measurement and found some reasonable and some incomplete. In this article, I will begin with an example that was in dire need of fairness assessment. I will also introduce another method for fairness assessment. Finally, I'll try to resolve some differences of opinion between Reid Blackman, myself, and some Oxford scholars. I want to start with an example where the fairness measurement described in Part 1 could have avoided nearly catastrophic results.
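To make "fairness measurement" concrete, here is a small sketch of two commonly used group metrics, demographic parity difference and equal-opportunity difference. The arrays are hypothetical predictions and group labels, not data from the case discussed above.

```python
import numpy as np

# Hypothetical model outputs for a protected attribute with two groups (0/1).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

def selection_rate(pred, mask):
    """Fraction of individuals in a group who receive the positive outcome."""
    return pred[mask].mean()

def tpr(true, pred, mask):
    """True positive rate within a group."""
    positives = mask & (true == 1)
    return pred[positives].mean()

# Demographic parity difference: gap in positive-outcome rates between groups.
dp_diff = selection_rate(y_pred, group == 0) - selection_rate(y_pred, group == 1)

# Equal opportunity difference: gap in true positive rates between groups.
eo_diff = tpr(y_true, y_pred, group == 0) - tpr(y_true, y_pred, group == 1)

print(f"demographic parity difference: {dp_diff:.2f}")
print(f"equal opportunity difference:  {eo_diff:.2f}")
```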


Explainable AI Unleashes the Power of Machine Learning in Banking

#artificialintelligence

Explainability has taken on more urgency at many banks as a result of increasingly complex AI algorithms, many of which have become critical to the deployment of advanced AI applications in banking, such as facial or voice recognition, securities trading, and cybersecurity. The complexity is due to greater computing power, the explosion of big data, and advances in modeling techniques such as neural networks and deep learning. Several banks are establishing special task forces to spearhead explainability initiatives in coordination with their AI teams and business units. They are also stepping up their oversight of vendor solutions as the use of automated machine learning capabilities continues to grow considerably. Explainability is also becoming a more pressing concern for banking regulators who want to be assured that AI processes and outcomes can be reasonably understood by bank employees.


FICO Announces Winners of Inaugural xML Challenge

#artificialintelligence

FICO, the leading provider of analytics and decision management technology, together with Google and academics at UC Berkeley, Oxford, Imperial, UC Irvine, and MIT, has announced the winners of the first xML Challenge at the 2018 NeurIPS workshop on Challenges and Opportunities for AI in Financial Services. Participants were challenged to create machine learning models with both high accuracy and explainability using a real-world dataset provided by FICO. Sanjeeb Dash, Oktay Günlük, and Dennis Wei, representing IBM Research, were this year's challenge winners. The winning team received the highest score in an empirical evaluation that considered how useful the explanations are to a data scientist with domain knowledge in the absence of the model's prediction, as well as how long it takes such a data scientist to go through the explanations. For their achievements, the IBM team earned a $5,000 prize.


Using Human-in-the-Loop Approach in Machine Learning

#artificialintelligence

Did you hear of the self-driving Uber car that hit and killed a woman in Arizona? On another occasion, a facial recognition solution profiled an innocent man of color as a criminal in New Jersey, and Amazon's AI-powered recruitment tool displayed bias against female candidates. Clearly, artificial intelligence makes mistakes. So, how can we still get the benefits of AI while eliminating these types of errors? One option is letting human experts train, evaluate, and monitor AI business solutions after deployment.
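One common human-in-the-loop pattern is confidence-based routing: the model handles clear-cut cases and defers uncertain ones to a person. The sketch below is illustrative; the threshold value and review function are assumptions, not part of any specific product described above.

```python
# Confidence-based routing sketch: defer low-confidence predictions to a human.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.85  # assumed cutoff; tune against real error costs

@dataclass
class Decision:
    label: str
    source: str  # "model" or "human"

def human_review(item) -> str:
    """Placeholder for a real review queue (ticket, annotation UI, etc.)."""
    return "needs_human_label"

def decide(item, model_label: str, confidence: float) -> Decision:
    if confidence >= REVIEW_THRESHOLD:
        return Decision(model_label, "model")
    # Low confidence: defer to a human and keep the example for retraining.
    return Decision(human_review(item), "human")

print(decide({"id": 1}, "approve", 0.97))  # handled by the model
print(decide({"id": 2}, "approve", 0.55))  # routed to a reviewer
```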


The Explainable AI Imperative Amid Global AI Regulation

#artificialintelligence

The General Data Protection Regulation (GDPR) was a big first step toward giving consumers control of their data. As powerful as this privacy initiative is, a new personal data challenge has emerged. Now, privacy concerns are focused on what companies are doing with data once they have it. This is due to the rise of artificial intelligence (AI) as neural networks accelerate the exploitation of personal data and raise new questions about the need for further regulation and safeguarding of privacy rights. Core to the concern about data privacy are the algorithms used to develop AI models.


Understanding Multilevel Models (Artificial Intelligence)

#artificialintelligence

Abstract: Multilevel linear models allow flexible statistical modelling of complex data with different levels of stratification. Identifying the most appropriate model from the large set of possible candidates is a challenging problem. In the Bayesian setting, the standard approach is a comparison of models using the model evidence or the Bayes factor. However, in all but the simplest of cases, direct computation of these quantities is impossible. Monte Carlo approaches, such as sequential Monte Carlo, are widely used, but it is not always clear how well such techniques perform in practice.
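For context, a standard two-level (random-intercept) linear model and the Bayes factor the abstract refers to can be written as follows; these are generic textbook forms, not equations taken from the paper itself.

```latex
% Two-level (random-intercept) linear model and the Bayes factor for comparing
% models M_1 and M_2 via their marginal likelihoods (model evidences).
\begin{align}
  y_{ij} &= \beta_0 + u_j + \boldsymbol{\beta}^{\top}\mathbf{x}_{ij} + \varepsilon_{ij},
  \qquad u_j \sim \mathcal{N}(0, \sigma_u^2), \quad
  \varepsilon_{ij} \sim \mathcal{N}(0, \sigma^2), \\
  \mathrm{BF}_{12} &= \frac{p(\mathbf{y} \mid M_1)}{p(\mathbf{y} \mid M_2)},
  \qquad p(\mathbf{y} \mid M_k) = \int p(\mathbf{y} \mid \theta_k, M_k)\, p(\theta_k \mid M_k)\, d\theta_k .
\end{align}
```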


How Banks Can Shed Light on the 'Black Box' of AI Decision-Making

#artificialintelligence

The use of artificial intelligence technology in banking has great potential, much of it still untapped. Its use in powering chatbots and digital assistants through natural language processing is one of the best-known AI applications. AI can also be used as part of data analytics, helping banks and credit unions detect fraud more quickly on the one hand and create more personalized customer messaging and offers on the other. Significantly, AI can help institutions -- bank and nonbank -- make faster lending decisions. However, there is a downside to the use of artificial intelligence, the consequences of which loom ominously for banks and credit unions.