Unleashing the power of machine learning models in banking through explainable artificial intelligence (XAI)

#artificialintelligence

The "black-box" conundrum is one of the biggest roadblocks preventing banks from executing their artificial intelligence (AI) strategies. It's easy to see why: Picture a large bank known for its technology prowess designing a new neural network model that predicts creditworthiness among the underserved community more accurately than any other algorithm in the marketplace. This model processes dozens of variables as inputs, including never-before-used alternative data. The developers are thrilled, senior management is happy that they can expand their services to the underserved market, and business executives believe they now have a competitive differentiator. But there is one pesky problem: The developers who built the model cannot explain how it arrives at the credit outcomes, let alone identify which factors had the biggest influence on them.


Anthropic's quest for better, more explainable AI attracts $580M – TechCrunch

#artificialintelligence

Less than a year ago, Anthropic was founded by former OpenAI VP of research Dario Amodei, intending to perform research in the public interest on making AI more reliable and explainable. Its $124 million in funding was surprising then, but nothing could have prepared us for the company raising $580 million less than a year later. "With this fundraise, we're going to explore the predictable scaling properties of machine learning systems, while closely examining the unpredictable ways in which capabilities and safety issues can emerge at-scale," said Amodei in the announcement. His sister Daniela, with whom he co-founded the public benefit corporation, said that having built out the company, "We're focusing on ensuring Anthropic has the culture and governance to continue to responsibly explore and develop safe AI systems as we scale." Because that's the problem category Anthropic was formed to examine: how to better understand the AI models increasingly in use in every industry as they grow beyond our ability to explain their logic and outcomes.


Explainable AI: Language Models

#artificialintelligence

Just like a coin, explainability in AI has two faces -- one it shows to the developers (who actually build the models) and the other to the users (the end customers). The former face (intrinsic explainability, or IE) is a technical indicator that tells the builder how the model works, whereas the latter (extrinsic explainability, or EE) is proof to the customers about the model's predictions. While IE is required for any meaningful model improvement, we need EE for factual confirmation: a layperson who ends up acting on the model's prediction needs to know why the model is suggesting it.
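To make the IE/EE distinction concrete, here is a minimal sketch under illustrative assumptions (synthetic data, made-up feature names): the builder-facing view inspects a linear model's learned weights, while the user-facing view turns one prediction's largest contribution into a plain-language reason.

```python
# Minimal sketch of the two "faces" of explainability, using a linear model
# so the mapping from weights to reasons stays simple. Names are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

feature_names = ["tenure_months", "support_tickets", "monthly_spend"]
X, y = make_classification(n_samples=1000, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

# Intrinsic explainability (IE): the builder inspects the learned weights directly.
print("IE / model weights:", dict(zip(feature_names, model.coef_[0].round(2))))

# Extrinsic explainability (EE): turn one prediction's largest contribution
# into a plain-language reason an end user can act on.
x = scaler.transform(X[:1])
contributions = model.coef_[0] * x[0]
top = feature_names[int(np.argmax(np.abs(contributions)))]
print(f"EE / user-facing reason: this prediction is driven mostly by '{top}'")
```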


Government Deep Tech 2022 Top Funding Focus Explainable AI, Photonics, Quantum

#artificialintelligence

DARPA, In-Q-Tel, and the US National Laboratories (for example, Argonne and Oak Ridge) are well-known government funding agencies for deep tech at the forward boundaries -- the near impossible -- with globally transformative solutions. The Internet is a prime example: more than 70% of the 7.8 billion population are online in 2022, closing in on 7 hours of daily mobile usage, and $500 trillion in global wealth is powered by the Internet. The early bets led by government funding agencies are converging with the investments of the largest corporations. One example is from 2015, when I was invited to help the top 100 CEOs, representing nearly $100 trillion in assets under management, look ten years into the future for their investments. The resulting working groups and private summits led the member companies to invest in all the areas identified: quantum computing, blockchain, cybersecurity, big data, privacy and data, AI/ML, the future of fintech, financial inclusion, ...



Explainable AI: Why transparency is the future for artificial intelligence

#artificialintelligence

For example, social media platforms employ machine learning to keep their users engaged and some -- like Facebook and Twitter -- collect users' data …


AI is explaining itself to humans. And it's paying off.

The Japan Times

Microsoft Corp.'s LinkedIn boosted subscription revenue by 8% after arming its sales team with artificial intelligence software that not only predicts clients at risk of canceling, but also explains how it arrived at its conclusion. The system, introduced last July and described in a LinkedIn blog post on Wednesday, marks a breakthrough in getting AI to "show its work" in a helpful way. While AI scientists have no problem designing systems that make accurate predictions on all sorts of business outcomes, they are discovering that to make those tools more effective for human operators, the AI may need to explain itself through another algorithm. The emerging field of explainable AI, or XAI, has spurred big investment in Silicon Valley, as startups and cloud giants compete to make opaque software more understandable, and has stoked discussion in Washington and Brussels, where regulators want to ensure automated decision-making is done fairly and transparently. AI technology can perpetuate societal biases like those around race, gender and culture.
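The idea of an AI that "explains itself through another algorithm" is often implemented with a surrogate model. The sketch below is not LinkedIn's system; it assumes a synthetic churn dataset with made-up feature names, fits a black-box classifier, and then trains a shallow decision tree to mimic it so that its behavior can be read as if/then rules a salesperson could follow.

```python
# Minimal sketch of a global surrogate explainer: a second, interpretable model
# is trained to imitate the black box's predictions. Illustrative data and names.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["logins_last_30d", "seats_used",
                 "days_since_last_ticket", "contract_months_left"]
X, y = make_classification(n_samples=3000, n_features=4, n_informative=3,
                           n_redundant=0, random_state=1)

# The "black box": an accurate but hard-to-read churn classifier.
black_box = GradientBoostingClassifier(random_state=1).fit(X, y)

# The explainer: a small tree fit to the black box's own predictions,
# printed as human-readable if/then rules.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=1)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=feature_names))
```

The surrogate only approximates the black box, so in practice its fidelity (agreement with the original model's predictions) should be measured before its rules are shown to operators.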


Upol Ehsan on Human-Centered Explainable AI and Social Transparency

#artificialintelligence

Bio: Upol Ehsan cares about people first, technology second. He is a doctoral candidate in the School of Interactive Computing at Georgia Tech and an affiliate at the Data & Society Research Institute. Combining his expertise in AI and background in philosophy, his work in Explainable AI (XAI) aims to foster a future where anyone, regardless of their background, can use AI-powered technology with dignity. He actively publishes in top peer-reviewed venues like CHI, and his work has received multiple awards and been covered in major media outlets. Bridging industry and academia, he serves on multiple program committees in HCI and AI conferences (e.g., DIS, IUI, NeurIPS) and actively connects these communities (e.g., the widely attended HCXAI workshop at CHI).


Software vendors are pushing "explainable A.I." that often isn't

#artificialintelligence

Wall Street titans warn a recession is imminent, but it's not that simple. Here are the warning signs you need to understand.


Explainable AI

Communications of the ACM

Advances in AI, especially based on machine learning, have provided a powerful way to extract useful patterns from large, heterogeneous data sources. The rise in massive amounts of data, coupled with powerful computing capabilities, makes it possible to tackle previously intractable real-world problems. Medicine, business, government, and science are rapidly automating decisions and processes using machine learning. Unlike traditional AI approaches based on explicit rules expressing domain knowledge, machine learning often lacks explicit human-understandable specification of the rules producing model outputs. With growing reliance on automated decisions, an overriding concern is understanding the process by which "black box" AI techniques make decisions.