

The ABCs of AI, algorithms and machine learning

#artificialintelligence

Advanced computer programs influence, and can even dictate, meaningful parts of our lives. Think of streaming services, credit scores, facial recognition software. As this technology becomes more sophisticated and more pervasive, it's important to understand the basic terminology. People often use "algorithm," "machine learning" and "artificial intelligence" interchangeably. There is some overlap, but they're not the same things.


Concept Learning: Making your network interpretable

#artificialintelligence

Over the last decade, neural networks have shown superb performance across a large variety of datasets and problems. While metrics like accuracy and F1-score are often suitable for measuring a model's ability to learn the underlying structure of the data, the model still behaves like a black box. This often renders neural networks unusable for safety-critical applications, where one needs to know which assumptions a prediction was based on. Just imagine a radiologist using a program with a neural network backbone to assist in finding a disease on an X-ray image. With traditional methods, it would only output the name of the disease, without any measure of confidence (for a description of how to output true confidence scores, see my last article on neural network calibration).
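To make the confidence point concrete, here is a minimal sketch (assuming PyTorch, hypothetical logits, and an already-trained classifier, none of which come from the article) of how a softmax score is typically read as confidence, and how temperature scaling, one common post-hoc calibration method, softens overconfident outputs:

```python
import torch
import torch.nn.functional as F

# Hypothetical logits produced by a trained classifier for one X-ray image,
# with three candidate findings; in practice they come from the network's
# final linear layer.
logits = torch.tensor([2.1, 0.3, -1.2])

# Softmax turns logits into scores that sum to 1 and are commonly read as
# "confidence" -- but uncalibrated networks tend to be overconfident.
probs = F.softmax(logits, dim=0)
confidence, predicted = probs.max(dim=0)
print(f"predicted class {predicted.item()} with confidence {confidence.item():.2f}")

# Temperature scaling: divide the logits by a temperature T > 1 fitted on a
# held-out validation set (T = 1.5 is an assumed value here). This softens the
# distribution without changing which class is predicted.
T = 1.5
calibrated = F.softmax(logits / T, dim=0)
print(f"calibrated confidence {calibrated.max().item():.2f}")
```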


What data scientists keep missing about imbalanced datasets

#artificialintelligence

Many data scientists fail to fully understand the problems imbalanced datasets cause and the methods that alleviate them. As data scientists we come across many datasets in which some types of instances clearly dominate (the majority classes) while others are significantly underrepresented (the minority classes). This has significant implications for the practice of data science: simply training a model on such a dataset will likely lead to bias towards the majority classes. For example, if we were focused on predicting heart disease and had a dataset of 20 people with the disease and 80 without, a model that predicts "no disease" every time would still achieve a solid accuracy score of 80% and an F1-score of roughly 89% on the majority class, while never identifying a single patient with the disease. Despite this well-known problem, there are too many cases where data scientists have ignored the issue and trained a model without a real understanding of the imbalances within the dataset.
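A minimal sketch of the arithmetic behind that example, using scikit-learn and a hypothetical 20/80 label split (the specific arrays are made up for illustration); the seemingly healthy accuracy and majority-class F1 hide an F1 of zero on the disease class:

```python
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical labels mirroring the example: 20 patients with heart disease (1)
# and 80 without (0).
y_true = [1] * 20 + [0] * 80

# A degenerate "model" that predicts "no disease" for everyone.
y_pred = [0] * 100

print(accuracy_score(y_true, y_pred))         # 0.80
print(f1_score(y_true, y_pred, pos_label=0))  # ~0.89 for the majority class
print(f1_score(y_true, y_pred, pos_label=1))  # 0.0 for the class we actually care
                                              # about (scikit-learn will warn that
                                              # precision is ill-defined here)
```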


Explainable AI using OmniXAI - Analytics Vidhya

#artificialintelligence

This article was published as a part of the Data Science Blogathon. In the modern day, where there is a colossal amount of data at our disposal, using ML models to make decisions has become crucial in sectors like healthcare, finance, marketing, etc. Many ML models are black boxes since it is difficult to fully understand how they function after training. This makes it difficult to understand and explain a model's behaviour, but it is important to do so to have trust in its accuracy. So how can we build trust in the predictions of a black box?
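One common, model-agnostic way to start answering that question is to probe which inputs the black box actually relies on. The sketch below is an illustration only, using scikit-learn's permutation importance on a stand-in random forest and a public dataset rather than any specific model from the article:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in "black box": a random forest trained on a public medical dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the test score drops; a large drop means the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")
```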


Deep Learning Explainability: Hints from Physics

#artificialintelligence

Nowadays, artificial intelligence is present in almost every part of our lives. Smartphones, social media feeds, recommendation engines, online ad networks, and navigation tools are some examples of AI-based applications that already affect us every day. Deep learning in areas such as speech recognition, autonomous driving, machine translation, and visual object recognition has been systematically improving the state of the art for a while now. However, the reasons that make deep neural networks (DNNs) so powerful are only heuristically understood, i.e. we know only from experience that we can achieve excellent results by using large datasets and following specific training protocols. Recently, one possible explanation was proposed, based on a remarkable analogy between a physics-based conceptual framework called the renormalization group (RG) and a type of neural network known as a restricted Boltzmann machine (RBM).
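For readers unfamiliar with the second half of that analogy: an RBM is an energy-based model over visible units v and hidden units h, with no connections within a layer. Its standard energy function and the resulting joint distribution (textbook definitions, not taken from the article itself) are:

```latex
% Energy of a restricted Boltzmann machine with visible units v, hidden units h,
% biases a, b and weight matrix W; lower energy means higher probability.
E(v, h) = -a^{\top} v - b^{\top} h - v^{\top} W h,
\qquad
p(v, h) = \frac{e^{-E(v, h)}}{Z},
\quad
Z = \sum_{v, h} e^{-E(v, h)}
```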


There Are Too Few Women in Computer Science and Engineering

#artificialintelligence

Only 20 percent of computer science and 22 percent of engineering undergraduate degrees in the U.S. go to women. Women are missing out on flexible, lucrative and high-status careers. Society is also missing out on the potential contributions they would make to these fields, such as designing smartphone conversational agents that suggest help not only for heart attack symptoms but also for indicators of domestic violence. Identifying the factors causing women's underrepresentation is the first step towards remedies. Why are so few women entering these fields?


OmniXAI: Making Explainable AI Easy for Any Data, Any Models, Any Tasks

#artificialintelligence

TL;DR: OmniXAI (short for Omni eXplainable AI) is designed to address many of the pain points in explaining decisions made by AI models. This open-source library aims to provide data scientists, machine learning engineers, and researchers with a one-stop Explainable AI (XAI) solution to analyze, debug, and interpret their AI models for various data types in a wide range of tasks and applications. OmniXAI's powerful features and integrated framework make it a major addition to the burgeoning field of XAI. With the rapidly growing adoption of AI models in real-world applications, AI decision making can potentially have a huge societal impact, especially for application domains such as healthcare, education, and finance. However, many AI models, especially those based on deep neural networks, effectively work as black-box models that lack explainability.


Explainable AI Unleashes the Power of Machine Learning in Banking

#artificialintelligence

Explainability has taken on more urgency at many banks as a result of increasingly complex AI algorithms, many of which have become critical to the deployment of advanced AI applications in banking, such as facial or voice recognition, securities trading, and cybersecurity. The complexity is due to greater computing power, the explosion of big data, and advances in modeling techniques such as neural networks and deep learning. Several banks are establishing special task forces to spearhead explainability initiatives in coordination with their AI teams and business units. They are also stepping up their oversight of vendor solutions as the use of automated machine learning capabilities continues to grow considerably. Explainability is also becoming a more pressing concern for banking regulators who want to be assured that AI processes and outcomes can be reasonably understood by bank employees.


This AI newsletter is all you need #5

#artificialintelligence

The big news: DALL-E 2 is now in beta! OpenAI just announced it is rolling out DALL-E 2 to 1 million people, ten times more than during the pre-beta phase. You can no longer spam generations to make funny memes for free -- the number of generations you previously got for free now costs nearly $300. We had some terrific publications this past week, like NUWA, BigColor, and Mega Portraits, all advancing the image generation field with fantastic approaches and results -- as well as the ICML 2022 event, which released its outstanding papers that are worth the read. Last but not least, listen to this podcast hosted by one of our community members in this iteration!


FICO Announces Winners of Inaugural xML Challenge

#artificialintelligence

FICO, the leading provider of analytics and decision management technology, together with Google and academics at UC Berkeley, Oxford, Imperial, UC Irvine and MIT, has announced the winners of the first xML Challenge at the 2018 NeurIPS workshop on Challenges and Opportunities for AI in Financial Services. Participants were challenged to create machine learning models with both high accuracy and explainability, using a real-world dataset provided by FICO. Sanjeeb Dash, Oktay Günlük and Dennis Wei, representing IBM Research, were this year's challenge winners. The winning team received the highest score in an empirical evaluation that considered how useful the explanations are to a data scientist with domain knowledge but without access to the model's predictions, as well as how long it takes such a data scientist to work through the explanations. For their achievements, the IBM team earned a $5,000 prize.