Trust is a must: why business leaders should embrace explainable AI - Raconteur

#artificialintelligence

"Trust is a must," she said. "The EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide." Any fast-moving technology is likely to create mistrust, but Vestager and her colleagues decreed that those in power should do more to tame AI, partly by using such systems more responsibly and being clearer about how these work. The landmark legislation – designed to "guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation" – encourages firms to embrace so-called explainable AI.


Explainable Artificial Intelligence for Human Decision-Support System in Medical Domain

#artificialintelligence

In this paper, we demonstrate the potential of explainable artificial intelligence methods for decision support in medical image analysis scenarios. By applying three types of explainable methods to the same medical image data set, our aim was to improve the comprehensibility of the decisions provided by a convolutional neural network (CNN). The visual explanations were produced for in-vivo gastric images obtained from video capsule endoscopy (VCE), with the goal of increasing health professionals' trust in the black-box predictions. We implemented two post-hoc interpretable machine learning methods, LIME and SHAP, and the alternative explanation approach CIU, centered on Contextual Importance and Utility. The resulting explanations were assessed through human evaluation.
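To make the post-hoc idea concrete, below is a minimal, self-contained sketch of the LIME image pipeline the paper describes. The toy image and the stand-in classifier are invented for illustration; the paper's actual CNN, VCE data, and parameters are not reproduced here.

```python
# Minimal LIME sketch: explain one image prediction from a black-box classifier.
# The image and classifier below are invented stand-ins, not the paper's setup.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 1.0, size=(64, 64, 3))  # placeholder for a VCE frame

def classifier_fn(images):
    # Stand-in for cnn.predict(): derive two class scores from mean intensity.
    images = np.asarray(images)
    p = images.mean(axis=(1, 2, 3))
    return np.stack([p, 1.0 - p], axis=1)

explainer = lime_image.LimeImageExplainer(random_state=0)
explanation = explainer.explain_instance(
    img,              # the image to explain
    classifier_fn,    # black-box prediction function
    top_labels=1,     # explain the top predicted class only
    hide_color=0,     # value used to mask perturbed superpixels
    num_samples=200,  # perturbed samples (real studies use far more)
)

# Overlay the superpixels that most support the top predicted label.
label = explanation.top_labels[0]
img_out, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(img_out, mask)
```

The SHAP equivalents (e.g. shap.DeepExplainer for neural networks) follow the same explain-one-instance pattern, differing mainly in how feature importance is attributed.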


Elystar redefines Public Securities Investment with explainable Artificial Intelligence - Founder, Dr Satya Gautam Vadlamudi – ThePrint

#artificialintelligence

Mumbai (Maharashtra) [India], May 7 (ANI/NewsVoir): Since the dawn of the 2000s, Artificial Intelligence (AI) has been making waves through its penetration into various sectors. While AI helps increase the efficiency and speed of a system, its lack of feedback when faced with errors has been a glaring concern. Recently developed Explainable Artificial Intelligence (XAI) technology tackles this issue by analyzing data to provide users with explanations for its decisions and actions. Utilizing this technology to create investment strategies, Elystar aims to increase net returns by reducing AI-made errors and thereby successfully leveraging the superior insights provided by AI. "Artificial Intelligence in finance is a relatively new concept that is still being explored and experimented with. While the few firms experimenting with it use it sparingly for short-term trading, we have spent the past 15 months developing models to use it for long-term investments. One simple way to look at this concept is to compare it with Microsoft Excel. While Excel is used across different fields and by different people, it is applied in various ways and forms. Similarly, AI can be utilized in a number of variations, so no two approaches may be completely the same. AI not only helps us scale and analyze data rapidly, but the integration of Explainable AI allows us to understand and eliminate unwarranted biases to create a sound investment strategy," said Dr Satya Gautam Vadlamudi, Founder and CEO of Elystar.


Explainable AI: Physics in Machine Learning?

#artificialintelligence

In trying to describe phenomena in the real world, we need to build models that can closely replicate those events. In general, most modeling approaches fall into two main categories: data-driven or theory-driven solutions. The data-driven approach relies on using data to make sense of the phenomenon at hand, often with limited understanding of the underlying theoretical explanation. For instance, suppose you are asked to predict housing prices in a particular neighborhood. You have a good working hypothesis: size and distance to popular service amenities will have some bearing on the housing price.
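As a toy illustration of that data-driven approach (all figures invented), a model can be fitted directly to price data without any housing theory:

```python
# Toy data-driven model: learn housing prices from features alone.
# All numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

# Features per house: [size in square meters, distance to amenities in km]
X = np.array([[50, 2.0], [75, 1.5], [100, 0.8], [120, 3.0], [150, 0.5]])
y = np.array([200_000, 280_000, 390_000, 360_000, 560_000])  # observed prices

model = LinearRegression().fit(X, y)
print(model.coef_)                 # learned weight for size and distance
print(model.predict([[90, 1.0]]))  # estimate for an unseen house
```

The learned coefficients encode the working hypothesis (bigger and closer means pricier) purely from data; a theory-driven model would instead start from an explicit economic pricing formula.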


Explaining Explainable AI

#artificialintelligence

Explainable AI (XAI) has long been a fringe discipline in the broader world of AI and machine learning. It exists because many machine-learning models are either opaque or so convoluted that they defy human understanding. But why is it such a hot topic today? AI systems making inexplicable decisions are your governance, regulatory, and compliance colleagues' worst nightmare. But aside from this, there are other compelling reasons for shining a light into the inner workings of AI.


Explainable AI - How humans can trust AI

#artificialintelligence

Artificial intelligence (AI) has gained momentum across many fields as a way to handle growing complexity, scale, and automation, and it now permeates digital networks as well. The complexity and sophistication of AI-powered systems have surged to the point where humans no longer understand the mechanisms by which these systems work or how they make certain decisions -- a particular challenge when AI-based systems compute outputs that are unexpected or seemingly unpredictable. This especially holds true for opaque decision-making systems, such as those using deep neural networks (DNNs), which are considered complex black-box models. Humans' inability to see inside these black boxes can hinder AI adoption (and even its further development), which is why growing levels of autonomy, complexity, and ambiguity in AI methods continue to increase the need for interpretability, transparency, understandability, and explainability of AI products and outputs (such as predictions, decisions, actions, and recommendations). These elements are crucial to ensuring that humans can understand and -- consequently -- trust AI-based systems (Mujumdar et al., 2020). Explainable artificial intelligence (XAI) refers to methods and techniques that produce accurate, explainable models of why and how an AI algorithm arrives at a specific decision, so that the results of an AI solution can be understood by humans (Barredo Arrieta et al., 2020).


SAP BrandVoice: What Is 'Explainable AI' And How Can It Help Your Business?

#artificialintelligence

Explainable AI provides a whole new layer of insight by allowing analysts to see clearly why a prediction was made. When it comes to enterprise AI, we often focus on automating repetitive business processes, for a very simple reason: it doesn't take much imagination to see the value. But what if you wanted to gauge the impact of an unexpected event, such as a hurricane, on your business's bottom line? Maybe you'd like to compare the probable financial outcomes of a strategic decision before you make it? Explainable AI, which combines human intelligence with artificial intelligence, gives employees the visibility to make these decisions.
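The article doesn't detail SAP's implementation, but the open-source shap library gives a flavor of what "seeing why a prediction was made" looks like in practice. Everything below is an invented sketch, not SAP's tooling:

```python
# Hedged sketch: per-feature attribution for a single forecast using SHAP.
# Synthetic data and model; this illustrates the idea, not SAP's product.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # e.g. demand drivers
y = 2 * X[:, 0] - X[:, 1] + rng.normal(size=200)   # synthetic outcome

model = RandomForestRegressor(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # contributions for one prediction

# One signed number per feature: how much each pushed this forecast up or down.
print(shap_values)
```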


A 'Glut' of Innovation Spotted in Data Science and ML Platforms

#artificialintelligence

These are heady days in data science and machine learning (DSML) according to Gartner, which identified a "glut" of innovation occurring in the market for DSML platforms. From established companies chasing AutoML or model governance to startups focusing on MLops or explainable AI, a plethora of vendors are simultaneously moving in all directions with their products as they seek to differentiate themselves amid a very diverse audience. "The DSML market is simultaneously more vibrant and messier than ever," a gaggle of Gartner analysts led by Peter Krensky wrote in the Magic Quadrant for DSML Platforms, which was published earlier this month. "The definitions and parameters of data science and data scientists continue to evolve, and the market is dramatically different from how it was in 2014, when we published the first Magic Quadrant on it." The 2021 Magic Quadrant for DSML is heavily populated by companies to the right of the vertical axis, which, as anybody familiar with Gartner's quadrant-based assessment method knows, represents "completeness of vision."


What Are Explainable AI Principles

#artificialintelligence

Explainable AI (XAI) principles are a set of guidelines for the fundamental properties that explainable AI systems should adopt. Explainable AI seeks to explain the way that AI systems work. NIST's four principles (Explanation, Meaningful, Explanation Accuracy, and Knowledge Limits) capture a variety of disciplines that contribute to explainable AI, including computer science, engineering, and psychology. The four explainable AI principles apply individually, so the presence of one does not imply that the others will be present. NIST suggests that each principle be evaluated in its own right.


White Paper Machine Learning in Certified Systems

arXiv.org Artificial Intelligence

Machine learning (ML) seems to be one of the most promising solutions for partially or completely automating some of the complex tasks currently performed by humans, such as driving vehicles or recognizing speech. It is also an opportunity to implement and embed new capabilities that are out of reach of classical implementation techniques. However, ML techniques introduce new potential risks, and have therefore only been applied in systems where their benefits were considered worth the increased risk. In practice, ML techniques raise multiple challenges that could prevent their use in systems subject to certification constraints. But what are the actual challenges? Can they be overcome by selecting appropriate ML techniques, or by adopting new engineering or certification practices? These are some of the questions addressed by the ML Certification 3 Workgroup (WG) set up by the Institut de Recherche Technologique Saint-Exupéry de Toulouse (IRT), as part of the DEEL Project.