AI Trust


Trust Artificial Intelligence? Still A Work In Progress, Survey Shows

#artificialintelligence

Our dependency on AI-based outputs seems to grow every day, from both a business and a personal perspective. But are we willing to fully trust that output? Are we sure the data fed into these systems is accurate? Are the decision models and algorithms kept up to date? Are humans kept in the loop?


HPE Ezmeral ML Ops Recognized by Gartner

#artificialintelligence

On September 1, 2021, Gartner published its 2021 "Market Guide for AI Trust, Risk and Security Management". Per Gartner, "This Market Guide identifies new capabilities that data and analytics leaders must have to ensure model reliability, trustworthiness and security, and presents representative vendors who implement these functions." At HPE, we believe HPE Ezmeral ML Ops was recognized for the advantages our solution provides to our customers. As such, we're proud to announce that Gartner listed HPE Ezmeral ML Ops as a Representative ModelOps Vendor in the 2021 "Market Guide for AI Trust, Risk and Security Management." Gartner defines the AI Trust, Risk and Security Management (TRiSM) market as being made up of multiple software segments.


Council Post: Addressing AI's Biggest Problem: Trust

#artificialintelligence

Subex helps businesses embrace disruptive changes in the business landscape and succeed with digital trust. The popular 2002 crime flick Minority Report seemed ahead of its time when it portrayed how a computer system could predict when a person was likely to commit a crime, long before they had even thought about it. However, a decade later, the possibilities imagined by the Tom Cruise blockbuster seem real with the emergence of COMPAS, an artificial intelligence (AI) algorithm that predicts how likely a person is to commit a crime again. The software was widely used across the U.S. until 2016, when a detailed investigation highlighted that the program was biased against a particular race. The problem lay not in the AI algorithm itself but in the data that was fed into it.


Gartner Market Guide for AI Trust, Risk and Security Management

#artificialintelligence

Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.


Trustworthy and ethical AI systems–possible?

#artificialintelligence

I hate to say this: artificial intelligence (AI) as a technology is maturing. Far from the stuff of science fiction, AI has moved from the exclusive regimes of theoretical mathematics and advanced hardware to an everyday aspect of life. Over the last several years of exponentially accelerating development and proliferation, our needs and requirements for mature AI systems have begun to crystallize. Trust is not an internal quality of an AI system like accuracy or even fairness. Instead, it's a characteristic of the human-machine relationship formed with an AI system.


Hacking Super Intelligence

#artificialintelligence

These attacks are not like traditional ones and can't be countered with traditional measures. Today there's only a trickle of such attacks, but in the coming decade we may be facing a tsunami. To prepare, we need to start securing our AI systems today. I wanted to open with the premise of AI security, only to realize that I'm risking a cliché about how disruptive AI is. Just to get it off the table, I'll mention that AI is not only part of our daily lives (search-engine suggestions, photo filters, digital voice assistants).


AI Trust in business processes: The need for process-aware explanations

Jan, Steve T. K., Ishakian, Vatche, Muthusamy, Vinod

arXiv.org Artificial Intelligence

Business processes underpin a large number of enterprise operations, including processing loan applications, managing invoices, and handling insurance claims. There is a large opportunity for infusing AI to reduce cost or provide a better customer experience, and the business process management (BPM) literature is rich in machine learning solutions, including unsupervised learning to gain insights on clusters of process traces, classification models to predict the outcomes, duration, or paths of partial process traces, extraction of business processes from documents, and models to recommend how to optimize a business process or navigate decision points. More recently, deep learning models, including those from the NLP domain, have been applied to process predictions. Unfortunately, very few of these innovations have been applied and adopted by enterprise companies. We assert that a large reason for the lack of adoption of AI models in BPM is that business users are risk-averse and do not implicitly trust AI models. There has, unfortunately, been little attention paid to explaining model predictions to business users with process context. We challenge the BPM community to build on the AI interpretability literature, and the AI Trust community to understand


Why you need to pay more attention to combatting AI bias

#artificialintelligence

As artificial intelligence (AI) continues its march into enterprises, many IT pros are beginning to express concern about potential AI bias in the systems they use. A new report from DataRobot finds that nearly half (42%) of AI professionals in the US and UK are "very" to "extremely" concerned about AI bias. The report, based on a survey conducted last June of more than 350 US- and UK-based CIOs, CTOs, VPs, and IT managers involved in AI and machine learning (ML) purchasing decisions, also found that "compromised brand reputation" and "loss of customer trust" are the most concerning repercussions of AI bias. This prompted 93% of respondents to say they plan to invest more in AI bias prevention initiatives in the next 12 months. Despite the fact that many organizations see AI as a game changer, many are still using untrustworthy AI systems, said Ted Kwartler, vice president of trusted AI at DataRobot.


Fair and Equitable: How IBM Is Removing Bias from AI - DZone AI

#artificialintelligence

As more apps that rely on artificial intelligence come to market, software developers and data scientists can unwittingly (or perhaps even knowingly) inject their personal biases into these solutions. This can cause a variety of problems, ranging from a poor user experience to major errors in critical decision-making. We at IBM have created a solution specifically to address AI bias. Because flaws and biases may not be easy to detect without the right tool, IBM is deeply committed to delivering services that are unbiased, explainable, value-aligned and transparent. Thus, we are pleased to back up that commitment with the launch of AI Fairness 360, an open-source library to help detect and remove bias in machine learning models and data sets.
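One of the standard quantitative checks behind fairness toolkits like AI Fairness 360 is the disparate impact ratio: the rate of favorable outcomes for an unprivileged group divided by the rate for a privileged group (values far below 1.0 suggest bias). The sketch below is a minimal plain-Python illustration of that metric, not IBM's actual API; the function name and toy data are our own.

```python
# Minimal sketch of the disparate impact ratio, a fairness metric
# computed by toolkits such as AI Fairness 360. Illustrative only:
# function names and data are hypothetical, not the library's API.

def disparate_impact(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    outcomes: list of 0/1 decisions (1 = favorable, e.g. loan approved)
    groups:   parallel list of group labels for each decision
    privileged: the group label treated as the privileged group
    """
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate = lambda xs: sum(xs) / len(xs)  # fraction of favorable outcomes
    return rate(unpriv) / rate(priv)

# Toy example: eight loan decisions across two groups, A and B.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group A is approved 75% of the time, group B only 25%.
ratio = disparate_impact(outcomes, groups, privileged="A")
print(round(ratio, 2))  # 0.25 / 0.75 -> 0.33
```

A common rule of thumb (the "80% rule" from US employment law) flags ratios below 0.8 as potentially discriminatory; real toolkits compute this and many related metrics over full data sets and trained models.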