Data science cowboys are exacerbating the AI and analytics challenge

#artificialintelligence

Below, Dr Scott Zoldi, chief analytics officer at analytic software firm FICO, explains to Information Age why data science cowboys and citizen data scientists could cause catastrophic failures in a business's AI and analytics ambitions. Although the future will see fast-paced adoption and benefits from applying AI across all types of businesses, we will also see catastrophic failures due to the over-extension of analytic tools and the rise of citizen data scientists and data science cowboys. The former lack data science training but use analytic tooling and methods to bring analytics into their businesses; the latter have data science training but disregard the right way to handle AI. Citizen data scientists often use algorithms and technology they don't understand, which can lead to inappropriate use of their AI tools; the risk from data science cowboys is that they build AI models that may incorporate non-causal relationships learned from limited data, spurious correlations and outright bias -- which could have serious consequences for driverless car systems, for example. Today's AI threat stems from the efforts of both citizen data scientists and data science cowboys to tame complex machine learning algorithms for business outcomes.
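As a toy illustration of the spurious-correlation risk described above, the sketch below (not from the article; all names and numbers are made up) fits a model on a deliberately tiny dataset and shows how a pure-noise feature can pick up a sizable weight by chance:

```python
# Hypothetical sketch: how a model trained on limited data can latch onto
# a spurious correlation. The features and sample size are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n = 30  # deliberately tiny sample, as in the "limited data" scenario
signal = rng.normal(size=n)   # genuinely predictive feature
noise = rng.normal(size=n)    # pure noise, unrelated to the outcome
y = (signal + 0.5 * rng.normal(size=n) > 0).astype(int)

X = np.column_stack([signal, noise])
model = LogisticRegression().fit(X, y)

# With only 30 rows, the noise feature can receive a non-trivial coefficient
# by chance; a practitioner who never inspects the model would ship it anyway.
print(dict(zip(["signal", "noise"], model.coef_[0].round(2))))
```

With more data the noise coefficient shrinks toward zero; the point is that small samples let chance patterns masquerade as real relationships.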


Explainable Artificial Intelligence

#artificialintelligence

In the era of data science, artificial intelligence is making impossible feats possible. Driverless cars, IBM Watson's question-answering system, cancer detection, electronic trading, etc. are all made possible through the advanced decision-making ability of artificial intelligence. The deep layers of neural networks have a seemingly magical ability to recreate the human mind and its functionalities. When humans make decisions, they can explain the thought process behind them: the rationale, whether it's driven by observation, intuition, experience or logical reasoning.


TechVisor - Sights on the tech industry

#artificialintelligence

The possibilities of artificial intelligence are endless. AI helps businesses create tremendous efficiencies through automation, while enhancing an organization's ability to make more effective business decisions. However, it's no surprise that companies are beginning to be held accountable for the outcomes of their AI-based decisions. From the proliferation of fake news to, most recently, the deliberate creation of the AI psychopath Norman, we're beginning to understand and experience the potential negative outcomes of AI. While AI, machine learning, and deep learning have been deemed 'black box' technologies, unable to provide any information or explanation for their actions, this inability to explain AI will no longer be acceptable to consumers, regulators, and other stakeholders.


Explainable AI: 4 industries where it will be critical

#artificialintelligence

Let's say that I find it curious how Spotify recommended a Justin Bieber song to me, a 40-something non-Belieber. That doesn't necessarily mean that Spotify's engineers must ensure that their algorithms are transparent and comprehensible to me; I might find the recommendation a tad off-target, but the consequences are decidedly minimal. That is the fundamental litmus test for explainable AI – that is, machine learning algorithms and other artificial intelligence systems that produce outcomes humans can readily understand and trace back to their origins. Conversely, relatively low-stakes AI systems might be just fine with the black box model, where we don't understand (and can't readily figure out) the results. "If algorithm results are low-impact enough, like the songs recommended by a music service, society probably doesn't need regulators plumbing the depths of how those recommendations are made," says Dave Costenaro, head of artificial intelligence R&D at Jane.ai.
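To make the "understand and trace back" idea concrete, here is a minimal illustrative sketch (my own, not Spotify's or the article's) using an inherently interpretable model: a shallow decision tree whose learned rules can be printed as plain if/else branches:

```python
# Illustrative sketch only: one common route to "explainable" outcomes is an
# inherently interpretable model whose decision path can be read verbatim.
# The dataset is a stand-in; any small tabular dataset would do.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

# export_text renders the learned rules as human-readable branches, so any
# prediction can be traced back to the exact thresholds that produced it.
print(export_text(tree, feature_names=list(iris.feature_names)))
```

A black-box model offers no such trace; explaining the same prediction from a deep network would require a post-hoc feature-attribution tool, which only approximates the model's reasoning.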


UK data regulator urges business towards explainable AI - TechHQ

#artificialintelligence

The Information Commissioner's Office (ICO) is putting forward regulation that would require businesses and other organizations to explain decisions made by artificial intelligence (AI) or face multimillion-dollar fines if they cannot. The guidance will provide advice on matters such as how to explain the procedures, services, and outcomes delivered or assisted by AI to affected individuals. It would also detail how to document the decision-making process and the data used to arrive at a decision. In extreme cases, organizations that fail to comply may face a fine of up to 4 percent of global turnover under the EU's data protection law. The new guidance is crucial, as many firms in the UK are using some form of AI to execute critical business decisions, such as shortlisting and hiring candidates for roles.