Data science cowboys are exacerbating the AI and analytics challenge

#artificialintelligence

Below, Dr Scott Zoldi, chief analytics officer at analytic software firm FICO, explains to Information Age why data science cowboys and citizen data scientists could cause catastrophic failures in a business's AI and analytics ambitions. Although the future will see fast-paced adoption and benefits driven by applying AI to all types of businesses, we will also see catastrophic failures due to the over-extension of analytic tools and the rise of citizen data scientists and data science cowboys. The former have no data science training but use analytic tooling and methods to bring analytics into their businesses; the latter have data science training but a disregard for the right way to handle AI. Citizen data scientists often use algorithms and technology they don't understand, which can lead to inappropriate use of their AI tools; the risk from data science cowboys is that they build AI models that may incorporate non-causal relationships learned from limited data, spurious correlations and outright bias, which could have serious consequences for driverless car systems, for example. Today's AI threat stems from the efforts of both citizen data scientists and data science cowboys to tame complex machine learning algorithms for business outcomes.
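Zoldi's point about limited data deserves a concrete illustration. The following is a minimal sketch of my own (nothing here comes from FICO or the article): with only a handful of rows and many candidate features, an off-the-shelf model will happily fit pure noise, exactly the kind of spurious, non-causal relationship a data science cowboy might ship.

```python
# Hypothetical demo: a model "learns" spurious correlations from limited data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n_rows, n_feats = 30, 200                 # tiny sample, many candidate features

X = rng.normal(size=(n_rows, n_feats))    # pure noise: no real signal anywhere
y = rng.integers(0, 2, size=n_rows)       # labels independent of every feature

model = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", model.score(X, y))        # typically close to 1.0

# Fresh data from the same signal-free process exposes the illusion.
X_new = rng.normal(size=(1000, n_feats))
y_new = rng.integers(0, 2, size=1000)
print("accuracy on new data:", model.score(X_new, y_new))  # ~0.5, chance level
```

Nothing in this data relates the features to the labels, yet the near-perfect training score looks like a working model; only evaluation on fresh data reveals that every "relationship" it found was coincidental.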


Explainable Artificial Intelligence

#artificialintelligence

In the era of data science, artificial intelligence is making impossible feats possible. Driverless cars, IBM Watson's question-answering system, cancer detection, electronic trading, etc. are all made possible through the advanced decision-making ability of artificial intelligence. The deep layers of neural networks have a magical ability to recreate the human mind and its functionalities. When humans make decisions, they have the ability to explain the thought process behind them. They can explain the rationale, whether it's driven by observation, intuition, experience or logical reasoning.


TechVisor - Sights set on the tech industry

#artificialintelligence

The possibilities of artificial intelligence are endless. AI helps businesses create tremendous efficiencies through automation while enhancing an organization's ability to make more effective business decisions. However, it's no surprise that companies are beginning to be held accountable for the outcomes of their AI-based decisions. From the proliferation of fake news to, most recently, the deliberate creation of the AI psychopath Norman, we're beginning to understand and experience the potential negative outcomes of AI. While AI, machine learning, and deep learning have been deemed 'black box' technologies, unable to provide any information or explanation of their actions, this inability to explain AI will no longer be acceptable to consumers, regulators, and other stakeholders.


5 Key Research Findings on Enterprise Artificial Intelligence

#artificialintelligence

Hot off the press today is a FICO-commissioned research study on artificial intelligence and how Chief Analytics Officers (CAOs) and Chief Data Officers (CDOs) are responding to the current pandemic, economic uncertainty, and renewed focus on social justice. In addition to a survey, in-depth interviews with top AI leaders at HSBC, AXA PPP, Banorte, and Chubb provide additional perspective and commentary. The entire 24-page report is available for download; however, I wanted to share some highlights from the research that I found particularly impactful, or perhaps even surprising given the amount of hype around AI in the market today. The pandemic has caused a drastic shift in consumer behavior as individuals stay at home and adjust their daily routines. Many travel, hospitality, and restaurant workers are out of work, and those fortunate enough to still be employed have shifted their spending patterns.


Explainable AI: 4 industries where it will be critical

#artificialintelligence

Let's say that I find it curious how Spotify recommended a Justin Bieber song to me, a 40-something non-Belieber. That doesn't necessarily mean that Spotify's engineers must ensure that their algorithms are transparent and comprehensible to me; I might find the recommendation a tad off-target, but the consequences are decidedly minimal. This is a fundamental litmus test for explainable AI: that is, machine learning algorithms and other artificial intelligence systems that produce outcomes humans can readily understand and trace back to their origins. Conversely, relatively low-stakes AI systems might be just fine with the black box model, where we don't understand (and can't readily figure out) the results. "If algorithm results are low-impact enough, like the songs recommended by a music service, society probably doesn't need regulators plumbing the depths of how those recommendations are made," says Dave Costenaro, head of artificial intelligence R&D at Jane.ai.
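To make the contrast concrete, here is a hedged sketch of what "tracing a result back to its origins" can look like. It is my own construction; the dataset, model, and attribution rule are illustrative choices, not anything Spotify or Jane.ai uses. With a linear model, each feature's contribution to a single prediction is simply its coefficient times its value, a readout a human can inspect directly.

```python
# Illustrative only: per-feature contributions for one prediction
# from an interpretable (linear) model.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)  # scale so coefficients compare
model = LogisticRegression().fit(X, data.target)

x = X[0]                            # one decision to explain
contributions = model.coef_[0] * x  # each feature's pull on the logit
top = np.argsort(np.abs(contributions))[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]:30s} {contributions[i]:+8.2f}")
```

A deep neural network offers no such direct readout, which is precisely the 'black box' property the article argues becomes unacceptable as the stakes of the decision rise.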