Collaborating Authors

AAAI AI-Alert for Oct 1, 2019

Reproducibility Challenges in Machine Learning for Health


Last year the United States Food and Drug Administration (FDA) cleared a total of 12 AI tools that use machine learning for health (ML4H) algorithms to inform medical diagnosis and treatment for patients. The tools are now allowed to be marketed, with millions of potential users in the US alone. Because ML4H tools directly affect human health, their development from experiments in labs to deployment in hospitals progresses under heavy scrutiny. A critical component of this process is reproducibility. A team of researchers from MIT, the University of Toronto, New York University, and Evidation Health has proposed a number of "recommendations to data providers, academic publishers, and the ML4H research community in order to promote reproducible research moving forward" in their new paper Reproducibility in Machine Learning for Health. Just as boxers show their strength in the ring by getting up again after being knocked to the canvas, researchers test their strength in the arena of science by ensuring their work's reproducibility.

Big Data, ML and APIs: Digital Jargon Marketers Don't Understand


Despite 60% of marketers demanding control of the 'digital experience', many admit that they don't fully understand digital terminology such as API, big data and machine learning. The research, which surveyed over 200 IT professionals and 200 marketers, explores the growing disconnect between the two groups as they struggle to decide who should 'own' the emerging digital experience sector. Magnolia found that 24% of marketers don't understand what 'machine learning' is, and 23% say they don't know what the term 'big data' means. A third of marketers also confess to not knowing what API stands for. IT teams suffer from a similar disconnect, with 77% saying they don't understand the buzzwords marketers use.

Can we automate data quality to support machine learning?


Over the last decade, companies have begun to grasp and unlock the potential that artificial intelligence (AI) and machine learning (ML) can bring. While the technology is still in its infancy, companies are starting to understand the significant impact it can have, helping them make better, faster and more efficient decisions. Of course, AI and ML are no silver bullet for embracing innovation. In fact, these algorithms are only as good as their foundations -- specifically, quality data. Without it, businesses will see the very objectives they deployed AI and ML to achieve fail, with the unforeseen consequences of bad data causing irreversible damage to the business in terms of both its efficiency and its reputation.

Generalized earthquake frequency–magnitude distribution described by asymmetric Laplace mixture modelling


The complete part of the earthquake frequency–magnitude distribution, above the completeness magnitude mc, is well described by the Gutenberg–Richter law. Incomplete data, on the other hand, does not follow any specific law, since the shape of the frequency–magnitude distribution below max(mc) is a function of mc heterogeneities that depend on the seismic network's spatiotemporal configuration. This paper attempts to solve this problem by presenting an asymmetric Laplace mixture model, defined as the weighted sum of Laplace (or double exponential) distribution components of constant mc, where the inverse scale parameter of the exponential function is the detection parameter κ below mc and the Gutenberg–Richter β-value above mc. Using a variant of the Expectation-Maximization algorithm, the mixture model confirms the ontology proposed by Mignan [2012]. The performance of the proposed mixture model is analysed, with encouraging results obtained in simulations and in eight real earthquake catalogues that represent different seismic network spatial configurations.
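The mixture described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: it assumes the component locations mc, the detection parameter κ and the β-value are already known, and runs EM only over the mixture weights (the paper's variant also estimates the component parameters). All function names and the toy catalogue are invented for the example.

```python
import numpy as np

def asym_laplace_pdf(m, mu, kappa, beta):
    """Asymmetric Laplace density: rate kappa below mu (detection side),
    rate beta above mu (Gutenberg-Richter side)."""
    c = kappa * beta / (kappa + beta)              # normalising constant
    return np.where(m < mu,
                    c * np.exp(kappa * (m - mu)),
                    c * np.exp(-beta * (m - mu)))

def em_weights(m, mus, kappa, beta, n_iter=200):
    """EM restricted to the mixture weights; mus, kappa, beta held fixed."""
    K = len(mus)
    w = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each event
        dens = np.stack([w[k] * asym_laplace_pdf(m, mus[k], kappa, beta)
                         for k in range(K)])       # shape (K, N)
        resp = dens / dens.sum(axis=0, keepdims=True)
        # M-step: new weights are the mean responsibilities
        w = resp.mean(axis=1)
    return w

def sample_asym_laplace(n, mu, kappa, beta, rng):
    """Draw magnitudes: exponential tails on either side of mu."""
    below = rng.random(n) < beta / (kappa + beta)  # probability mass below mu
    return np.where(below,
                    mu - rng.exponential(1.0 / kappa, n),
                    mu + rng.exponential(1.0 / beta, n))

# toy catalogue mixing two completeness magnitudes, mc = 2.0 and mc = 3.0
rng = np.random.default_rng(0)
m = np.concatenate([sample_asym_laplace(3000, 2.0, 3.0, 2.3, rng),
                    sample_asym_laplace(1000, 3.0, 3.0, 2.3, rng)])
w = em_weights(m, mus=[2.0, 3.0], kappa=3.0, beta=2.3)
print(w)   # weights should be roughly 3:1, matching the 3000/1000 split
```

Because only the weights are updated, each EM iteration is a closed-form step; estimating mc, κ and β jointly, as the paper does, requires a more involved M-step.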

Striking the Balance between Supervised and Unsupervised Machine Learning


Today, a fresh generation of technologies, fuelled by advances in artificial intelligence based on machine learning, is opening up new opportunities to reassess the upper bounds of operational excellence across industrial sectors. To stay one step ahead of the pack, businesses not only need to understand machine learning's complexities but must be prepared to act on them and take advantage. After all, the latest machine learning solutions can determine weeks in advance if and when assets are likely to degrade or fail, distinguishing between normal and abnormal equipment and process behaviour by recognising complex data patterns and uncovering the precise signatures of degradation and failure. They can then alert operators and even prescribe solutions to avoid the impending failure, or at least mitigate the consequences. The leading software constructs are autonomous and self-learning.
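As a flavour of the normal-versus-abnormal distinction described above, the sketch below flags deviant readings in a simulated sensor series with a rolling z-score. It is a deliberately simple stand-in for the commercial solutions the article refers to; every name and number in it is invented for the example.

```python
import numpy as np

def rolling_zscore_alerts(series, window=50, threshold=4.0):
    """Flag indices whose reading deviates strongly from the mean of the
    previous `window` readings -- a crude abnormal-behaviour detector."""
    alerts = []
    for i in range(window, len(series)):
        base = series[i - window:i]
        mu, sigma = base.mean(), base.std()
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            alerts.append(i)
    return alerts

# simulated bearing-temperature sensor: steady, then a sudden step fault
rng = np.random.default_rng(1)
temps = rng.normal(70.0, 0.5, 300)   # healthy readings (degrees C)
temps[280:] += 6.0                   # abrupt shift at sample 280
alerts = rolling_zscore_alerts(temps)
print(alerts)                        # includes the fault onset at index 280
```

Real predictive-maintenance systems learn far richer signatures than a single threshold, but the structure (baseline model, deviation score, alert) is the same.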

Chatbots Can Make as Many Sales as Humans


That was the conclusion of a recent study published in academic journal Marketing Science in which researchers analyzed field data from outbound sales calls between bots or sales reps and 6,200 randomized customers of an anonymous Asia-based financial services company. They found that the customers tended to grow curt when informed upfront of the bot's presence, and that such disclosures led to an 80% drop in sales. "They perceive the disclosed bot as less knowledgeable and less empathetic," the study authors wrote. "The negative disclosure effect seems to be driven by a subjective human perception against machines, despite the objective competence of AI chatbots." The paper raises a moral dilemma for businesses looking to deploy chatbots.

Sampling-Based Robot Motion Planning

Communications of the ACM

In recent years, robots have come to play an active role in everyday life: medical robots assist in complex surgeries; search-and-rescue robots are employed in mining accidents; and low-cost commercial robots clean houses. There is a growing need for sophisticated algorithmic tools that enable stronger capabilities for these robots. One fundamental problem that robotics researchers grapple with is motion planning, which deals with planning a collision-free path for a moving system in an environment cluttered with obstacles.13,29 To a layman, it may seem that the wide use of robots in modern life implies the motion-planning problem has already been solved. This is far from true.
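To give a flavour of the sampling-based approach the article surveys, here is a minimal sketch of a rapidly-exploring random tree (RRT), one classic sampling-based planner, in a 2-D unit-square workspace. It is illustrative only: it omits edge collision checking and the many refinements the article discusses, and the obstacle and parameters are invented.

```python
import math
import random

def rrt(start, goal, is_free, n_iters=5000, step=0.05,
        goal_tol=0.05, goal_bias=0.1):
    """Minimal RRT: repeatedly sample a point, extend the nearest tree
    node a small step toward it, and keep the new node if it is
    collision-free.  Edges are not collision-checked (sketch only)."""
    nodes = [start]
    parent = {start: None}
    for _ in range(n_iters):
        sample = (goal if random.random() < goal_bias
                  else (random.random(), random.random()))
        near = min(nodes, key=lambda q: (q[0] - sample[0]) ** 2
                                        + (q[1] - sample[1]) ** 2)
        d = math.hypot(sample[0] - near[0], sample[1] - near[1])
        if d == 0.0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue
        nodes.append(new)
        parent[new] = near
        if math.hypot(new[0] - goal[0], new[1] - goal[1]) < goal_tol:
            path = [new]                     # walk back to the root
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None                              # no path found in budget

random.seed(0)
# single obstacle: a disc of radius 0.2 at the centre of the workspace
free = lambda q: math.hypot(q[0] - 0.5, q[1] - 0.5) > 0.2
path = rrt((0.05, 0.05), (0.95, 0.95), free)
```

The appeal of samplers like this is that they never construct the obstacle region explicitly; they only ever query `is_free` at individual configurations, which is what makes them scale to high-dimensional systems.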

Multi-Device Digital Assistance

Communications of the ACM

The use of multiple digital devices to support people's daily activities has long been discussed.11 Multi-device experiences (MDXs) spanning multiple devices simultaneously are viable for many individuals. Each device has unique strengths in aspects such as display, compute, portability, sensing, communications, and input. Despite the potential to utilize the portfolio of devices at their disposal, people typically use just one device per task, meaning they may need to make compromises in the tasks they attempt or may underperform at the task at hand. It also means the support that digital assistants such as Amazon Alexa, Google Assistant, or Microsoft Cortana can offer is limited to what is possible on the current device.

How Self-Driving Cars "See" the World


Modern cars bear little resemblance to their early ancestors, but the basic action of steering a vehicle has always remained the same. Whether you're behind the wheel of a Tesla or a vintage Model T, turning the wheel dictates the direction of movement. This simple premise, which places humans at the center of control, may be ripe for disruption as tech giants and car companies race toward a future that would render human-controlled vehicles obsolete. How does this next generation of self-driving cars "see" the road? Today's video from TED-Ed explains one of the mind-bending innovations making autonomous vehicles a reality.

Google is taking over DeepMind's NHS contracts – should we be worried?

New Scientist

This month, the NHS signed its first deals with Google. Five NHS trusts have agreed contracts with Google Health, after it swallowed up its UK sister firm DeepMind Health, nearly a year after signalling its intention to do so. New Scientist first revealed the extent of DeepMind's access to the sensitive data of more than a million National Health Service patients back in 2016, in a deal that the UK's data watchdog later found breached the law. The partnership has yielded interesting research, including using artificial intelligence to detect eye disease from scans with an accuracy that matches or exceeds human experts. But is there a material difference now the deals are with the US tech giant rather than DeepMind, and should people who use the NHS be concerned at the change?