Goto



Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness

#artificialintelligence

Machine learning, artificial intelligence, and other modern statistical methods are providing new opportunities to operationalise previously untapped and rapidly growing sources of data for patient benefit. Despite much promising research currently being undertaken, particularly in imaging, the literature as a whole lacks transparency, clear reporting to facilitate replicability, exploration for potential ethical concerns, and clear demonstrations of effectiveness. Among the many reasons why these problems exist, one of the most important (for which we provide a preliminary solution here) is the current lack of best practice guidance specific to machine learning and artificial intelligence. To address this, we believe that interdisciplinary groups pursuing research and impact projects involving machine learning and artificial intelligence for health would benefit from explicitly addressing a series of questions concerning transparency, reproducibility, ethics, and effectiveness (TREE). The 20 critical questions proposed here provide a framework for research groups to inform the design, conduct, and reporting; for editors and peer reviewers to evaluate contributions to the literature; and for patients, clinicians, and policy makers to critically appraise where new findings may deliver patient benefit.

Machine learning (ML), artificial intelligence (AI), and other modern statistical methods are providing new opportunities to operationalise previously untapped and rapidly growing sources of data for patient benefit. The potential uses include improving diagnostic accuracy,1 more reliably predicting prognosis,2 targeting treatments,3 and increasing the operational efficiency of health systems.4 Examples of potentially disruptive technology include image based diagnostic applications of ML/AI, which have shown the earliest clinical promise (eg, deep learning based algorithms improving accuracy in diagnosing retinal pathology compared with that of specialist physicians5), and natural language processing used as a tool to extract information from structured and unstructured (that is, free) text embedded in electronic health records.2 Although we are only just …


AI for Social Impact

Interactive AI Magazine

Recommender systems are among today's most successful application areas of artificial intelligence. However, in the recommender systems research community, we have fallen prey to a McNamara fallacy to a worrying extent: in the majority of our research efforts, we rely almost exclusively on computational measures such as prediction accuracy, which are easier to obtain than the results of other evaluation methods. It remains unclear, however, whether small improvements in such computational measures matter greatly and whether they lead to better systems in practice. A paradigm shift in our research culture and goals is therefore needed. We can no longer focus exclusively on abstract computational measures but must direct our attention to research questions that are more relevant and have more impact in the real world. In this work, we review the various ways in which recommender systems may create value; how they, positively or negatively, impact consumers, businesses, and society; and how we can measure the resulting effects.
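The offline, accuracy-centric evaluation the authors critique typically looks like the following. This is a minimal sketch (not from the article, with made-up users and items) that scores a recommender purely by precision@k on held-out interactions, which illustrates how cheaply such computational measures are produced compared with user studies or business metrics.

```python
# Minimal sketch (illustrative, not from the article): scoring a recommender
# purely by an offline computational measure (precision@k), the kind of
# evaluation the authors argue the field over-relies on.
from typing import Dict, List, Set

def precision_at_k(recommended: List[str], relevant: Set[str], k: int = 10) -> float:
    """Fraction of the top-k recommended items the user actually interacted with."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k

# Hypothetical held-out data: user -> items they interacted with in the test period.
test_interactions: Dict[str, Set[str]] = {
    "u1": {"i3", "i7", "i9"},
    "u2": {"i1", "i4"},
}
# Hypothetical model output: user -> ranked list of recommended items.
recommendations: Dict[str, List[str]] = {
    "u1": ["i7", "i2", "i3", "i8", "i5"],
    "u2": ["i6", "i1", "i9", "i4", "i2"],
}

scores = [precision_at_k(recommendations[u], test_interactions[u], k=5) for u in test_interactions]
print("mean precision@5:", sum(scores) / len(scores))
```

Whether a small lift in this number translates into value for users, businesses, or society is exactly the question the article argues such evaluations leave unanswered.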


Our map for Montreal's artificial intelligence ecosystem - Bonjour Startup Montréal

#artificialintelligence

Bonjour Startup Montréal unveils a new map to visually represent Montreal's artificial intelligence ecosystem. This map, developed in collaboration with Next AI, IVADO and Montréal International, provides an overview of the organizations that make up the Montreal ecosystem. "Montreal is a well-known global hub in artificial intelligence; it's a fact. Over the years, Montreal has attracted several large companies that in part launched innovation hubs dedicated to artificial intelligence. This expertise led to an increasing number of organizations dedicated to AI and, also, an increasing number of startups that incorporate AI into their business model," says Liette Lamonde, CEO of Bonjour Startup Montréal.


A Guide To Machine Learning: Everything You Need To Know

#artificialintelligence

Artificial intelligence and other disruptive technologies are spreading their wings in the current scenario. Technology has become a mandatory element for all kinds of businesses across all industries around the globe. Let us travel back to 1958, when Frank Rosenblatt created the first artificial neural network that could recognize patterns and shapes. From such a primitive stage we have now reached a place where machine learning is an integral part of almost all software and applications. Machine learning is resonating with everything now, be it automated cars, speech recognition, chatbots, smart cities, and more.
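Rosenblatt's perceptron mentioned above is simple enough to sketch in a few lines. The toy example below (illustrative only, with made-up data) uses the classic perceptron update rule to learn a boundary between two clusters of points, the kind of pattern separation the original hardware was built to demonstrate.

```python
# Illustrative sketch of Rosenblatt's perceptron learning rule on toy data
# (made-up points, not from the article).
import numpy as np

rng = np.random.default_rng(0)

# Two linearly separable clusters of 2-D points, labelled -1 and +1.
X = np.vstack([rng.normal(-2, 0.5, size=(20, 2)), rng.normal(2, 0.5, size=(20, 2))])
y = np.array([-1] * 20 + [1] * 20)

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.1          # learning rate

for _ in range(20):                      # a few passes over the data
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:       # misclassified: nudge the boundary
            w += lr * yi * xi
            b += lr * yi

predictions = np.sign(X @ w + b)
print("training accuracy:", (predictions == y).mean())
```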


A guide to Robotic Process Automation

#artificialintelligence

Robot-led automation has the potential to transform today's workplace as dramatically as the machines of the Industrial Revolution changed the factory floor. Both Robotic Process Automation (RPA) and Intelligent Automation (IA) have the potential to make business processes smarter and more efficient, in very different ways. Both have significant advantages over traditional IT implementations. Robotic process automation tools are best suited for processes with repeatable, predictable interactions with IT applications. These processes typically lack the scale or value to warrant automation via IT transformation.
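As a rough illustration of what "repeatable, predictable interactions with IT applications" means in practice, the sketch below replays the same fixed steps for every row of an exported spreadsheet. The endpoint URL and field names are hypothetical placeholders and the example is not tied to any particular RPA product; commercial RPA tools perform essentially the same loop, but typically by driving an application's user interface rather than an HTTP API.

```python
# Minimal sketch of the kind of repetitive, rule-based task RPA tools target:
# copying rows from a spreadsheet export into a business application.
# The URL and field names below are hypothetical placeholders.
import csv
import requests

FORM_URL = "https://intranet.example.com/invoices/new"  # hypothetical endpoint

def submit_invoice(row: dict) -> None:
    """Fill the 'form' with values taken verbatim from one spreadsheet row."""
    payload = {
        "vendor": row["vendor"],
        "amount": row["amount"],
        "due_date": row["due_date"],
    }
    response = requests.post(FORM_URL, data=payload, timeout=10)
    response.raise_for_status()

with open("invoices.csv", newline="") as f:
    for row in csv.DictReader(f):
        submit_invoice(row)   # identical, predictable steps for every row
```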


Microsoft Releases AI Training Library ZeRO-3 Offload

#artificialintelligence

Microsoft recently open-sourced ZeRO-3 Offload, an extension of their DeepSpeed AI training library that improves memory efficiency while training very large deep-learning models. ZeRO-3 Offload allows users to train models with up to 40 billion parameters on a single GPU and over 2 trillion parameters on 512 GPUs. The DeepSpeed team provided an overview of the features and benefits of the release in a recent blog post. ZeRO-3 Offload increases the memory efficiency of distributed training for deep-learning models built on the PyTorch framework, providing super-linear scaling across multiple GPUs. By offloading the storage of some model data from GPU memory to CPU memory, it lets larger models fit on each GPU, which is what makes the 40 billion parameter single-GPU figure possible.
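Enabling this in DeepSpeed is largely a configuration matter. The sketch below shows roughly what a ZeRO stage-3 configuration with CPU offload looks like; it is illustrative only, the tiny model and hyperparameters are placeholders, and the exact configuration keys should be checked against the DeepSpeed documentation for your release.

```python
# Rough sketch of enabling ZeRO-3 Offload via a DeepSpeed config
# (illustrative; the toy model and hyperparameters are placeholders, and
#  key names should be verified against your DeepSpeed version).
import deepspeed
import torch.nn as nn

ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "gradient_accumulation_steps": 8,
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,                                                   # partition params, grads, optimizer states
        "offload_param": {"device": "cpu", "pin_memory": True},       # keep parameters in CPU memory
        "offload_optimizer": {"device": "cpu", "pin_memory": True},   # keep optimizer states in CPU memory
    },
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))  # placeholder model

# deepspeed.initialize wraps the model in an engine that handles the
# parameter partitioning and CPU offloading described above.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```

Training then proceeds through the engine (forward pass via model_engine, then model_engine.backward(loss) and model_engine.step()), typically launched with the deepspeed launcher across the available GPUs.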


Artificial Intelligence Applications in Medicine: A Rapid Overview of Current Paradigms - European Medical Journal

#artificialintelligence

The Merriam-Webster dictionary defines artificial intelligence (AI) as "a branch of computer science dealing with the simulation of intelligent behavior in computers" or "the capability of a machine to imitate intelligent human behavior." The layman may think of AI as mere algorithms and programs; however, there is a distinct difference from the usual programs, which are task-specific and written to perform repetitive tasks. Machine learning (ML) refers to a computing machine or system's ability to teach or improve itself using experience, without explicit programming for each improvement, by inferring its rules or parameters from data rather than having them specified by hand. Deep learning is a subfield of ML focussed on using artificial neural networks to address highly abstract problems;1 however, this is still a primitive form of AI. Fully developed AI is envisaged as being capable of sentience and recursive or iterative self-improvement.
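The distinction between a task-specific program and a system that improves itself from experience can be made concrete with a toy example. The sketch below (illustrative only, using scikit-learn and made-up data, not from the article) shows the same few lines of learning code acquiring two different behaviours purely because they are shown different examples, with no task-specific rules written by the programmer.

```python
# Illustrative toy example: the same learning code acquires different
# behaviour from different experience (data), with no task-specific rules.
from sklearn.tree import DecisionTreeClassifier

def learn(examples, labels):
    """Return a model fitted to whatever 'experience' it is given."""
    return DecisionTreeClassifier().fit(examples, labels)

# Experience A: the label depends on the first feature.
model_a = learn([[0, 5], [1, 5], [0, 9], [1, 9]], ["low", "high", "low", "high"])
# Experience B: same code, but the label now depends on the second feature.
model_b = learn([[0, 5], [1, 5], [0, 9], [1, 9]], ["small", "small", "big", "big"])

print(model_a.predict([[1, 7]]))  # driven by feature 0 -> 'high'
print(model_b.predict([[1, 7]]))  # driven by feature 1 -> 'small'
```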


Distributed Learning in Wireless Networks: Recent Progress and Future Challenges

#artificialintelligence

The next generation of wireless networks will enable many machine learning (ML) tools and applications to efficiently analyze various types of data collected by edge devices for inference, autonomy, and decision-making purposes. However, due to resource constraints, delay limitations, and privacy challenges, edge devices cannot offload their entire collected datasets to a cloud server for centrally training their ML models or for inference. To overcome these challenges, distributed learning and inference techniques have been proposed as a means to enable edge devices to collaboratively train ML models without raw data exchanges, thus reducing communication overhead and latency as well as improving data privacy. However, deploying distributed learning over wireless networks faces several challenges, including the uncertain wireless environment, limited wireless resources (e.g., transmit power and radio spectrum), and limited hardware resources. This paper provides a comprehensive study of how distributed learning can be efficiently and effectively deployed over wireless edge networks.
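The core idea of training "without raw data exchanges" is easiest to see in federated averaging, one common distributed learning scheme: devices compute updates on their own data and share only model parameters. Below is a minimal, framework-free sketch of a few such rounds, using a made-up linear model and synthetic data rather than anything from the paper.

```python
# Minimal sketch of federated averaging: edge devices train locally on their
# own data and share only model parameters, never the raw data.
# (Toy linear-regression model and synthetic data; not from the paper.)
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])

# Each "device" holds its own private dataset.
def make_device_data(n=50):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

devices = [make_device_data() for _ in range(5)]

def local_update(w, X, y, lr=0.1, steps=10):
    """Gradient descent on one device's local data only."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

global_w = np.zeros(2)
for _ in range(10):
    # Each device refines the current global model on its private data...
    local_ws = [local_update(global_w.copy(), X, y) for X, y in devices]
    # ...and the server aggregates only the parameters (here a simple mean).
    global_w = np.mean(local_ws, axis=0)

print("learned weights:", global_w, "true weights:", true_w)
```

The wireless-specific challenges the paper surveys show up precisely in the aggregation step: every round costs uplink bandwidth and device energy, and stragglers or noisy channels delay it.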


An overview of model explainability in modern machine learning

#artificialintelligence

Model explainability is one of the most important problems in machine learning today. It's often the case that certain "black box" models such as deep neural networks are deployed to production and are running critical systems in everything from your workplace security cameras to your smartphone. It's a scary thought that not even the developers of these algorithms understand why exactly the algorithms make the decisions they do -- or, even worse, how to prevent an adversary from exploiting them. While there are many challenges facing the designer of a "black box" algorithm, it's not completely hopeless. There are actually many different ways to illuminate the decisions a model makes.
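One of the simplest of those "many different ways" is permutation importance, a model-agnostic check of which inputs a black-box model actually relies on. The sketch below is not taken from the article; it uses a scikit-learn model on synthetic data purely to show the general pattern of shuffling one feature at a time and watching how much performance drops.

```python
# Illustrative sketch of permutation importance, one simple model-agnostic
# way to see which features a "black box" model actually relies on.
# (Synthetic data and a scikit-learn model; not from the article.)
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # only features 0 and 2 matter

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
baseline = model.score(X, y)

for j in range(X.shape[1]):
    X_shuffled = X.copy()
    X_shuffled[:, j] = rng.permutation(X_shuffled[:, j])   # break feature j's link to y
    drop = baseline - model.score(X_shuffled, y)
    print(f"feature {j}: accuracy drop {drop:.3f}")
```

Features whose shuffling barely moves the score are ones the model effectively ignores, which makes this a quick first-pass diagnostic before reaching for heavier explanation methods.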


AI-Powered Contextual Banking CX Requires a Radical Paradigm Shift

#artificialintelligence

Though it's rarely discussed, the way AI is integrated into banking determines whether it will make customers' lives better than ever before or become genuinely dangerous if applied without human centricity. A radical paradigm shift is required to ensure that the hyper-personalization of AI banking is not compromised by a lack of expertise in AI, technology, or the customer banking experience. According to Temenos, 77% of banking leaders strongly believe that AI will be the biggest game changer of all advanced technologies. Amid the pandemic, 88% of customers expect companies to accelerate their digital initiatives, while 68% state that COVID-19 has elevated their expectations of brands' digital capabilities, according to Salesforce. We can see that, prior to COVID-19, experimenting with AI was often more of a tick-box exercise carried out in the name of innovation.