On the Opportunities and Risks of Foundation Models

arXiv.org Artificial Intelligence

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Though foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties. To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.
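A minimal sketch of the pretrain-then-adapt pattern the abstract describes, using the Hugging Face transformers library; the model name, task, and example data below are illustrative assumptions, not details from the report.

```python
# Sketch only: adapt a broadly pretrained model (BERT) to a small downstream
# classification task by adding and training a task-specific head.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# The pretrained encoder is reused; only the small classification head is new.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

batch = tokenizer(
    ["the movie was great", "the movie was dull"],
    padding=True, return_tensors="pt",
)
labels = torch.tensor([1, 0])  # toy task-specific labels

# One adaptation (fine-tuning) step: the downstream task supplies the signal.
outputs = model(**batch, labels=labels)
outputs.loss.backward()
```

The same pretrained weights can be adapted this way to many different downstream tasks, which is the homogenization the report highlights: leverage from one shared model, but also shared defects.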


The State of AI Ethics Report (Volume 5)

arXiv.org Artificial Intelligence

This report from the Montreal AI Ethics Institute covers the most salient progress in research and reporting in the field of AI ethics over the second quarter of 2021, with a special emphasis on "Environment and AI", "Creativity and AI", and "Geopolitics and AI." It features an exclusive piece titled "Critical Race Quantum Computer" that applies ideas from quantum physics to explain the complexities of human characteristics and how they can and should shape our interactions with each other, along with special contributions on pedagogy in AI ethics, sociology and AI ethics, and the organizational challenges of implementing AI ethics in practice. In keeping with MAIEI's mission to highlight scholars from around the world working on AI ethics issues, the report includes two spotlights on scholars in Singapore and Mexico who are helping to shape policy measures related to the responsible use of technology. It also contains an extensive section on the societal impacts of AI, covering bias, privacy, transparency, accountability, fairness, interpretability, disinformation, policymaking, law, regulations, and moral philosophy.


Artificial Intelligence, a Transformational Force for the Healthcare Industry

#artificialintelligence

Artificial Intelligence is transforming the systems and methods of the healthcare industry. Artificial Intelligence and healthcare have been intertwined for over half a century. The healthcare industry uses Natural Language Processing to categorize certain data patterns. Artificial Intelligence can also be used in clinical trials to hasten the search for and validation of medical coding, which can help reduce the time needed to start, improve, and complete clinical trials.
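As a rough illustration of the NLP-based categorization mentioned above (not an example from the article), the sketch below trains a simple text classifier on a few invented clinical notes; the notes, labels, and category names are all hypothetical.

```python
# Sketch only: categorize free-text clinical notes with TF-IDF features and
# logistic regression, e.g. to support medical-coding review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "patient reports chest pain and shortness of breath",
    "routine follow-up, blood pressure well controlled",
    "acute chest pain radiating to left arm",
    "annual physical, no complaints",
]
labels = ["cardiac", "routine", "cardiac", "routine"]  # invented categories

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(notes, labels)
print(clf.predict(["new onset chest pain on exertion"]))  # likely 'cardiac'
```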


The State of AI Ethics Report (January 2021)

arXiv.org Artificial Intelligence

The 3rd edition of the Montreal AI Ethics Institute's The State of AI Ethics captures the most relevant developments in AI Ethics since October 2020. It aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the field's ever-changing developments. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, including: algorithmic injustice, discrimination, ethical AI, labor impacts, misinformation, privacy, risk and security, social media, and more. In addition, The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments. Unique to this report is "The Abuse and Misogynoir Playbook," written by Dr. Katlyn Turner (Research Scientist, Space Enabled Research Group, MIT), Dr. Danielle Wood (Assistant Professor, Program in Media Arts and Sciences; Assistant Professor, Aeronautics and Astronautics; Lead, Space Enabled Research Group, MIT) and Dr. Catherine D'Ignazio (Assistant Professor, Urban Science and Planning; Director, Data + Feminism Lab, MIT). The piece (and accompanying infographic) is a deep dive into the historical and systematic silencing, erasure, and revision of Black women's contributions to knowledge and scholarship in the United States, and globally. Exposing and countering this Playbook has become increasingly important following the firing of AI Ethics expert Dr. Timnit Gebru (and several of her supporters) at Google. This report should be used not only as a point of reference and insight on the latest thinking in the field of AI Ethics, but also as a tool for introspection as we aim to foster a more nuanced conversation regarding the impacts of AI on the world.


Precision Health Data: Requirements, Challenges and Existing Techniques for Data Security and Privacy

arXiv.org Artificial Intelligence

Precision health leverages information from various sources, including omics, lifestyle, environment, social media, medical records, and medical insurance claims, to enable personalized care, prevent and predict illness, and deliver precise treatments. It extensively uses sensing technologies (e.g., electronic health monitoring devices), computation (e.g., machine learning), and communication (e.g., interaction between health data centers). As health data contain sensitive private information, including the identities of patients and carers and patients' medical conditions, proper care is required at all times. Leakage of this private information can harm personal life, leading to bullying, higher insurance premiums, and loss of employment on account of one's medical history. Thus, the security and privacy of, and trust in, this information are of utmost importance. Moreover, government legislation and ethics committees demand the security and privacy of healthcare data. In light of these security, privacy, ethical, and regulatory requirements, finding the best methods and techniques for utilizing precision health data is essential. To this end, the paper first explores regulations and ethical guidelines around the world as well as domain-specific needs, then presents the requirements and investigates the associated challenges. Second, it surveys secure and privacy-preserving machine learning methods suitable for computing on precision health data, along with their usage in relevant health projects. Finally, it illustrates the best available techniques for precision health data security and privacy with a conceptual system model that enables compliance, ethics clearance, consent management, medical innovation, and development in the health domain.
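As one concrete (and much simplified) illustration of the privacy-preserving computation techniques such a survey covers, the sketch below applies the Laplace mechanism from differential privacy to a count query over toy records; the data, epsilon value, and function name are illustrative assumptions, not the paper's method.

```python
# Sketch only: release an aggregate count over sensitive records with noise
# calibrated to the query's sensitivity (1 for a counting query).
import numpy as np

rng = np.random.default_rng(0)
has_condition = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])  # toy patient flags

def dp_count(values, epsilon):
    """Return the count plus Laplace noise with scale = sensitivity / epsilon."""
    true_count = values.sum()
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Smaller epsilon -> more noise -> stronger privacy, less accuracy.
print(dp_count(has_condition, epsilon=0.5))
```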


Generative Adversarial Networks Applied to Observational Health Data

arXiv.org Machine Learning

Having been collected for its primary purpose in patient care, Observational Health Data (OHD) can further benefit patient well-being by sustaining the development of health informatics. However, the potential for secondary usage of OHD continues to be hampered by the fiercely private nature of patient-related data. Generative Adversarial Networks (GAN) have recently emerged as a groundbreaking approach to efficiently learn generative models that produce realistic Synthetic Data (SD). However, the application of GAN to OHD seems to have been lagging in comparison to other fields. We conducted a review of GAN algorithms for OHD in the published literature, and report our findings here.
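For readers unfamiliar with the mechanism, the sketch below is a deliberately minimal GAN in PyTorch that learns to mimic a toy two-column table; it illustrates the general adversarial setup, not any specific algorithm from the review, and real OHD generators must handle mixed-type, high-dimensional records.

```python
# Sketch only: a generator G learns to produce rows that a discriminator D
# cannot distinguish from rows of a toy "health" table.
import torch
import torch.nn as nn

torch.manual_seed(0)
# Toy real data: 256 records with two numeric columns (e.g. weight, biomarker).
real_data = torch.randn(256, 2) * torch.tensor([15.0, 0.5]) + torch.tensor([70.0, 1.0])

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(500):
    # Discriminator step: real records vs. generated (synthetic) records.
    fake = G(torch.randn(256, 8)).detach()
    d_loss = bce(D(real_data), torch.ones(256, 1)) + bce(D(fake), torch.zeros(256, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: produce records the discriminator labels as real.
    fake = G(torch.randn(256, 8))
    g_loss = bce(D(fake), torch.ones(256, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

synthetic = G(torch.randn(5, 8))  # 5 synthetic records resembling the real table
```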


Secure and Robust Machine Learning for Healthcare: A Survey

arXiv.org Machine Learning

Recent years have witnessed widespread adoption of machine learning (ML)/deep learning (DL) techniques due to their superior performance for a variety of healthcare applications, ranging from the prediction of cardiac arrest from one-dimensional heart signals to computer-aided diagnosis (CADx) using multi-dimensional medical images. Notwithstanding the impressive performance of ML/DL, there are still lingering doubts regarding the robustness of ML/DL in healthcare settings (which are traditionally considered quite challenging due to the myriad security and privacy issues involved), especially in light of recent results showing that ML/DL are vulnerable to adversarial attacks. In this paper, we present an overview of various application areas in healthcare that leverage such techniques from a security and privacy point of view and discuss the associated challenges. In addition, we present potential methods to ensure secure and privacy-preserving ML for healthcare applications. Finally, we provide insight into the current research challenges and promising directions for future research.
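To make the adversarial-attack threat concrete, the sketch below implements the fast gradient sign method (FGSM) against a toy PyTorch classifier; the model, input, and epsilon are illustrative stand-ins, not anything from the survey or a medical dataset.

```python
# Sketch only: FGSM perturbs the input in the direction that increases the
# loss, often enough to flip an undefended model's prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 2))        # toy classifier
x = torch.randn(1, 10, requires_grad=True)     # toy input "signal"
y = torch.tensor([0])                          # its true label

loss = F.cross_entropy(model(x), y)
loss.backward()                                # gradient of loss w.r.t. the input

epsilon = 0.25
x_adv = x + epsilon * x.grad.sign()            # small, crafted perturbation

print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))  # prediction may flip
```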


Large expert-curated database for benchmarking document similarity detection in biomedical literature search

#artificialintelligence

Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that cover a variety of research fields such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article/s. The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations.
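As a rough sketch of the decade-old style of document-similarity method such a benchmark is designed to evaluate and improve upon, the example below ranks candidate abstracts against a seed article using TF-IDF vectors and cosine similarity; the titles are invented placeholders.

```python
# Sketch only: score candidate documents by cosine similarity of their TF-IDF
# vectors to a seed (input) article, then rank them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

seed = "deep learning for biomedical image segmentation"
candidates = [
    "deep convolutional networks for medical image segmentation",
    "a survey of reinforcement learning in robotics",
    "transformer models for clinical text classification",
]

vectors = TfidfVectorizer().fit_transform([seed] + candidates)
scores = cosine_similarity(vectors[0], vectors[1:])[0]

for score, title in sorted(zip(scores, candidates), reverse=True):
    print(f"{score:.2f}  {title}")
```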


The Future of AI Part 3

#artificialintelligence

This article focuses on the impact of AI, 5G, and Edge Computing on the healthcare sector in the 2020s, and includes a section on Quantum Computing's potential impact on AI, healthcare, and financial services. The next article in the series will deal with how we can use AI in the fight against climate change, including the protection of the Amazon, smart cities, and AGI. For those who are new to AI, Machine Learning, and Deep Learning, I recommend taking a look at the article entitled "An Introduction to AI." I will refer to Machine Learning and Deep Learning as subsets of AI. Furthermore, this article is non-exhaustive in relation to the potential applications of AI in healthcare and of Quantum Computing across various sectors of the economy. The focus on AI in healthcare is prompted by recent articles from a few senior medical practitioners in the US expressing concern about the role of AI in healthcare. Some of the concerns expressed, such as the need for improved sharing of data ...