Policy Brief

Stanford HAI

Artificial intelligence applications are frequently used without any mechanism for external testing or evaluation. Modern machine learning systems are opaque to outside stakeholders, including researchers, who can probe a system only by providing inputs and measuring outputs. Researchers, users, and regulators alike are thus forced to grapple with using, being impacted by, or regulating algorithms they cannot fully observe. This brief reviews the history of algorithm auditing, describes its current state, and offers best practices for conducting algorithm audits today. We identified nine considerations for algorithm auditing, including legal and ethical risks, factors of discrimination and bias, and conducting audits continuously so as not to capture just one moment in time.


10 years later, deep learning 'revolution' rages on, say AI pioneers Hinton, LeCun and Li

Stanford HAI

Artificial intelligence (AI) pioneer Geoffrey Hinton, one of the trailblazers of the deep learning "revolution" that began a decade ago, says that the rapid progress in AI will continue to accelerate. In an interview ahead of the 10-year anniversary of the key neural network research that led to a major AI breakthrough in 2012, Hinton and other leading AI luminaries fired back at critics who say deep learning has "hit a wall." "We're going to see big advances in robotics: dexterous, agile, more compliant robots that do things more efficiently and gently, like we do," Hinton said. Other AI pathbreakers, including Yann LeCun, head of AI and chief scientist at Meta, and Stanford University professor Fei-Fei Li, agree with Hinton that the results of the groundbreaking 2012 research on the ImageNet database, which built on previous work to unlock significant advances in computer vision specifically and deep learning overall, pushed deep learning into the mainstream and sparked momentum that will be hard to stop.


Fellowship Programs

Stanford HAI

HAI Fellowship Programs offer opportunities to explore topics, conduct research, and collaborate across disciplines related to AI technologies, applications, or impact. The Institute for Human-Centered Artificial Intelligence (HAI) offers a two-quarter program for Stanford graduate students. The goal of this program is to encourage interdisciplinary research conversations, facilitate new collaborations, and grow the HAI community of graduate scholars working in the area of AI, broadly defined. HAI is seeking graduate students to participate in this program, and we would like to ensure the cohort is well-rounded across disciplines.


Policy Brief

Stanford HAI

While machine learning applications in healthcare continue to shape patient-care experiences and medical outcomes, discriminatory AI decision-making remains a serious concern. The issue is especially pronounced in clinical settings, where individuals' well-being and physical safety are on the line and medical professionals face life-or-death decisions every day. Until now, the conversation about measuring algorithmic fairness in healthcare has focused on fairness itself and has not fully taken into account how fairness techniques could affect clinical predictive models, which are often derived from large clinical datasets. This brief seeks to ground the debate in evidence and suggests the best way forward in developing fairer ML tools for clinical settings. We studied the trade-offs clinical predictive algorithms face between accuracy and fairness for outcomes such as hospital mortality, prolonged hospital stays, and 30-day hospital readmissions.


Inclusive design will help create AI that works for everyone

Stanford HAI

A few years ago, a New Jersey man was arrested for shoplifting and spent ten days in jail. He was actually 30 miles away at the time of the incident; police facial recognition software had wrongfully identified him. Facial recognition's race and gender failings are well known.


Policy Brief

Stanford HAI

As the development and adoption of AI-enabled healthcare continue to accelerate, regulators and researchers are beginning to confront oversight gaps in the clinical evaluation process that, left unchecked, could harm patient health. Since 2015, the United States Food and Drug Administration (FDA) has evaluated and granted clearance for over 100 AI-based medical devices using a fairly rudimentary evaluation process that is in dire need of improvement, as these evaluations have not been adapted to address the unique concerns surrounding AI. This brief examined that evaluation process and analyzed how devices were assessed before approval. We reviewed public records for all 130 FDA-approved medical AI devices between January 2015 and December 2020 and found significant variation and limitations in the rigor of test data and in what developers considered appropriate clinical evaluation. When we analyzed a well-established diagnostic task (detecting pneumothorax, or collapsed lung) using three different sets of training data, the disparity in error between white and Black patients increased dramatically.


Welcome

Stanford HAI

It is with great pleasure that we invite you to the 3rd Annual AIMI Symposium. Our goal is to make the best science accessible to a broad audience of academic, clinical, and industry attendees. Through the AIMI Symposium, we hope to address gaps and barriers in the field and catalyze more evidence-based solutions to improve health for all. We are grateful to The Big Data in Biomedicine Fund for the generous support that allows students and fellows to attend the 2022 AIMI Symposium for free.


White Paper

Stanford HAI

Developing responsible, human-centered artificial intelligence (AI) is a complex and resource-intensive task. As governments around the world race to meet the opportunities and challenges of developing AI, there remains an absence of deep, technical international cooperation that allows like-minded countries to leverage one another's resources and competitive advantages to facilitate cutting-edge AI research in a manner that upholds and promotes democratic values. Establishing a Multilateral AI Research Institute (MAIRI) would provide such a venue for force-multiplying AI research and development collaboration. It would also reinforce the United States' leadership as an international hub for basic and applied AI research, the development of AI governance models, and the fostering of AI norms that align with human-centric and democratic values. In its final report published in March 2021, the National Security Commission on Artificial Intelligence (NSCAI) recommended that the United States work closely with key allies and partners to establish a MAIRI and called for congressional authorization and funding to allow the National Science Foundation (NSF) to lead the effort.


2022 HAI Spring Conference on Key Advances in Artificial Intelligence

Stanford HAI

The HAI Spring Conference will explore three key advances in artificial intelligence – accountable AI, foundation models, and embodied AI in virtual and real worlds – as well as what the future of this technology might hold.