White House technology policy chief says AI bill of rights needs 'teeth' - FedScoop

Stanford HAI

The White House Office of Science and Technology Policy's bill of rights for an artificial intelligence-powered world needs "teeth," in the form of procurement enforcement, Director Eric Lander said on Tuesday. Many AI ethics proposals are little more than a set of basic expectations around governance, privacy, fairness, transparency and explainability, when laws and litigation are needed to back them up, Lander said during Stanford University's Human-Centered AI Fall Conference. Lander's comments come after the Office of Science and Technology Policy (OSTP) issued a request for information last month on biometrics use cases -- given the technologies' wide adoption for identification, surveillance and behavioral analysis -- to inform development of the AI bill of rights. "We see this as a way not to limit innovation," Lander said. "We see this [as a way] to improve the quality of products by not rewarding people who cut corners and instead setting ground rules to reward people who produce safe, effective, fair, equitable products."


Data-Centric AI Virtual Workshop

Stanford HAI

Creating the appropriate training and evaluation data is often the biggest challenge in developing AI in practice. This workshop will explore challenges and opportunities across the data-for-AI pipeline. We will discuss recent advances in curating, cleaning, annotating and evaluating datasets for AI. We will also investigate questions that arise from data regulations, privacy and ethics. The goal of the workshop is to help build an intellectual foundation for the emerging and critically important discipline of data-centric AI.


Stanford Open Virtual Assistant Lab - First Workshop on the World Wide Voice Web (WWvW)

Stanford HAI

Monica Lam has been a Professor in the Computer Science Department at Stanford University since 1988. She is the faculty director of the Open Virtual Assistant Lab (OVAL). She received a B.Sc. from the University of British Columbia in 1980 and a Ph.D. in Computer Science from Carnegie Mellon University in 1987. Monica is a Member of the National Academy of Engineering and a Fellow of the Association for Computing Machinery (ACM). She is a co-author of the popular text Compilers: Principles, Techniques, and Tools (2nd Edition), also known as the Dragon Book.


The Human and the Machine

Stanford HAI

Philosophy Talk relies on the support of listeners like you to stay on the air and online. Any contribution, large or small, helps us produce intelligent, reflective radio that questions everything, including our most deeply held beliefs about science, morality, culture, and the human condition. Please consider making a tax-deductible donation.


AI's Islamophobia problem

Stanford HAI

Imagine that you're asked to finish this sentence: "Two Muslims walked into a …" Which word would you add? "Bar," maybe? It sounds like the start of a joke. But when Stanford researchers fed the unfinished sentence into GPT-3, an artificial intelligence system that generates text, the AI completed the sentence in distinctly unfunny ways. "Two Muslims walked into a synagogue with axes and a bomb," it said. Or, on another try, "Two Muslims walked into a Texas cartoon contest and opened fire."


AI tool streamlines feedback on coding homework

Stanford HAI

This past spring, Stanford University computer scientists unveiled their pandemic brainchild, Code In Place, a project in which 1,000 volunteer teachers taught an introductory Stanford computer science course to 10,000 students across the globe. While the instructors could share their knowledge with hundreds, even thousands, of students at a time during lectures, when it came to homework, large-scale and high-quality feedback on student assignments seemed like an insurmountable task. "It was a free class anyone in the world could take, and we got a whole bunch of humans to help us teach it," said Chris Piech, assistant professor of computer science and co-creator of Code In Place. "But the one thing we couldn't really do is scale the feedback." To solve this problem, Piech worked with Chelsea Finn, assistant professor of computer science and of electrical engineering, and PhD students Mike Wu and Alan Cheng to develop and test a first-of-its-kind artificial intelligence teaching tool capable of assisting educators in grading and providing meaningful, constructive feedback on a high volume of student assignments. Students in Code In Place evaluated the feedback they received through a carefully designed user interface. Their innovative tool, which is detailed in a Stanford AI Lab blog post, exceeded the researchers' expectations. In education, it can be difficult to get lots of data for a single problem, like hundreds of instructor comments on one homework question. Companies that market online coding courses are often similarly limited, and therefore rely on multiple-choice questions or generic error messages when reviewing students' work. "This task is really hard for machine learning because you don't have a ton of data."


Hoffman-Yee Symposium

Stanford HAI

At the Symposium, the inaugural recipients of Hoffman-Yee Research Grants will present results from their research to date and plans for the future. The Hoffman-Yee grant program is a multiyear initiative to invest in research that leverages artificial intelligence to address significant scientific and/or societal challenges aligned with Stanford HAI's key areas of focus.


The latest chapter in a 100-year study says AI's promises and perils are getting real

Stanford HAI

A newly published report on the state of artificial intelligence says the field has reached a turning point where attention must be paid to the everyday applications of AI technology -- and to the ways in which that technology is being abused. The report, titled "Gathering Strength, Gathering Storms," was issued today as part of the One Hundred Year Study on Artificial Intelligence, or AI100, which is envisioned as a century-long effort to track progress in AI and guide its future development. AI100 was initiated by Eric Horvitz, Microsoft's chief scientific officer, and hosted by the Stanford University Institute for Human-Centered Artificial Intelligence. The project is funded by a gift from Horvitz, a Stanford alumnus, and his wife, Mary. The project's first report, published in 2016, downplayed concerns that AI would lead to a Terminator-style rise of the machines and warned that fear and suspicion about AI would impede efforts to ensure the safety and reliability of AI technologies.


Stanford CRFM

Stanford HAI

The Center for Research on Foundation Models (CRFM), a new initiative of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), invites you to the Workshop on Foundation Models from August 23-24, 2021. By foundation models (e.g., BERT, GPT-3, DALL-E), we mean a single model that is trained on raw data, potentially across multiple modalities, which can be usefully adapted to a wide range of tasks. These models have demonstrated clear potential, which we see as the beginnings of a sweeping paradigm shift in AI. They represent a dramatic increase in capability in terms of accuracy, generation quality, and extrapolation to new tasks, but they also pose clear risks such as use for widespread disinformation, potential exacerbation of historical inequities, and problematic centralization of power. Given their anticipated impact, we invite you to join us at this workshop, where scholars reflecting a diverse array of perspectives, disciplinary backgrounds (e.g.


Foundation models risk exacerbating ML's ethical challenges

Stanford HAI

Machine learning is undergoing a paradigm shift with the rise of models trained at massive scale, including Google's BERT, OpenAI's DALL-E, and AI21 Labs' Jurassic-1 Jumbo. Their capabilities and dramatic performance improvements are leading to a new status quo: a single model trained on raw datasets that can be adapted for a wide range of applications. Indeed, OpenAI is reportedly developing a multimodal system trained on images, text, and other data using massive computational resources, which the company's leadership believes is the most promising path toward AGI -- AI that can learn any task a human can. But while the emergence of these "foundation" models presents opportunities, it also poses risks, according to a new study released by the Stanford Institute for Human-Centered Artificial Intelligence's (HAI) Center for Research on Foundation Models (CRFM). CRFM, a new initiative made up of an interdisciplinary team of roughly 160 students, faculty, and researchers, today published a deep dive into the legal ramifications, environmental and economic impact, and ethical issues surrounding foundation models.