
Aman's AI Journal • Read List

#artificialintelligence

Foundation models is a term first popularized by the Stanford Institute for Human-Centered Artificial Intelligence. This paradigm has a host of benefits, including: (i) instead of requiring a large, well-labelled dataset for each specific task, foundation models need a great amount of unlabeled data for pretraining and only a limited set of labeled data to fine-tune them for different downstream tasks, thereby reducing the labeled data requirements dramatically; (ii) since a foundation model can be shared across different downstream tasks, we can save on the resources needed to train task-specific models owing to the knowledge transfer that foundation models bring about (training a relatively large model with billions of parameters has roughly the same carbon footprint as running five cars over their lifetimes); and (iii) democratizing AI research by making it much easier for small businesses to deploy AI in a wider range of mission-critical situations owing to the reduced data labeling requirements.
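The pretrain-then-fine-tune workflow described above can be sketched in a few lines. The example below is a deliberately toy stand-in, not an actual foundation model: it uses a PCA-style projection learned from plentiful unlabeled data as the shared "foundation" representation, then fits a small linear task head on only a handful of labeled examples. All data, dimensions, and names here are synthetic illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretraining": learn a shared representation from abundant UNLABELED data.
# Toy stand-in for self-supervised pretraining: keep the top principal
# directions of the unlabeled corpus as a frozen feature encoder.
unlabeled = rng.normal(size=(1000, 20))          # large unlabeled dataset
mean = unlabeled.mean(axis=0)
_, _, vt = np.linalg.svd(unlabeled - mean, full_matrices=False)
encoder = vt[:5].T                               # 20-dim input -> 5-dim features

def embed(x):
    """Map raw inputs through the frozen 'foundation' representation."""
    return (x - mean) @ encoder

# "Fine-tuning": only a SMALL labeled set is needed to fit a task-specific
# head on top of the frozen representation (here, a least-squares linear head).
x_small = rng.normal(size=(30, 20))              # just 30 labeled examples
y_small = x_small @ rng.normal(size=20)          # synthetic regression target
head, *_ = np.linalg.lstsq(embed(x_small), y_small, rcond=None)

pred = embed(x_small) @ head
print(pred.shape)                                # one prediction per example
```

The point of the sketch is the division of labor: the expensive representation is learned once from unlabeled data and reused, while each downstream task only pays for a small labeled set and a cheap head, mirroring benefit (i) and (ii) above.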


Fellowship Programs

Stanford HAI

HAI Fellowship Programs offer opportunities to explore topics, conduct research, and collaborate across disciplines related to AI technologies, applications, or impact. The Institute for Human-Centered Artificial Intelligence (HAI) offers a 2-quarter program for Stanford Graduate Students. The goal of this program is to encourage interdisciplinary research conversations, facilitate new collaborations, and grow the HAI community of graduate scholars who are working in the area of AI, broadly defined. HAI is seeking graduate students to participate in this program. We would like to ensure the cohort is well-rounded across disciplines.


CoAuthor: Stanford experiments with human-AI collaborative writing

#artificialintelligence

This article is an existential crisis. It is written by a professional writer writing about artificial intelligence that helps writers write. I mean, shouldn't humans write their own content?


Rob Reich: AI developers need a code of responsible conduct

#artificialintelligence

We are excited to bring Transform 2022 back in-person July 19 and virtually July 20 - 28. Join AI and data leaders for insightful talks and exciting networking opportunities. Rob Reich wears many hats: political philosopher, director of the McCoy Family Center for Ethics in Society, and associate director of the Stanford Institute for Human-Centered Artificial Intelligence. In recent years, Reich has delved deeply into the ethical and political issues posed by revolutionary technological advances in artificial intelligence (AI). His work is not always easy for technologists to hear. In his book, System Error: Where Big Tech Went Wrong and How We Can Reboot, Reich and his co-authors (computer scientist Mehran Sahami and social scientist Jeremy M. Weinstein) argued that tech companies and developers are so fixated on "optimization" that they often trample on human values.


Words matter: AI can predict salaries based on the text of online job postings

#artificialintelligence

The job landscape in the United States is dramatically shifting: The COVID-19 pandemic has redefined essential work and moved workers out of the office. New technologies are transforming the nature of many occupations. Globalization continues to push jobs to new locations. And climate change concerns are adding jobs in the alternative energy sector while cutting them from the fossil fuel industry.


Why foundation models in AI need to be released responsibly

#artificialintelligence

Percy Liang is director of the Center for Research on Foundation Models, a faculty affiliate at the Stanford Institute for Human-Centered AI and an associate professor of Computer Science at Stanford University. Humans are not very good at forecasting the future, especially when it comes to technology. Foundation models are a new class of large-scale neural networks with the ability to generate text, audio, video and images. These models will anchor all kinds of applications and hold the power to influence many aspects of society. It's difficult for anyone, even experts, to imagine where this technology will lead in the coming years.


Federal banking agencies trying to ensure AI, ML benefit most rather than the few

#artificialintelligence

As artificial intelligence and machine learning are deployed across financial sectors, the federal government needs a way to ensure standards for stability and inclusion are followed. Measuring risks and setting benchmarks for emerging fintech is top of mind for agencies such as the National Institute of Standards and Technology and the Commerce Department. In her first public engagement since being sworn in earlier this month, NIST Director Laurie Locascio told an audience at Stanford University on Wednesday that the president's 2023 budget request calls for an additional $80 million to expand and strengthen NIST capabilities for targeting critical and emerging technologies. Listing ways the agency is trying to enable trustworthy AI, she said NIST scientists and engineers are developing taxonomies, terminology and testbeds for measuring AI risks. "NIST is developing a resource center of documents, software and standards and related tools that continue to better understanding and better identification of measurement, and management of various risks associated with AI systems," she said during the Artificial Intelligence and the Economy Conference.


2022 Artificial Intelligence Index Report published

AIHub

The 2022 AI Index Report has been published. Compiled by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), it tracks, summarises and visualises data relating to artificial intelligence. The aim of the report is to provide unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI. Find out more about the report here. You can access the full pdf version here.


g-f(2)227 The Big Picture of Business Artificial Intelligence (4/17/2021), IEEE Spectrum, 15 Graphs You Need to See to Understand AI in 2021.

#artificialintelligence

The massive document, produced by the Stanford Institute for Human-Centered Artificial Intelligence, is packed full of data and graphs, and we've plucked out 15 that provide a snapshot of the current state of AI. AI research is booming: More than 120,000 peer-reviewed AI papers were published in 2019. The money continues to pour in. Global corporate investment in AI soared to nearly $68 billion in 2020, an increase of 40 percent over the year before. Corporations are steadily increasing their adoption of AI tools in such industries as telecom, financial services, and automotive.


America's global leadership in human-centered AI can't come from industry alone

#artificialintelligence

The Biden administration has followed through on a Congressional mandate to create a National AI Research Resource Task Force. With top experts from the federal government, higher education, and private organizations, the task force is dedicated to strengthening America's foundation and spurring advances in artificial intelligence (AI). As a computer scientist, AI researcher and educator, co-director of the Stanford Institute for Human-Centered AI, and a major supporter of the bipartisan legislation that authorized this endeavor, I am honored to have accepted an offer to serve as a member of the task force. The time has never been more critical for us to come together and cement America's leadership in AI -- a technology that has the potential to drive innovation in every industry, from manufacturing and healthcare to transportation and defense. Like all technologies human civilization has built, AI is a tool that is as good or as bad as those who make and use it.