Collaborating Authors

 Stanford HAI


Corporate investment in AI down for first time in a decade • The Register

Stanford HAI

Global private investment and the number of AI startups decreased in 2022, while the industry's adoption of the technology has plateaued compared to previous years, according to new data. This revelation hits at a time when AI hype is at an all-time high. Commercial tools capable of generating images, text, code, video, audio, and even music are rapidly improving and becoming increasingly convincing. Companies across different industries are looking to deploy generative AI features to revamp existing products and services or create new ones. Analysts are predicting the boom will increase global productivity and change the labor force, while experts are debating whether the technology poses an existential threat to humanity.


AI is entering an era of corporate control - The Verge

Stanford HAI

Private investment in AI decreased for the first time in a decade. Global private investment in AI had been climbing for years, but fell 26.7 percent from 2021 to $91.9 billion in 2022. Training big AI models has environmental costs. A 2022 paper estimates that training a large AI language model called BLOOM emitted 25 times as much carbon as flying one passenger from New York to San Francisco and back. By comparison, OpenAI's GPT-3 was estimated to have a carbon cost 20 times that of BLOOM.
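As a quick sanity check on the figures above (assuming the 26.7 percent decline and the $91.9 billion 2022 total refer to the same investment series), the implied 2021 total works out to roughly $125 billion:

```python
# Implied 2021 global private AI investment, given the reported
# 26.7% year-over-year decline to $91.9B in 2022.
investment_2022 = 91.9            # USD billions (reported)
decline = 0.267                   # reported year-over-year decrease
investment_2021 = investment_2022 / (1 - decline)
print(round(investment_2021, 1))  # roughly 125.4 (USD billions)
```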


The AI arms race is on. But we should slow down AI progress instead. - Vox

Stanford HAI

"Computers need to be accountable to machines," a top Microsoft executive told a roomful of reporters in Washington, DC, on February 10, three days after the company launched its new AI-powered Bing search engine. He quickly corrected himself: "Computers need to be accountable to people!" and then made sure to clarify, "That was not a Freudian slip." Slip or not, the laughter in the room betrayed a latent anxiety. Progress in artificial intelligence has been moving so unbelievably fast lately that the question is becoming unavoidable: How long until AI dominates our world to the point where we're answering to it rather than it answering to us? First, last year, we got DALL-E 2 and Stable Diffusion, which can turn a few words of text into a stunning image. Then Microsoft-backed OpenAI gave us ChatGPT, which can write essays so convincing that it freaks out everyone from teachers (what if it helps students cheat?) to journalists (could it replace them?) to disinformation experts (will it amplify conspiracy ...


Subscribe to Stanford HAI to Receive the 2023 AI Index Report

Stanford HAI

The AI Index serves as one of the most credible and authoritative sources of data and insights about AI, giving policymakers, researchers, journalists, executives, and the general public a deeper understanding of the field.


Stanford CRFM

Stanford HAI

DALL-E 2, Stable Diffusion, and others transformed the image generation space. We saw more powerful language models, such as PaLM and, of course, ChatGPT. We saw foundation models being developed for speech, music, proteins, and many other data modalities. And, for the first time, these models are now being widely deployed and used by consumers to accomplish a broad range of useful tasks. What is clear is that while foundation models have opened up unprecedented new possibilities, they are also still raw, imperfect research artifacts that we do not entirely understand. In 2021, we founded the Center for Research on Foundation Models (CRFM), recognizing the critical role of foundation models. CRFM's mission is to understand and improve foundation models from both a technical and societal perspective.


[2301.11305] DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature

Stanford HAI

The fluency and factual knowledge of large language models (LLMs) heighten the need for corresponding systems to detect whether a piece of text is machine-written. For example, students may use LLMs to complete written assignments, leaving instructors unable to accurately assess student learning. In this paper, we first demonstrate that text sampled from an LLM tends to occupy negative curvature regions of the model's log probability function. Leveraging this observation, we then define a new curvature-based criterion for judging if a passage is generated from a given LLM. This approach, which we call DetectGPT, does not require training a separate classifier, collecting a dataset of real or generated passages, or explicitly watermarking generated text. It uses only log probabilities computed by the model of interest and random perturbations of the passage from another generic pre-trained language model (e.g., T5). We find DetectGPT is more discriminative than existing zero-shot methods for model sample detection, notably improving detection of fake news articles generated by 20B parameter GPT-NeoX from 0.81 AUROC for the strongest zero-shot baseline to 0.95 AUROC for DetectGPT. See https://ericmitchell.ai/detectgpt for code, data, and other project information.
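The curvature criterion can be illustrated with a toy sketch. Everything below is illustrative: `detectgpt_score` is our own name (not the authors' code), and the log-probabilities are made-up numbers; in the real method the perturbed rewrites come from a mask-filling model such as T5 and the log-probabilities from the model under test.

```python
import statistics

def detectgpt_score(logp_original, logp_perturbed):
    """Perturbation discrepancy in the spirit of DetectGPT: how far the
    passage's log-probability sits above the mean log-probability of its
    perturbed rewrites, normalized by their standard deviation.

    Machine-generated text tends to sit near a local maximum of the
    model's log-probability (a negative-curvature region), so perturbing
    it lowers log-probability sharply, yielding a large score."""
    mu = statistics.mean(logp_perturbed)
    sigma = statistics.stdev(logp_perturbed)
    return (logp_original - mu) / sigma

# Toy numbers: perturbing machine text drops its log-prob a lot,
# while perturbing human text barely moves it.
machine = detectgpt_score(-80.0, [-93.0, -95.0, -97.0])    # 7.5
human = detectgpt_score(-120.0, [-119.0, -121.0, -123.0])  # 0.5
print(machine > human)  # True: thresholding the score separates them
```

Thresholding this score is what makes the method zero-shot: no classifier is trained, and no labeled dataset of real versus generated passages is needed.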


Stanford faculty weigh in on ChatGPT's shake-up in education

Stanford HAI

Faculty from the Stanford Accelerator for Learning are already thinking about the ways in which ChatGPT and other generative artificial intelligence will change and contribute to education in particular. Victor Lee, associate professor of education and the faculty lead for the accelerator initiative on generative AI in education, stresses the importance of educators in harnessing this technology. "If we want generative AI to meaningfully improve education," he says, "there is the obvious step we need to take of listening to the existing expertise in education -- from educators, parents, students, and scholars who have spent years studying education -- and using what we learn to find the most pertinent and valuable use cases for generative AI in a very complicated educational system." Over the next several weeks, the Stanford Accelerator for Learning will launch listening sessions and gatherings with educators to strategize a path for generative AI. Says Lee, "We need the use of this technology to be ethical, equitable, and accountable."


White Paper

Stanford HAI

This White Paper assesses the progress of three pillars of U.S. leadership in AI innovation and trustworthy AI that carry the force of law: (i) the AI in Government Act of 2020; (ii) the Executive Order on "AI Leadership"; and (iii) the Executive Order on "AI in Government." Collectively, these Executive Orders and the AI in Government Act have been critical to defining the U.S. national strategy on AI and envisioning an ecosystem where the U.S. government leads in AI and promotes trustworthy AI. We systematically examined the implementation status of each requirement and performed a comprehensive search across 200 federal agencies to assess compliance with key requirements, identify regulatory authorities pertaining to AI, and enumerate AI use cases. While much progress has been made, our findings are sobering. America's AI innovation ecosystem is threatened by weak and inconsistent implementation of these legal requirements.


Sarah Bana: Unraveling the Language of Work

Stanford HAI

Digital economist Sarah Bana has been helping people find work for a long time. As an undergraduate at the University of California Irvine, she'd sit down with friends to clean up their resumes and brainstorm potential jobs on campus. But not just any job; Bana always wanted to match the right person to the right position. "I started fairly early understanding the campus environment and job space," she says. "I'd ask, 'Do you want to work at the library where it's quiet or in the dining hall where it's louder but exciting and where you might meet new people? Or do you want to work at the Cross-Cultural Center because you care about diversity and equity on campus?' For me, it's always been intuitive to help people find their fit in an environment where they can work but also thrive."