Collaborating Authors: ai2


A New Kind of AI Model Lets Data Owners Take Control

WIRED

A new kind of large language model, developed by researchers at the Allen Institute for AI (Ai2), makes it possible to control how training data is used even after a model has been built. The new model, called FlexOlmo, could challenge the current industry paradigm of big artificial intelligence companies slurping up data from the web, books, and other sources--often with little regard for ownership--and then owning the resulting models entirely. Once data is baked into an AI model today, extracting it from that model is a bit like trying to recover the eggs from a finished cake. "Conventionally, your data is either in or out," says Ali Farhadi, CEO of Ai2, based in Seattle, Washington. "Once I train on that data, you lose control. And you have no way out, unless you force me to go through another multi-million-dollar round of training."
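The excerpt does not detail the mechanism, but the property Farhadi describes, withdrawing a contributor's data without another multi-million-dollar training run, implies a modular architecture in which each owner's contribution lives in a separable module. Below is a minimal, hypothetical PyTorch sketch of that general idea; the class, method names, and mixing scheme are illustrative assumptions, not FlexOlmo's actual design or API.

```python
# Hypothetical sketch: each data owner contributes a separable expert module
# that can be detached at inference time, so withdrawing data does not require
# retraining. Names and mixing scheme are illustrative, not FlexOlmo's design.
import torch
import torch.nn as nn

class ModularLM(nn.Module):
    def __init__(self, d_model=64):
        super().__init__()
        self.backbone = nn.Linear(d_model, d_model)  # shared public model
        self.experts = nn.ModuleDict()               # one module per data owner

    def add_expert(self, owner, d_model=64):
        # Trained by the owner on their own data; the raw data never leaves them.
        self.experts[owner] = nn.Linear(d_model, d_model)

    def remove_expert(self, owner):
        # Opting out: drop the owner's module, no new training run needed.
        del self.experts[owner]

    def forward(self, x):
        h = self.backbone(x)
        if not self.experts:
            return h
        # Average contributions of whichever experts are currently opted in.
        return h + torch.stack([e(h) for e in self.experts.values()]).mean(dim=0)

model = ModularLM()
model.add_expert("publisher_a")
model.add_expert("publisher_b")
_ = model(torch.randn(1, 64))        # inference with both contributions
model.remove_expert("publisher_b")   # publisher_b withdraws its data
_ = model(torch.randn(1, 64))        # still works, without that expert
```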


The Most Capable Open Source AI Model Yet Could Supercharge AI Agents

WIRED

The most capable open source AI model with visual abilities yet could enable more developers, researchers, and startups to build AI agents that carry out useful chores on your computer for you. Released today by the Allen Institute for AI (Ai2), the Multimodal Open Language Model, or Molmo, can interpret images as well as converse through a chat interface. This means it can make sense of a computer screen, potentially helping an AI agent perform tasks such as browsing the web, navigating through file directories, and drafting documents. "With this release, many more people can deploy a multimodal model," says Ali Farhadi, CEO of Ai2, a research organization based in Seattle, Washington, and a computer scientist at the University of Washington. "It should be an enabler for next-generation apps."


A tiny new open-source AI model performs as well as powerful big ones

MIT Technology Review

Meanwhile, Ai2 says a smaller Molmo model, with 7 billion parameters, comes close to OpenAI's state-of-the-art model in performance, an achievement it ascribes to vastly more efficient data collection and training methods. What Molmo shows is that open-source AI development is now on par with closed, proprietary models, says Ali Farhadi, the CEO of Ai2. And open-source models have a significant advantage, as their open nature means other people can build applications on top of them. A public Molmo demo is already live, and the model will be available for developers to tinker with on the Hugging Face website. Other large multimodal language models are trained on vast data sets containing billions of images and text samples that have been hoovered from the internet, and they can include several trillion parameters.
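For developers who want to tinker, loading Molmo looks like loading any other Hugging Face model, with the caveat that it ships custom code. The snippet below follows the usage pattern published on the allenai/Molmo-7B-D-0924 model card at release; the repo id and the generate_from_batch helper come from that card and may change, so treat this as a sketch rather than a stable API.

```python
# Sketch of Molmo inference via Hugging Face, per the allenai/Molmo-7B-D-0924
# model card at release; repo id and helper methods may change.
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig

repo = "allenai/Molmo-7B-D-0924"
processor = AutoProcessor.from_pretrained(repo, trust_remote_code=True,
                                          torch_dtype="auto", device_map="auto")
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True,
                                             torch_dtype="auto", device_map="auto")

# Ask the model to describe an arbitrary image (the URL is a placeholder).
image = Image.open(requests.get("https://picsum.photos/536/354", stream=True).raw)
inputs = processor.process(images=[image], text="Describe this image.")
inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}

output = model.generate_from_batch(
    inputs,
    GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
    tokenizer=processor.tokenizer,
)
answer = processor.tokenizer.decode(
    output[0, inputs["input_ids"].size(1):], skip_special_tokens=True
)
print(answer)
```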


AI2: The next leap toward native language based and explainable machine learning framework

Dessureault, Jean-Sébastien, Massicotte, Daniel

arXiv.org Artificial Intelligence

Machine learning frameworks have flourished over the last decades, allowing artificial intelligence to move out of academic circles and into enterprise domains. The field has advanced significantly, but meaningful improvements are still needed to meet expectations. The proposed framework, named AI$^{2}$, uses a natural language interface that allows a non-specialist to benefit from machine learning algorithms without necessarily knowing how to program. The primary contribution of the AI$^{2}$ framework is letting a user invoke machine learning algorithms in English, making the interface easier to use. The second contribution is greenhouse gas (GHG) awareness: the framework estimates the GHG emissions of the algorithm about to be called and proposes alternatives so a solution can be found without executing the energy-intensive algorithm. Another contribution is a preprocessing module that helps describe and load data properly. Using an English text-based chatbot, this module guides the user in defining each dataset so that it can be described, normalized, loaded, and split appropriately. The last contribution of this paper concerns explainability. For decades, the scientific community has known that machine learning algorithms suffer from the famous black-box problem: traditional methods convert an input into an output without being able to justify the result. The proposed framework explains an algorithm's process with appropriate text, graphics, and tables. The results, presented across five cases, trace usage from the user's English command to the explained output. Ultimately, the AI$^{2}$ framework represents the next leap toward a native-language-based, human-oriented machine learning framework.
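To make the first contribution concrete, here is a minimal sketch of what an English-command front end to machine learning algorithms can look like. It is a toy illustration of the idea only, built from a regex dispatcher and scikit-learn, and is not the AI$^{2}$ framework's implementation.

```python
# Toy illustration of an English-command interface to ML algorithms
# (hypothetical; not the AI^2 framework's actual implementation).
import re
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

COMMANDS = {
    # "...cluster ... into N groups" -> k-means with N clusters
    r"cluster .* into (\d+) groups": lambda m, X, y: KMeans(
        n_clusters=int(m.group(1)), n_init="auto").fit(X),
    # "...classify..." -> a decision tree fitted on labeled data
    r"classify": lambda m, X, y: DecisionTreeClassifier().fit(X, y),
}

def run_english_command(text, X, y=None):
    """Map an English request onto a machine learning call."""
    for pattern, action in COMMANDS.items():
        match = re.search(pattern, text.lower())
        if match:
            return action(match, X, y)
    raise ValueError(f"Command not understood: {text!r}")

X = [[0.0], [0.1], [5.0], [5.1], [9.0], [9.2]]
model = run_english_command("Please cluster my data into 3 groups", X)
print(model.labels_)   # three groups of two points each
```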


Could AI help you to write your next paper?

#artificialintelligence

You know that text autocomplete function that makes your smartphone so convenient -- and occasionally frustrating -- to use? Well, now tools based on the same idea have progressed to the point that they are helping researchers to analyse and write scientific papers, generate code and brainstorm ideas. The tools come from natural language processing (NLP), an area of artificial intelligence aimed at helping computers to 'understand' and even produce human-readable text. Called large language models (LLMs), these tools have evolved to become not only objects of study but also assistants in research. LLMs are neural networks that have been trained on massive bodies of text to process and, in particular, generate language.
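The "autocomplete" framing maps directly onto how these models are used in practice: a causal language model takes a prompt and continues it token by token. A small sketch with the Hugging Face transformers library, using GPT-2 as a stand-in for the larger research-grade LLMs the article describes:

```python
# Autocomplete at research scale, in miniature: a causal LM continues a
# prompt token by token. GPT-2 stands in here for larger LLMs.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "In this paper, we propose a method for"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])
```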


University of Washington computer science professor Yejin Choi wins $800K 'genius grant'

University of Washington Computer Science

Yejin Choi, a University of Washington computer science professor and senior research manager at Seattle's Allen Institute for Artificial Intelligence (AI2), won an $800,000 "genius grant" given annually by the John D. and Catherine T. MacArthur Foundation. Choi, one of 25 MacArthur Fellows for 2022 announced Wednesday, is an expert in natural language processing. Her work aims to improve the ability of computers and artificial intelligence systems to perform commonsense reasoning and understand implied meaning in human language. "This is such a great honor because there have been only two other researchers in the natural language processing field who have received this award," Choi told UW News. Choi spoke to GeekWire earlier this year about the debate over a robot's ability to have human-like feelings.


AI2's Unified-IO can complete a range of AI tasks – TechCrunch

#artificialintelligence

The Allen Institute for AI (AI2), the division within the nonprofit Allen Institute focused on machine learning research, today published its work on an AI system, called Unified-IO, that it claims is among the first to perform a "large and diverse" set of AI tasks. Unified-IO can process and create images, text and other structured data, a feat that the research team behind it says is a step toward building capable, unified general-purpose AI systems. "We are interested in building task-agnostic [AI systems], which can enable practitioners to train [machine learning] models for new tasks with little to no knowledge of the underlying machinery," Jiasen Lu, a research scientist at AI2 who worked on Unified-IO, told TechCrunch via email. "Such unified architectures alleviate the need for task-specific parameters and system modifications, can be jointly trained to perform a large variety of tasks and can share knowledge across tasks to boost performance." AI2's early efforts in building unified AI systems led to GPV-1 and GPV-2, two general-purpose, "vision-language" systems that supported a handful of workloads including captioning images and answering questions.
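The unifying trick behind systems like Unified-IO is to serialize every modality into one shared token vocabulary, so a single sequence-to-sequence model can be trained on many tasks at once. The sketch below illustrates that homogenization step only; the tokenization scheme and id ranges are invented for illustration and are not AI2's code.

```python
# Illustration of "unified I/O": every modality becomes ids in one shared
# vocabulary, so one seq2seq model can serve many tasks. Invented scheme.
import numpy as np

TEXT_OFFSET = 0          # text tokens occupy one id range...
IMAGE_OFFSET = 50_000    # ...image tokens another, so they never collide

def encode_text(s):
    # Stand-in tokenizer: raw UTF-8 bytes as token ids.
    return [TEXT_OFFSET + b for b in s.encode("utf-8")]

def encode_image(img, levels=256):
    # Stand-in image tokenizer: quantize each pixel into a discrete id.
    q = np.clip((img * (levels - 1)).astype(int), 0, levels - 1)
    return [IMAGE_OFFSET + v for v in q.ravel().tolist()]

def build_input(task, text="", img=None):
    # A task prefix plus serialized inputs: one flat sequence, any task.
    tokens = encode_text(f"[{task}] {text}")
    if img is not None:
        tokens += encode_image(img)
    return tokens

caption_in = build_input("caption", img=np.random.rand(4, 4))
vqa_in = build_input("vqa", text="What color is the square?", img=np.random.rand(4, 4))
print(len(caption_in), len(vqa_in))  # both are plain token sequences
```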


Global Big Data Conference

#artificialintelligence

OpenAI's impressive AI language model GPT-3 has plenty of things going for it, but with 175 billion parameters no one would claim it's particularly streamlined. The Allen Institute for AI (AI2) has demonstrated a model that performs as well or better than GPT-3 on answering questions, but is a tenth the size. Macaw, AI2's model, emerged from research being done at the nonprofit into creating an AI that performs at human levels on standardized tests. "After we got a very high score they moved on to harder questions," said AI2 head Oren Etzioni. "There's this paradox where sometimes the questions that are easiest for people are the hardest for machines -- and the biggest gap was in common sense." For instance, he said, when asked "When did Tom Hanks land on the moon?" GPT-3 answers 1995, since that's when the film Apollo 13 came out.
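Macaw was released as a T5-style sequence-to-sequence model, so the comparison above is easy to reproduce yourself. The snippet below follows the slot-based prompt format documented in the allenai/macaw repository; the checkpoint name on the Hugging Face hub (allenai/macaw-large) is taken from that release and worth double-checking.

```python
# Querying Macaw with its slot format ("$answer$ ; $question$ = ..."),
# per the allenai/macaw release; checkpoint name assumed from that repo.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/macaw-large")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/macaw-large")

prompt = "$answer$ ; $question$ = When did Tom Hanks land on the moon?"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```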


Survey suggests 84% of Americans are illiterate about AI -- so here's a quiz to test your own AI IQ

#artificialintelligence

Can artificial intelligence write its own programs? Is there AI in your TV remote control? Researchers at Seattle's Allen Institute for Artificial Intelligence say that knowing the right answers to such questions is an essential part of being literate in our tech-driven society -- and that most of us would get a failing grade. A national survey, involving 1,547 adult Americans who were given a 20-question quiz about AI's capabilities, found that only 16% of the test takers scored a passing grade of better than 60% on the quiz. "The majority of Americans are AI illiterate," Nicole DeCario and Oren Etzioni report today in a posting to PNW.ai, an information service provided by the institute, also known as AI2.


Is artificial intelligence the key to preventing relapse of severe mental illness?

#artificialintelligence

New AI software developed by researchers at Flinders University shows promise for enabling timely support ahead of relapse in patients with severe mental illness. The AI2 (Actionable Intime Insights) software, developed by a team of digital health researchers at Flinders University, has undergone an eight-month trial with psychiatric patients from the Inner North Community Health Service, located in Gawler, South Australia. The digital tool is tipped to revolutionise timely, consumer-centric mental health treatment outside hospital settings, with researchers describing it as readily available and scalable. In the trial of 304 patients, the AI2 software found that 10% of them were at increased risk of not adhering to treatment plans by failing to take medication or disengaging with health services. This led to interventions which clinicians believe could have prevented patients from relapsing and experiencing a deterioration of their mental health.