A Communication-First Account of Explanation
Harding, Jacqueline, Gerstenberg, Tobias, Icard, Thomas
We illustrate the fruitfulness of the account, relative to previous accounts, by showing that widely recognized "explanatory virtues" emerge naturally, as do subtle empirical patterns concerning the impact of norms on causal judgments. This shows the value of a "communication-first" approach to explanation: getting clear on explanation's communicative dimension is an important prerequisite for philosophical work on explanation. The result is a simple but powerful framework for incorporating insights from the cognitive sciences into philosophical work on explanation, which will be useful for philosophers or cognitive scientists interested in explanation.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- (2 more...)
From Infants to AI: Incorporating Infant-like Learning in Models Boosts Efficiency and Generalization in Learning Social Prediction Tasks
Early in development, infants learn a range of useful concepts, which can be challenging from a computational standpoint. This early learning comes together with an initial understanding of aspects of the meaning of concepts, e.g., their implications, causality, and using them to predict likely future events. All this is accomplished in many cases with little or no supervision, and from relatively few examples, compared with current network models. In learning about objects and human-object interactions, early acquired and possibly innate concepts are often used in the process of learning additional, more complex concepts. In the current work, we model how early-acquired concepts are used in the learning of subsequent concepts, and compare the results with standard deep network modeling. We focused in particular on the use of the concepts of animacy and goal attribution in learning to predict future events. We show that the use of early concepts in the learning of new concepts leads to better learning (higher accuracy) and more efficient learning (requiring less data). We further show that this integration of early and new concepts shapes the representation of the concepts acquired by the model. The results show that when the concepts were learned in a human-like manner, the emerging representation was more useful, as measured in terms of generalization to novel data and tasks. On a more general level, the results suggest that there are likely to be basic differences in the conceptual structures acquired by current network models compared to human learning.
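The core idea above can be made concrete with a small sketch. Everything below is an illustrative assumption, not the paper's code: stand-in detectors for two early-acquired concepts (animacy and goal attribution) feed into the learning of a later, more complex prediction, "will the agent reach the object?"

```python
# Illustrative sketch (an assumption, not the paper's implementation):
# early-concept detectors whose outputs are reused when learning a new,
# more complex social prediction. Feature names are hypothetical.

def early_concepts(frame):
    # Stand-ins for early-acquired concept detectors: animacy and
    # goal-directedness, here derived from two raw scene features.
    animate = frame["self_propelled_motion"] > 0.5
    goal_directed = frame["heading_toward_object"] > 0.5
    return {"animate": animate, "goal_directed": goal_directed}

def predict_reach(frame):
    # The new concept is composed from the early ones rather than learned
    # from raw pixels: only animate, goal-directed agents are predicted
    # to reach the object.
    c = early_concepts(frame)
    return c["animate"] and c["goal_directed"]

frame = {"self_propelled_motion": 0.9, "heading_toward_object": 0.8}
print(predict_reach(frame))  # True
```

In the paper's setting the composition is learned by a network rather than hand-written, but the structure is the same: the new concept's input space includes the outputs of earlier concepts, which is what yields the reported gains in accuracy and data efficiency.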
UN council will hold AI meeting on risks to international peace, security
The United Nations Security Council is holding its first-ever meeting on the potential risks artificial intelligence poses to the maintenance of international peace and security. U.K. Ambassador Barbara Woodward announced on Monday that the gathering, organized by the United Kingdom, will take place July 18. The talks will include remarks from experts in the emergent field, as well as input from U.N. Secretary-General Antonio Guterres, who warned last month that alarm bells over the most advanced form of AI are "deafening."
- Europe > United Kingdom (0.38)
- North America > United States > New York (0.06)
- North America > United States > California > San Francisco County > San Francisco (0.06)
- North America > Haiti > Ouest > Port-au-Prince (0.06)
'It's a living organism, a crazy cacophony of life': Scott A Woodward's best phone picture
Nicknamed the Monster Building, the residential complex in Hong Kong's Quarry Bay is actually made up of five imposing tower blocks. In 2018, Canadian photographer Scott A Woodward had set up camp in the shadow of one, the Yick Cheong building, to shoot an ad campaign for Foot Locker. "It's a heavy, teeming, living organism; a crazy cacophony of life and colour," he says. "There are 10,000 people living there, and people travel from all over to see it." The team was large and busy, vying for space in the courtyard with tourists and Instagrammers drawn to the building after it featured in the films Transformers: Age of Extinction and Ghost in the Shell.
- Media > Film (0.61)
- Leisure & Entertainment (0.61)
Google is taking reservations to talk to its supposedly-sentient chatbot
At the I/O 2022 conference this past May, Google CEO Sundar Pichai announced that the company would gradually make its experimental LaMDA 2 conversational AI model available to select beta users in the coming months. On Thursday, researchers at Google's AI division announced that interested users can register to explore the model as access increasingly becomes available. Regular readers will recognize LaMDA as the supposedly sentient natural language processing (NLP) model that a Google researcher got himself fired over. NLP models are a class of AI designed to parse human speech into actionable commands; they are behind the functionality of digital assistants and chatbots like Siri or Alexa, and do the heavy lifting for real-time translation and subtitle apps. Basically, whenever you're talking to a computer, it's using NLP tech to listen.
Google is beta testing its AI future
It's clear that the future of Google is tied to AI language models. At this year's I/O conference, the company announced a raft of updates that rely on this technology, from new "multisearch" features that let you pair image searches with text queries to improvements for Google Assistant and support for 24 new languages in Google Translate. But Google -- and the field of AI language research in general -- faces major problems. Google itself has seriously mishandled internal criticism, firing employees who raised issues with bias in language models and damaging its reputation with the AI community. And researchers continue to find issues with AI language models, from failings with gender and racial biases to the fact that these models have a tendency to simply make things up (an unnerving finding for anyone who wants to use AI to deliver reliable information).
Counterfactual Instances Explain Little
White, Adam, Garcez, Artur d'Avila
In many applications, it is important to be able to explain the decisions of machine learning systems. An increasingly popular approach has been to seek to provide counterfactual instance explanations. These specify close possible worlds in which, contrary to the facts, a person receives their desired decision from the machine learning system. This paper will draw on literature from the philosophy of science to argue that a satisfactory explanation must consist of both counterfactual instances and a causal equation (or system of equations) that support the counterfactual instances. We will show that counterfactual instances by themselves explain little. We will further illustrate how explainable AI methods that provide both causal equations and counterfactual instances can successfully explain machine learning predictions.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
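The distinction the abstract draws can be sketched in a few lines. The loan-approval scenario, the feature names, and the coefficients below are all hypothetical, chosen only to contrast a bare counterfactual instance with the causal equation that supports it:

```python
# Hypothetical loan-approval example (not from the paper): a linear
# "causal equation" plus a counterfactual instance it supports.

def score(income, debt):
    # Assumed causal equation: applicant is approved when
    # 0.5 * income - 0.8 * debt exceeds a threshold of 10.
    return 0.5 * income - 0.8 * debt

def decision(income, debt, threshold=10.0):
    return score(income, debt) > threshold

# Factual instance: the application is rejected.
fact = {"income": 30.0, "debt": 15.0}
assert not decision(**fact)   # 0.5*30 - 0.8*15 = 3.0, below threshold

# Counterfactual instance: a close possible world where the applicant
# carries less debt and receives the desired decision.
cf = {"income": 30.0, "debt": 2.0}
assert decision(**cf)         # 0.5*30 - 0.8*2 = 13.4, above threshold

# The counterfactual alone says only "had debt been 2, you'd be approved."
# The equation additionally explains why: each unit of debt lowers the
# score by 0.8, so at this income any debt below the boundary flips it.
boundary_debt = (0.5 * 30.0 - 10.0) / 0.8
print(boundary_debt)  # 6.25
```

This mirrors the paper's point: the counterfactual instance by itself is a single data point, while the causal equation generalizes it, telling the applicant which interventions suffice and why.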
Podcast: What's AI doing in your wallet?
Our entire financial system is built on trust. We can exchange otherwise worthless paper bills for fresh groceries, or swipe a piece of plastic for new clothes. But this trust--typically in a central government-backed bank--is changing. As our financial lives are rapidly digitized, the resulting data turns into fodder for AI. Companies like Apple, Facebook and Google see it as an opportunity to disrupt the entire experience of how people think about and engage with their money. But will we as consumers really get more control over our finances? In this first of a series on automation and our wallets, we explore a digital revolution in how we pay for things. This episode was produced by Anthony Green, with help from Jennifer Strong, Karen Hao, Will Douglas Heaven and Emma Cillekens.
- North America > United States (0.69)
- Asia > India (0.05)
- Asia > China (0.05)
- (3 more...)
- Banking & Finance (1.00)
- Information Technology > Services (0.69)
Baby Intuitions Benchmark (BIB): Discerning the goals, preferences, and actions of others
Gandhi, Kanishk, Stojnic, Gala, Lake, Brenden M., Dillon, Moira R.
To achieve human-like common sense about everyday life, machine learning systems must understand and reason about the goals, preferences, and actions of others. Human infants intuitively achieve such common sense by making inferences about the underlying causes of other agents' actions. Directly informed by research on infant cognition, our benchmark BIB challenges machines to achieve generalizable, common-sense reasoning about other agents like human infants do. As in studies on infant cognition, moreover, we use a violation of expectation paradigm in which machines must predict the plausibility of an agent's behavior given a video sequence, making this benchmark appropriate for direct validation with human infants in future studies. We show that recently proposed, deep-learning-based agency reasoning models fail to show infant-like reasoning, leaving BIB an open challenge.
- North America > United States > New York (0.04)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > Sweden > Stockholm > Stockholm (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.93)
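The violation-of-expectation (VOE) paradigm described in the abstract can be sketched as a simple paired comparison. The function names, the toy model, and the goal-preference encoding below are illustrative assumptions, not BIB's actual evaluation code or data format:

```python
# Minimal sketch of violation-of-expectation (VOE) scoring, assuming a
# model that assigns a plausibility score to a clip of agent behavior.
# All names and the toy encoding are hypothetical, not BIB's API.

def voe_accuracy(model, trial_pairs):
    """Fraction of paired trials where the model rates the expected
    (consistent) outcome as more plausible than the unexpected one."""
    correct = 0
    for expected_clip, unexpected_clip in trial_pairs:
        if model(expected_clip) > model(unexpected_clip):
            correct += 1
    return correct / len(trial_pairs)

# Toy "model": a clip is encoded as (preferred_goal, chosen_goal), and
# plausibility is higher when the agent acts on its known preference.
def toy_model(clip):
    preferred, chosen = clip
    return -abs(preferred - chosen)

pairs = [
    ((0, 0), (0, 3)),  # expected: pursue same goal; unexpected: switch
    ((1, 1), (1, 4)),
]
print(voe_accuracy(toy_model, pairs))  # 1.0
```

The appeal of this setup, as the abstract notes, is that the same paired-outcome design is used in infant looking-time studies, so a machine's plausibility judgments can later be validated directly against infant behavior.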