This article describes a novel approach to expanding the knowledge base of an artificial conversational agent at run time. A technique for automatic knowledge extraction from the user's sentence and four methods for inserting the newly acquired concepts into the knowledge base have been developed and integrated into a system that has already been tested for knowledge-based conversation between a social humanoid robot and residents of care homes. The run-time addition of new knowledge overcomes a limitation that affects most robots and chatbots: the inability to engage the user for long because of a restricted number of conversation topics. Inserting concepts recognized in the user's sentence into the knowledge base is expected to widen the range of topics that can be covered during an interaction, making the conversation less repetitive. Two experiments are presented to assess the performance of the knowledge extraction technique and the efficiency of the developed insertion methods when adding several concepts to the Ontology.
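The abstract above does not detail the extraction technique or the four insertion methods, so the following is only a minimal, hypothetical sketch of the general idea: pull candidate concepts out of a user sentence and attach unknown ones to a toy ontology (here just a dict mapping each concept to its parent class). The stopword filter, the default parent, and the function names are all illustrative assumptions, not the paper's method.

```python
# Toy stand-in for the paper's knowledge-extraction step (assumption, not
# the authors' technique): keep alphabetic tokens that are not stopwords.
STOPWORDS = {"i", "my", "the", "a", "an", "is", "love", "really"}

def extract_concepts(sentence):
    """Naively extract candidate concepts from a user's sentence."""
    tokens = sentence.lower().replace(".", "").split()
    return [t for t in tokens if t.isalpha() and t not in STOPWORDS]

def insert_as_child(ontology, concept, parent="Thing"):
    """One possible insertion strategy: attach an unknown concept under a
    default parent class, leaving known concepts untouched."""
    if concept not in ontology:
        ontology[concept] = parent
    return ontology

# The ontology maps concept -> parent class.
ontology = {"dog": "Animal", "Animal": "Thing"}
for concept in extract_concepts("I really love my parrot."):
    insert_as_child(ontology, concept)

print(sorted(ontology))  # the new concept "parrot" now appears
```

A real system would use a proper ontology library and choose among several insertion strategies (for example, attaching under a semantically similar class rather than a catch-all root), which is presumably what the four methods mentioned above differ on.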
With the proliferation of female robots such as Sophia and the popularity of female virtual assistants such as Siri (Apple), Alexa (Amazon), and Cortana (Microsoft), artificial intelligence seems to have a gender issue. This gender imbalance in AI is a pervasive trend that has drawn sharp criticism in the media (even UNESCO warned against the dangers of this practice) because it could reinforce stereotypes that objectify women. But why is femininity injected into artificially intelligent objects? If we want to curb the massive use of female gendering in AI, we need to better understand the deep roots of this phenomenon. In an article published in the journal Psychology & Marketing, we argue that research on what makes people human can provide a new perspective on why feminization is systematically used in AI.
Zhang, Daniel, Mishra, Saurabh, Brynjolfsson, Erik, Etchemendy, John, Ganguli, Deep, Grosz, Barbara, Lyons, Terah, Manyika, James, Niebles, Juan Carlos, Sellitto, Michael, Shoham, Yoav, Clark, Jack, Perrault, Raymond
Welcome to the fourth edition of the AI Index Report. This year we significantly expanded the amount of data available in the report, worked with a broader set of external organizations to calibrate our data, and deepened our connections with the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Its mission is to provide unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI. The report aims to be the most credible and authoritative source for data and insights about AI in the world.
Partial subtle mirroring of nonverbal behaviors during conversations (also known as mimicking or parallel empathy) is essential for rapport building, which in turn is essential for optimal human-human communication outcomes. Mirroring has been studied in interactions between robots and humans, and in interactions between Embodied Conversational Agents (ECAs) and humans. However, very few studies examine interactions between humans and ECAs that are integrated with robots, and none of them examines the effect of mirroring nonverbal behaviors in such interactions. Our research question is whether integrating an ECA that can mirror its interlocutor's facial expressions and head movements (continuously or intermittently) with a human support robot will improve the user's experience with a support robot that can perform useful mobile manipulative tasks (e.g., at home). Our contribution is the complex integration of an expressive ECA, able to track its interlocutor's face and to mirror his/her facial expressions and head movements in real time, with a human support robot, such that the robot and the agent are fully aware of each other's, and of the user's, nonverbal cues. We also describe a pilot study we conducted towards answering our research question, which shows promising results for our forthcoming larger user study.
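The distinction between continuous and intermittent mirroring mentioned above can be sketched as a simple policy over a stream of detected cues. This is a hypothetical toy, not the authors' system: real mirroring requires face tracking and expression synthesis, and the frame representation, mode names, and period value here are all assumptions.

```python
def mirror(frames, mode="intermittent", period=3):
    """Return the cues the agent reproduces for each input frame.
    'continuous' mirrors every frame; 'intermittent' mirrors every
    `period`-th frame and holds a neutral pose otherwise."""
    mirrored = []
    for i, frame in enumerate(frames):
        if mode == "continuous" or i % period == 0:
            # Reproduce the interlocutor's detected expression and head pose.
            mirrored.append({"expression": frame["expression"],
                             "head_pose": frame["head_pose"]})
        else:
            # Hold a neutral pose between mirrored frames.
            mirrored.append({"expression": "neutral", "head_pose": "center"})
    return mirrored

# Six frames of a smiling, left-tilted interlocutor.
frames = [{"expression": "smile", "head_pose": "tilt-left"}] * 6
out = mirror(frames, mode="intermittent", period=3)
print(sum(f["expression"] == "smile" for f in out))  # frames 0 and 3 mirrored
```

In practice the mirroring would also be delayed and attenuated ("partial subtle mirroring") rather than copied verbatim, which a fuller policy could express as a gain and a lag on each cue.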
How likely are you to trust a self-driving car or advice from Siri? A University of Kansas interdisciplinary team led by relationship psychologist Omri Gillath has published a new paper in the journal Computers in Human Behavior showing people's trust in artificial intelligence (AI) is tied to their relationship or attachment style. The research indicates for the first time that people who are anxious about their relationships with humans tend to have less trust in AI as well. Importantly, the research also suggests trust in artificial intelligence can be increased by reminding people of their secure relationships with other humans. Grand View Research estimated the global artificial-intelligence market at $39.9 billion in 2019, projected to expand at a compound annual growth rate of 42.2% from 2020 to 2027.
Edge intelligence refers to a set of connected systems and devices that collect, cache, process, and analyse data, using artificial intelligence, in locations close to where the data is captured. The aim of edge intelligence is to enhance the quality and speed of data processing and to protect the privacy and security of the data. Although the field emerged only recently, around 2011, it has shown explosive growth over the past five years. In this paper, we present a thorough and comprehensive survey of the literature surrounding edge intelligence. We first identify four fundamental components of edge intelligence, namely edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then aim for a systematic classification of existing solutions by examining research results and observations for each of the four components, and present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate, compare, and analyse the literature from the perspectives of adopted techniques, objectives, performance, advantages and drawbacks, etc. This survey article provides a comprehensive introduction to edge intelligence and its application areas. In addition, we summarise the development of this emerging research field and the current state of the art, and discuss the important open issues and possible theoretical and technical solutions.
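Of the four components named above, edge offloading lends itself to a short illustration: a device decides whether to run a task locally or send it to a more powerful server, trading transfer time against compute time. The cost model and every number below are illustrative assumptions, not taken from the survey.

```python
def should_offload(task_cycles, local_hz, uplink_bytes, bandwidth_bps,
                   server_hz):
    """Offload when estimated remote latency (data transfer plus remote
    compute) beats local compute latency. A real policy would also weigh
    energy, queueing, and privacy constraints."""
    local_latency = task_cycles / local_hz
    remote_latency = (uplink_bytes * 8 / bandwidth_bps
                      + task_cycles / server_hz)
    return remote_latency < local_latency

# A 2-gigacycle inference on a 1 GHz device vs. a 20 GHz server,
# with 500 kB of input data over a 10 Mbps uplink:
decision = should_offload(task_cycles=2e9, local_hz=1e9,
                          uplink_bytes=500_000, bandwidth_bps=10e6,
                          server_hz=20e9)
print(decision)  # True: 0.5 s remote beats 2.0 s local
```

The same comparison flips once bandwidth drops far enough that transfer time dominates, which is why offloading decisions are typically made per task at run time rather than fixed in advance.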
Decades of research in artificial intelligence (AI) have produced formidable technologies that are providing immense benefit to industry, government, and society. AI systems can now translate across multiple languages, identify objects in images and video, streamline manufacturing processes, and control cars. The deployment of AI systems has not only created a trillion-dollar industry that is projected to quadruple in three years, but has also exposed the need to make AI systems fair, explainable, trustworthy, and secure. Future AI systems will rightfully be expected to reason effectively about the world in which they (and people) operate, handling complex tasks and responsibilities effectively and ethically, engaging in meaningful communication, and improving their awareness through experience. Achieving the full potential of AI technologies poses research challenges that require a radical transformation of the AI research enterprise, facilitated by significant and sustained investment. These are the major recommendations of a recent community effort coordinated by the Computing Community Consortium and the Association for the Advancement of Artificial Intelligence to formulate a Roadmap for AI research and development over the next two decades.
Walmart customers in need of groceries have had two options in recent years: go to the store and fill up a shopping cart by hand, or select food items online and have them delivered to their home or picked up in person. Now, the big-box retailer has unveiled a third way to order groceries: via voice command. Beginning this month, customers who own a Google Assistant device can say, "Hey, Google, talk to Walmart" to add items to a virtual grocery cart, Tom Ward, senior vice president, Digital Operations, Walmart U.S., said in a statement Tuesday. The voice commands allow customers to add items to their cart one at a time over a few days -- not necessarily to complete their shopping for the week all at once. As the technology becomes more familiar with customers' shopping habits, Ward said, it will improve over time.
Whether we're working side-by-side with autonomous robots on the factory floor, spreading the happy news about a new addition to the family on Facebook, or asking Siri to help us get from point A to point B as quickly as possible, all aspects of our lives are closely connected to technology in one way or another. Some of those "connections" are downright threatening. The Fourth Industrial Revolution, which includes developments in previously disjointed fields such as artificial intelligence (AI), machine learning, robotics, nanotechnology, 3D printing, and genetics and biotechnology, is expected to cause widespread disruption not only to business models but also to labor markets over the next five years, the World Economic Forum reports, with "enormous change predicted in the skill sets needed to thrive in the new landscape." According to an Oxford study, developed nations can expect to see job loss rates of up to 47% within the next 25 years. Additionally, a Pew Research Center study found that "robotics and artificial intelligence will permeate wide segments of daily life by 2025, with huge implications for a range of industries such as healthcare, transport and logistics, customer service, and home maintenance."
The world around us is becoming increasingly automated, with many of us leaning on digital assistants such as Cortana, Echo and Siri to run our lives. Before too long it is highly likely that our cars will be driverless, fridges will restock automatically and our homes will heat themselves. Recently, Westworld - the sci-fi thriller about a technologically advanced, Western-themed amusement park populated by androids that malfunction and begin killing the human visitors - became the most-watched show of all time on Sky Atlantic. Could this fiction be closer to reality than many of us would care to admit? Our recent study asked this question, and for almost two-thirds of respondents, the answer is yes.