unexpected behavior
WatChat: Explaining perplexing programs by debugging mental models
Chandra, Kartik, Li, Tzu-Mao, Nigam, Rachit, Tenenbaum, Joshua, Ragan-Kelley, Jonathan
Often, a good explanation for a program's unexpected behavior is a bug in the programmer's code. But sometimes, an even better explanation is a bug in the programmer's mental model of the language they are using. Instead of merely debugging our current code ("giving the programmer a fish"), what if our tools could directly debug our mental models ("teaching the programmer to fish")? In this paper, we apply ideas from computational cognitive science to do exactly that. Given a perplexing program, we use program synthesis techniques to automatically infer potential misconceptions that might cause the user to be surprised by the program's behavior. By analyzing these misconceptions, we provide succinct, useful explanations of the program's behavior. Our methods can even be inverted to synthesize pedagogical example programs for diagnosing and correcting misconceptions in students.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Slovenia > Drava > Municipality of Maribor > Maribor (0.04)
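The misconception-inference idea in the WatChat abstract above can be sketched as a tiny search over candidate "mental models" of a language feature. This is an illustrative reconstruction, not WatChat's actual implementation; all function and model names here are hypothetical, and the example misconception (JavaScript's `+` coercing strings to numbers) is chosen only for concreteness.

```python
# Hypothetical sketch of misconception inference: enumerate candidate
# "mental models" of a language feature and return the one that predicts
# the user's (surprising) expected output.

def js_plus_actual(a, b):
    # Real JavaScript semantics: if either operand is a string, '+' concatenates.
    if isinstance(a, str) or isinstance(b, str):
        return str(a) + str(b)
    return a + b

def js_plus_numeric_coercion(a, b):
    # Misconception: '+' always coerces both operands to numbers.
    return float(a) + float(b)

MENTAL_MODELS = {
    "correct semantics": js_plus_actual,
    "'+' coerces strings to numbers": js_plus_numeric_coercion,
}

def explain(a, b, user_expected):
    """Return the name of the mental model that predicts what the user expected."""
    for name, model in MENTAL_MODELS.items():
        try:
            if model(a, b) == user_expected:
                return name
        except ValueError:
            continue  # this model cannot even evaluate the program
    return None
```

Under this sketch, a user surprised that `"1" + 1` evaluates to `"11"` (expecting `2`) would be matched to the coercion misconception, which is the kind of succinct explanation the abstract describes.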
Age-Appropriate Robot Design: In-The-Wild Child-Robot Interaction Studies of Perseverance Styles and Robot's Unexpected Behavior
Wróbel, Alicja, Źróbek, Karolina, Schaper, Marie-Monique, Zguda, Paulina, Indurkhya, Bipin
As child-robot interactions become increasingly common in daily-life environments, it is important to examine how robots' errors influence children's behavior. We explored how a robot's unexpected behaviors affect child-robot interactions during two workshops on active reading: one in a modern art museum and one in a school. We observed the behavior and attitudes of 42 children from three age groups: 6-7 years, 8-10 years, and 10-12 years. Through our observations, we identified six types of surprising robot behaviors: personality, movement malfunctions, inconsistent behavior, mispronunciation, delays, and freezing. Using a qualitative analysis, we examined how children responded to each type of behavior, and we observed similarities and differences between the age groups. Based on our findings, we propose guidelines for designing age-appropriate learning interactions with social robots.
Repairing Deep Neural Networks Based on Behavior Imitation
Liang, Zhen, Wu, Taoran, Zhao, Changyuan, Liu, Wanwei, Xue, Bai, Yang, Wenjing, Wang, Ji
The increasing use of deep neural networks (DNNs) in safety-critical systems has raised concerns about their potential for exhibiting ill-behaviors. While DNN verification and testing provide post hoc conclusions regarding unexpected behaviors, they do not prevent the erroneous behaviors from occurring. To address this issue, DNN repair/patch aims to eliminate unexpected predictions generated by defective DNNs. Two typical DNN repair paradigms are retraining and fine-tuning. However, existing methods focus on high-level abstract interpretation or inference over state spaces, ignoring the underlying neurons' outputs. This renders patch processes computationally prohibitive and largely limited to piecewise linear (PWL) activation functions. To address these shortcomings, we propose a behavior-imitation-based repair framework, BIRDNN, which integrates the two repair paradigms for the first time. BIRDNN corrects incorrect predictions on negative samples by imitating the closest expected behaviors of positive samples during the retraining repair procedure. For the fine-tuning repair process, BIRDNN analyzes the behavior differences of neurons on positive and negative samples to identify the neurons most responsible for the erroneous behaviors. To tackle more challenging domain-wise repair problems (DRPs), we combine BIRDNN with a domain behavior characterization technique to repair buggy DNNs in a probably approximately correct style. We also implement a prototype tool based on BIRDNN and evaluate it on ACAS Xu DNNs. Our experimental results show that BIRDNN can successfully repair buggy DNNs with significantly higher efficiency than state-of-the-art repair tools. Additionally, BIRDNN is highly compatible with different activation functions.
- Asia > Middle East > Israel > Haifa District > Haifa (0.04)
- Asia > China > Hunan Province > Changsha (0.04)
- Asia > China > Beijing > Beijing (0.04)
- (4 more...)
- Transportation (0.46)
- Information Technology (0.46)
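The fine-tuning step the BIRDNN abstract describes — comparing neurons' behaviors on positive versus negative samples to find repair targets — can be illustrated with a minimal sketch. This is an assumed simplification, not the paper's code: it ranks neurons by the absolute gap in mean activation between the two sample sets.

```python
# Illustrative sketch (not BIRDNN's implementation): rank hidden neurons by
# the gap between their mean activations on positive vs. negative samples,
# and pick the top-k as the neurons "most responsible" for bad behavior.
import numpy as np

def responsible_neurons(acts_pos, acts_neg, k):
    """acts_pos, acts_neg: (n_samples, n_neurons) arrays of hidden activations.
    Returns indices of the k neurons with the largest mean-activation gap."""
    gap = np.abs(acts_pos.mean(axis=0) - acts_neg.mean(axis=0))
    return np.argsort(gap)[::-1][:k]

# Toy example: neuron 2 behaves very differently on negative samples.
pos = np.array([[0.1, 0.5, 0.9],
                [0.2, 0.4, 1.0]])
neg = np.array([[0.1, 0.5, 0.0],
                [0.2, 0.6, 0.1]])
```

A repair pass would then adjust only the weights feeding the returned neurons, which is what makes localized fine-tuning cheaper than whole-network retraining.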
Go-Playing Trick Defeats World-Class Go AI -- but Loses to Human Amateurs
KataGo's world-class AI learned Go by playing millions of games against itself, but even that is not enough experience to cover every possible scenario, leaving it vulnerable to unexpected behavior. In the world of deep-learning artificial intelligence (AI), the ancient board game Go looms large. Until 2016, the best human Go player could still defeat the strongest Go-playing AI. That changed with DeepMind's AlphaGo, which used deep-learning neural networks to teach itself the game at a level humans cannot match. More recently, KataGo has become popular as an open-source Go-playing AI that can beat top-ranking human Go players.
Robust Recommendation with Implicit Feedback via Eliminating the Effects of Unexpected Behaviors
Chen, Jie, Jiang, Lifen, Ma, Chunmei, Sun, Huazhi
In implicit-feedback recommendation, incorporating short-term preference into recommender systems has attracted increasing attention in recent years. However, unexpected behaviors in historical interactions, such as clicking some items by accident, do not accurately reflect users' inherent preferences. Existing studies fail to model the effects of unexpected behaviors and thus achieve inferior recommendation performance. In this paper, we propose a Multi-Preferences Model (MPM) to eliminate the effects of unexpected behaviors. MPM first extracts users' instant preferences from their recent historical interactions via a fine-grained preference module. Then an unexpected-behaviors detector is trained to judge whether these instant preferences are biased by unexpected behaviors. We also integrate the user's general preference into MPM. Finally, an output module eliminates the effects of unexpected behaviors and integrates all the information to make a final recommendation. We conduct extensive experiments on two datasets, one from the movie domain and one from e-retailing, demonstrating significant improvements of our model over state-of-the-art methods. The experimental results show that MPM achieves substantial improvements in HR@10 and NDCG@10, with average relative gains of 3.643% and 4.107% compared with the AttRec model. We publish our code at https://github.com/chenjie04/MPM/.
- Research Report > Promising Solution (0.34)
- Research Report > New Finding (0.34)
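The high-level recipe in the MPM abstract — detect likely-accidental interactions, down-weight them, and blend the result with a general preference — can be sketched as follows. This is a hedged illustration under assumed shapes and a fixed 50/50 blend, not the published model; `recommend_score` and its arguments are hypothetical names.

```python
# Hedged sketch (not the published MPM): score how likely each recent
# interaction was accidental, down-weight those interactions when forming
# the instant preference, then blend with the user's general preference.
import numpy as np

def recommend_score(recent_item_vecs, accident_probs, general_pref, candidate):
    """recent_item_vecs: (n, d) embeddings of recent interactions.
    accident_probs: (n,) detector outputs in [0, 1], 1 = likely accidental.
    Returns a scalar preference score for one candidate item embedding."""
    weights = 1.0 - np.asarray(accident_probs)      # trust non-accidental clicks
    instant = weights @ np.asarray(recent_item_vecs) / max(weights.sum(), 1e-8)
    blended = 0.5 * instant + 0.5 * np.asarray(general_pref)  # illustrative mix
    return float(blended @ np.asarray(candidate))
```

In the real model, the detector and the blending weights would be learned jointly rather than fixed as here.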
AI Learned to Play Hide-and-Seek Using Machine Learning
OpenAI announced a new project in which AI agents learn to play hide-and-seek using machine learning. Initially, a simple version of the game will be released in which 'Seekers' score when a 'Hider' is visible on the field. At the start of the game, the 'Hiders' are given some time to set up a hiding place. To add a pinch of interest and fun to the game, both 'Seekers' and 'Hiders' can move objects on the field, such as walls and blocks, to gain an advantage over the others.
Behavior Identification and Prediction for a Probabilistic Risk Framework
Gill, Jasprit Singh, Pisu, Pierluigi, Krovi, Venkat N., Schmid, Matthias J.
Operation in real-world traffic requires autonomous vehicles to plan their motion in complex environments with multiple moving participants. Planning through such environments requires the right search space to be provided to the trajectory or maneuver planners so that the safest motion for the ego vehicle can be identified. Given the current states of the environment and its participants, analyzing the risks based on the predicted trajectories of all traffic participants provides the necessary search space for motion planning. This paper provides a fresh taxonomy of the safety risks an autonomous vehicle should be able to handle while navigating through traffic. It presents a reference system architecture that needs to be implemented and describes a novel way of identifying and predicting the behaviors of traffic participants using classic Multiple Model Adaptive Estimation (MMAE). Preliminary simulation results of the implemented model are included.
- North America > United States > South Carolina > Greenville County > Greenville (0.04)
- Europe > Germany > Hesse > Darmstadt Region > Darmstadt (0.04)
- Automobiles & Trucks (1.00)
- Transportation > Ground > Road (0.46)
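The classic MMAE technique named in the abstract above can be illustrated with a minimal one-dimensional sketch: run a bank of motion models in parallel and update each model's probability by how well it predicts the next observation. This is an assumed, stripped-down version (no Kalman covariances, hypothetical model names), not the paper's implementation.

```python
# Minimal MMAE illustration: a bank of motion models, with Bayesian updates
# of each model's probability from a single noisy position observation.
import math

MODELS = {
    "constant velocity": lambda pos, vel, dt: pos + vel * dt,
    "stopped":           lambda pos, vel, dt: pos,
}

def update_probs(probs, pos, vel, dt, observed, sigma=0.5):
    """One Bayes step: reweight model probabilities by the Gaussian
    likelihood of each model's prediction residual, then normalize."""
    post = {}
    for name, predict in MODELS.items():
        resid = observed - predict(pos, vel, dt)
        lik = math.exp(-0.5 * (resid / sigma) ** 2)  # unnormalized Gaussian
        post[name] = probs[name] * lik
    z = sum(post.values())
    return {name: p / z for name, p in post.items()}
```

Repeating this update over successive observations makes the probability mass concentrate on the model that best matches a participant's actual behavior, which is the "identification and prediction" role MMAE plays in the paper's risk framework.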
Competing With the Giants in Race to Build Self-Driving Cars
These algorithms can learn tasks on their own by analyzing vast amounts of data. "It used to be that a real smart Ph.D. sat in a cube for six months, and they would hand-code a detector" that spotted objects on the road, Mr. Urmson said during a recent interview at Aurora's offices. "Now, you gather the right kind of data and feed it to an algorithm, and a day later, you have something that works as well as that six months of work from the Ph.D." The Google self-driving car project first used the technique to detect pedestrians. Since then, it has applied the same method to many other parts of the car, including systems that predict what will happen on the road and plan a route forward. Now, the industry as a whole is moving in the same direction.
- North America > Canada > Ontario > Toronto (0.15)
- North America > United States > Virginia (0.07)
- North America > United States > New York (0.05)
- Transportation > Ground > Road (1.00)
- Automobiles & Trucks (1.00)
- Transportation > Passenger (0.88)
- Information Technology > Robotics & Automation (0.88)
All Robots Are Unpredictable
The heads of more than 100 of the world's top artificial intelligence companies are very alarmed about the development of "killer robots". In an open letter to the UN, these business leaders – including Tesla's Elon Musk and the founders of Google's DeepMind AI firm – warned that autonomous weapon technology could be misused by terrorists and despots or hacked to perform in undesirable ways. But the real threat is much bigger – and it comes not just from human misconduct but from the machines themselves. Research into complex systems shows how behavior can emerge that is much more unpredictable than the sum of individual actions. On one level this means human societies can behave very differently from what you might expect from looking only at individual behavior.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- Europe > Italy (0.05)