

Why Do We Laugh? Annotation and Taxonomy Generation for Laughable Contexts in Spontaneous Text Conversation

Inoue, Koji, Elmers, Mikey, Lala, Divesh, Kawahara, Tatsuya

arXiv.org Artificial Intelligence

Laughter serves as a multifaceted communicative signal in human interaction, yet its identification within dialogue presents a significant challenge for conversational AI systems. This study addresses this challenge by annotating laughable contexts in Japanese spontaneous text conversation data and developing a taxonomy to classify the underlying reasons for such contexts. Initially, multiple annotators manually labeled laughable contexts using a binary decision (laughable or non-laughable). Subsequently, an LLM was used to generate explanations for the binary annotations of laughable contexts, which were then categorized into a taxonomy comprising ten categories, including "Empathy and Affinity" and "Humor and Surprise," highlighting the diverse range of laughter-inducing scenarios. The study also evaluated GPT-4's performance in recognizing the majority labels of laughable contexts, achieving an F1 score of 43.14%. These findings contribute to the advancement of conversational AI by establishing a foundation for more nuanced recognition and generation of laughter, ultimately fostering more natural and engaging human-AI interactions.
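The evaluation described above scores GPT-4's binary predictions against the annotators' majority labels with the F1 score. The following is a minimal sketch of that metric for a laughable/non-laughable task; the label values and toy data are illustrative, not from the paper's corpus.

```python
# F1 score for binary laughable-context recognition: predictions are
# compared against majority gold labels (1 = laughable, 0 = non-laughable).

def f1_score(gold, pred, positive=1):
    tp = sum(1 for g, p in zip(gold, pred) if g == positive and p == positive)
    fp = sum(1 for g, p in zip(gold, pred) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, pred) if g == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative gold majority labels and model predictions.
gold = [1, 0, 1, 1, 0, 0, 1, 0]
pred = [1, 0, 0, 1, 1, 0, 0, 0]
print(f"F1 = {f1_score(gold, pred):.4f}")
```

Because laughable contexts are typically the minority class, F1 on the positive class is a more informative score here than plain accuracy.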


Quantitative Method for Security Situation of the Power Information Network Based on the Evolutionary Neural Network

Yuan, Quande, Pi, Yuzhen, Kou, Lei, Zhang, Fangfang, Ye, Bo

arXiv.org Artificial Intelligence

Cybersecurity is the cornerstone of the digital transformation of the power grid and the construction of new power systems. Traditional methods for quantifying the network security situation analyze only network performance, ignoring the impact of the various power application services on the security situation, so their results cannot fully reflect the risk state of the power information network. This study proposes a method for quantifying the security situation of the power information network based on an evolutionary neural network. First, the security-situation system architecture is designed by analyzing the business characteristics of power information network applications. Second, weighted by the importance of each power application service, a spatial element index system of coupled interconnection is established along three dimensions: network reliability, threat, and vulnerability. Then, a BP neural network optimized by a genetic evolutionary algorithm is incorporated into the element index calculation, yielding a quantitative model of the security situation of the power information network. Finally, a simulation environment is built according to a power sector network topology, and the effectiveness and robustness of the proposed method are verified.
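The core mechanism here is a genetic algorithm evolving the weights of a small feedforward (BP-style) network. The sketch below illustrates that idea on a toy mapping from the three element indices (reliability, threat, vulnerability) to a security-situation score; the network size, fitness function, and data are illustrative assumptions, not the paper's actual model.

```python
# Genetic-algorithm optimization of a tiny feedforward network's weights.
import math
import random

random.seed(0)

N_IN, N_HID = 3, 4
N_W = N_IN * N_HID + N_HID  # input->hidden plus hidden->output weights

def forward(w, x):
    """Evaluate the network: 3 inputs -> 4 tanh hidden units -> 1 output."""
    hidden = [
        math.tanh(sum(w[i * N_IN + j] * x[j] for j in range(N_IN)))
        for i in range(N_HID)
    ]
    out = sum(w[N_IN * N_HID + i] * hidden[i] for i in range(N_HID))
    return 1 / (1 + math.exp(-out))  # squash to a [0, 1] situation score

# Toy data: (reliability, threat, vulnerability) -> risk score.
data = [((0.9, 0.1, 0.1), 0.1), ((0.5, 0.6, 0.4), 0.6), ((0.2, 0.9, 0.8), 0.9)]

def fitness(w):
    """Negative mean squared error: higher is better."""
    return -sum((forward(w, x) - y) ** 2 for x, y in data) / len(data)

def evolve(pop_size=40, generations=200, mut_rate=0.2):
    pop = [[random.uniform(-1, 1) for _ in range(N_W)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_W)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [                              # Gaussian mutation
                g + random.gauss(0, 0.1) if random.random() < mut_rate else g
                for g in child
            ]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("MSE after evolution:", -fitness(best))
```

Unlike plain backpropagation, the genetic search requires no gradients, which is one motivation for hybrid GA-BP schemes in the security-assessment literature.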


Understanding Interpersonal Conflict Types and their Impact on Perception Classification

Welch, Charles, Plepi, Joan, Neuendorf, Béla, Flek, Lucie

arXiv.org Artificial Intelligence

Studies on interpersonal conflict have a long history and contain many suggestions for conflict typology. We use this as the basis of a novel annotation scheme and release a new dataset of situations and conflict aspect annotations. We then build a classifier to predict whether someone will perceive the actions of one individual as right or wrong in a given situation. Our analyses include conflict aspects, but also generated clusters, which are human validated, and show differences in conflict content based on the relationship of participants to the author. Our findings have important implications for understanding conflict and social norms.


The Computational Metaphor and Artificial Intelligence: A Reflective Examination of a Theoretical Falsework

AI Magazine

Just how little can be illustrated by the reaction to Winograd and Flores's (1986) recent book Understanding Computers and Cognition. In personal comments, the book and its authors have been savaged. Published comments are, of course, more temperate (Vellino et al. 1987) but still reveal the hypersensitivity of the field; similar reactions to Penrose's (1989) even more recent book The Emperor's New Mind have been observed. Like Suchman (1987) and Clancey (1987), we feel that insights of significant value are to be gained from an objective consideration of traditional and alternative perspectives. Some efforts in this direction are evident (Haugeland [1985], Hill [1989], and Born [1987], for example), but the issue requires additional and ongoing attention.


Universal Planning: An (Almost) Universally Bad Idea

AI Magazine

To present a sharp criticism of the approach known as universal planning, I begin by giving a precise definition of it. The key idea in this work is that an agent is working to achieve some goal and that to determine what to do next in the pursuit of this goal, the agent finds its current situation in a large table that prescribes the correct action to take. Of course, the action suggested by the table might simply be, "Think about your current situation and decide what to do next." This method is, in many ways, representative of the conventional approach to planning; however, what distinguishes universal plans from conventional plans is that the action suggested by a universal plan is always a primitive one that the agent can execute immediately (Agre and Chapman 1987; Drummond 1988; Kaelbling 1988; Nilsson 1989; Rosenschein and Kaelbling 1986; Schoppers 1987). Several authors have recently suggested that a possible approach to planning in uncertain domains is to analyze all possible situations beforehand and then store information about what to do in each.
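The definition above reduces a universal plan to a precomputed table from situations to primitive actions. The toy below makes that concrete; the grid world and action names are invented for illustration.

```python
# A universal plan as a lookup table: every possible situation is
# enumerated ahead of time and mapped to a primitive action that the
# agent can execute immediately, with no deliberation at run time.

# Situations: agent position on a 1-D track of 4 cells; the goal is cell 3.
GOAL = 3
universal_plan = {pos: ("noop" if pos == GOAL else "move_right")
                  for pos in range(4)}

def act(pos):
    """React by pure table lookup."""
    return universal_plan[pos]

pos = 0
trace = []
while act(pos) != "noop":
    trace.append(act(pos))
    pos += 1
print(trace, pos)
```

Even in this toy, the table must enumerate every situation up front, which hints at the scaling concern behind the article's criticism: the number of situations grows exponentially with the number of state features.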


Ray Reiter's Knowledge in Action

AI Magazine

What Ray Reiter has done is to take a set of ideas worked out by him and his collaborators over the last 11 years and recrystallize them into a sustained and consistent presentation. This is not a collection of those papers but a complete rewrite that avoids the usual repetition and notational inconsistency that one might expect. It makes one wish everyone as prolific as Reiter would copy his example--but because that's unlikely, we must be grateful for what he has given us. In case you haven't heard, Reiter and his crew, starting with the publication of Reiter (1991), breathed new life into the situation calculus (McCarthy and Hayes 1969), which had gotten the reputation of being of limited expressiveness. The basic concept of the calculus is, of course, the situation, which we can think of as a state of affairs, that is, a complete specification of the truth values of all propositions (in a suitable logical language), although that's closer to McCarthy's and Hayes's traditional formulation than the analysis Reiter settles on (which I describe later).
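The situation-calculus view sketched above can be made concrete with a small program: a situation is the history of actions performed from an initial situation S0, and fluents are evaluated relative to a situation. The fluent and actions below are invented examples, not Reiter's formalization.

```python
# Situations as action histories: do(a, s) denotes the situation reached
# by performing action a in situation s.

S0 = ()  # the initial situation: an empty action history

def do(action, s):
    return s + (action,)

def holds_door_open(s):
    """A fluent defined by a simple successor-state-style rule: the door
    is open iff the most recent open/close action was 'open'."""
    for action in reversed(s):
        if action == "open":
            return True
        if action == "close":
            return False
    return False  # the door is closed in S0

s1 = do("open", S0)
s2 = do("close", s1)
print(holds_door_open(S0), holds_door_open(s1), holds_door_open(s2))
```

Evaluating the same fluent in different situations without maintaining separate copies of the world state is the flavor of reasoning that Reiter's successor state axioms make systematic.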


Research Workshop on Expert Judgment, Human Error, and Intelligent Systems

AI Magazine

This workshop brought together 20 computer scientists, psychologists, and human-computer interaction (HCI) researchers to exchange results and views on human error and judgment bias. Human error is typically studied when operators undertake actions, but judgment bias is an issue in thinking rather than acting. Both topics are generally ignored by the HCI community, which is interested in designs that eliminate human error and bias tendencies. As a result, almost no one at the workshop had met before, and the discussion for most participants was novel and lively. Many areas of previously unexamined overlap were identified.


Practically Coordinating

AI Magazine

To coordinate, intelligent agents might need to know something about themselves, about each other, about how others view themselves and others, about how others think others view themselves and others, and so on. Taken to an extreme, the amount of knowledge an agent might possess to coordinate its interactions with others might outstrip the agent's limited reasoning capacity (its available time, memory, and so on). Much of the work in studying and building multiagent systems has thus been devoted to developing practical techniques for achieving coordination, typically by limiting the knowledge available to, or necessary for, agents. This article categorizes techniques for keeping agents suitably ignorant so that they can practically coordinate and gives a selective survey of examples of these techniques for illustration. Certainly, people who know much (or think they know much) are sometimes subject to cockiness, confusion, paralysis, resignation, or other unpleasant states.


Using Reactive and Adaptive Behaviors to Play Soccer

AI Magazine

This work deals with designing simple behaviors to allow quadruped robots to play soccer. The robots are fully autonomous; they cannot exchange messages between each other. They are equipped with a charge-coupled-device camera that allows them to detect objects in the scene. In addition to vision problems such as changing lighting conditions and color confusion, legged robots must cope with "bouncing images" because of successive legs hitting the ground. When defining task-driven strategies, the designer has to take into account the influences of the locomotion and vision systems on the behavior.
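A reactive behavior of the kind described above maps the robot's current perception directly to a motor command, with no deliberation. The sketch below shows the shape of such a controller; the percept fields, thresholds, and command names are invented for illustration (the actual robots used CCD-camera object detection).

```python
# A reactive soccer behavior: one percept in, one primitive action out,
# with rules checked in priority order.

def reactive_behavior(percept):
    if not percept["ball_visible"]:
        return "search_for_ball"        # spin until the ball is detected
    if percept["ball_distance"] > 0.3:  # meters; threshold is illustrative
        return "walk_to_ball"
    if percept["goal_visible"]:
        return "kick_toward_goal"
    return "turn_with_ball"             # reorient until the goal is seen

print(reactive_behavior({"ball_visible": False}))
print(reactive_behavior({"ball_visible": True, "ball_distance": 1.2}))
print(reactive_behavior({"ball_visible": True, "ball_distance": 0.1,
                         "goal_visible": True}))
```

Because the mapping is stateless, noisy perception (the "bouncing images" from leg impacts) degrades the behavior only for single decision cycles rather than corrupting an internal world model.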


Steps toward Formalizing Context

AI Magazine

The importance of contextual reasoning is emphasized by various researchers in AI. (A partial list includes John McCarthy and his group, R. V. Guha, Yoav Shoham, Giuseppe Attardi and Maria Simi, and Fausto Giunchiglia and his group.) Here, we survey the problem of formalizing context and explore what is needed for an acceptable account of this abstract notion. Although the word context is frequently used in descriptions, explanations, and analyses of computer programs in these areas, its meaning is frequently left to the reader's understanding; that is, it is used in an implicit and intuitive manner. An example of how contexts may help in AI is found in McCarthy's (constructive) criticism (McCarthy 1984): "I wish honorable gentlemen would have the fairness to give the entire context of what I did say, and not pick out detached words" (R. Cobden [1849], quoted in Oxford English Dictionary [1978], p. 902). The main motivation for studying formal contexts is to resolve the problem of generality in AI.