Supplemental Material A Proof for proposition

Neural Information Processing Systems

Reversing the process is not immediately obvious, and thus several schedulers have been proposed [23, 26, 31, 58]. In this paper, we employ the DDIM [58] scheduler, a popular deterministic scheduler. Other deterministic schedulers would also be suitable, and we show in Section I below that our method performs well with other schedulers.
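The deterministic DDIM update referred to above can be sketched as follows. This is a minimal illustration of one reverse step with eta = 0 (the deterministic case); `eps_model` and the toy `alpha_bar` schedule are placeholder assumptions, not the paper's trained model or schedule.

```python
import numpy as np

def ddim_step(x_t, t, t_prev, alpha_bar, eps_model):
    """One deterministic DDIM step: map x_t to x_{t_prev} with no added noise."""
    a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]
    eps = eps_model(x_t, t)                                   # predicted noise
    x0_pred = (x_t - np.sqrt(1 - a_t) * eps) / np.sqrt(a_t)   # predicted clean sample
    return np.sqrt(a_prev) * x0_pred + np.sqrt(1 - a_prev) * eps

# Toy usage with a dummy noise predictor standing in for a trained network.
rng = np.random.default_rng(0)
alpha_bar = np.linspace(0.99, 0.01, 100)   # toy cumulative schedule, decreasing in t
x = rng.standard_normal(8)
eps_model = lambda x, t: 0.1 * x           # placeholder for a trained eps-network
x_prev = ddim_step(x, t=50, t_prev=40, alpha_bar=alpha_bar, eps_model=eps_model)
```

Because no fresh noise is injected, running the same steps twice from the same starting point yields the same trajectory, which is what makes the scheduler deterministic.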


Nonlocal Attention Operator: Materializing Hidden Knowledge Towards Interpretable Physics Discovery

Neural Information Processing Systems

Despite the recent popularity of attention-based neural architectures in core AI fields like natural language processing (NLP) and computer vision (CV), their potential in modeling complex physical systems remains under-explored. Learning problems in physical systems are often characterized as discovering operators that map between function spaces based on a few instances of function pairs. This task frequently presents a severely ill-posed PDE inverse problem. In this work, we propose a novel neural operator architecture based on the attention mechanism, which we coin the Nonlocal Attention Operator (NAO), and explore its capability towards developing a foundation physical model. In particular, we show that the attention mechanism is equivalent to a double integral operator that enables nonlocal interactions among spatial tokens, with a data-dependent kernel characterizing the inverse mapping from data to the hidden parameter field of the underlying operator. As such, the attention mechanism extracts global prior information from training data generated by multiple systems, and suggests the exploratory space in the form of a nonlinear kernel map. Consequently, NAO can address ill-posedness and rank deficiency in inverse PDE problems by encoding regularization and achieving generalizability. Lastly, we empirically demonstrate the advantages of NAO over baseline neural models in terms of generalizability to unseen data resolutions and system states. Our work not only suggests a novel neural operator architecture for learning an interpretable foundation model of physical systems, but also offers a new perspective towards understanding the attention mechanism.
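The kernel view described in the abstract can be illustrated with a small sketch: self-attention computes a data-dependent kernel matrix over spatial tokens, then applies it as a nonlocal aggregation. The projection names `W_q`, `W_k`, `W_v` are illustrative assumptions here, not NAO's actual parameterization.

```python
import numpy as np

def attention_kernel(X, W_q, W_k):
    """Data-dependent kernel K[i, j] over token pairs (rows sum to 1)."""
    Q, K = X @ W_q, X @ W_k
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    P = np.exp(scores)
    return P / P.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
n_tokens, d = 16, 8
X = rng.standard_normal((n_tokens, d))             # one token per spatial point
W_q, W_k, W_v = (rng.standard_normal((d, d)) for _ in range(3))
K = attention_kernel(X, W_q, W_k)                  # kernel depends on the data X
out = K @ (X @ W_v)                                # nonlocal mixing of all tokens
```

The key point is that `K` is recomputed from the input `X` itself, so the same weights induce a different kernel for each system instance, which is what lets the mechanism act as a learned inverse map from data to a hidden parameter field.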



beed13602b9b0e6ecb5b568ff5058f07-AuthorFeedback.pdf

Neural Information Processing Systems

Thanks for the comments; we will reorganize the paper according to your suggestions. R1 may regard NAT as a NAS method. How to obtain skip connections in VGG: NAT can add skip connections into VGG by replacing the null connections (see more discussion in Section 4.5). Why the generated networks have two inputs "-2" and "-1": "-2" and "-1" represent the outputs of the second-nearest and the nearest cell in front of the current one, respectively.





The Impact of Adaptive Emotional Alignment on Mental State Attribution and User Empathy in HRI

Buracchio, Giorgia, Callegari, Ariele, Donini, Massimo, Gena, Cristina, Lieto, Antonio, Lillo, Alberto, Mattutino, Claudio, Mazzei, Alessandro, Pigureddu, Linda, Striani, Manuel, Vernero, Fabiana

arXiv.org Artificial Intelligence

The paper presents an experiment on the effects of adaptive emotional alignment between agents, considered a prerequisite for empathic communication, in Human-Robot Interaction (HRI). Using the NAO robot, we investigate the impact of an emotionally aligned, empathic dialogue on these aspects: (i) the robot's persuasive effectiveness, (ii) the user's communication style, and (iii) the attribution of mental states and empathy to the robot. In an experiment with 42 participants, two conditions were compared: one with neutral communication and another where the robot provided responses adapted to the emotions expressed by the users. The results show that emotional alignment does not influence users' communication styles or have a persuasive effect. However, it significantly influences the attribution of mental states to the robot and its perceived empathy.

