diner


What happened after Tesla opened a diner in Los Angeles?

The Guardian

Inflatable tube men depicting Elon Musk are displayed during the 'Tyrant Diner' protest, calling for a boycott of Tesla, outside the Tesla Diner in LA. Less than six months since it opened, Elon Musk's Tesla Diner has the feel of a ghost town. Gone is the Optimus robot serving popcorn, gone are the carnivore-diet-inspired "Epic Bacon" strips, gone are the hours-long, hundred-person lines wrapped around the block.


Cod and chips could soon be off the menu! Scientists say Britain should replace the popular white flaky fish with saithe...if diners are able to look past the colour

Daily Mail - Science & tech

If you think battered cod doesn't pack quite enough flavour, experts claim to have the perfect alternative.


Evolution of Cooperation in LLM-Agent Societies: A Preliminary Study Using Different Punishment Strategies

Warnakulasuriya, Kavindu, Dissanayake, Prabhash, De Silva, Navindu, Cranefield, Stephen, Savarimuthu, Bastin Tony Roy, Ranathunga, Surangika, de Silva, Nisansa

arXiv.org Artificial Intelligence

The evolution of cooperation has been extensively studied using abstract mathematical models and simulations. Recent advances in Large Language Models (LLMs) and the rise of LLM agents have demonstrated their ability to perform social reasoning, thus providing an opportunity to test the emergence of norms in more realistic agent-based simulations with human-like reasoning using natural language. In this research, we investigate whether the cooperation dynamics of Boyd and Richerson's model persist in a more realistic simulation of the Diner's Dilemma using LLM agents, in contrast to the abstract mathematical formulation of the original work. Our findings indicate that agents follow the strategies defined in the Boyd and Richerson model, and that explicit punishment mechanisms drive norm emergence, reinforcing cooperative behaviour even when the agent strategy configuration varies. Our results suggest that LLM-based Multi-Agent System simulations can, in fact, replicate the evolution of cooperation predicted by traditional mathematical models. Moreover, our simulations extend beyond the mathematical models by integrating natural-language-driven reasoning and a pairwise imitation method for strategy adoption, making them a more realistic testbed for cooperative behaviour in MASs.
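The punishment-driven payoffs and pairwise imitation update the abstract describes can be made concrete with a small sketch. Everything below is an illustrative assumption: the payoff constants, the shape of `round_payoff`, and the Fermi-style adoption rule are not taken from the paper, which only states that agents imitate better-performing peers and that defectors can be explicitly punished.

```python
import math
import random

# Illustrative constants (assumptions, not the paper's values): cooperating
# diners split a modest bill; defectors order expensive dishes for a private
# bonus; punishers deduct a fine from defectors they observe.
COOPERATE_PAYOFF = 3.0
DEFECT_BONUS = 2.0
PUNISHMENT_FINE = 4.0


def round_payoff(strategy: str, others_defecting: int, punished: bool) -> float:
    """Toy per-round Diner's Dilemma payoff: a cooperation baseline eroded by
    other defectors, a private bonus for defecting, and an explicit fine when
    a punishing agent sanctions the defector."""
    payoff = COOPERATE_PAYOFF - 0.5 * others_defecting
    if strategy == "defect":
        payoff += DEFECT_BONUS
        if punished:
            payoff -= PUNISHMENT_FINE
    return payoff


def pairwise_imitation(my_strategy: str, my_payoff: float,
                       peer_strategy: str, peer_payoff: float,
                       beta: float = 1.0) -> str:
    """Fermi-rule pairwise imitation: adopt a randomly paired peer's strategy
    with probability increasing in the payoff gap (beta = selection strength)."""
    p_adopt = 1.0 / (1.0 + math.exp(-beta * (peer_payoff - my_payoff)))
    return peer_strategy if random.random() < p_adopt else my_strategy
```

With a punishment fine larger than the defection bonus, sanctioned defectors earn less than cooperators, so imitation gradually spreads cooperation, which is the qualitative dynamic the abstract reports.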


"Set It Up": Functional Object Arrangement with Compositional Generative Models (Journal Version)

Xu, Yiqing, Mao, Jiayuan, Li, Linfeng, Du, Yilun, Lozano-Pérez, Tomás, Kaelbling, Leslie Pack, Hsu, David

arXiv.org Artificial Intelligence

Functional object arrangement (FORM) is the task of arranging objects to fulfill a function, e.g., "set up a dining table for two". One key challenge here is that the instructions for FORM are often under-specified and do not explicitly specify the desired object goal poses. This paper presents SetItUp, a neuro-symbolic framework that learns to specify the goal poses of objects from a few training examples and a structured natural-language task specification. SetItUp uses a grounding graph, which is composed of abstract spatial relations among objects (e.g., left-of), as its intermediate representation. This decomposes the FORM problem into two stages: (i) predicting this graph among objects and (ii) predicting object poses given the grounding graph. For (i), SetItUp leverages large language models (LLMs) to induce Python programs from a task specification and a few training examples. This program can be executed to generate grounding graphs in novel scenarios. For (ii), SetItUp pre-trains a collection of diffusion models to capture primitive spatial relations and composes these models online to predict object poses based on the grounding graph. We evaluated SetItUp on a dataset spanning three distinct task families: arranging tableware on a dining table, organizing items on a bookshelf, and laying out furniture in a bedroom. Experiments show that SetItUp outperforms existing models in generating functional, physically feasible, and aesthetically pleasing object arrangements. This article extends our conference paper published at Robotics: Science and Systems (RSS) 2024.
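Read as a pipeline, the two stages compose as sketched below. This is a minimal sketch under stated assumptions: the relation-triple encoding of the grounding graph, the `induce_program` hook, and the placeholder joint score stand in for the paper's LLM program induction and composed diffusion sampler, which are not reproduced here.

```python
from dataclasses import dataclass

# Assumed encoding: a grounding graph as relation triples among objects,
# e.g. ("fork", "left-of", "plate").
Relation = tuple[str, str, str]  # (subject, relation, object)


@dataclass
class ArrangementTask:
    objects: list[str]
    specification: str  # structured natural-language task description


def predict_grounding_graph(task: ArrangementTask, induce_program) -> list[Relation]:
    """Stage (i): a hypothetical LLM hook induces a Python program from the
    task specification; executing it on the objects yields relation triples."""
    program = induce_program(task.specification)
    return program(task.objects)


def pose_objective(relations: list[Relation], relation_scores: dict):
    """Stage (ii), sketched: compose one score per primitive relation (the
    paper uses pre-trained diffusion models) into a joint objective; a real
    system would sample or optimize object poses under this objective."""
    def joint_score(poses: dict) -> float:
        return sum(relation_scores[rel](poses[subj], poses[obj])
                   for subj, rel, obj in relations)
    return joint_score
```

The design point the abstract makes is factorization: new tasks only require a new induced program (stage i), while the learned relation models (stage ii) are reused unchanged.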


Elon Musk opened a diner in Hollywood. What could go wrong? I went to find out

The Guardian

It was just before lunchtime on its third day of operation, and the line outside Elon Musk's new Tesla Diner in Hollywood already stretched to nearly 100 people. The restaurant has been billed as a "retro-futuristic" drive-in where you can grab a high-end burger and watch classic films on giant screens, all while charging your Tesla. After months of buildup and controversy, the diner had suddenly opened on Monday, at 4.20pm, the kind of stoner-boy joke that Musk is well known for. Hundreds of fans lined up to try burgers in Cybertruck-shaped boxes, or take photos of the Optimus robot serving popcorn on the roof deck of the gleaming circular diner. But that was the grand opening.


Tesla opens its first DINER in Hollywood - complete with robot servers, a drive-in cinema, and CyberTruck happy meals

Daily Mail - Science & tech

From flamethrowers to hot pants, Elon Musk has already released a range of weird and wacky products. Now, the billionaire is taking on the likes of McDonald's, Wendy's, and IHOP with his very first diner. The Tesla Diner is described as a 'retro-futuristic diner and drive-in charging experience.' The diner itself has over 250 seats, with dishes on offer ranging from $7 cinnamon rolls to $10 salads. Alternatively, those hoping to relax for a few hours can enjoy a movie on either of the two 66ft LED megascreens outside the diner.


DINER: Debiasing Aspect-based Sentiment Analysis with Multi-variable Causal Inference

Wu, Jialong, Zhang, Linhai, Zhou, Deyu, Xu, Guoqiang

arXiv.org Artificial Intelligence

Though notable progress has been made, neural-based aspect-based sentiment analysis (ABSA) models are prone to learning spurious correlations from annotation biases, resulting in poor robustness under adversarial data transformations. Among the debiasing solutions, causal inference-based methods have attracted much research attention; they can be mainly categorized into causal intervention methods and counterfactual reasoning methods. However, most existing debiasing methods focus on single-variable causal inference, which is not suitable for ABSA with its two input variables (the target aspect and the review). In this paper, we propose a novel framework based on multi-variable causal inference for debiasing ABSA. In this framework, different types of biases are tackled with different causal intervention methods. For the review branch, the bias is modeled as indirect confounding from context, and backdoor adjustment intervention is employed for debiasing. For the aspect branch, the bias is described as a direct correlation with labels, and counterfactual reasoning is adopted for debiasing. Extensive experiments demonstrate the effectiveness of the proposed method compared to various baselines on two widely used real-world aspect robustness test sets.
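For the aspect branch, counterfactual-reasoning debiasing is commonly implemented by subtracting the prediction the model makes from the aspect alone. The sketch below shows that generic idea, not the paper's exact formulation; the `model` interface and the mask-the-review construction of the counterfactual input are assumptions for illustration.

```python
import torch


def counterfactually_debiased_logits(model, review_ids: torch.Tensor,
                                     aspect_ids: torch.Tensor,
                                     mask_token_id: int) -> torch.Tensor:
    """Generic counterfactual-reasoning debiasing for the aspect branch:
    logits from the full (review, aspect) input minus logits from a
    counterfactual input in which the review is masked out, removing the
    direct aspect-to-label shortcut while keeping the context-mediated path."""
    factual = model(review_ids, aspect_ids)
    masked_review = torch.full_like(review_ids, mask_token_id)
    counterfactual = model(masked_review, aspect_ids)
    return factual - counterfactual
```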


How Far Are We on the Decision-Making of LLMs? Evaluating LLMs' Gaming Ability in Multi-Agent Environments

Huang, Jen-tse, Li, Eric John, Lam, Man Ho, Liang, Tian, Wang, Wenxuan, Yuan, Youliang, Jiao, Wenxiang, Wang, Xing, Tu, Zhaopeng, Lyu, Michael R.

arXiv.org Artificial Intelligence

[Figure 1: γ-Bench enables various LLMs and humans to participate in multi-agent, multi-round games. The framework includes eight classical games from Game Theory, each categorized into one of three groups.]

Decision-making, a complicated task requiring various types of abilities, presents an excellent framework for assessing Large Language Models (LLMs). Our research investigates LLMs' decision-making capabilities through the lens of a well-established field, Game Theory. We focus specifically on games that support the participation of more than two agents simultaneously. Subsequently, we introduce our framework, γ-Bench, including eight classical multi-agent games. We design a scoring scheme to assess a model's performance in these games quantitatively. Through γ-Bench, we investigate LLMs' robustness, generalizability, and enhancement strategies. Results reveal that while GPT-3.5 shows satisfying robustness, its generalizability is relatively limited. However, its performance can be improved through approaches such as Chain-of-Thought. Additionally, we conduct evaluations across various LLMs and find that GPT-4 outperforms other models on γ-Bench, achieving a score of 60.5. We have recently witnessed the advancements in Artificial Intelligence (AI) made by Large Language Models (LLMs), which have marked a significant breakthrough in the field. Beyond the academic sphere, LLMs have entered diverse aspects of our everyday life, such as education (Baidoo-Anu & Ansah, 2023), legal service (Guha et al., 2023), product design (Lanzi & Loiacono, 2023), and healthcare (Johnson et al., 2023). Given their extensive capabilities, evaluating LLMs demands more than simple, isolated tasks. A comprehensive and multifaceted approach is highly in demand to assess the efficacy of these advanced models.
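A minimal multi-agent, multi-round loop in the benchmark's spirit can be sketched as follows. The choice of "guess 2/3 of the average" as the game, the point-per-round scoring, and the agent interface (a callable from the visible history to a number) are illustrative assumptions, not the paper's scoring scheme.

```python
import statistics


def play_guess_two_thirds(agents, n_rounds: int = 5):
    """Toy multi-agent, multi-round loop: each round every agent submits a
    number in [0, 100]; whoever lands closest to 2/3 of the round's average
    scores a point. `agents` are callables (e.g. LLM wrappers) that map the
    visible history of past rounds to a guess."""
    history, scores = [], [0] * len(agents)
    for _ in range(n_rounds):
        guesses = [float(agent(history)) for agent in agents]
        target = 2 / 3 * statistics.mean(guesses)
        winner = min(range(len(agents)), key=lambda i: abs(guesses[i] - target))
        scores[winner] += 1
        history.append({"guesses": guesses, "target": target})
    return scores


# Usage with three fixed-strategy stand-ins for LLM agents:
# play_guess_two_thirds([lambda h: 33, lambda h: 50, lambda h: 20])
```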


Unveiling Divergent Inductive Biases of LLMs on Temporal Data

Kishore, Sindhu, He, Hangfeng

arXiv.org Artificial Intelligence

Unraveling the intricate details of events in natural language necessitates a subtle understanding of temporal dynamics. Despite the adeptness of Large Language Models (LLMs) in discerning patterns and relationships from data, their inherent comprehension of temporal dynamics remains a formidable challenge. This research meticulously explores these intrinsic challenges within LLMs, with a specific emphasis on evaluating the performance of GPT-3.5 and GPT-4 models in the analysis of temporal data. Employing two distinct prompt types, namely Question Answering (QA) format and Textual Entailment (TE) format, our analysis probes into both implicit and explicit events. The findings underscore noteworthy trends, revealing disparities in the performance of GPT-3.5 and GPT-4. Notably, biases toward specific temporal relationships come to light, with GPT-3.5 demonstrating a preference for "AFTER" in the QA format for both implicit and explicit events, while GPT-4 leans towards "BEFORE". Furthermore, a consistent pattern surfaces wherein GPT-3.5 tends towards "TRUE", and GPT-4 exhibits a preference for "FALSE" in the TE format for both implicit and explicit events. This persistent discrepancy between GPT-3.5 and GPT-4 in handling temporal data highlights the intricate nature of inductive bias in LLMs, suggesting that the evolution of these models may not merely mitigate bias but may introduce new layers of complexity.
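The two prompt formats can be sketched as simple templates. The exact wording the authors used is not given here, so the strings below are illustrative assumptions; what matters is the structural difference between asking for a relation (QA) and verifying an asserted one (TE).

```python
def qa_prompt(event_a: str, event_b: str) -> str:
    """Question Answering (QA) format: ask for the temporal relation directly
    (illustrative wording, not the paper's exact template)."""
    return (f"Event A: {event_a}\nEvent B: {event_b}\n"
            "Question: Does Event A happen BEFORE or AFTER Event B? Answer:")


def te_prompt(context: str, hypothesis: str) -> str:
    """Textual Entailment (TE) format: assert a temporal relation and ask
    whether it is TRUE or FALSE given the context."""
    return (f"Context: {context}\nHypothesis: {hypothesis}\n"
            "Question: Is the hypothesis TRUE or FALSE? Answer:")
```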


Rational inference of relative preferences

Neural Information Processing Systems

Statistical decision theory axiomatically assumes that the relative desirability of different options that humans perceive is well described by assigning them option-specific scalar utility functions. However, this assumption is refuted by observed human behavior, including studies wherein preferences have been shown to change systematically simply through variation in the set of choice options presented. In this paper, we show that interpreting desirability as a relative comparison between available options at any particular decision instance results in a rational theory of value-inference that explains heretofore intractable violations of rational choice behavior in human subjects. Complementarily, we also characterize the conditions under which a rational agent selecting optimal options indicated by dynamic value inference in our framework will behave identically to one whose preferences are encoded using a static ordinal utility function.
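The contrast between a static scalar utility and value inferred from relative comparisons can be made concrete with a toy sketch; the pairwise `desirability` function and the averaging rule below are illustrative assumptions, not the paper's model.

```python
def static_choice(options, utility):
    """Classical account: each option carries a fixed scalar utility, so the
    chosen option never depends on what else is on offer."""
    return max(options, key=utility)


def relative_choice(options, desirability):
    """Relative account: an option's value is inferred from pairwise
    comparisons against the other options actually presented, so adding or
    removing an alternative can reverse the ranking (a context effect)."""
    def relative_value(option):
        others = [o for o in options if o is not option]
        if not others:  # a singleton choice set has nothing to compare against
            return 0.0
        return sum(desirability(option, other) for other in others) / len(others)
    return max(options, key=relative_value)
```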