A Data-driven Investigation of Euphemistic Language: Comparing the usage of "slave" and "servant" in 19th century US newspapers

Park, Jaihyun, Cordell, Ryan

arXiv.org Artificial Intelligence

This study investigates the usage of "slave" and "servant" in 19th-century US newspapers using computational methods. While both terms were used to refer to enslaved African Americans, they were used in distinct ways. In the Chronicling America corpus, we used FastText embeddings to account for possible OCR errors and excluded text reprints to account for the reprint culture of the 19th century. Word2vec embeddings were used to find words semantically close to "slave" and "servant," and log-odds ratios were calculated to identify over-represented discourse words in the Southern and Northern newspapers. We found that "slave" is associated with socio-economic, legal, and administrative words, whereas "servant" is linked to religious words in Northern newspapers and to domestic and familial words in Southern newspapers. We further found that slave discourse words from Southern newspapers are more prevalent in Northern newspapers, while servant discourse words from each side are prevalent in their own region. This study contributes to the understanding of how newspapers created different discourses around enslaved African Americans in the 19th-century US.
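The log-odds comparison the abstract mentions can be sketched as follows. This is a minimal, smoothed log-odds ratio over two toy token lists (the paper's actual corpus preprocessing and any priors it uses are not specified here, so everything below the function signature is an illustrative assumption):

```python
from collections import Counter
import math

def log_odds_ratio(corpus_a, corpus_b):
    """Smoothed log-odds ratio of each word between two token lists.

    Positive scores mark words over-represented in corpus_a;
    negative scores mark words over-represented in corpus_b.
    """
    counts_a, counts_b = Counter(corpus_a), Counter(corpus_b)
    total_a, total_b = sum(counts_a.values()), sum(counts_b.values())
    vocab = set(counts_a) | set(counts_b)
    scores = {}
    for w in vocab:
        # Add-one smoothing so words absent from one corpus stay finite.
        p_a = (counts_a[w] + 1) / (total_a + len(vocab))
        p_b = (counts_b[w] + 1) / (total_b + len(vocab))
        scores[w] = math.log(p_a / (1 - p_a)) - math.log(p_b / (1 - p_b))
    return scores

# Hypothetical mini-corpora standing in for Southern and Northern text.
southern = "the servant of the house and family".split()
northern = "the servant of the lord and church".split()
scores = log_odds_ratio(southern, northern)
# "family" scores positive (Southern-leaning), "church" negative.
```

In practice one would compute this over full regional corpora and rank words by score to surface the discourse vocabularies the study describes.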


Thoughtful Adoption of NLP for Civic Participation: Understanding Differences Among Policymakers

Guridi, Jose A., Cheyre, Cristobal, Yang, Qian

arXiv.org Artificial Intelligence

Natural language processing (NLP) tools have the potential to boost civic participation and enhance democratic processes because they can significantly increase governments' capacity to gather and analyze citizen opinions. However, their adoption in government remains limited, and harnessing their benefits while preventing unintended consequences remains a challenge. While prior work has focused on improving NLP performance, this work examines how different internal government stakeholders influence the thoughtful adoption of NLP tools. We interviewed seven politicians (politically appointed officials who head government institutions) and thirteen public servants (career government employees who design and administer policy interventions), asking how they choose whether and how to use NLP tools to support civic participation processes. The interviews suggest that policymakers in both groups focused on their needs for career advancement and on showcasing the legitimacy and fairness of their work when considering NLP tool adoption and use. Because these needs differ between politicians and public servants, their preferred NLP features and tool designs also differ. Interestingly, despite their differing needs and opinions, neither group clearly identifies who should advocate for NLP adoption to enhance civic participation or address the unintended consequences of a poorly considered adoption. This lack of clarity about responsibility may help explain governments' low adoption of NLP tools. We discuss how these findings reveal new insights for future HCI research. They inform the design of NLP tools for increasing civic participation efficiency and capacity, the design of other tools and methods that ensure thoughtful adoption of AI tools in government, and the design of NLP tools for collaborative use among users with different incentives and needs.


Empirical analysis of Binding Precedent efficiency in the Brazilian Supreme Court via Similar Case Retrieval

Tinarrage, Raphaël, Ennes, Henrique, Resck, Lucas E., Gomes, Lucas T., Ponciano, Jean R., Poco, Jorge

arXiv.org Artificial Intelligence

Binding precedents (S\'umulas Vinculantes) constitute a juridical instrument unique to the Brazilian legal system, whose objectives include protecting the Federal Supreme Court against repetitive demands. Studies of the effectiveness of these instruments in decreasing the Court's exposure to similar cases, however, indicate that they tend to fail in this respect, with some of the binding precedents seemingly creating new demands. We empirically assess the legal impact of five binding precedents, 11, 14, 17, 26 and 37, at the highest court level through their effects on the legal subjects they address. This analysis is only possible by comparing the Court's rulings on the precedents' themes before they were created, which means that these decisions must be detected through techniques of Similar Case Retrieval. The contributions of this article are therefore twofold: on the mathematical side, we compare different Natural Language Processing methods -- TF-IDF, LSTM, BERT, and regex -- for Similar Case Retrieval, whereas on the legal side, we contrast the inefficiency of these binding precedents with a set of hypotheses that may justify their repeated usage. We observe that the deep learning models performed significantly worse in this specific Similar Case Retrieval task, and that the reasons binding precedents fail to curb repetitive demand are heterogeneous and case-dependent, making it impossible to single out a specific cause.
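The TF-IDF baseline among the retrieval methods the abstract compares can be illustrated with a small self-contained sketch. The mini-corpus below is hypothetical (these are not real STF decisions), and the paper's actual tokenization and ranking pipeline is not specified here:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF vectors (as sparse dicts) for tokenized documents."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency: one count per doc
    idf = {w: math.log(n / df[w]) for w in df}
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({w: (tf[w] / len(doc)) * idf[w] for w in tf})
    return vecs

def cosine(u, v):
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical mini-corpus of case summaries plus a query case.
cases = [
    "tax on banking transactions ruled unconstitutional".split(),
    "handcuff use restricted during arrest".split(),
    "tax exemption for banking services upheld".split(),
]
query = "banking transaction tax dispute".split()

vecs = tfidf_vectors(cases + [query])
sims = [cosine(vecs[-1], v) for v in vecs[:-1]]
best = max(range(len(sims)), key=sims.__getitem__)
```

Ranking candidate decisions by cosine similarity to a query case is the core of a TF-IDF Similar Case Retrieval baseline; the neural alternatives (LSTM, BERT) replace the sparse vectors with learned embeddings.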


InterIntent: Investigating Social Intelligence of LLMs via Intention Understanding in an Interactive Game Context

Liu, Ziyi, Anand, Abhishek, Zhou, Pei, Huang, Jen-tse, Zhao, Jieyu

arXiv.org Artificial Intelligence

Large language models (LLMs) have demonstrated the potential to mimic human social intelligence. However, most studies focus on simplistic and static self-report or performance-based tests, which limits the depth and validity of the analysis. In this paper, we developed a novel framework, InterIntent, to assess LLMs' social intelligence by mapping their ability to understand and manage intentions in a game setting. We focus on four dimensions of social intelligence: situational awareness, self-regulation, self-awareness, and theory of mind. Each dimension is linked to a specific game task: intention selection, intention following, intention summarization, and intention guessing. Our findings indicate that while LLMs exhibit high proficiency in selecting intentions, achieving an accuracy of 88\%, their ability to infer the intentions of others is significantly weaker, trailing human performance by 20\%. Additionally, game performance correlates with intention understanding, highlighting the importance of all four components to success in this game. These findings underline the crucial role of intention understanding in evaluating LLMs' social intelligence and highlight the potential of social deduction games as a complex testbed to enhance LLM evaluation. InterIntent contributes a structured approach to bridging the evaluation gap in social intelligence within multiplayer games.


All the World's a (Hyper)Graph: A Data Drama

Coupette, Corinna, Vreeken, Jilles, Rieck, Bastian

arXiv.org Artificial Intelligence

We introduce Hyperbard, a dataset of diverse relational data representations derived from Shakespeare's plays. Our representations range from simple graphs capturing character co-occurrence in single scenes to hypergraphs encoding complex communication settings and character contributions as hyperedges with edge-specific node weights. By making multiple intuitive representations readily available for experimentation, we facilitate rigorous representation robustness checks in graph learning, graph mining, and network analysis, highlighting the advantages and drawbacks of specific representations. Leveraging the data released in Hyperbard, we demonstrate that many solutions to popular graph mining problems are highly dependent on the representation choice, thus calling current graph curation practices into question. As an homage to our data source, and asserting that science can also be art, we present all our points in the form of a play.


AvalonBench: Evaluating LLMs Playing the Game of Avalon

Light, Jonathan, Cai, Min, Shen, Sheng, Hu, Ziniu

arXiv.org Artificial Intelligence

In this paper, we explore the potential of Large Language Model (LLM) Agents in playing the strategic social deduction game Resistance Avalon. Players in Avalon are challenged not only to make informed decisions based on dynamically evolving game phases, but also to engage in discussions where they must deceive, deduce, and negotiate with other players. These characteristics make Avalon a compelling test-bed to study the decision-making and language-processing capabilities of LLM Agents. To facilitate research in this line, we introduce AvalonBench - a comprehensive game environment tailored for evaluating multi-agent LLM Agents. This benchmark incorporates: (1) a game environment for Avalon, (2) rule-based bots as baseline opponents, and (3) ReAct-style LLM agents with tailored prompts for each role. Notably, our evaluations based on AvalonBench highlight a clear capability gap. For instance, models like ChatGPT playing the good role achieved a win rate of 22.2% against rule-based bots playing evil, while a good-role bot achieved a 38.2% win rate in the same setting. We envision AvalonBench as a good test-bed for developing more advanced LLMs (with self-play) and agent frameworks that can effectively model the layered complexities of such game environments.


Council Post: AI Versus Humans: Who'd Lose?

#artificialintelligence

Spoiler alert: the answer is both. Human-versus-computer chess matches, particularly Kasparov versus Deep Blue, contributed to mainstreaming the whole man-versus-machine discourse. Those games, in which some of the best humans ever to play chess lost to computers, helped entrench a narrative of contest and conquest that continues to shape how we view our relationship with computers. Isaac Asimov's three laws of robotics laid the groundwork for this man-versus-machine narrative. Even so, they framed the relationship in a slave-master context, where one party existed only to be subjugated by the other.


Robots predicted to rule the world by 2060, humans forced to be servants

#artificialintelligence


Surveillance, Companionship, and Entertainment: The Ancient History of Intelligent Machines

#artificialintelligence

Robots have histories that extend far back into the past. Artificial servants, autonomous killing machines, surveillance systems, and sex robots all find expression from the human imagination in works and contexts beyond Ovid (43 BCE to 17 CE) and the story of Pygmalion in cultures across Eurasia and North Africa. This long history of our human-machine relationships also reminds us that our aspirations, fears, and fantasies about emergent technologies are not new, even as the circumstances in which they appear differ widely. Situating these objects, and the desires that create them, within deeper and broader contexts of time and space reveals continuities and divergences that, in turn, provide opportunities to critique and question contemporary ideas and desires about robots and artificial intelligence (AI). As early as 3,000 years ago we encounter interest in intelligent machines and AI that perform different servile functions.


Artificial Intelligence or Human Intelligence?

#artificialintelligence

They are not necessarily the same thing and, what's more, the former without the latter could lead us, or is already leading us, to a cliff with unpredictable consequences. "But if we humans invented it," you might say. "One without the other, they just don't go," you might add. I would say that we have only glimpsed the possibilities of human intelligence, and that without a qualitatively superior effort, without a real mental revolution, we will not be able to make one of the most complex machines created by mankind, artificial intelligence, truly intelligent and at the service of the evolution of humanity. Of particular concern is the swarm of hard and soft technologies (hardware and software) that, from proprietary, monopolistic, and global platforms, seems to have hijacked our privacy, interaction, and imagination.