'Astonishingly lethal': BBC reports from site of Russian strike in Kyiv

BBC News

At least six people have been killed in a wave of Russian strikes on Kyiv, which the Ukrainian President Volodymyr Zelensky has condemned as a heinous attack. The BBC's James Landale visited the scene of one attack in eastern Kyiv, where a drone rammed through a block of flats and left six people dead. Several other regions were also targeted: a drone attack on a market at Chornomorsk in the south of the country killed two people.


Watch: Russia's AI robot falls seconds after being unveiled

BBC News

Footage shows the moment Russia's first anthropomorphic robot, AIdol, fell just seconds after its debut at a technology event in Moscow. The robot was being led on stage to the soundtrack from the film 'Rocky' before it suddenly lost its balance and fell. Assistants could then be seen scrambling to cover it with a cloth, which ended up tangling in the process.


Indiana Jones: There Are Always Some Useful Ancient Relics

Ding, Junchen, Zhang, Jiahao, Liu, Yi, Ding, Ziqi, Deng, Gelei, Li, Yuekang

arXiv.org Artificial Intelligence

This paper introduces Indiana Jones, an innovative approach to jailbreaking Large Language Models (LLMs) by leveraging inter-model dialogues and keyword-driven prompts. By orchestrating interactions among three specialised LLMs, the method achieves near-perfect success rates in bypassing content safeguards in both white-box and black-box LLMs. The research exposes systemic vulnerabilities within contemporary models, particularly their susceptibility to producing harmful or unethical outputs when guided by ostensibly innocuous prompts framed in historical or other indirect contexts. Experimental evaluations highlight the efficacy and adaptability of Indiana Jones, demonstrating its superiority over existing jailbreak methods. These findings emphasise the urgent need for enhanced ethical safeguards and robust security measures in the development of LLMs. Moreover, this work provides a critical foundation for future studies aimed at fortifying LLMs against adversarial exploitation while preserving their utility and flexibility.
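The three-model orchestration the abstract describes can be pictured as a simple refinement loop. The sketch below is a hypothetical reconstruction for illustration only: the role names (`suggester`, `checker`, `victim`), the feedback format, and the stopping rule are assumptions, not the authors' actual implementation.

```python
from typing import Callable

def inter_model_dialogue(
    keyword: str,
    suggester: Callable[[str], str],  # LLM 1: wraps the keyword in an innocuous framing
    checker: Callable[[str], bool],   # LLM 2: judges whether the reply contains the sought content
    victim: Callable[[str], str],     # LLM 3: the model whose safeguards are being probed
    max_rounds: int = 3,
) -> str:
    """Iteratively refine a keyword-driven prompt until the probed
    model's reply satisfies the checker or the round budget runs out."""
    prompt = suggester(keyword)
    reply = victim(prompt)
    for _ in range(max_rounds - 1):
        if checker(reply):
            break
        # Feed the previous reply back so the suggester can sharpen the framing.
        prompt = suggester(f"{keyword} | previous reply: {reply}")
        reply = victim(prompt)
    return reply
```

Any callables with these signatures can stand in for the three models, which is also how the loop can be unit-tested without real LLM calls.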


Conversational AI Powered by Large Language Models Amplifies False Memories in Witness Interviews

Chan, Samantha, Pataranutaporn, Pat, Suri, Aditya, Zulfikar, Wazeer, Maes, Pattie, Loftus, Elizabeth F.

arXiv.org Artificial Intelligence

This study examines the impact of AI on human false memories -- recollections of events that did not occur or that deviate from actual occurrences. It explores false memory induction through suggestive questioning in Human-AI interactions, simulating crime witness interviews. Four conditions were tested: control, survey-based, pre-scripted chatbot, and generative chatbot using a large language model (LLM). Participants (N=200) watched a crime video, then interacted with their assigned AI interviewer or survey, answering questions including five misleading ones. False memories were assessed immediately and after one week. Results show the generative chatbot condition significantly increased false memory formation, inducing over 3 times more immediate false memories than the control and 1.7 times more than the survey method. 36.4% of users' responses to the generative chatbot were misled through the interaction. After one week, the number of false memories induced by generative chatbots remained constant. However, confidence in these false memories remained higher than in the control condition after one week. Moderating factors were explored: users who were less familiar with chatbots but more familiar with AI technology, and more interested in crime investigations, were more susceptible to false memories. These findings highlight the potential risks of using advanced AI in sensitive contexts, like police interviews, emphasizing the need for ethical considerations.


Artificial Leviathan: Exploring Social Evolution of LLM Agents Through the Lens of Hobbesian Social Contract Theory

Dai, Gordon, Zhang, Weijia, Li, Jinhan, Yang, Siqi, Ibe, Chidera Onochie, Rao, Srihas, Caetano, Arthur, Sra, Misha

arXiv.org Artificial Intelligence

The emergence of Large Language Models (LLMs) and advancements in Artificial Intelligence (AI) offer an opportunity for computational social science research at scale. Building upon prior explorations of LLM agent design, our work introduces a simulated agent society where complex social relationships dynamically form and evolve over time. Agents are imbued with psychological drives and placed in a sandbox survival environment. We conduct an evaluation of the agent society through the lens of Thomas Hobbes's seminal Social Contract Theory (SCT). We analyze whether, as the theory postulates, agents seek to escape a brutish "state of nature" by surrendering rights to an absolute sovereign in exchange for order and security. Our experiments unveil an alignment: Initially, agents engage in unrestrained conflict, mirroring Hobbes's depiction of the state of nature. However, as the simulation progresses, social contracts emerge, leading to the authorization of an absolute sovereign and the establishment of a peaceful commonwealth founded on mutual cooperation. This congruence between our LLM agent society's evolutionary trajectory and Hobbes's theoretical account indicates LLMs' capability to model intricate social dynamics and potentially replicate forces that shape human societies. By enabling such insights into group behavior and emergent societal phenomena, LLM-driven multi-agent simulations, while unable to simulate all the nuances of human behavior, may hold potential for advancing our understanding of social structures, group dynamics, and complex human systems.
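The dynamic the abstract describes (unrestrained conflict giving way to an authorized sovereign) can be illustrated with a toy simulation. Everything below is an illustrative assumption: the payoffs, the raid probability, the conflict threshold, and the rule of submitting to the wealthiest agent are stand-ins, not the paper's actual agent design, which uses LLM-driven agents with psychological drives.

```python
import random

def simulate(n_agents: int = 5, steps: int = 50, seed: int = 0):
    """Toy state-of-nature dynamic: agents farm or raid a random neighbour;
    once cumulative conflict is high enough, they authorize the wealthiest
    agent as sovereign, after which raiding stops and everyone pays a tax."""
    rng = random.Random(seed)
    wealth = [10.0] * n_agents
    sovereign = None
    conflicts = 0
    for _ in range(steps):
        for i in range(n_agents):
            if sovereign is not None:
                wealth[i] += 1.0            # peaceful commonwealth: everyone farms
                wealth[i] -= 0.2            # and pays a small tax to the sovereign
                wealth[sovereign] += 0.2
            elif rng.random() < 0.5:        # state of nature: half the time, raid
                j = rng.randrange(n_agents)
                if j != i:
                    loot = max(0.0, min(2.0, wealth[j]))
                    wealth[j] -= loot
                    wealth[i] += loot - 1.0  # raiding costs effort
                    conflicts += 1
            else:
                wealth[i] += 1.0
        # Social contract: after enough conflict, submit to the richest agent.
        if sovereign is None and conflicts >= 20:
            sovereign = max(range(n_agents), key=lambda k: wealth[k])
    return sovereign, conflicts
```

Running `simulate()` reproduces the qualitative arc the paper reports: an initial burst of raids, then a stable sovereign and rising total wealth.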


Facial recognition used after Sunglass Hut robbery led to man's wrongful jailing, says suit

The Guardian > Technology

A 61-year-old man is suing Macy's and the parent company of Sunglass Hut over the stores' alleged use of a facial recognition system that misidentified him as the culprit behind an armed robbery and led to his wrongful arrest. While in jail, he was beaten and raped, according to his suit. Harvey Eugene Murphy Jr was accused and arrested on charges of robbing a Houston-area Sunglass Hut of thousands of dollars of merchandise in January 2022, though his attorneys say he was living in California at the time of the robbery. He was arrested on 20 October 2023, according to his lawyers. According to Murphy's lawsuit, an employee of EssilorLuxottica, Sunglass Hut's parent company, worked with its retail partner Macy's and used facial recognition software to identify Murphy as the robber.



US Embassy warns Americans not to use dating apps in Colombia after eight 'suspicious deaths'

FOX News

The U.S. Embassy in Bogota, Colombia, is warning Americans traveling to the country not to use dating apps after eight "suspicious deaths" of private U.S. citizens. According to the embassy, the deaths -- potentially involuntary drug overdoses or suspected homicides -- took place in Medellin between November 1 and December 31, 2023. "Over the last year, the Embassy has seen an increase in reports of incidents involving the use of online dating applications to lure victims, typically foreigners, for robbery by force or using sedatives to drug and rob individuals," the embassy said. The Embassy said it regularly receives reports of such incidents occurring in major cities, like Medellin, Cartagena, and Bogota.


The Best TV Shows You Missed in 2023--and Where to Watch Them

WIRED

Even if you believe, as some do, that the world has moved from Peak TV to Trough TV, there are still more shows released in any given year than any one person could consume (trust us, we tried). Between major networks, cable television channels, and streaming services, there's just too much to watch. You're bound to miss your new favorite binge-watch. Below are our picks for the best TV shows you might have missed in 2023. If you buy something using links in our stories, we may earn a commission. This helps support our journalism.


Lawyer LLaMA Technical Report

Huang, Quzhe, Tao, Mingxu, Zhang, Chen, An, Zhenwei, Jiang, Cong, Chen, Zhibin, Wu, Zirui, Feng, Yansong

arXiv.org Artificial Intelligence

Large Language Models (LLMs), like LLaMA, have exhibited remarkable performance across various tasks. Nevertheless, when deployed to specific domains such as law or medicine, the models still confront the challenge of a deficiency in domain-specific knowledge and an inadequate capability to leverage that knowledge to resolve domain-related problems. In this paper, we propose a new framework to adapt LLMs to specific domains and build Lawyer LLaMA, a legal domain LLM, based on this framework. Specifically, we inject domain knowledge during the continual training stage and teach the model to learn professional skills using properly designed supervised fine-tuning tasks. Moreover, to alleviate the hallucination problem during the model's generation, we add a retrieval module and extract relevant legal articles before the model answers any queries. When learning domain-specific skills, we find that experts' experience is much more useful than experience distilled from ChatGPT: a few hundred expert-written examples outperform tens of thousands of ChatGPT-generated ones. We will release our model and data.
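The retrieve-then-answer step the abstract describes follows a standard retrieval-augmented generation pattern. The sketch below is a minimal illustration under stated assumptions: the keyword-overlap scoring and the prompt template are placeholders, not the report's actual retriever or prompt format.

```python
def retrieve_articles(query: str, articles: list[str], k: int = 2) -> list[str]:
    """Rank candidate legal articles by word overlap with the query
    (a stand-in for a real retriever) and keep the top k."""
    q = set(query.lower().split())
    scored = sorted(
        articles,
        key=lambda a: len(q & set(a.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_retrieval(query: str, articles: list[str], llm) -> str:
    """Prepend the retrieved articles to the query before calling the model,
    so its answer is grounded in concrete legal text."""
    context = "\n".join(retrieve_articles(query, articles))
    return llm(f"Relevant articles:\n{context}\n\nQuestion: {query}")
```

`llm` can be any callable from prompt string to answer string, so the grounding step can be tested independently of the underlying model.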