Punching Bag vs. Punching Person: Motion Transferability in Videos

Abdullah, Raiyaan, Claypoole, Jared, Cogswell, Michael, Divakaran, Ajay, Rawat, Yogesh

arXiv.org Artificial Intelligence

Action recognition models demonstrate strong generalization, but can they effectively transfer high-level motion concepts across diverse contexts, even within similar distributions? For example, can a model recognize the broad action "punching" when presented with an unseen variation such as "punching person"? To explore this, we introduce a motion transferability framework with three datasets: (1) Syn-TA, a synthetic dataset with 3D object motions; (2) Kinetics400-TA; and (3) Something-Something-v2-TA, both adapted from natural video datasets. We evaluate 13 state-of-the-art models on these benchmarks and observe a significant drop in performance when recognizing high-level actions in novel contexts. Our analysis reveals: 1) Multimodal models struggle more with fine-grained unknown actions than with coarse ones; 2) The bias-free Syn-TA proves as challenging as real-world datasets, with models showing greater performance drops in controlled settings; 3) Larger models improve transferability when spatial cues dominate but struggle with intensive temporal reasoning, while reliance on object and background cues hinders generalization. We further explore how disentangling coarse and fine motions can improve recognition in temporally challenging datasets. We believe this study establishes a crucial benchmark for assessing motion transferability in action recognition.


SoK: Prompt Hacking of Large Language Models

Rababah, Baha, Shang, Wu, Kwiatkowski, Matthew, Leung, Carson, Akcora, Cuneyt Gurcan

arXiv.org Artificial Intelligence

The safety and robustness of large language models (LLMs) based applications remain critical challenges in artificial intelligence. Among the key threats to these applications are prompt hacking attacks, which can significantly undermine the security and reliability of LLM-based systems. In this work, we offer a comprehensive and systematic overview of three distinct types of prompt hacking: jailbreaking, leaking, and injection, addressing the nuances that differentiate them despite their overlapping characteristics. To enhance the evaluation of LLM-based applications, we propose a novel framework that categorizes LLM responses into five distinct classes, moving beyond the traditional binary classification. This approach provides more granular insights into the AI's behavior, improving diagnostic precision and enabling more targeted enhancements to the system's safety and robustness.


Help! My Mom Is Catfishing a Guy Online--By Pretending to Be Me.

Slate

Our advice columnists have heard it all over the years. Each Sunday, we dive into the Dear Prudie archives and share a selection of classic letters with our readers. For the past few months, my mom has been catfishing a guy online and I don't know what to do. Earlier this year, I decided to give online dating a try and signed up for a free online dating site. My mom was very supportive and interested in me finding someone, and, unbeknownst to me, created a fake profile to scope out the site.


Few-Shot Action Recognition with Compromised Metric via Optimal Transport

Lu, Su, Ye, Han-Jia, Zhan, De-Chuan

arXiv.org Artificial Intelligence

Although vital to computer vision systems, few-shot action recognition remains immature despite extensive research on few-shot image classification. Popular few-shot learning algorithms extract a transferable embedding from seen classes and reuse it on unseen classes by constructing a metric-based classifier. One main obstacle to applying these algorithms in action recognition is the complex structure of videos. Some existing solutions sample frames from a video and aggregate their embeddings to form a video-level representation, neglecting important temporal relations. Others perform an explicit sequence matching between two videos and define their distance as the matching cost, imposing overly strong restrictions on sequence ordering. In this paper, we propose Compromised Metric via Optimal Transport (CMOT) to combine the advantages of these two solutions. CMOT simultaneously considers semantic and temporal information in videos under the Optimal Transport framework, and is discriminative for both content-sensitive and ordering-sensitive tasks. In detail, given two videos, we sample segments from them and cast the calculation of their distance as an optimal transport problem between two segment sequences. To preserve the inherent temporal ordering information, we additionally amend the ground cost matrix by penalizing it with the positional distance between a pair of segments. Empirical results on benchmark datasets demonstrate the superiority of CMOT.
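The mechanism the abstract describes can be sketched concretely: compute a semantic ground cost between segment embeddings, add a positional penalty proportional to the temporal index gap, and solve the resulting optimal transport problem. The following is a minimal numpy sketch, not the authors' implementation: it assumes entropic (Sinkhorn) regularization, uniform marginals, and an additive penalty weight `lam`, all of which are illustrative choices.

```python
import numpy as np

def sinkhorn(C, reg=0.1, n_iters=200):
    """Entropic-regularized OT cost between uniform marginals for cost matrix C."""
    n, m = C.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-C / reg)          # Gibbs kernel
    u = np.ones(n)
    for _ in range(n_iters):      # alternating marginal scaling
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # transport plan
    return np.sum(P * C)              # transported cost

def cmot_distance(X, Y, lam=0.5):
    """CMOT-style distance between two segment-embedding sequences X, Y.

    Semantic ground cost (pairwise Euclidean distance) is amended with a
    positional penalty on the normalized temporal index gap, so matchings
    that scramble the ordering pay extra. `lam` is a hypothetical weight.
    """
    n, m = X.shape[0], Y.shape[0]
    sem = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    i = np.arange(n)[:, None] / max(n - 1, 1)
    j = np.arange(m)[None, :] / max(m - 1, 1)
    pos = np.abs(i - j)           # positional distance between segments
    return sinkhorn(sem + lam * pos)
```

With this ground cost, an identical sequence matched in order is cheapest (zero semantic and zero positional cost on the diagonal), while a reversed copy must pay either semantically or positionally, which is the ordering sensitivity the paper aims for.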


AI Might Trick Us By Pretending To Be Dimwitted, Including For Self-Driving Cars

#artificialintelligence

Will full AI try to evade being revealed for what it is? AI is not yet akin to human intelligence, and the odds are that we are a long way from the promise of such vaunted capabilities. Those touting the use of Machine Learning (ML) and Deep Learning (DL) are hoping that the advent of ML/DL might be a path toward full AI, though right now ML/DL is mainly a stew of computationally impressive pattern matching, and we don't know if it will scale up to anything approaching an equivalent of the human brain. The struggle and earnestness toward achieving full AI is nonetheless still a constant drumbeat of those steeped in AI, and the belief is that we will eventually craft or invent a machine-based artificial intelligence made entirely out of software and hardware. One question often posed about reaching full AI is whether or not there will be a need to attain sentience.


New Research Suggests Robots Appear More Persuasive When Pretending to be Human

#artificialintelligence

Recent technological breakthroughs in artificial intelligence have made it possible for machines, or bots, to pass as humans. A team of researchers led by Talal Rahwan, associate professor of Computer Science at NYU Abu Dhabi, conducted an experiment to study how people interact with bots whom they believe to be human, and how such interactions are affected once bots reveal their identity. The researchers found that bots are more efficient than humans at certain human–machine interactions, but only if they are allowed to hide their non-human nature. In their paper titled "Behavioral Evidence for a Transparency-Efficiency Tradeoff in Human-Machine Cooperation" published in Nature Machine Intelligence, the researchers presented their experiment in which participants were asked to play a cooperation game with either a human associate or a bot associate. This game, called the Iterated Prisoner's Dilemma, was designed to capture situations in which each of the interacting parties can either act selfishly in an attempt to exploit the other, or act cooperatively in an attempt to attain a mutually beneficial outcome.
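The game described above has a standard form: each round, both players choose to cooperate or defect, and the payoffs reward exploiting a cooperator while punishing mutual defection. A minimal sketch of the iterated version, with illustrative payoff values (T=5 > R=3 > P=1 > S=0) and two textbook strategies; none of this is drawn from the study's actual experimental setup:

```python
# One-round Prisoner's Dilemma payoffs: (row player, column player).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # cooperator exploited by defector
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    """Act selfishly every round."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Iterate the game; each strategy sees (own move, opponent move) pairs."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append((move_a, move_b))
        hist_b.append((move_b, move_a))
    return score_a, score_b
```

Two cooperators earn the mutually beneficial outcome every round, while a defector gains a one-round exploitation windfall before tit-for-tat retaliates, which is exactly the selfish-versus-cooperative tension the experiment is built around.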


The Next Hot Job: Pretending to Be a Robot

#artificialintelligence

Michael Niedermayer used to fly drones for the U.S. Army and the Central Intelligence Agency, gathering real-time, life-and-death intelligence on battlefields in Iraq. Now he pilots delivery robots for a San Francisco Bay Area startup that wants to disrupt burrito delivery. Postmates, which in mid-August received a permit to operate its Serve delivery robot in San Francisco and is already testing it for food delivery in Los Angeles, employs a growing team of "pilots" to remotely oversee, and at times steer, these four-wheeled food ferries. "We will probably see a drastic increase in our workforce over the next five years," says Postmates Chief Executive Bastian Lehmann. Across industries, engineers are building atop work done a generation ago by designers of military drones.


Pretending to give a robot citizenship helps no one

#artificialintelligence

Sophia the robot has been on a roll lately. Earlier in the year, its creator David Hanson told Jimmy Fallon that the bot is "basically alive." At the beginning of October, it showed up at the United Nations, announcing to delegates: "I am here to help humanity create the future." And just last week, Sophia was awarded an honorary citizenship by Saudi Arabia. "Sophia the robot becomes first humanoid Saudi citizen."