The Missing Layer of AGI: From Pattern Alchemy to Coordination Physics

Chang, Edward Y.

arXiv.org Artificial Intelligence

Influential critiques argue that Large Language Models (LLMs) are a dead end for AGI: "mere pattern matchers" structurally incapable of reasoning or planning. We argue this conclusion misidentifies the bottleneck: it confuses the ocean with the net. Pattern repositories are the necessary System-1 substrate; the missing component is a System-2 coordination layer that selects, constrains, and binds these patterns. We formalize this layer via UCCT, a theory of semantic anchoring that models reasoning as a phase transition governed by effective support (rho_d), representational mismatch (d_r), and an adaptive anchoring budget (gamma log k). Under this lens, ungrounded generation is simply an unbaited retrieval of the substrate's maximum likelihood prior, while "reasoning" emerges when anchors shift the posterior toward goal-directed constraints. We translate UCCT into architecture with MACI, a coordination stack that implements baiting (behavior-modulated debate), filtering (Socratic judging), and persistence (transactional memory). By reframing common objections as testable coordination failures, we argue that the path to AGI runs through LLMs, not around them.


AI can easily impersonate you. This trick helps thwart scammers

PCWorld

AI's rapidly expanding capabilities include convincing impersonations, that is, audio and video that sound and look like you. Sometimes these deepfakes can be harmless, part of a joke or meme involving a celebrity, politician, or other public figure. But as you might guess, scammers also use them to steal money from the unsuspecting. Most of the time, this style of scheme, often called a "grandparent scam," catches people off-guard because they don't realize how accessible and sophisticated this technology has become.


Fast Fishing: Approximating BAIT for Efficient and Scalable Deep Active Image Classification

Huseljic, Denis, Hahn, Paul, Herde, Marek, Rauch, Lukas, Sick, Bernhard

arXiv.org Artificial Intelligence

Deep active learning (AL) seeks to minimize the annotation costs of training deep neural networks. Bait, a recently proposed AL strategy based on the Fisher Information, has demonstrated impressive performance across various datasets. However, Bait's high computational and memory requirements hinder its applicability to large-scale classification tasks, leading current research to neglect Bait in its evaluations. This paper introduces two methods to enhance Bait's computational efficiency and scalability. Notably, we significantly reduce its time complexity by approximating the Fisher Information. In particular, we adapt the original formulation by (i) taking the expectation over the most probable classes, and (ii) constructing a binary classification task, leading to an alternative likelihood for gradient computations. Consequently, this allows the efficient use of Bait on large-scale datasets, including ImageNet. Our unified and comprehensive evaluation across a variety of datasets demonstrates that our approximations achieve strong performance at considerably reduced time complexity.
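The first approximation above, taking the Fisher expectation only over the most probable classes, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function name, the restriction to the diagonal Fisher of a linear classifier head, and the simple squared-gradient form are all my own simplifying assumptions.

```python
import numpy as np

def approx_fisher_diag(probs, embedding, top_k=2):
    """Illustrative diagonal Fisher Information of a linear classifier
    head for one sample, with the class expectation restricted to the
    top_k most probable classes (hypothetical simplification of the
    abstract's idea, not the paper's exact formulation).
    probs: (C,) softmax outputs; embedding: (D,) penultimate features."""
    top = np.argsort(probs)[-top_k:]       # indices of most probable classes
    weights = probs[top] / probs[top].sum()  # renormalised expectation weights
    fisher = np.zeros(embedding.shape[0])
    for c, w in zip(top, weights):
        # softmax cross-entropy gradient w.r.t. the class-c weight row,
        # assuming c were the label: (1 - probs[c]) * embedding
        g = (1.0 - probs[c]) * embedding
        fisher += w * g * g                # squared gradient -> diagonal Fisher
    return fisher
```

Restricting the sum from all C classes to top_k classes is what drives down the time complexity: the full expectation costs O(C·D) per sample, while this sketch costs O(top_k·D).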


TikToker sounds alarm on this scary online trend that turns your children into bait for predators

FOX News

A TikToker warned of a growing trend involving child predators who use artificial intelligence to turn photos and videos of kids into explicit content. Posting imagery of children on social media can invite "digital kidnappers" to steal their likeness and use them in exploitative AI-generated videos, Alex Hoffman said in a viral TikTok video. "Digital kidnapping is when somebody steals the photos of your minor from the internet, usually a social media platform, and either pretends to be the child or pretends to be the child's parents," she said. "Oftentimes digital kidnappers will take normal photos of a child on the internet and alter them to look explicit or show the child doing something inappropriate." "Digital kidnappers can also take photos of a child and make them into an inappropriate video using AI materials," said Hoffman, a law student who has worked with the government investigating online sex crimes against children.


BaIT: Barometer for Information Trustworthiness

Nolan, Oisín, van Mourik, Jeroen, Tilbury, Callum Rhys

arXiv.org Artificial Intelligence

This paper presents a new approach to the FNC-1 fake news classification task that employs pre-trained encoder models from related NLP tasks, namely sentence similarity and natural language inference; two neural network architectures using this approach are proposed. Data augmentation is explored as a means of tackling class imbalance in the dataset: common pre-existing methods are employed, and a method for generating samples of the under-represented class using a novel sentence negation algorithm is proposed. Comparable overall performance with existing baselines is achieved, while accuracy is significantly increased on an under-represented but nonetheless important FNC-1 class.
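To make the augmentation idea concrete, here is a toy sentence-negation function. The paper's actual negation algorithm is not described in this summary, so everything below (the auxiliary-verb heuristic, the fallback wrapper) is a hypothetical illustration of how negated samples for an under-represented stance class could be generated.

```python
def negate_sentence(sentence):
    """Toy negation: insert "not" after the first auxiliary verb; if no
    auxiliary is found, fall back to an "It is not the case that ..."
    wrapper. Purely illustrative, not the paper's algorithm."""
    auxiliaries = {"is", "are", "was", "were", "can", "will", "has", "have", "had"}
    words = sentence.split()
    for i, w in enumerate(words):
        if w.lower() in auxiliaries:
            return " ".join(words[:i + 1] + ["not"] + words[i + 1:])
    return "It is not the case that " + sentence[0].lower() + sentence[1:]
```

For example, negate_sentence("The vaccine is safe") yields "The vaccine is not safe", which could serve as a synthetic sample whose stance toward the original claim is flipped.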


And You Thought Poisoning Feral Pigs Would Be Easy?

Mother Jones

This story was originally published by Undark and is reproduced here as part of the Climate Desk collaboration. Early one winter morning in 2020, Kurt VerCauteren discovered a cluster of dead birds in a barren field in northwest Texas. They were small birds, mostly dark-eyed juncos, but also a smattering of white-crowned sparrows. VerCauteren's team had poisoned them, inadvertently. The clues were clear, the death uncomplicated: The birds had flown in before dawn to scavenge deadly morsels of a contaminated peanut paste, left behind after a sounder of wild hogs had torn through the area in a feeding frenzy. The birds likely died within minutes of eating. "I couldn't even see the crumbs," says VerCauteren, a wildlife biologist at the US Department of Agriculture in Fort Collins, Colorado, who has spent years developing and testing pig poisons. The birds were the unintended victims of a field experiment to test a toxicant--one intended for feral pigs, but no other animals--that had been developed in Australia.


Adaptive Agent Architecture for Real-time Human-Agent Teaming

Ni, Tianwei, Li, Huao, Agrawal, Siddharth, Raja, Suhas, Jia, Fan, Gui, Yikang, Hughes, Dana, Lewis, Michael, Sycara, Katia

arXiv.org Artificial Intelligence

Teamwork is a set of interrelated reasoning, actions, and behaviors of team members that facilitate common objectives. Teamwork theory and experiments have produced a set of states and processes for team effectiveness in both human-human and agent-agent teams. Human-agent teaming, however, is less well studied because it is new and involves asymmetries in policy and intent not present in human teams. To optimize team performance in human-agent teaming, it is critical that agents infer human intent and adapt their policies for smooth coordination. Most literature in human-agent teaming builds agents that reference a learned human model. Though such agents are guaranteed to perform well with the learned model, they place strong assumptions on the human policy, such as optimality and consistency, which are unlikely to hold in many real-world scenarios. In this paper, we propose a novel adaptive agent architecture in a human-model-free setting for a two-player cooperative game, Team Space Fortress (TSF). Previous human-human team research has shown complementary policies in the TSF game and diversity in human players' skill, which encourages us to relax the assumptions on human policy. We therefore discard learning human models from human data and instead use an adaptation strategy over a pre-trained library of exemplar policies, composed of RL algorithms or rule-based methods with minimal assumptions about human behavior. The adaptation strategy relies on a novel similarity metric to infer the human policy and then selects the most complementary policy in our library to maximize team performance. The adaptive agent architecture can be deployed in real time and generalizes to any off-the-shelf static agents. We conducted human-agent experiments to evaluate the proposed adaptive agent framework and demonstrated the suboptimality, diversity, and adaptability of human policies in human-agent teams.
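The adaptation loop described above, infer which exemplar the human most resembles, then deploy the policy pre-computed as complementary to that exemplar, can be sketched as follows. The paper's novel similarity metric is not reproduced here; cosine similarity over empirical action distributions, and all names below, are stand-in assumptions.

```python
import numpy as np

def choose_agent_policy(human_action_counts, exemplars, complements):
    """Illustrative policy selection for human-agent teaming.
    human_action_counts: (A,) counts of the human's recent actions.
    exemplars: dict mapping exemplar-policy names to (A,) action
    distributions from the pre-trained library.
    complements: dict mapping each exemplar name to the agent policy
    known (from offline evaluation) to pair best with it."""
    inferred = human_action_counts / human_action_counts.sum()

    def cosine(a, b):
        # stand-in similarity metric (the paper's metric is not shown here)
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    # nearest exemplar = inferred human policy type
    nearest = max(exemplars, key=lambda name: cosine(exemplars[name], inferred))
    # return its pre-computed complementary agent policy
    return complements[nearest]
```

Because the selection step only compares distributions, it is cheap enough to re-run online as new human actions arrive, which is what makes real-time deployment plausible.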


With Pop Star as Bait, China Nabs Suspects Using Facial Recognition

WSJ.com: WSJD - Technology

The arrests spurred a splash of publicity from state media, who are crowning Mr. Cheung--one of the Hong Kong megastars known as the "Four Heavenly Kings"--with a new title: "The Nemesis of Fugitives." China's police departments have been openly touting their use of technology to nab lawbreakers--a campaign that rights activists say is aimed at winning public support for growing state surveillance. This is the first widely reported indication that Chinese police are using facial-recognition at major musical events. Concert organizers in China have also increasingly deployed facial-recognition systems to curb scalping by verifying the identities of ticket-holders. Surveillance companies and local security agencies have experimented with deploying the technology at events around the country in recent years.


CES 2017: emotional cars, sick bags and a 'listening' hairbrush

The Guardian

If this year's CES continues to predict future tech trends, then we can soon expect to have emotional relationships with our cars, virtual reality devices so realistic you need a sick bag, and products so pricey most people won't be able to afford them. One of the main themes this year at the premier electronics convention, which is held annually in Vegas, is that in the future everything will have a relationship with everything. The Faraday Future FF 91, a family-sized electric vehicle with the acceleration of a Formula 1 car and a "brain" that will apparently be capable of learning from its driver, was unveiled at a media event on Tuesday, before Toyota and Honda took the concept of an intelligent car even further. Toyota showed off its "Concept-i" concept car, which it described as: "More than a machine. It will become our friend".