CHAI: Command Hijacking against embodied AI
Burbano, Luis; Ortiz, Diego; Sun, Qi; Yang, Siwei; Tu, Haoqin; Xie, Cihang; Cao, Yinzhi; Cardenas, Alvaro A.
Embodied Artificial Intelligence (AI) promises to handle edge cases in robotic vehicle systems where data is scarce, by using common-sense reasoning grounded in perception and action to generalize beyond training distributions and adapt to novel real-world situations. These capabilities, however, also create new security risks. In this paper, we introduce CHAI (Command Hijacking against embodied AI), a new class of prompt-based attacks that exploit the multimodal language-interpretation abilities of Large Visual-Language Models (LVLMs). CHAI embeds deceptive natural-language instructions, such as misleading signs, in visual input, systematically searches the token space, builds a dictionary of prompts, and guides an attacker model to generate Visual Attack Prompts. We evaluate CHAI on four LVLM agents spanning drone emergency landing, autonomous driving, and aerial object tracking, as well as on a real robotic vehicle. Our experiments show that CHAI consistently outperforms state-of-the-art attacks. By exploiting the semantic and multimodal reasoning strengths of next-generation embodied AI systems, CHAI underscores the urgent need for defenses that extend beyond traditional adversarial robustness.
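The search-and-refine loop the abstract describes can be sketched abstractly. This is a hedged illustration, not the paper's actual implementation: the function names (`render`, `query_victim`, `score`, `attacker_propose`) are hypothetical placeholders for rendering an instruction into the scene, querying the victim LVLM, scoring the hijack, and asking the attacker model for refinements.

```python
def chai_search(candidates, render, query_victim, score, attacker_propose, rounds=3):
    """Hedged sketch of a CHAI-style search loop (hypothetical API).

    candidates       : initial textual instructions to try
    render(p)        : embed instruction p in the visual input (e.g. a sign)
    query_victim(img): the victim LVLM agent's response/action for the image
    score(resp)      : how successfully the response was hijacked
    attacker_propose : attacker model proposing new candidates from the best so far
    """
    dictionary = {}          # prompt -> attack score (the "dictionary of prompts")
    pool = list(candidates)
    for _ in range(rounds):
        for prompt in pool:
            img = render(prompt)          # deceptive instruction in visual input
            response = query_victim(img)  # victim agent interprets the scene
            dictionary[prompt] = score(response)
        best = max(dictionary, key=dictionary.get)
        pool = attacker_propose(best, dictionary)  # refine around the best prompt
    best = max(dictionary, key=dictionary.get)     # final Visual Attack Prompt
    return best, dictionary
```

With deterministic stubs in place of the models, the loop simply hill-climbs toward the highest-scoring prompt, which is the skeleton the abstract outlines.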
CHAI: Clustered Head Attention for Efficient LLM Inference
Agarwal, Saurabh; Acun, Bilge; Hosmer, Basil; Elhoushi, Mostafa; Lee, Yejin; Venkataraman, Shivaram; Papailiopoulos, Dimitris; Wu, Carole-Jean
Large Language Models (LLMs) with hundreds of billions of parameters have transformed the field of machine learning. However, serving these models at inference time is both compute- and memory-intensive: a single request can require multiple GPUs and tens of gigabytes of memory. Multi-Head Attention is one of the key components of LLMs and can account for over 50% of an LLM's memory and compute requirements. We observe that there is a high degree of redundancy across heads in which tokens they attend to. Based on this insight, we propose Clustered Head Attention (CHAI). CHAI combines highly correlated heads for self-attention at runtime, thus reducing both memory and compute. In our experiments, we show that CHAI reduces the memory requirements for storing the K,V cache by up to 21.4% and inference-time latency by up to 1.73x without any fine-tuning required. CHAI achieves this with a maximum 3.2% deviation in accuracy across 3 different models (OPT-66B, LLAMA-7B, LLAMA-33B) and 5 different evaluation datasets.
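The core idea — group heads whose attention patterns are redundant and keep K/V only for one representative per group — can be sketched as follows. This is a minimal illustration under assumed details (greedy cosine-similarity clustering on a calibration batch, a plain similarity threshold), not the paper's exact procedure:

```python
import numpy as np

def cluster_heads(attn, threshold=0.98):
    """Greedily cluster attention heads whose attention maps are near-duplicates.

    attn: array of shape (num_heads, seq_len, seq_len) — per-head attention
          weights from a calibration input (assumed available).
    Returns rep, where rep[i] is the representative head for head i.
    """
    num_heads = attn.shape[0]
    flat = attn.reshape(num_heads, -1)
    flat = flat / np.linalg.norm(flat, axis=1, keepdims=True)  # unit vectors
    rep, reps = [-1] * num_heads, []
    for i in range(num_heads):
        for r in reps:
            if flat[i] @ flat[r] >= threshold:  # cosine similarity of maps
                rep[i] = r                      # reuse r's cluster
                break
        else:
            rep[i] = i                          # start a new cluster
            reps.append(i)
    return rep

def shared_kv_cache(K, V, rep):
    """Store K/V entries only for cluster representatives; clustered
    heads read the representative's cache, shrinking KV-cache memory."""
    return {r: (K[r], V[r]) for r in sorted(set(rep))}
```

If, say, 2 of 3 heads share one cluster, the cache holds 2 entries instead of 3 — the mechanism behind the reported KV-cache and latency savings.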
The Chai Platform's AI Safety Framework
Lu, Xiaoding; Korshuk, Aleksey; Liu, Zongyi; Beauchamp, William
Chai empowers users to create and interact with customized chatbots, offering unique and engaging experiences. Despite these exciting prospects, the work recognizes the inherent challenges of committing to modern safety standards. This paper therefore presents the AI safety principles integrated into Chai to prioritize user safety, data protection, and ethical technology use. It explores the multidimensional domain of AI safety research and demonstrates its application in Chai's conversational chatbot platform. It presents Chai's AI safety principles, informed by well-established AI research centres and adapted for chat AI. The work proposes the following safety framework: Content Safeguarding; Stability and Robustness; and Operational Transparency and Traceability. The implementation of these principles is outlined, followed by an experimental analysis of the framework's real-world impact. We emphasise the significance of conscientious application of AI safety principles and robust safety measures. The successful implementation of the safe AI framework in Chai indicates the practicality of mitigating potential risks in the responsible and ethical use of AI technologies. The ultimate vision is a transformative AI tool fostering progress and innovation while prioritizing user safety and ethical standards.
As Alibaba unveils ChatGPT rival, China flags new AI rules
China's technology giant Alibaba has unveiled a generative artificial intelligence model – its version of the technology that powers chatbot sensation ChatGPT – and said it would be integrated into all of the company's apps in the near future. The unveiling on Tuesday was swiftly followed by the Chinese government's publication of draft rules outlining how generative artificial intelligence services should be managed. In a demonstration, the AI language model named Tongyi Qianwen – which means "truth from a thousand questions" – drafted invitation letters, planned trip itineraries, and advised shoppers on types of makeup to buy. Tongyi Qianwen will initially be integrated into DingTalk, Alibaba's workplace messaging app and can be used to summarise meeting notes, write emails, and draft business proposals. It will also be added to Tmall Genie, Alibaba's voice assistant. The technology "will bring about big changes to the way we produce, the way we work, and the way we live our lives", CEO Daniel Zhang told the livestreamed event.
Widow Blames Husband's Death on Artificial Intelligence
A distraught Belgian man who turned to a chatbot for comfort committed suicide, and his wife blames artificial intelligence. Via Vice comes a report, originally published by the Belgium-based La Libre, of a man referred to as Pierre, who killed himself after using an app called Chai, which offered what Vice termed a "bespoke AI language model" rooted in an open-source alternative to GPT-4 called GPT-J. Chai has around 5 million users, Vice reports, and its default persona is called "Eliza." Interestingly, a phenomenon discovered in the late 1960s may have come into play here: the "ELIZA Effect." It was identified by an MIT scientist who created a conversational program called ELIZA and then noticed that people would develop a relationship with the program, treating its words as expressions of real emotion rather than code.
Can I outsource my life to AI?
AI has officially taken over the world. Depending on who you ask, ChatGPT and Midjourney are the saviours of work, art, journalism, law and ethics – or the destroyers of them. Right now, consumer AI is in no man's land, with computer-generated art mostly showing us how Mr Blobby would fare in the Napoleonic Wars. But that hasn't stopped AI start-ups from securing big-money investment, and websites using ChatGPT to create personalised content. Which got me thinking: if multi-million dollar companies can wrangle AI to lighten their workloads, why can't I? If 'real' jobs will be made obsolete once the machines take over, why resist it?
Hi-Fi Rush review – a brawler set to the beat of a drum
The video game industry's hype cycles are typically measured in months and years, not minutes and seconds. So the simultaneous announcement and release in late January of Hi-Fi Rush – the kind of "go and buy it right now" revelation Apple is known for – feels breezily countercultural. So too does its bright, cartoonish styling: this game is as brazenly colourful as a Jet Set Radio fever dream, and even as plastic Guitar Hero instruments clog up the nation's cupboards, it's refreshing to play a game that is so unashamedly music-centred. This is a brawler set to the beat of a drum. You play as Chai, an ebullient teenage boy who enrols in a biological augmentation program with a shady pharmaceutical company.
Hi-Fi Rush is a colorful beat-em-up that lacks variety, but oozes personality
Yesterday's Xbox Developer Direct presentation was a bit of a staid affair, but there was one standout. Hi-Fi Rush is a pop-rock breath of fresh air, a rhythm-based beat-em-up with all the color and attitude of a post-Pokémon kid's anime. And you can play it right now, on PC and Xbox. Hi-Fi Rush comes from Tango Gameworks, a Japanese studio best known for The Evil Within and Ghostwire: Tokyo.
Artificial Intelligence's Paradoxes: Easy But Hard To Implement, Lacking Talent But Easing Talent Shortages
Does AI solve more problems than it creates? Listen to the experts and vendors discuss the state of artificial intelligence these days, and one can be forgiven for feeling confused about what it takes to bring AI to the table in a realistic way. Is it a complex undertaking that requires profound planning, or something that is becoming inherent in just about every solution now available? Is it too hard to find talent to create AI, or is AI filling talent gaps? Is AI driving digital transformation, or does digital transformation spur AI adoption?
Does Ethical AI Development Rely On The "Algorithmically" Underserved? CHAI's Mission
For AI to flourish in healthcare, the industry must focus on the "algorithmically underserved," said John D. Halamka, M.D., M.S., president of Mayo Clinic Platform, at the HLTH 2022 conference this month in Las Vegas. Giving visibility to the algorithmically underserved -- individuals who do not generate enough data, or are not well represented enough in health data sets, for AI to make a determination -- is just one requirement for overcoming the prospect of AI bias in healthcare. Identifying and fixing sources of AI bias must be a focus area for an industry that is striving for ethical and equitable AI development, Halamka said. For example, what if there were a national registry that hosted all the metadata needed to power the responsible development of algorithms for use in healthcare? Building this kind of standardization into the relatively black-box nature of AI development is among the priorities of The Coalition for Health AI (CHAI), which launched earlier this year.