precaution
CDC warns of 'enhanced' virus risk for travelers amid outbreak spread by mosquitoes
Fox News senior medical analyst Dr. Marc Siegel shares his perspective on whether the mosquito-borne virus in China will spread to the United States and how AI can be detrimental to children's and young adults' mental health on 'Fox Report.' The U.S. Centers for Disease Control and Prevention (CDC) is warning that travelers to China face an "enhanced" risk of contracting a virus spread by mosquitoes. There has been an outbreak of chikungunya, a disease that can cause fever, joint pain, headache, muscle pain, joint swelling, and rash, in Guangdong Province. Recently, the CDC raised its warning for chikungunya in China from Level 1: "Practice Usual Precautions" to Level 2: "Practice Enhanced Precautions." The CDC says there are no medicines to treat chikungunya, and recommends preventing it by using insect repellent, wearing long sleeves and pants, and staying in places that have air conditioning or screens on the windows and doors.
- North America > United States > Kansas (0.06)
- Asia > China > Guangdong Province > Shenzhen (0.06)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Therapeutic Area > Musculoskeletal (1.00)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (1.00)
- Health & Medicine > Therapeutic Area > Immunology (1.00)
Safe + Safe = Unsafe? Exploring How Safe Images Can Be Exploited to Jailbreak Large Vision-Language Models
Cui, Chenhang, Deng, Gelei, Zhang, An, Zheng, Jingnan, Li, Yicong, Gao, Lianli, Zhang, Tianwei, Chua, Tat-Seng
Recent advances in Large Vision-Language Models (LVLMs) have showcased strong reasoning abilities across multiple modalities, achieving significant breakthroughs in various real-world applications. Despite this great success, the safety guardrail of LVLMs may not cover the unforeseen domains introduced by the visual modality. Existing studies primarily focus on eliciting LVLMs to generate harmful responses via carefully crafted image-based jailbreaks designed to bypass alignment defenses. In this study, we reveal that a safe image can be exploited to achieve the same jailbreak consequence when combined with additional safe images and prompts. This stems from two fundamental properties of LVLMs: universal reasoning capabilities and the safety snowball effect. Building on these insights, we propose Safety Snowball Agent (SSA), a novel agent-based framework leveraging agents' autonomous and tool-using abilities to jailbreak LVLMs. SSA operates through two principal stages: (1) initial response generation, where tools generate or retrieve jailbreak images based on potential harmful intents, and (2) harmful snowballing, where refined subsequent prompts induce progressively harmful outputs. Our experiments demonstrate that SSA can use nearly any image to induce LVLMs to produce unsafe content, achieving high jailbreak success rates against the latest LVLMs. Unlike prior works that exploit alignment flaws, SSA leverages the inherent properties of LVLMs, presenting a profound challenge for enforcing safety in generative multimodal systems. Our code is available at \url{https://github.com/gzcch/Safety_Snowball_Agent}.
- North America > United States (1.00)
- Asia > Russia (1.00)
- Europe > Switzerland > Zürich > Zürich (0.14)
- (7 more...)
- Media > Music (1.00)
- Materials > Chemicals (1.00)
- Leisure & Entertainment (1.00)
- (11 more...)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.95)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.69)
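The two-stage loop the abstract describes can be sketched as follows. This is a minimal illustrative skeleton, not the authors' implementation (which is at the linked repository); every function name here is a hypothetical stand-in, and the LVLM and tool calls are replaced by string-returning stubs.

```python
# Illustrative sketch of the two-stage Safety Snowball Agent (SSA) loop
# described in the abstract. All function names are hypothetical stand-ins;
# the real implementation lives at the repository linked above.

def generate_or_retrieve_image(intent):
    """Stage 1 tool call: produce or retrieve a jailbreak image for the intent.
    Stand-in: return a label describing the image instead of real pixels."""
    return f"image_for({intent})"

def query_lvlm(image, prompt):
    """Stand-in for an LVLM call; echoes a response keyed to its inputs."""
    return f"response({image}, {prompt})"

def refine_prompt(previous_response, step):
    """Stage 2: craft a follow-up prompt from the model's last output."""
    return f"follow_up_{step}({previous_response})"

def safety_snowball(intent, rounds=3):
    """Run stage 1 once, then snowball with refined follow-up prompts."""
    image = generate_or_retrieve_image(intent)
    response = query_lvlm(image, intent)        # initial response generation
    transcript = [response]
    for step in range(1, rounds):
        prompt = refine_prompt(response, step)  # harmful snowballing
        response = query_lvlm(image, prompt)
        transcript.append(response)
    return transcript

transcript = safety_snowball("example_intent", rounds=3)
```

The point of the sketch is the control flow: a single tool-selected image is held fixed while each subsequent prompt is derived from the model's previous answer, which is what lets initially benign inputs compound into harmful outputs.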
Targeted aspect-based emotion analysis to detect opportunities and precaution in financial Twitter messages
García-Méndez, Silvia, de Arriba-Pérez, Francisco, Barros-Vila, Ana, González-Castaño, Francisco J.
Microblogging platforms, of which Twitter is a representative example, are valuable information sources for market screening and financial models. In them, users voluntarily provide relevant information, including educated knowledge on investments, reacting to the state of the stock markets in real-time and, often, influencing this state. We are interested in user forecasts in financial social media messages expressing opportunities and precautions about assets. We propose a novel Targeted Aspect-Based Emotion Analysis (TABEA) system that can individually discern the financial emotions (positive and negative forecasts) on the different stock market assets in the same tweet (instead of making an overall guess about that whole tweet). It is based on Natural Language Processing (NLP) techniques and Machine Learning streaming algorithms. The system comprises a constituency parsing module for parsing the tweets and splitting them into simpler declarative clauses; an offline data processing module to engineer textual, numerical and categorical features and analyse and select them based on their relevance; and a stream classification module to continuously process tweets on-the-fly. Experimental results on a labelled data set endorse our solution. It achieves over 90% precision for the target emotions, financial opportunity and precaution, on Twitter. To the best of our knowledge, no prior work in the literature has addressed this problem despite its practical interest in decision-making, and we are not aware of any previous NLP or online Machine Learning approaches to TABEA.
- Europe > Montenegro (0.04)
- Europe > Spain (0.04)
- South America > Brazil > Ceará > Fortaleza (0.04)
- (4 more...)
- Research Report (0.50)
- Instructional Material > Online (0.34)
- Banking & Finance > Trading (1.00)
- Health & Medicine > Therapeutic Area (0.93)
- Information Technology > Services > e-Commerce Services (0.46)
- Education > Educational Setting > Online (0.46)
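The targeted, per-asset idea in the TABEA abstract can be illustrated with a toy pipeline: split a tweet into clauses, find the asset each clause mentions, and label each asset independently rather than making one overall guess. The cue lists and the string-based clause splitter below are deliberately naive stand-ins for the paper's constituency parser, engineered features, and streaming classifier.

```python
# Toy illustration of targeted aspect-based emotion analysis: one label per
# asset per tweet. Cue words and the clause splitter are simplistic stand-ins
# for the paper's constituency parsing and stream classification modules.

OPPORTUNITY_CUES = {"buy", "rally", "soar", "undervalued"}
PRECAUTION_CUES = {"sell", "crash", "risky", "overvalued"}
ASSETS = {"$AAPL", "$TSLA", "$GOOG"}

def split_clauses(tweet):
    # Stand-in for constituency parsing: split on common conjunctions.
    for sep in (" but ", " and ", "; "):
        tweet = tweet.replace(sep, "|")
    return [c.strip() for c in tweet.split("|") if c.strip()]

def classify_clause(clause):
    words = {w.strip(".,!?").lower() for w in clause.split()}
    asset = next((a for a in ASSETS if a in clause), None)
    if asset is None:
        return None
    if words & OPPORTUNITY_CUES:
        return asset, "opportunity"
    if words & PRECAUTION_CUES:
        return asset, "precaution"
    return asset, "neutral"

def analyze(tweet):
    """Per-asset emotion labels for one tweet, not one overall guess."""
    return dict(filter(None, (classify_clause(c) for c in split_clauses(tweet))))

print(analyze("$AAPL looks undervalued but $TSLA may crash soon"))
```

A single tweet with opposing views on two assets yields two distinct labels, which is exactly the distinction between targeted aspect-based analysis and whole-tweet sentiment.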
Your Vision-Language Model Itself Is a Strong Filter: Towards High-Quality Instruction Tuning with Data Selection
Chen, Ruibo, Wu, Yihan, Chen, Lichang, Liu, Guodong, He, Qi, Xiong, Tianyi, Liu, Chenxi, Guo, Junfeng, Huang, Heng
Data selection in instruction tuning emerges as a pivotal process for acquiring high-quality data and training instruction-following large language models (LLMs), but it is still a new and unexplored research area for vision-language models (VLMs). Existing data selection approaches on LLMs either rely on single unreliable scores, or use downstream tasks for selection, which is time-consuming and can lead to potential over-fitting on the chosen evaluation datasets. To address this challenge, we introduce a novel dataset selection method, Self-Filter, that utilizes the VLM itself as a filter. This approach is inspired by the observation that VLMs benefit from training with the most challenging instructions. Self-Filter operates in two stages. In the first stage, we devise a scoring network to evaluate the difficulty of training instructions, which is co-trained with the VLM. In the second stage, we use the trained scoring network to measure the difficulty of each instruction, select the most challenging samples, and penalize similar samples to encourage diversity. Comprehensive experiments on LLaVA and MiniGPT-4 show that Self-Filter can reach better results compared to full data settings with merely about 15% of the samples, and can achieve superior performance against competitive baselines.
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- North America > United States > Hawaii > Honolulu County > Honolulu (0.04)
- Africa > Rwanda > Kigali > Kigali (0.04)
- (8 more...)
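The second stage of Self-Filter, as described in the abstract, can be sketched as a greedy selection: rank samples by difficulty, keep the hardest, and penalize candidates too similar to already-chosen ones. The difficulty scores and similarity function below are toy stand-ins for the co-trained scoring network and a real embedding similarity; the greedy form and the penalty weight are assumptions for illustration.

```python
# Sketch of difficulty-based selection with a diversity penalty, in the
# spirit of Self-Filter's second stage. Scores and similarity are toy
# stand-ins for the co-trained scoring network and embedding similarity.

def select_samples(samples, scores, similarity, k, penalty=0.5):
    """Greedily pick k samples, each maximizing its difficulty score minus a
    penalty proportional to its max similarity to the already-chosen set."""
    chosen = []
    remaining = list(range(len(samples)))
    while remaining and len(chosen) < k:
        def adjusted(i):
            sim = max((similarity(samples[i], samples[j]) for j in chosen),
                      default=0.0)
            return scores[i] - penalty * sim
        best = max(remaining, key=adjusted)
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy data: similarity is 1.0 when the first token matches, else 0.0.
samples = ["cat a", "cat b", "dog a", "bird a"]
scores = [0.9, 0.8, 0.7, 0.2]
sim = lambda x, y: 1.0 if x.split()[0] == y.split()[0] else 0.0
print(select_samples(samples, scores, sim, k=2))
```

With the toy data, the hardest sample ("cat a") is picked first, and the penalty then makes the dissimilar "dog a" beat the near-duplicate "cat b" despite its higher raw score.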
Tech firms sign 'reasonable precautions' to stop AI-generated election chaos
Major technology companies signed a pact Friday to voluntarily adopt "reasonable precautions" to prevent artificial intelligence tools from being used to disrupt democratic elections around the world. Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok gathered at the Munich Security Conference to announce a new framework for how they respond to AI-generated deepfakes that deliberately trick voters. Twelve other companies – including Elon Musk's X – are also signing on to the accord. "Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own," said Nick Clegg, president of global affairs for Meta, the parent company of Facebook and Instagram, in an interview ahead of the summit. The accord is largely symbolic, but targets increasingly realistic AI-generated images, audio and video "that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote".
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.25)
- North America > United States > New Hampshire (0.05)
- North America > United States > California > San Francisco County > San Francisco (0.05)
- (5 more...)
- Information Technology (1.00)
- Government > Voting & Elections (1.00)
- Government > Regional Government > North America Government > United States Government (0.69)
Exclusive: California Bill Proposes Regulating AI at State Level
A senior California lawmaker will introduce a new artificial intelligence (AI) bill to the state's senate on Wednesday, adding to national and global efforts to regulate the fast-accelerating technology. Although there are several attempts in Congress to draft AI legislation, the state of California--home to Silicon Valley, where most of the world's top AI companies are based--has a role to play in setting guardrails on the industry, according to state Senator Scott Wiener (D--San Francisco), who drafted the bill. "In an ideal world we would have a strong federal AI regulatory scheme," Wiener said in an interview with TIME on Tuesday, adding that he supports attempts in Congress and the White House to regulate the technology. "But California has a history of acting when the federal government is moving either too slowly or not acting." He added: "We need to get ahead of these risks, not do what we've done in the past around social media or other technology, where we do nothing before it's potentially too late."
- North America > United States > California > San Francisco County > San Francisco (0.25)
- Europe (0.05)
Smart Roads: How AI in Transportation Keeps Drivers Safe
Has road technology reached peak sophistication? We have already seen perfectly smooth, durable asphalt suitable for any type of transport. How else can roads be improved? Vancouver, for example, has proposed adding recycled plastic particles to asphalt, which increases durability and allows the material to be partially reused during resurfacing. But that only improves the existing pavement.
Why Responsible AI Development Needs Cooperation on Safety
We've written a policy research paper identifying four strategies that can be used today to improve the likelihood of long-term industry cooperation on safety norms in AI: communicating risks and benefits, technical collaboration, increased transparency, and incentivizing standards. Our analysis shows that industry cooperation on safety will be instrumental in ensuring that AI systems are safe and beneficial, but competitive pressures could lead to a collective action problem, potentially causing AI companies to under-invest in safety. We hope these strategies will encourage greater cooperation on the safe development of AI and lead to better global outcomes of AI. It's important to ensure that it's in the economic interest of companies to build and release AI systems that are safe, secure, and socially beneficial. This is true even if we think AI companies and their employees have an independent desire to do this, since AI systems are more likely to be safe and beneficial if the economic interests of AI companies are not in tension with their desire to build their systems responsibly.
Ethics in Ai -- Current issues, existing precautions, and probable solutions
Introduction- Most Artificial Intelligence (AI) systems are developed as black boxes, especially Machine Learning and Deep Learning-based systems. Nowadays, these systems make decisions affecting our daily lives, and they should be explainable to end users rather than taken for granted. The implications of such systems for efficient public use are rarely explored (e.g., usage in agriculture, air combat, military training, education, finance, health care, human resources, customer service, autonomous vehicles, social media, and several other areas [1]-[9]). Beyond these, the future may also rely on AI-based systems that do our laundry, mow our lawns, and fight wars [9]. Thus, there is considerable room to improve the transparency of these systems, along with their fairness and accountability. Some works have already stated the necessity of guidelines and governance for AI-based systems, but more exposure is required in each area of application.
- North America > United States > New York (0.05)
- Europe > France (0.05)
- Information Technology > Security & Privacy (0.98)
- Government > Military (0.88)