security concern
Identifying and Addressing User-level Security Concerns in Smart Homes Using "Smaller" LLMs
Chowdhury, Hafijul Hoque, Anonto, Riad Ahmed, Jajodia, Sourov, Majumdar, Suryadipta, Hossain, Md. Shohrab
With the rapid growth of smart home IoT devices, users are increasingly exposed to various security risks, as evident from recent studies. While seeking answers to know more on those security concerns, users are mostly left with their own discretion while going through various sources, such as online blogs and technical manuals, which may render higher complexity to regular users trying to extract the necessary information. This requirement does not go along with the common mindsets of smart home users and hence threatens the security of smart homes furthermore. In this paper, we aim to identify and address the major user-level security concerns in smart homes. Specifically, we develop a novel dataset of Q&A from public forums, capturing practical security challenges faced by smart home users. We extract major security concerns in smart homes from our dataset by leveraging the Latent Dirichlet Allocation (LDA). We fine-tune relatively "smaller" transformer models, such as T5 and Flan-T5, on this dataset to build a QA system tailored for smart home security. Unlike larger models like GPT and Gemini, which are powerful but often resource hungry and require data sharing, smaller models are more feasible for deployment in resource-constrained or privacy-sensitive environments like smart homes. The dataset is manually curated and supplemented with synthetic data to explore its potential impact on model performance. This approach significantly improves the system's ability to deliver accurate and relevant answers, helping users address common security concerns with smart home IoT devices. Our experiments on real-world user concerns show that our work improves the performance of the base models.
- Asia > Bangladesh > Dhaka Division > Dhaka District > Dhaka (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- Asia > Middle East > Jordan (0.04)
Security Concerns for Large Language Models: A Survey
Li, Miles Q., Fung, Benjamin C. M.
Large Language Models (LLMs) have demonstrated remarkable capabilities in natural language processing (NLP), including text generation, translation, summarization, and code synthesis, as a consequence of which revolutionizing a wide range of AI applications [10, 56, 45]. Models such as OpenAI's ChatGPT series, Google's Gemini, and Anthropic's Claude have been widely deployed in commercial systems, including search engines, customer support, software development tools, and personal assistants [45, 55, 3]. However, as their capabilities grow, so do their attack surfaces and the potential for misuse [51, 77, 50]. While the scale and specific nature of these vulnerabilities are new, the fundamental challenge of ensuring that powerful AI systems operate safely and align with human intent is a longstanding concern in the AI community. Foundational work, such as the identification of concrete problems in AI safety long before the current LLM era, laid the groundwork for understanding issues like reward hacking and negative side effects that remain highly relevant today [1]. The susceptibility arises because the models are trained on vast, yet imperfectly curated, datasets containing potentially harmful content, and because they interact with users through open-ended prompts that can be manipulated [48, 17, 16]. Researchers and practitioners are increasingly concerned that these systems can be manipulated, misused, or even behave in misaligned and potentially deceptive ways [25, 42, 6]. Consequently, the security and alignment of LLMs have become critical areas of study, requiring an understanding of emergent threats and robust, multi-faceted defenses [17, 70, 43].
- North America > Canada > Quebec > Montreal (0.14)
- North America > Canada > Ontario (0.04)
- Research Report > New Finding (1.00)
- Overview (1.00)
- Research Report > Promising Solution (0.67)
- Information Technology > Security & Privacy (1.00)
- Government > Military (1.00)
- Media (0.93)
Quantifying Security Vulnerabilities: A Metric-Driven Security Analysis of Gaps in Current AI Standards
Madhavan, Keerthana, Yazdinejad, Abbas, Zarrinkalam, Fattane, Dehghantanha, Ali
As AI systems integrate into critical infrastructure, security gaps in AI compliance frameworks demand urgent attention. This paper audits and quantifies security risks in three major AI governance standards: NIST AI RMF 1.0, UK's AI and Data Protection Risk Toolkit, and the EU's ALTAI. Using a novel risk assessment methodology, we develop four key metrics: Risk Severity Index (RSI), Attack Potential Index (AVPI), Compliance-Security Gap Percentage (CSGP), and Root Cause Vulnerability Score (RCVS). Our analysis identifies 136 concerns across the frameworks, exposing significant gaps. NIST fails to address 69.23 percent of identified risks, ALTAI has the highest attack vector vulnerability (AVPI = 0.51) and the ICO Toolkit has the largest compliance-security gap, with 80.00 percent of high-risk concerns remaining unresolved. Root cause analysis highlights under-defined processes (ALTAI RCVS = 033) and weak implementation guidance (NIST and ICO RCVS = 0.25) as critical weaknesses. These findings emphasize the need for stronger, enforceable security controls in AI compliance. We offer targeted recommendations to enhance security posture and bridge the gap between compliance and real-world AI risks.
- North America > United States > District of Columbia > Washington (0.05)
- South America > Ecuador (0.04)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- (6 more...)
Which countries have banned DeepSeek and why?
This week, government agencies in countries including South Korea and Australia have blocked access to Chinese artificial intelligence (AI) startup DeepSeek's new AI chatbot programme, mostly for government employees. Other countries, including the United States, have said they may also seek to block DeepSeek from government employees' mobile devices, according to media reports. All cite "security concerns" about the Chinese technology and a lack of clarity about how users' personal information is handled by the operator. Last month, DeepSeek made headlines after it caused share prices in US tech companies to plummet, after it claimed that its model would cost only a fraction of the money its competitors had spent on their own AI programmes to build. The news caused social media users to joke: "I can't believe ChatGPT lost its job to AI." Here's what we know about DeepSeek and why countries are banning it.
- Oceania > Australia (0.36)
- Europe > Italy (0.05)
- North America > United States > California (0.05)
- (5 more...)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government (1.00)
Why DeepSeek Is Sparking Debates Over National Security, Just Like TikTok
The fast-rising Chinese AI lab DeepSeek is sparking national security concerns in the U.S., over fears that its AI models could be used by the Chinese government to spy on American civilians, learn proprietary secrets, and wage influence campaigns. In her first press briefing, White House Press Secretary Karoline Leavitt said that the National Security Council was "looking into" the potential security implications of DeepSeek. This comes amid news that the U.S. Navy has banned use of DeepSeek among its ranks due to "potential security and ethical concerns." DeepSeek, which currently tops the Apple App Store in the U.S., marks a major inflection point in the AI arms race between the U.S. and China. For the last couple years, many leading technologists and political leaders have argued that whichever country developed AI the fastest will have a huge economic and military advantage over its rivals. DeepSeek shows that China's AI has developed much faster than many had believed, despite efforts from American policymakers to slow its progress.
- North America > United States (1.00)
- Asia > China (1.00)
- Asia > Taiwan (0.05)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
Security Threats in Agentic AI System
Khan, Raihan, Sarkar, Sayak, Mahata, Sainik Kumar, Jose, Edwin
Artificial Intelligence (AI) agents have become increasingly prevalent in various applications, from virtual assistants to complex data analysis systems. However, their direct access to databases raises significant concerns regarding privacy and security. This paper examines these critical issues, focusing on the potential risks posed by unrestricted AI access to sensitive data. The rapid advancement of AI technologies has resulted in systems capable of processing vast amounts of data and generating human-like responses. While this progress has provided numerous benefits, it has also introduced new challenges in ensuring data privacy and security. AI agents with direct access to databases may inadvertently expose confidential information, or they may be exploited by malicious actors to access or manipulate sensitive data. Additionally, AI systems' ability to analyze large datasets increases the risk of unintended privacy violations, making them prime targets for attacks aimed at extracting or misusing data. This paper explores the current landscape of AI agent interactions with databases and analyzes the associated risks. It discusses the potential threats to privacy protection and data security as AI agents become more integrated into various applications.
- North America > United States > Michigan (0.05)
- North America > United States > California (0.04)
- Europe > Netherlands > Drenthe > Assen (0.04)
- Asia > India > West Bengal > Kolkata (0.04)
- Research Report (1.00)
- Overview (1.00)
The Morning After: OpenAI's week of security issues
Perhaps unsurprisingly, July 4th was a quiet day for news, but we've still got editorials on e-ink writing, the most-delayed video game ever and more bad news from the makers of ChatGPT. Earlier this week, engineer and Swift developer Pedro José Pereira Vieito dug into OpenAI's Mac ChatGPT app and found that it was storing user conversations locally in plain text, rather than encrypting them. Because that app is only available from OpenAI's website, and since it's not available on the App Store, it doesn't have to follow Apple's sandboxing requirements. OpenAI released an update that added encryption to locally stored chats. Then, more bad news stemmed from issues in 2023. Last spring, a hacker obtained information about OpenAI after illicitly accessing the company's internal messaging systems.
- Law (1.00)
- Information Technology > Security & Privacy (0.54)
- Leisure & Entertainment > Games > Computer Games (0.40)
- Government > Regional Government > North America Government > United States Government (0.34)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
OpenAI hit by two big security issues this week
OpenAI seems to make headlines every day and this time it's for a double dose of security concerns. The first issue centers on the Mac app for ChatGPT, while the second hints at broader concerns about how the company is handling its cybersecurity. Earlier this week, engineer and Swift developer Pedro José Pereira Vieito dug into the Mac ChatGPT app and found that it was storing user conversations locally in plain text rather than encrypting them. The app is only available from OpenAI's website, and since it's not available on the App Store, it doesn't have to follow Apple's sandboxing requirements. Vieito's work was then covered by The Verge, and after the exploit attracted attention, OpenAI released an update that added encryption to locally stored chats.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
Global Data Constraints: Ethical and Effectiveness Challenges in Large Language Model
Yang, Jin, Wang, Zhiqiang, Lin, Yanbin, Zhao, Zunduo
The efficacy and ethical integrity of large language models (LLMs) are profoundly influenced by the diversity and quality of their training datasets. However, the global landscape of data accessibility presents significant challenges, particularly in regions with stringent data privacy laws or limited open-source information. This paper examines the multifaceted challenges associated with acquiring high-quality training data for LLMs, focusing on data scarcity, bias, and low-quality content across various linguistic contexts. We highlight the technical and ethical implications of relying on publicly available but potentially biased or irrelevant data sources, which can lead to the generation of biased or hallucinatory content by LLMs. Through a series of evaluations using GPT-4 and GPT-4o, we demonstrate how these data constraints adversely affect model performance and ethical alignment. We propose and validate several mitigation strategies designed to enhance data quality and model robustness, including advanced data filtering techniques and ethical data collection practices. Our findings underscore the need for a proactive approach in developing LLMs that considers both the effectiveness and ethical implications of data constraints, aiming to foster the creation of more reliable and universally applicable AI systems.
- Asia > China (0.04)
- North America > United States > New York (0.04)
Microsoft briefly blocked employees from using ChatGPT over security concerns
Microsoft temporarily prohibited its employees from using ChatGPT "due to security and data concerns," according to CNBC. The company announced the rule in an internal website and even blocked corporate devices from being able to access the AI chatbot. While several tech companies had prohibited -- or had at least discouraged -- the internal use of ChatGPT in the past, Microsoft doing the same thing was certainly curious, seeing as it's OpenAI's biggest and most prominent investor. In January, Microsoft pledged to invest $10 billion in ChatGPT's developer over the next few years after pouring $3 billion into the company in the past. The AI-powered tools it rolled out for its products, such as Bing's chatbot, also use OpenAI's large language model.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.53)