Enabling AutoML for Zero-Touch Network Security: Use-Case Driven Analysis

arXiv.org Artificial Intelligence

Zero-Touch Networks (ZTNs) represent a paradigm shift towards fully automated and intelligent network management, providing the automation and intelligence required to manage the complexity, scale, and dynamic nature of next-generation (6G) networks. ZTNs leverage Artificial Intelligence (AI) and Machine Learning (ML) to enhance operational efficiency, support intelligent decision-making, and ensure effective resource allocation. However, the implementation of ZTNs is subject to security challenges that must be resolved for ZTNs to reach their full potential. In particular, two critical challenges arise: the need for human expertise in developing AI/ML-based security mechanisms, and the threat of adversarial attacks targeting AI/ML models. In this survey paper, we provide a comprehensive review of current security issues in ZTNs, emphasizing the need for advanced AI/ML-based security mechanisms that require minimal human intervention and protect AI/ML models themselves. Furthermore, we explore the potential of Automated ML (AutoML) technologies in developing robust security solutions for ZTNs. Through case studies, we illustrate practical approaches to securing ZTNs against both conventional and AI/ML-specific threats, including the development of autonomous intrusion detection systems and strategies to combat Adversarial ML (AML) attacks. The paper concludes with a discussion of future research directions for the development of ZTN security approaches.
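
As a concrete illustration of the AutoML direction the survey discusses, the sketch below builds an intrusion detection model via automated hyperparameter search. It is a minimal, hypothetical example: the `flows.csv` file, its `label` column, and the random forest search space are assumptions for illustration, not artifacts of the paper.

```python
# Minimal sketch of an AutoML-style intrusion-detection pipeline, assuming a
# flow-feature CSV with a binary "label" column (hypothetical schema); the
# paper's own AutoML procedure is not reproduced here, so a randomized
# hyperparameter search over a random forest stands in for it.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, train_test_split

df = pd.read_csv("flows.csv")                      # hypothetical flow records
X, y = df.drop(columns=["label"]), df["label"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y)

search = RandomizedSearchCV(
    RandomForestClassifier(),
    param_distributions={
        "n_estimators": [100, 200, 400],
        "max_depth": [None, 10, 20],
        "min_samples_leaf": [1, 2, 5],
    },
    n_iter=10, cv=3, scoring="f1",                 # optimize detection F1
)
search.fit(X_tr, y_tr)
print("best params:", search.best_params_)
print("held-out score:", search.score(X_te, y_te))
```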


Towards Zero Touch Networks: Cross-Layer Automated Security Solutions for 6G Wireless Networks

arXiv.org Artificial Intelligence

The transition from 5G to 6G mobile networks necessitates network automation to meet escalating demands for high data rates, ultra-low latency, and integrated technologies. Zero-Touch Networks (ZTNs), driven by Artificial Intelligence (AI) and Machine Learning (ML), are designed to automate the entire lifecycle of network operations with minimal human intervention, presenting a promising solution for enhancing automation in 5G/6G networks. However, because ZTNs rely heavily on automation, their implementation brings forth the need for autonomous and robust cybersecurity solutions. AI/ML algorithms are widely used to develop cybersecurity mechanisms, but they require substantial specialized expertise and encounter model drift issues, posing significant challenges to developing autonomous cybersecurity measures. Therefore, this paper proposes an automated security framework targeting Physical Layer Authentication (PLA) and Cross-Layer Intrusion Detection Systems (CLIDS) to address security concerns at multiple Internet protocol layers. The proposed framework employs drift-adaptive online learning techniques and a novel enhanced Successive Halving (SH)-based Automated ML (AutoML) method to automatically generate optimized ML models for dynamic networking environments. Experimental results illustrate that the proposed framework achieves high performance on the public Radio Frequency (RF) fingerprinting dataset and the Canadian Institute for Cybersecurity's CICIDS2017 dataset, showcasing its effectiveness in addressing PLA and CLIDS tasks within dynamic and complex networking environments. Furthermore, the paper explores open challenges and research directions in the 5G/6G cybersecurity domain. This framework represents a significant advancement towards fully autonomous and secure 6G networks, paving the way for future innovations in network automation and cybersecurity.
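
The paper's enhanced SH variant is not public, but the base idea of successive halving, allocating a small budget to many candidate configurations and repeatedly keeping only the best-performing fraction, can be sketched with scikit-learn's standard implementation; the classifier, search space, and synthetic data below are illustrative assumptions.

```python
# Sketch of standard successive halving for model tuning, as an analogue of
# the SH-based AutoML step: many configurations start with a small sample
# budget, and each round keeps the top 1/factor of them with more data.
from sklearn.experimental import enable_halving_search_cv  # noqa: F401
from sklearn.model_selection import HalvingRandomSearchCV
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=5000, n_features=30)  # stand-in for IDS data

sh = HalvingRandomSearchCV(
    GradientBoostingClassifier(),
    param_distributions={
        "learning_rate": [0.01, 0.05, 0.1, 0.2],
        "max_depth": [2, 3, 4],
        "n_estimators": [50, 100, 200],
    },
    factor=2,               # keep the top half of candidates each round
    resource="n_samples",   # grow the training-set budget between rounds
).fit(X, y)
print(sh.best_params_)
```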


Leveraging Hypernetworks and Learnable Kernels for Consumer Energy Forecasting Across Diverse Consumer Types

arXiv.org Artificial Intelligence

Consumer energy forecasting is essential for managing energy consumption and planning, directly influencing operational efficiency, cost reduction, personalized energy management, and sustainability efforts. In recent years, deep learning techniques, especially LSTMs and transformers, have been highly successful in energy consumption forecasting. Nevertheless, these techniques have difficulty capturing complex and sudden variations, and they are commonly evaluated only on a specific type of consumer (e.g., only offices, only schools). Consequently, this paper proposes HyperEnergy, a consumer energy forecasting strategy that leverages hypernetworks for improved modeling of complex patterns across a diversity of consumers. The hypernetwork is responsible for predicting the parameters of the primary prediction network, in our case an LSTM. A learnable adaptable kernel, composed of polynomial and radial basis function kernels, is incorporated to enhance performance. The proposed HyperEnergy was evaluated on diverse consumers, including student residences, detached homes, a home with electric vehicle charging, and a townhouse. Across all consumer types, HyperEnergy consistently outperformed 10 other techniques, including state-of-the-art models such as LSTM, AttentionLSTM, and transformer.
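
The core mechanism, a hypernetwork predicting the parameters of the primary forecaster, can be sketched in a few lines of PyTorch. To stay short, the generated parameters below are only those of a linear head on top of a shared LSTM rather than the full LSTM as in HyperEnergy, and all layer sizes are assumptions.

```python
# Minimal hypernetwork sketch: a small network maps a per-consumer context
# vector to the weights and bias of the forecasting head, so different
# consumers effectively get different forecaster parameters.
import torch
import torch.nn as nn

class HyperForecaster(nn.Module):
    def __init__(self, in_dim=1, hidden=32, ctx_dim=8):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        # Hypernetwork: consumer context -> head weights (hidden) + bias (1).
        self.hyper = nn.Linear(ctx_dim, hidden + 1)

    def forward(self, series, ctx):
        # series: (batch, time, in_dim); ctx: (batch, ctx_dim)
        h, _ = self.lstm(series)
        last = h[:, -1, :]                    # last hidden state per sample
        params = self.hyper(ctx)              # generated head parameters
        w, b = params[:, :-1], params[:, -1]
        return (last * w).sum(dim=1) + b      # per-sample generated head

model = HyperForecaster()
y_hat = model(torch.randn(4, 24, 1), torch.randn(4, 8))  # next-step forecast
```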


Evaluating Blocking Biases in Entity Matching

arXiv.org Artificial Intelligence

Entity Matching (EM) is crucial for identifying equivalent data entities across different sources, a task that becomes increasingly challenging with the growth and heterogeneity of data. Blocking techniques, which reduce the computational complexity of EM, play a vital role in making this process scalable. Despite advancements in blocking methods, the issue of fairness, where blocking may inadvertently favor certain demographic groups, has been largely overlooked. This study extends traditional blocking metrics to incorporate fairness, providing a framework for assessing bias in blocking techniques. Through experimental analysis, we evaluate the effectiveness and fairness of various blocking methods, offering insights into their potential biases. Our findings highlight the importance of considering fairness in EM, particularly in the blocking phase, to ensure equitable outcomes in data integration tasks.
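
One way to extend a blocking metric with fairness, sketched below under assumed record layouts and group labels rather than the paper's exact formulation, is to compute pair completeness (the recall of true matches surviving blocking) separately per demographic group, so a gap between groups surfaces blocking bias.

```python
# Group-conditioned pair completeness: what fraction of each group's true
# matching pairs does the blocking step retain in the candidate set?
def pair_completeness_by_group(true_matches, candidate_pairs, group_of):
    """true_matches/candidate_pairs: sets of (id_a, id_b); group_of: id -> group."""
    found, total = {}, {}
    for a, b in true_matches:
        for g in {group_of(a), group_of(b)}:
            total[g] = total.get(g, 0) + 1
            if (a, b) in candidate_pairs or (b, a) in candidate_pairs:
                found[g] = found.get(g, 0) + 1
    return {g: found.get(g, 0) / total[g] for g in total}

matches = {("a1", "b1"), ("a2", "b2")}
candidates = {("a1", "b1")}                  # blocking kept only one true pair
groups = {"a1": "F", "b1": "F", "a2": "M", "b2": "M"}
print(pair_completeness_by_group(matches, candidates, groups.get))
# {'F': 1.0, 'M': 0.0} -> blocking fully misses group M's matches here
```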


An Adaptive End-to-End IoT Security Framework Using Explainable AI and LLMs

arXiv.org Artificial Intelligence

The exponential growth of the Internet of Things (IoT) has significantly increased the complexity and volume of cybersecurity threats, necessitating the development of advanced, scalable, and interpretable security frameworks. This paper presents an innovative, comprehensive framework for real-time IoT attack detection and response that leverages Machine Learning (ML), Explainable AI (XAI), and Large Language Models (LLMs). By integrating XAI techniques such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) with a model-independent architecture, we ensure our framework's adaptability across various ML algorithms. Additionally, the incorporation of LLMs enhances the interpretability and accessibility of detection decisions, providing system administrators with actionable, human-understandable explanations of detected threats. Our end-to-end framework not only facilitates a seamless transition from model development to deployment but also represents a real-world application capability that is often lacking in existing research. In our experiments with the CIC-IoT-2023 dataset, the Gemini and OpenAI LLMs demonstrate unique strengths in attack mitigation: Gemini offers precise, focused strategies, while OpenAI provides extensive, in-depth security measures. Incorporating SHAP and LIME algorithms within XAI provides comprehensive insights into attack detection, emphasizing opportunities for model improvement through detailed feature analysis, fine-tuning, and the adaptation of misclassifications to enhance accuracy.
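
The XAI step of such a pipeline can be sketched with the `shap` library: per-feature attributions for a tree-based detector are the kind of structured explanation a framework like this could pass to an LLM for a human-readable summary. The data and feature set below are placeholders, and the LLM-prompting stage is omitted.

```python
# Hedged sketch of the SHAP attribution step only: explain which features
# drove a detector's decisions on a handful of alerts.
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_features=10)  # stand-in for IoT flows
model = RandomForestClassifier(n_estimators=100).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])   # attributions for 5 alerts
# Depending on the shap version, this is a list of per-class arrays or one
# array with a trailing class axis; either way, each entry gives per-feature
# contributions to the model's output for one alert.
print(shap_values)
```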


World leaders congratulate Starmer after stunning election win

Al Jazeera

Keir Starmer will be Britain's new prime minister, as his centre-left opposition Labour Party swept to a landslide victory, ending 14 years of Conservative rule. At a triumphant party rally in central London on Friday, Starmer, 61, told cheering activists that "change begins here" and promised a "decade of national renewal", putting "country first, party second". "We will continue the work begun with the UK for our bilateral cooperation, for peace and security in Europe, for the climate and for AI," Macron posted on X. "Keir Starmer has brought the Labour Party a comprehensive victory … The relationship between Ireland and the UK is deeply consequential for all people across these islands," Harris said in a statement. "I look forward to early engagement with the incoming Prime Minister." "Ukraine and the United Kingdom have been and will continue to be reliable allies through thick and thin."


Threshold-Independent Fair Matching through Score Calibration

arXiv.org Artificial Intelligence

Entity Matching (EM) is a critical task in numerous fields, such as healthcare, finance, and public administration, as it identifies records that refer to the same entity within or across different databases. EM faces considerable challenges, particularly with false positives and negatives. These are typically addressed by generating matching scores and applying thresholds to balance false positives and negatives in various contexts. However, adjusting these thresholds can affect the fairness of the outcomes, a critical factor that remains largely overlooked in current fair EM research. The existing body of research on fair EM tends to concentrate on static thresholds, neglecting their critical impact on fairness. To address this, we introduce a new approach in EM using recent metrics for evaluating biases in score-based binary classification, particularly through the lens of distributional parity. This approach enables the application of various bias metrics, such as equalized odds, equal opportunity, and demographic parity, without depending on threshold settings. Our experiments with leading matching methods reveal potential biases, and by applying a calibration technique for EM scores using Wasserstein barycenters, we not only mitigate these biases but also preserve accuracy across real-world datasets. This paper contributes to the field of fairness in data cleaning, particularly within EM, a central data cleaning task, by proposing a method for generating matching scores that reduces biases across different thresholds.
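
For one-dimensional scores, a Wasserstein barycenter has a simple form, its quantile function is the (weighted) mean of the groups' quantile functions, so the calibration mechanism can be sketched as quantile mapping: each score passes through its group's CDF and out through the barycenter's quantile function. The data below is synthetic, and this illustrates the mechanism under equal group weights, not the paper's exact algorithm.

```python
# 1-D Wasserstein-barycenter score calibration: map every group's score
# distribution onto their common barycenter so downstream thresholds act
# uniformly across groups.
import numpy as np

def calibrate_to_barycenter(scores_by_group, grid=np.linspace(0, 1, 101)):
    # Quantile function of each group's score distribution on a shared grid.
    quantiles = {g: np.quantile(s, grid) for g, s in scores_by_group.items()}
    barycenter_q = np.mean(list(quantiles.values()), axis=0)
    calibrated = {}
    for g, s in scores_by_group.items():
        # Empirical CDF value of each score within its own group...
        ranks = np.searchsorted(np.sort(s), s, side="right") / len(s)
        # ...then read off the barycenter quantile at that CDF level.
        calibrated[g] = np.interp(ranks, grid, barycenter_q)
    return calibrated

rng = np.random.default_rng(0)
scores = {"A": rng.beta(2, 5, 500), "B": rng.beta(5, 2, 500)}  # skewed per group
cal = calibrate_to_barycenter(scores)
print({g: round(float(v.mean()), 3) for g, v in cal.items()})  # means now close
```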


Protesters Are Fighting to Stop AI, but They're Split on How to Do It

WIRED

On a side street outside the headquarters of the Department of Science, Innovation and Technology in the center of London on Monday, 20 or so protesters are getting their chants in order. "When do we want it?" These protesters are part of PauseAI, a group of activists petitioning for companies to pause development of large AI models which they fear could pose a risk to the future of humanity. Other PauseAI protests are taking place across the globe: in San Francisco, New York, Berlin, Rome, Ottawa, and a handful of other cities. Their aim is to grab the attention of voters and politicians ahead of the AI Seoul Summit, a follow-up to the AI Safety Summit held in the UK in November 2023. But the loosely organized group of protesters is still figuring out exactly the best way to communicate its message. "The Summit didn't actually lead to meaningful regulations," says Joep Meindertsma, the founder of PauseAI. The attendees at the conference agreed to the "Bletchley Declaration," but that agreement doesn't mean much, Meindertsma says. "It's only a small first step, and what we need are binding international treaties." The group's main demand is a pause on the training of AI systems more powerful than GPT-4. It is calling for all countries to implement this measure, but specifically calls out the United States as the home of most leading AI labs. The group also wants all UN member states to sign a treaty that sets up an international AI safety agency responsible for approving new deployments of AI systems and training runs of large models. The protests are taking place on the same day that OpenAI announced a new version of ChatGPT designed to make the chatbot act more like a human. "We have banned technology internationally before," says Meindertsma, pointing to the Montreal Protocol, a global agreement finalized in 1987 that saw the phaseout of CFCs and other chemicals known to deplete the ozone layer. "We've got treaties that ban blinding laser weapons."


Ireland looking to send asylum seekers back to UK: Report

Al Jazeera

The Republic of Ireland is looking to amend the law to allow the return of asylum seekers to the United Kingdom, according to broadcaster RTE, after an influx over the border with Northern Ireland, which is part of the UK. Dublin's Minister of Justice Helen McEntee, who will visit London on Monday, told a parliamentary committee this week that she estimates 80 percent of those applying for asylum in the republic came over the land border with Northern Ireland. UK Prime Minister Rishi Sunak told Sky News it was evidence that London's plan to send asylum seekers to Rwanda is acting as a deterrent. "What it shows, I think, is that the deterrent is … already having an impact because people are worried about coming here," he said. In response, a spokesperson for Ireland's Prime Minister Simon Harris said the leader "does not comment on the migration policies of any other country but he is very clear about the importance of protecting the integrity of the migration system in Ireland", RTE reported.


Meta's Nick Clegg plays down AI's threat to global democracy

The Guardian

Generative AI is overblown as an election risk, according to Meta's Nick Clegg, who claims the technology is more useful for defending democracy than attacking it. Speaking at the Meta AI Day event in London on Tuesday, the social network's global affairs chief said that the evidence from major elections already run this year around the world is that technology such as large language models, image and video generators, and speech synthesis tools is not being used in practice to subvert democracy. "It is right that we should be alert and we should be vigilant," Clegg said. "But of the major elections which have taken place already this year, in Taiwan, Pakistan, Bangladesh and Indonesia, it is striking how little these tools have been used on a systematic basis to really try to subvert and disrupt the elections." "I would urge everyone to think of AI as a sword, not just a shield, when it comes to bad content."