An Approach To Enhance IoT Security In 6G Networks Through Explainable AI
Wireless communication has evolved significantly, with 6G offering groundbreaking capabilities, particularly for the Internet of Things (IoT). However, integrating IoT into 6G introduces new security challenges, expanding the attack surface through vulnerabilities in advanced technologies such as open RAN, terahertz (THz) communication, intelligent reflecting surfaces (IRS), massive MIMO, and AI. Emerging threats such as AI exploitation and virtualization risks, along with evolving attacks including data manipulation and signal interference, further complicate security efforts. Although 6G standards are set to be finalized by 2030 and work continues to align security measures with technological advances, substantial gaps remain in frameworks designed to secure integrated IoT and 6G systems. Our research addresses these challenges by using tree-based machine learning algorithms to manage complex datasets and evaluate feature importance. We apply data balancing techniques to ensure fair representation of attack classes and use SHAP and LIME to improve model transparency. By aligning feature importance with these explainable AI (XAI) methods and cross-validating for consistency, we improve model accuracy and strengthen IoT security within the 6G ecosystem.
- North America > United States > Missouri > St. Louis County > St. Louis (0.04)
- North America > Canada > New Brunswick > Fredericton (0.04)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
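To make the pipeline this abstract describes concrete, here is a minimal Python sketch assuming scikit-learn, imbalanced-learn, and shap. The synthetic data, feature count, and hyperparameters are placeholders, not the authors' actual setup.

```python
import numpy as np
import shap
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic stand-in for IoT traffic with a rare attack class (5% positives).
X, y = make_classification(n_samples=5000, n_features=12,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Balance attack representation inside the pipeline so resampling
# never leaks into validation folds.
pipe = Pipeline([
    ("balance", SMOTE(random_state=0)),
    ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
])
print("5-fold CV accuracy:", cross_val_score(pipe, X_tr, y_tr, cv=5).mean())
pipe.fit(X_tr, y_tr)

# SHAP attributes each prediction to features; mean |SHAP| gives a global
# importance ranking that can be cross-checked against the forest's own.
sv = shap.TreeExplainer(pipe.named_steps["forest"]).shap_values(X_te)
sv = sv[1] if isinstance(sv, list) else sv[..., 1]  # attack class; shap's output shape varies by version
print("Top features by SHAP:", np.argsort(np.abs(sv).mean(axis=0))[::-1][:5])
```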
FIAT: Fusing learning paradigms with Instruction-Accelerated Tuning
Wang, Xinyi, Wieting, John, Clark, Jonathan H.
Learning paradigms for large language models (LLMs) currently tend to fall within either in-context learning (ICL) or full fine-tuning, each with its own trade-offs in available data, model size, compute cost, ease of use, and final quality, with neither solution performing well across the board. In this article, we first describe the ICL and fine-tuning paradigms in a way that highlights their natural connections. Some of the most exciting LLM capabilities, such as producing logical reasoning to solve a problem, are found to emerge only when the model size is over a certain threshold, often hundreds of billions of parameters (Wei et al., 2022b;a). The impressive ability of these models to produce high-quality responses without any task-specific tuning, together with the very high cost of further tuning such models, has led much recent work to focus on the ICL paradigm: placing a few task-specific examples and instructions into the model's input (Brown et al., 2020; Chowdhery et al., 2022; Google et al., 2023; OpenAI, 2023). Although prior work has shown that fine-tuning a model on task data can often lead to superior performance on the downstream task compared to ICL (Scao & Rush, 2021; Schick & Schütze, 2020a;b; Asai et al., 2023), there are significantly fewer recent efforts on fine-tuning models for tasks with limited data, perhaps because the time and compute costs associated with tuning a very large model drive practitioners toward smaller models, abandoning the ability to take advantage of emergent model capabilities.
- North America > Canada > Ontario > Toronto (0.04)
- Europe > Middle East > Cyprus > Nicosia > Nicosia (0.04)
- Europe > France > Provence-Alpes-Côte d'Azur > Bouches-du-Rhône > Marseille (0.04)
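As a rough illustration of the two paradigms the FIAT abstract contrasts, the sketch below builds a few-shot ICL prompt (no weight updates) and then runs a few fine-tuning gradient steps, assuming PyTorch. The task, exemplars, and the tiny linear stand-in "model" are invented placeholders.

```python
import torch
import torch.nn as nn

# --- In-context learning: pack instructions and exemplars into the input;
# the model's weights are never touched.
def build_icl_prompt(instruction, exemplars, query):
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in exemplars)
    return f"{instruction}\n\n{shots}\n\nInput: {query}\nOutput:"

print(build_icl_prompt(
    "Classify the sentiment as positive or negative.",
    [("great movie", "positive"), ("dull plot", "negative")],
    "surprisingly fun",
))

# --- Fine-tuning: update weights on task data instead. A Linear layer
# stands in for an LLM so the contrast stays runnable.
model = nn.Linear(8, 2)
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(4, 8), torch.tensor([0, 1, 1, 0])  # stand-in task batch
for _ in range(3):  # a few gradient steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```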
AI-Assisted Ethics? Considerations of AI Simulation for the Ethical Assessment and Design of Assistive Technologies
Schicktanz, Silke, Welsch, Johannes, Schweda, Mark, Hein, Andreas, Rieger, Jochem W., Kirste, Thomas
Current ethical debates on the use of artificial intelligence (AI) in health care treat AI as a product of technology in three ways: first, by assessing the risks and potential benefits of currently developed AI-enabled products with ethical checklists; second, by proposing ex ante lists of ethical values seen as relevant for the design and development of assistive technology; and third, by promoting AI technology that uses moral reasoning as part of the automation process. We propose a fourth approach to AI, namely as a methodological tool to assist ethical reflection. We provide a concept of an AI simulation informed by three separate elements: 1) stochastic human behavior models based on behavioral data for simulating realistic settings, 2) qualitative empirical data on value statements regarding internal policy, and 3) visualization components that aid in understanding the impact of changes in these variables. The potential of this approach is to inform an interdisciplinary field about anticipated ethical challenges or ethical trade-offs in concrete settings and, hence, to spark a re-evaluation of design and implementation plans. This may be particularly useful for applications that deal with highly complex values and behavior, or with limits on the communication abilities of affected persons (e.g., the care of persons with dementia or cognitive impairment). Simulation does not replace ethical reflection, but it allows for detailed, context-sensitive analysis during the design process and prior to implementation. Finally, we discuss the inherently quantitative methods of analysis afforded by stochastic simulations, their potential to inform ethical discussion, and how AI simulations can improve traditional forms of thought experiment and future-oriented technology assessment.
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- North America > United States > Florida > Palm Beach County > Boca Raton (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- (5 more...)
- Research Report (1.00)
- Overview (1.00)
- Health & Medicine > Health Care Providers & Services (1.00)
- Health & Medicine > Therapeutic Area > Neurology > Dementia (0.70)
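A minimal sketch of element 1) above, a stochastic human behavior model: a toy Markov chain over rooms whose simulated traces yield quantities (here, a disorientation rate) that design variants could then be assessed against. The states and transition probabilities are invented for illustration, not drawn from the paper's behavioral data.

```python
import random

# Toy Markov model of movement between rooms, with a "disoriented" state.
TRANSITIONS = {
    "bedroom":     {"hallway": 0.7, "bedroom": 0.3},
    "hallway":     {"kitchen": 0.4, "bathroom": 0.3, "disoriented": 0.3},
    "kitchen":     {"hallway": 0.6, "kitchen": 0.4},
    "bathroom":    {"hallway": 1.0},
    "disoriented": {"hallway": 0.5, "disoriented": 0.5},
}

def simulate(steps=100, seed=None):
    """Sample one trajectory of room visits from the Markov chain."""
    rng = random.Random(seed)
    state, trace = "bedroom", []
    for _ in range(steps):
        state = rng.choices(list(TRANSITIONS[state]),
                            weights=TRANSITIONS[state].values())[0]
        trace.append(state)
    return trace

# Estimate how often the simulated person is disoriented -- the kind of
# quantity an assistance design could be evaluated against.
runs = [simulate(seed=s) for s in range(200)]
rate = sum(t.count("disoriented") for t in runs) / (200 * 100)
print(f"disorientation rate: {rate:.2%}")
```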
Predicting the performance of hybrid ventilation in buildings using a multivariate attention-based biLSTM Encoder-Decoder neural network
Chaudhary, Gaurav, Johra, Hicham, Georges, Laurent, Austbø, Bjørn
Hybrid ventilation is an energy-efficient solution for providing fresh air in most climates, provided that it has a reliable control system. To operate such systems optimally, a high-fidelity control-oriented model is required. It should enable near-real-time forecasts of the indoor air temperature based on operational conditions such as window opening and HVAC operating schedules. However, physics-based control-oriented models (i.e., white-box models) are labour-intensive and computationally expensive. Alternatively, black-box models based on artificial neural networks can be trained to be good estimators of building dynamics. This paper investigates the capability of a deep neural network (DNN), a multivariate multi-head attention-based long short-term memory (LSTM) encoder-decoder neural network, to predict indoor air temperature when windows are opened or closed. Training and test data are generated from a detailed multi-zone office building model (EnergyPlus). Pseudo-random signals are used for the indoor air temperature setpoints and window opening instances. The results indicate that the DNN accurately predicts the indoor air temperature of five zones whenever windows are opened or closed, with the prediction error plateauing after the 24th step-ahead prediction (6 hours ahead).
- Construction & Engineering > HVAC (0.90)
- Energy > Oil & Gas > Upstream (0.35)
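A minimal PyTorch sketch of the architecture class this abstract names: a bidirectional LSTM encoder, multi-head self-attention over the encoded history, and an LSTM decoder forecasting five zone temperatures 24 steps ahead. Layer sizes, the lookback window, and the input features are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class BiLSTMEncoderDecoder(nn.Module):
    def __init__(self, n_features, n_zones=5, hidden=64, horizon=24):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True,
                               bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads=4,
                                          batch_first=True)
        self.decoder = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_zones)  # one temperature per zone

    def forward(self, x):                  # x: (batch, lookback, n_features)
        enc, _ = self.encoder(x)           # (batch, lookback, 2*hidden)
        ctx, _ = self.attn(enc, enc, enc)  # self-attention over the history
        # Repeat the final context vector across the forecast horizon.
        dec_in = ctx[:, -1:, :].repeat(1, self.horizon, 1)
        dec, _ = self.decoder(dec_in)
        return self.head(dec)              # (batch, horizon, n_zones)

# Toy usage: 8 inputs (setpoints, window states, weather), 48-step lookback.
model = BiLSTMEncoderDecoder(n_features=8)
x = torch.randn(16, 48, 8)
print(model(x).shape)  # torch.Size([16, 24, 5])
```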
Towards Measuring Ethicality of an Intelligent Assistive System
Shaukat, M. Salman, Põder, J. -C., Bader, Sebastian, Kirste, Thomas
Artificial intelligence (AI) based assistive systems, so-called intelligent assistive technologies (IATs), are becoming increasingly ubiquitous. IATs help people improve their quality of life by providing intelligent assistance based on the available data. Examples of such IATs include self-driving cars, robot assistants and smart-health management solutions. However, the presence of such autonomous entities poses ethical challenges for the stakeholders involved in using these systems. Research on how such IATs adhere to given ethical regulations remains scarce, owing to the ethical, logistic and cost issues associated with such an analysis. In light of these issues, we present a method to measure the ethicality of an assistive system. To perform this task, we utilise our simulation tool, which models the navigation and assistance of persons with dementia (PwD) in indoor environments. Using this tool, we analyse how well different assistive strategies adhere to given ethical regulations such as autonomy, justice and beneficence for the stakeholders.
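One hedged way to picture "measuring ethicality" is to map simulated episode logs to scores for the values this abstract names. The strategies, episode logs, and scoring rules below are invented placeholders, not the authors' method.

```python
from statistics import mean

# Each simulated episode records what the assistant did.
episodes = [
    {"strategy": "prompt-first", "overrides": 1, "goal_reached": True},
    {"strategy": "prompt-first", "overrides": 0, "goal_reached": True},
    {"strategy": "take-over",    "overrides": 4, "goal_reached": True},
    {"strategy": "take-over",    "overrides": 5, "goal_reached": False},
]

def ethicality(eps):
    """Toy mapping from episode logs to two of the named values."""
    autonomy = mean(1 / (1 + e["overrides"]) for e in eps)  # fewer overrides, more autonomy
    beneficence = mean(e["goal_reached"] for e in eps)      # did assistance actually help?
    return {"autonomy": round(autonomy, 2), "beneficence": round(beneficence, 2)}

for strat in ("prompt-first", "take-over"):
    eps = [e for e in episodes if e["strategy"] == strat]
    print(strat, ethicality(eps))
```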
Attesting Biases and Discrimination using Language Semantics
Aran, Xavier Ferrer, Such, Jose M., Criado, Natalia
AI agents are increasingly deployed and used to make automated decisions that affect our lives on a daily basis. It is imperative to ensure that these systems embed ethical principles and respect human values. We focus on how we can attest to whether AI agents treat users fairly without discriminating against particular individuals or groups through biases in language. In particular, we discuss human unconscious biases, how they are embedded in language, and how AI systems inherit those biases by learning from and processing human language. Then, we outline a roadmap for future research to better understand and attest problematic AI biases derived from language.
- North America > United States (0.14)
- Europe > United Kingdom > Wales (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- (4 more...)
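As one concrete way of attesting whether automated decisions discriminate against a group, the sketch below runs a simple demographic-parity check on synthetic decisions. The groups, rates, and skew are placeholders, and this is only one of many possible fairness tests, not the roadmap's prescribed method.

```python
import numpy as np

rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=1000)
# Simulated model decisions with a built-in skew against group B.
approved = rng.random(1000) < np.where(group == "A", 0.60, 0.45)

rates = {g: approved[group == g].mean() for g in ("A", "B")}
print("approval rates:", rates)
print("demographic parity gap:", abs(rates["A"] - rates["B"]))
```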
The 1999 Asia-Pacific Conference on Intelligent-Agent Technology
IAT'99 was the first meeting in this new series and was held in Hong Kong from 14 to 17 December. It was sponsored by Hong Kong Baptist University, the Croucher Foundation, the Epson Foundation, The MIT Press, the Association for Computing Machinery (ACM) Hong Kong, and the Institute of Electrical and Electronics Engineers Hong Kong Section Computer Chapter and in cooperation with ACM Special Interest Groups in Artificial Intelligence (SIGART), Knowledge Discovery in Data (SIGKDD), and Computer-Human Interaction (SIGCHI). Jiming Liu (Hong Kong Baptist University) and Ning Zhong (Yamaguchi University, Japan) were the program chairs, and Setsuo Ohsuga (Waseda University) and Ernest Lam (Hong Kong Baptist University) were the general chairs. IAT'99 successfully brought together over 150 researchers and practitioners to share their original research results and practical development experiences in intelligent-agent technology. The participants were from Australia, Austria, Belgium, ...
Princeton researchers discover why AIs become racist and sexist
Many AIs are trained to understand human language by learning from a massive corpus known as the Common Crawl. The Common Crawl is the result of a large-scale crawl of the Internet in 2014 that contains 840 billion tokens, or words. Princeton Center for Information Technology Policy researcher Aylin Caliskan and her colleagues wondered whether that corpus, created by millions of people typing away online, might contain biases that could be discovered algorithmically. To figure it out, they turned to an unusual source: the Implicit Association Test (IAT), which is used to measure often-unconscious social attitudes. People taking the IAT are asked to sort words into two categories.
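The algorithmic analogue of the IAT that this line of work introduced (the Word Embedding Association Test of Caliskan et al.) compares cosine similarities between target words and two attribute word sets. Below is a minimal sketch of a simplified association score, using tiny hand-made vectors as stand-ins for embeddings trained on a corpus like Common Crawl.

```python
import numpy as np

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def association(w, A, B):
    """Mean cosine similarity to attribute set A minus mean to set B."""
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

# Stand-in 3-d "embeddings"; in practice these come from word2vec/GloVe.
vec = {
    "flower":   np.array([0.9, 0.1, 0.0]),
    "insect":   np.array([0.1, 0.9, 0.0]),
    "pleasant": np.array([0.8, 0.2, 0.1]),
    "awful":    np.array([0.2, 0.8, 0.1]),
}
A, B = [vec["pleasant"]], [vec["awful"]]
for target in ("flower", "insect"):
    print(target, round(association(vec[target], A, B), 3))
# A positive score means the target leans toward "pleasant" -- the same
# asymmetry the IAT measures in human response times.
```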