Breakthrough proof clears path for quantum AI

#artificialintelligence

Los Alamos National Laboratory, a multidisciplinary research institution engaged in strategic science on behalf of national security, is managed by Triad, a public service-oriented national security science organization equally owned by its three founding members: Battelle Memorial Institute (Battelle), the Texas A&M University System (TAMUS), and the Regents of the University of California (UC), for the Department of Energy's National Nuclear Security Administration. Los Alamos enhances national security by ensuring the safety and reliability of the U.S. nuclear stockpile, developing technologies to reduce threats from weapons of mass destruction, and solving problems related to energy, the environment, infrastructure, health, and global security.


Quantum Machine Learning Hits a Limit, LANL Research Shows

#artificialintelligence



SHARKS: Smart Hacking Approaches for RisK Scanning in Internet-of-Things and Cyber-Physical Systems based on Machine Learning

arXiv.org Artificial Intelligence

Cyber-physical systems (CPS) and Internet-of-Things (IoT) devices are increasingly deployed across a wide range of functions, from healthcare devices and wearables to critical infrastructure such as nuclear power plants, autonomous vehicles, smart cities, and smart homes. These devices are inherently insecure across their entire software, hardware, and network stacks, presenting a large attack surface that hackers can exploit. In this article, we present a technique for detecting unknown system vulnerabilities, managing those vulnerabilities, and improving incident response when they are exploited. The novelty of the approach lies in extracting intelligence from known real-world CPS/IoT attacks, representing them as regular expressions, and employing machine learning (ML) on this ensemble of regular expressions to generate new attack vectors and security vulnerabilities. Our results show that the method generates 10 new attack vectors and 122 new vulnerability exploits with the potential to compromise a CPS or IoT ecosystem. The ML methodology achieves 97.4% accuracy and predicts these attacks efficiently with an 87.2% reduction in the search space. We demonstrate the method on the hacking of the in-vehicle network of a connected car. To defend against known attacks and possible novel exploits, we discuss a defense-in-depth mechanism for various classes of attacks and a classification of the data they target. This defense mechanism optimizes the cost of security measures based on the sensitivity of the protected resource, incentivizing its adoption by cybersecurity practitioners in real-world CPS/IoT deployments.
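The regular-expression representation stage described in the abstract can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the step alphabet, the example patterns, and the function names are hypothetical, and the paper's ML generation stage is reduced to screening candidate sequences against the regex ensemble.

```python
import re

# Hypothetical attack-step alphabet: each letter stands for one primitive
# action (S = scan, C = credential theft, I = injection, P = pivot,
# E = exfiltration). Known real-world CPS/IoT attacks are encoded as
# regular expressions over these step tokens.
known_attacks = [
    re.compile(r"S+C?IE"),   # scan(s), optional credential theft, inject, exfiltrate
    re.compile(r"SCP+E"),    # scan, steal credentials, pivot repeatedly, exfiltrate
]

def matches_known_pattern(steps: str) -> bool:
    """Return True if a candidate step sequence fits any known attack regex."""
    return any(p.fullmatch(steps) for p in known_attacks)

# Candidate sequences (e.g. proposed by an ML generator) are screened
# against the ensemble; matches are flagged as plausible attack vectors.
candidates = ["SSIE", "SCPPE", "EIC"]
flagged = [c for c in candidates if matches_known_pattern(c)]
print(flagged)  # ['SSIE', 'SCPPE']
```

In the actual SHARKS pipeline the ML model learns from the regex ensemble to propose novel vectors; this sketch only shows how a regex alphabet can compactly encode families of multi-step attacks.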


Secretary Perry Addresses the National Security Commission on Artificial Intelligence

#artificialintelligence

Thank you for that introduction, Yll (Bajraktari, NSCAI Executive Director). And let me also thank Representative Stefanik… for her leadership and passion for AI's nexus with national security. It is truly an honor to address all of you at the National Security Commission on Artificial Intelligence Conference… on the future of AI and national security. Today's theme is strength through innovation… and that is exactly as it should be. For innovation is the lifeblood of our country… and a vital source of our security.


Mining News Events from Comparable News Corpora: A Multi-Attribute Proximity Network Modeling Approach

arXiv.org Machine Learning

We present ProxiModel, a novel event-mining framework for extracting high-quality structured event knowledge from large, redundant, and noisy news data sources. The model differentiates itself from other approaches by capturing event correlations both within each individual document and across the corpus. To this end, we introduce the proximity network, a novel space-efficient data structure that enables scalable event mining. The proximity network captures corpus-level co-occurrence statistics for candidate event descriptors and event attributes, as well as the connections between them. We model the proximity network probabilistically as a generative process with sparsity-inducing regularization, which allows us to efficiently and effectively extract high-quality, interpretable news events. Experiments on three different news corpora demonstrate that the proposed method is effective and robust at generating high-quality event descriptors and attributes. We also outline several applications of the framework, such as news summarization, event tracking, and multi-dimensional analysis of news. Finally, a case study on visualizing events from a Japan tsunami news corpus demonstrates ProxiModel's ability to automatically summarize emerging news events.
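The co-occurrence statistics underlying a proximity network can be sketched with a simple windowed counter. This is a minimal illustration only: the toy documents, window size, and function name are assumptions, and the paper's probabilistic generative modeling and sparsity regularization are omitted entirely.

```python
from collections import Counter

# Toy corpus: each document is a list of candidate event descriptors/attributes.
docs = [
    ["tsunami", "japan", "earthquake", "evacuation"],
    ["earthquake", "japan", "magnitude", "tsunami"],
    ["election", "vote", "japan"],
]

def build_proximity_network(docs, window=3):
    """Count how often two descriptors co-occur within `window` positions."""
    edges = Counter()
    for doc in docs:
        for i, a in enumerate(doc):
            for b in doc[i + 1 : i + 1 + window]:
                edges[tuple(sorted((a, b)))] += 1
    return edges

net = build_proximity_network(docs)
# Strongly connected descriptor pairs suggest the same underlying event.
print(net[("japan", "tsunami")])       # 2
print(net[("earthquake", "tsunami")])  # 2
```

Pairs with high edge weight ("japan"–"tsunami", "earthquake"–"tsunami") cluster into one event candidate, while weakly connected descriptors ("election") fall into a separate one, which is the intuition ProxiModel's generative model formalizes.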


Artificial intelligence can help stop nuclear proliferation

#artificialintelligence

The international nuclear arms control regime is approaching a critical juncture. If new nuclear weapons treaties are to be negotiated, ratified and enforced, they will need to be underpinned by strong technical monitoring capabilities. The Department of Energy's National Nuclear Security Administration is leveraging its expertise and technology to meet this challenge, understanding that in nuclear nonproliferation, you can't verify what you can't see. The United States is placing renewed urgency on developing the science and technology required to monitor our adversaries' nuclear activity -- specifically by harnessing the power of artificial intelligence and the unmatched, high-performance computing capabilities found at DOE's national laboratories. DOE houses four of the world's top 10 fastest supercomputers, including the top two, and we are already at work on developing three next-generation, exascale machines, able to conduct a billion billion calculations per second.


How Artificial Intelligence Could Increase the Risk of Nuclear War

#artificialintelligence

Lt. Col. Stanislav Petrov settled into the commander's chair in a secret bunker outside Moscow. His job that night was simple: Monitor the computers that were sifting through satellite data, watching the United States for any sign of a missile launch. It was just after midnight, Sept. 26, 1983. A single word flashed on the screen in front of him. The fear that computers, by mistake or malice, might lead humanity to the brink of nuclear annihilation has haunted imaginations since the earliest days of the Cold War.


How AI Could Increase The Risk Of Nuclear War

#artificialintelligence

Could artificial intelligence upend concepts of nuclear deterrence that have helped spare the world from nuclear war since 1945? Stunning advances in AI--coupled with a proliferation of drones, satellites, and other sensors--raise the possibility that countries could find and threaten each other's nuclear forces, escalating tensions. Lt. Col. Stanislav Petrov settled into the commander's chair in a secret bunker outside Moscow. His job that night was simple: Monitor the computers that were sifting through satellite data, watching the United States for any sign of a missile launch. It was just after midnight, Sept. 26, 1983.

