Command and Control


Discovering Command and Control (C2) Channels on Tor and Public Networks Using Reinforcement Learning

Wang, Cheng, Redino, Christopher, Rahman, Abdul, Clark, Ryan, Radke, Daniel, Cody, Tyler, Nandakumar, Dhruv, Bowen, Edward

arXiv.org Artificial Intelligence

Command and control (C2) channels are an essential component of many types of cyber attacks, as they enable attackers to remotely control their malware-infected machines and execute harmful actions, such as propagating malicious code across networks, exfiltrating confidential data, or initiating distributed denial of service (DDoS) attacks. Identifying these C2 channels is therefore crucial in helping to mitigate and prevent cyber attacks. However, identifying C2 channels typically involves a manual process, requiring deep knowledge and expertise in cyber operations. In this paper, we propose a reinforcement learning (RL) based approach to automatically emulate C2 attack campaigns using both the normal (public) and the Tor networks. In addition, payload size and network firewalls are configured to simulate real-world attack scenarios. Results on a typical network configuration show that the RL agent can automatically discover resilient C2 attack paths utilizing both Tor-based and conventional communication channels, while also bypassing network firewalls.
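The RL formulation can be illustrated with a toy sketch: tabular Q-learning over a small network graph in which the direct route is dropped by a firewall and a Tor relay offers an alternative channel. The topology, node names, reward values, and hyperparameters below are illustrative assumptions, not details from the paper.

```python
import random

# Toy network: nodes are hosts, edges are possible communication hops.
# The direct route (proxy -> firewall) is a dead end; the Tor relay
# provides an alternative path to the C2 server. Purely illustrative.
GRAPH = {
    "infected_host": ["proxy", "tor_relay"],
    "proxy": ["firewall"],
    "firewall": [],              # firewall drops traffic: dead end
    "tor_relay": ["exit_node"],
    "exit_node": ["c2_server"],
    "c2_server": [],
}
GOAL = "c2_server"

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s, nbrs in GRAPH.items() for a in nbrs}
    for _ in range(episodes):
        state = "infected_host"
        for _ in range(10):  # cap episode length
            actions = GRAPH[state]
            if not actions:
                break
            if rng.random() < eps:
                action = rng.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            # Reward reaching the C2 server; small cost per hop otherwise.
            reward = 1.0 if action == GOAL else -0.01
            best_next = max((Q[(action, a)] for a in GRAPH[action]), default=0.0)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = action
            if state == GOAL:
                break
    return Q

def greedy_path(Q, start="infected_host"):
    """Follow the learned policy greedily from the start node."""
    path, state = [start], start
    while state != GOAL and GRAPH[state] and len(path) <= 10:
        state = max(GRAPH[state], key=lambda a: Q[(state, a)])
        path.append(state)
    return path
```

After training, the greedy policy routes around the firewalled dead end and reaches the C2 server via the Tor relay, mirroring the kind of resilient path discovery the paper reports at toy scale.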


Re-Envisioning Command and Control

McDowell, Kaleb, Novoseller, Ellen, Madison, Anna, Goecks, Vinicius G., Kelshaw, Christopher

arXiv.org Artificial Intelligence

Future warfare will require Command and Control (C2) decision-making to occur in more complex, fast-paced, ill-structured, and demanding conditions. C2 will be further complicated by operational challenges such as Denied, Degraded, Intermittent, and Limited (DDIL) communications and the need to account for many data streams, potentially across multiple domains of operation. Yet, current C2 practices -- which stem from the industrial era rather than the emerging intelligence era -- are linear and time-consuming. Critically, these approaches may fail to maintain overmatch against adversaries on the future battlefield. To address these challenges, we propose a vision for future C2 based on robust partnerships between humans and artificial intelligence (AI) systems. This future vision is encapsulated in three operational impacts: streamlining the C2 operations process, maintaining unity of effort, and developing adaptive collective knowledge systems. This paper illustrates the envisaged future C2 capabilities, discusses the assumptions that shaped them, and describes how the proposed developments could transform C2 in future warfare.


Never Give Artificial Intelligence the Nuclear Codes

The Atlantic - Technology

No technology since the atomic bomb has inspired the apocalyptic imagination like artificial intelligence. Ever since ChatGPT began exhibiting glints of logical reasoning in November, the internet has been awash in doomsday scenarios. Many are self-consciously fanciful--they're meant to jar us into envisioning how badly things could go wrong if an emerging intelligence comes to understand the world, and its own goals, even a little differently from how its human creators do. One scenario, however, requires less imagination, because the first steps toward it are arguably already being taken--the gradual integration of AI into the most destructive technologies we possess today. Check out more from this issue and find your next story to read. The world's major military powers have begun a race to wire AI into warfare.


AI-Driven Weapons Systems Lead Today's Arms Race

#artificialintelligence

When it comes to advanced artificial intelligence, much of the debate has focused on whether white-collar workers are now facing the sort of extinction-level threat that the working class once did with robotics. And while it's suddenly likely that AI will be capable of duplicating a good part of what lawyers, accountants, teachers, programmers, and--yes--journalists do, that's not even where the most significant revolution is likely to occur. The latest AI--known as generative pre-trained transformers (GPT)--promises to utterly transform the geopolitics of war and deterrence. It will do so in ways that are not necessarily comforting, and which may even turn existential. On one hand, this technology could make war less lethal and possibly strengthen deterrence. By dramatically expanding the role of AI-directed drones in air forces, navies and armies, human lives could be spared.


China is preparing for a full-spectrum AI war. India is still 15 years behind

#artificialintelligence

In his new book The Last War: How AI Will Shape India's Final Showdown With China, Pravin Sawhney, the editor of FORCE magazine, disquietingly forebodes a grim scenario for 2024: "If India and China were to fight a war in the near future, India faces the prospect of losing the war within 10 days. China could take Arunachal Pradesh and Ladakh with a minimum loss of life, and there is very little that India could do about it." Is it the imagination of a defence analyst running wild? Far from it -- such scenarios have been predicted by other analysts too. A US military blog, Mad Scientist, which looks at the future of warfare, visualised a similar scenario for 2035 in February 2020, wherein China, in collusion with Pakistan, defeats India in Jammu and Kashmir and Ladakh.


Artificial Intelligence Is Strengthening the U.S. Navy From Within

#artificialintelligence

The Navy is progressively phasing artificial intelligence (AI) into its ship systems, weapons, networks, and command and control infrastructure as computer automation becomes more reliable and advanced algorithms make once-impossible discernments and analyses. Previously segmented data streams on ships, drones, aircraft, and even submarines are now increasingly able to share organized data in real-time, in large measure due to breakthrough advances in AI and machine learning. AI can, for instance, enable command and control systems to identify moments of operational relevance from among hours or days of surveillance data in milliseconds, something which saves time, maximizes efficiency, and performs time-consuming procedural tasks autonomously at an exponentially faster speed. "Multiple data bytes of information will be passed around on the networks here in the near future. So as we think about big data, and how do we handle all that data and turn it into information without getting overloaded, this will be a key part of AI, then we're talking about handling decentralized systems," Nathan Husted of the Naval Surface Warfare Center, Carderock told an audience at the 2022 Sea Air Space Symposium.


Hitting the Books: The Soviets once tasked an AI with our mutually assured destruction

Engadget

Barely a month into its already floundering invasion of Ukraine, Russia is rattling its nuclear saber and threatening to drastically escalate the regional conflict into all-out world war. But the Russians are no strangers to nuclear brinksmanship. In the excerpt below from Ben Buchanan and Andrew Imbrie's latest book, we can see how close humanity came to an atomic holocaust in 1983 and why an increasing reliance on automation -- on both sides of the Iron Curtain -- only served to heighten the likelihood of an accidental launch. The New Fire looks at the rapidly expanding roles of automated machine learning systems in national defense and how increasingly ubiquitous AI technologies (as examined through the thematic lenses of "data, algorithms, and computing power") are transforming how nations wage war both domestically and abroad. As the tensions between the United States and the Soviet Union reached their apex in the fall of 1983, the nuclear war began.


Techniques for Ransomware Detection

#artificialintelligence

The NCC Group Annual Threat Monitor report states that ransomware attacks rose 93 percent in 2021. While there were 1,389 such attacks in 2020, they soared to 2,690 in 2021. The U.S. was the prime target, accounting for more than half of the attacks, followed by Europe at 30 percent. The industrial and public sectors were the most popular verticals (both at 19.35 percent) followed by consumer cyclicals (16.13 percent). NCC Group added that ransomware accounts for 65.38 percent of all incidents its global cyber incident response team dealt with for the year.
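The excerpt reports attack statistics rather than detection techniques. One widely used heuristic (a general illustration, not drawn from the NCC Group report) flags files whose byte entropy approaches the 8-bits-per-byte ceiling typical of encrypted output, since mass file encryption is ransomware's signature behavior. The threshold value below is an illustrative assumption.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; ~8.0 means the bytes look
    uniformly random, as encrypted or compressed data does."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    """Flag content whose entropy is near the 8-bit maximum.
    The 7.5 threshold is an illustrative choice, not a standard."""
    return shannon_entropy(data) >= threshold
```

A monitor built on this idea would raise an alert when many files on a host suddenly cross the threshold; in practice it is combined with other signals (file-rename bursts, known-extension changes), since compressed media also scores high.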


Enabling Artificial Intelligence at the Combatant Commands

#artificialintelligence

The Department of Defense's Office of the Chief Information Officer, or DoD CIO, is pursuing several efforts to make sure the U.S. combatant commands have the fundamental tools to enable artificial intelligence and machine learning to aid their operational command and control. The DoD CIO's efforts naturally hinge on data and data management, an appropriate transport layer, and future cloud capabilities -- solutions that will benefit a broad range of warfighters, not just those at the commands, said Kelly Fletcher, who is performing the duties of the department's chief information officer on behalf of John Sherman, the CIO nominee, who is going through the confirmation process and testifies before the U.S. Senate tomorrow. A senior executive service official, Fletcher has been working in the office since 2020. She presented a keynote address during AFCEA International's TechNet Cyber conference in Baltimore on October 27. Fletcher emphasized that the DoD CIO's office supports more than 40 major combatant commands, services and agencies, "and they all have unique requirements," she said.


Agile, Antifragile, Artificial-Intelligence-Enabled, Command and Control

Simpson, Jacob, Oosthuizen, Rudolph, Sawah, Sondoss El, Abbass, Hussein

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) is rapidly becoming integrated into military Command and Control (C2) systems as a strategic priority for many defence forces. The successful implementation of AI is promising to herald a significant leap in C2 agility through automation. However, realistic expectations need to be set on what AI can achieve in the foreseeable future. This paper will argue that AI could lead to a fragility trap, whereby the delegation of C2 functions to an AI could increase the fragility of C2, resulting in catastrophic strategic failures. This calls for a new framework for AI in C2 to avoid this trap. We will argue that antifragility along with agility should form the core design principles for AI-enabled C2 systems. This duality is termed Agile, Antifragile, AI-Enabled Command and Control (A3IC2). An A3IC2 system continuously improves its capacity to perform in the face of shocks and surprises through overcompensation from feedback during the C2 decision-making cycle. An A3IC2 system will not only be able to survive within a complex operational environment, it will also thrive, benefiting from the inevitable shocks and volatility of war.