weaponization
Graph of Effort: Quantifying Risk of AI Usage for Vulnerability Assessment
Mehra, Anket, Aßmuth, Andreas, Prieß, Malte
With AI-based software becoming widely available, the risk of exploiting its capabilities, such as high automation and complex pattern recognition, could significantly increase. An AI used offensively to attack non-AI assets is referred to as offensive AI. Current research explores how offensive AI can be utilized and how its usage can be classified. Additionally, methods for threat modeling are being developed for AI-based assets within organizations. However, there are gaps that need to be addressed. Firstly, there is a need to quantify the factors contributing to the AI threat. Secondly, there is a requirement to create threat models that analyze the risk of being attacked by AI for vulnerability assessment across all assets of an organization. This is particularly crucial and challenging in cloud environments, where sophisticated infrastructure and access control landscapes are prevalent. The ability to quantify and further analyze the threat posed by offensive AI enables analysts to rank vulnerabilities and prioritize the implementation of proactive countermeasures. To address these gaps, this paper introduces the Graph of Effort, an intuitive, flexible, and effective threat modeling method for analyzing the effort required to use offensive AI for vulnerability exploitation by an adversary. While the threat model is functional and provides valuable support, its design choices need further empirical validation in future work.
- Asia > China > Hong Kong (0.04)
- Europe > Spain > Valencian Community > Valencia Province > Valencia (0.04)
- Europe > Netherlands > Drenthe > Assen (0.04)
- Europe > Germany > Schleswig-Holstein > Kiel (0.04)
- Workflow (0.46)
- Research Report (0.40)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.69)
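The abstract above describes ranking vulnerabilities by the effort an adversary using offensive AI would need to exploit them. As an illustration only (the node names and effort weights below are invented for this sketch and are not the paper's actual rating scheme), the core idea of finding an effort-minimal attack path over a weighted graph can be expressed with Dijkstra's algorithm:

```python
import heapq

# Toy attack graph: nodes are attacker states, edge weights are hypothetical
# "effort" scores. Both are illustrative assumptions, not taken from the paper.
EDGES = {
    "start": [("recon_done", 2), ("ai_recon_done", 1)],
    "recon_done": [("exploit_built", 5)],
    "ai_recon_done": [("exploit_built", 3)],
    "exploit_built": [("asset_compromised", 2)],
    "asset_compromised": [],
}

def min_effort(src, dst):
    """Dijkstra over the effort-weighted graph: cheapest path from src to dst."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue
        for nxt, weight in EDGES[node]:
            nd = d + weight
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(pq, (nd, nxt))
    return float("inf")  # dst unreachable from src
```

In this toy graph, the AI-assisted reconnaissance branch yields a cheaper overall path (1 + 3 + 2 = 6) than the manual one (2 + 5 + 2 = 9), which is the kind of comparison an analyst could use to prioritize countermeasures.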
Former House China hawk warns Americans about the dangers of the CCP's growing technological dominance
The former chairman of the House Select Committee on the Chinese Communist Party warned about a fast-moving software and technology race between the United States and China, arguing the weaponization of supply chains could force a showdown between the free world and its totalitarian rivals. Former Rep. Mike Gallagher, R-Wis., told Fox News chief political anchor Bret Baier about a Wall Street Journal (WSJ) op-ed he wrote Sunday, outlining his concerns about China's growing technological dominance. "On the modern battlefield, we need to not only know our adversary but know ourselves and map our supply chain in great detail," he said Monday on "Special Report." Gallagher, the head of defense for Palantir Technologies, a Denver-based software company, highlighted how China could use its manufactured port cranes across the world to disrupt international commerce if the United States were to get into a conflict with China over Taiwan. "The Biden administration recently warned that Chinese-made port cranes could be 'controlled... from remote locations.' European companies found that Chinese groups may have gained access to the systems that control cargo ships. Billions of endpoints connect to the internet, including sensors and devices that physically interact with critical infrastructure. Anyone with control over a portion of the technology stack, such as semiconductors, cellular modules, or hardware devices, can use it to snoop, incapacitate or kill," he wrote in the WSJ.
- Asia > Taiwan (0.26)
- Asia > China > Jiangxi Province > Nanchang (0.07)
- North America > United States > Indiana > Boone County > Lebanon (0.06)
- (4 more...)
Cream Skimming the Underground: Identifying Relevant Information Points from Online Forums
Moreno-Vera, Felipe, Nogueira, Mateus, Figueiredo, Cainã, Menasché, Daniel Sadoc, Bicudo, Miguel, Woiwood, Ashton, Lovat, Enrico, Kocheturov, Anton, de Aguiar, Leandro Pfleger
This paper proposes a machine learning-based approach for detecting the exploitation of vulnerabilities in the wild by monitoring underground hacking forums. The increasing volume of posts discussing exploitation in the wild calls for an automatic approach to process threads and posts that will eventually trigger alarms depending on their content. To illustrate the proposed system, we use the CrimeBB dataset, which contains data scraped from multiple underground forums, and develop a supervised machine learning model that can filter threads citing CVEs and label them as Proof-of-Concept, Weaponization, or Exploitation. Leveraging random forests, we show that accuracy, precision and recall above 0.99 are attainable for the classification task. Additionally, we provide insights into the difference in nature between weaponization and exploitation, e.g., by interpreting the output of a decision tree, and analyze the profits and other aspects related to the hacking communities. Overall, our work sheds light on the exploitation of vulnerabilities in the wild and can be used to provide additional ground truth to models such as EPSS and Expected Exploitability.
- South America > Brazil > Rio de Janeiro > Rio de Janeiro (0.04)
- Europe > Italy > Molise > Campobasso Province > Campobasso (0.04)
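The abstract above describes filtering forum threads that cite CVEs and labeling them as Proof-of-Concept, Weaponization, or Exploitation with a random forest. As a minimal stand-in for that learned classifier (the cue words, CVE regex, and fallback label here are assumptions for illustration, not the paper's trained features), the labeling step can be sketched with a keyword heuristic:

```python
import re

# Hypothetical cue phrases per class; the paper learns its features with a
# random forest rather than hand-written rules like these.
CUES = {
    "Exploitation": ["in the wild", "compromised", "infected"],
    "Weaponization": ["builder", "payload", "loader", "for sale"],
    "Proof-of-Concept": ["poc", "proof of concept", "demo"],
}

CVE_RE = re.compile(r"CVE-\d{4}-\d{4,}", re.IGNORECASE)

def label_thread(text):
    """Return a coarse label for a forum thread citing a CVE, else None."""
    if not CVE_RE.search(text):
        return None  # only threads citing CVEs are classified
    low = text.lower()
    for label, cues in CUES.items():
        if any(cue in low for cue in cues):
            return label
    return "Proof-of-Concept"  # fallback default (assumption)
```

A real pipeline would replace the rule lookup with features extracted from the thread text fed into the trained random forest; the filtering-by-CVE step, however, matches the abstract's description.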
James Cameron says AI 'weaponization' is 'biggest danger': 'I warned you guys in 1984'
Oscar-winning filmmaker James Cameron says he believes the future "weaponization" of artificial intelligence is the "biggest danger." "I think the weaponization of AI is the biggest danger," the "Titanic" director told Canadian CTV on Tuesday. "I think that we will get into the equivalent of a nuclear arms race with AI, and if we don't build it, the other guys are for sure going to build it, and so then it'll escalate," Cameron explained. "You could imagine an AI in a combat theater, the whole thing just being fought by the computers at a speed humans can no longer intercede, and you have no ability to deescalate," he continued.
- Oceania > Australia > New South Wales > Sydney (0.06)
- North America > Canada > Ontario > Middlesex County > London (0.06)
- Asia > South Korea > Seoul > Seoul (0.06)
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
GPT Chat and the weaponization of disinformation
The team behind GPT's new Chatbot has clearly done what they can to stop it spreading disinformation, but it is also quite clear that any nefarious commercial or governmental organization that wanted to weaponize these technologies for disinformation absolutely could. The first thing GPT Chat demonstrates is real confidence in its errors, which is what we should expect from a machine. This is perfect troll behaviour: not simply getting something wrong, but then (incorrectly) linking some elements to reinforce its point. With GPT Chat this is accidental, but it shows how you could easily bias the training data to support a prescribed position.
national academy of sciences address ethics of ai and robotics. who manages weaponization - Google Search
The debate is just beginning, and this essay attempts to address the broad ethical issues potentially associated with the development of autonomous weapons... Oct 9, 2022: One area of particular concern is weaponization. We believe that adding weapons to robots that are remotely or autonomously operated, widely available... Who is responsible for AI ethics? What is the weaponization of artificial intelligence? Who wrote the article the ethical dilemma of robotics? How do you address ethical issues in AI? Sep 13, 2018: In this paper, I examine five AI ethical dilemmas: weapons and military-related applications, law and border enforcement...
General purpose robots should not be weaponized: An open letter to the robotics industry and our communities
Over the course of the past year Open Robotics has taken time from our day-to-day efforts to work with our colleagues in the field to consider how the technology we develop could negatively impact society as a whole. In particular we were concerned with the weaponization of mobile robots. After a lot of thoughtful discussion, deliberation, and debate with our colleagues at organizations like Boston Dynamics, Clearpath Robotics, Agility Robotics, AnyBotics, and Unitree, we have co-authored and signed an open letter to the robotics community entitled, "General Purpose Robots Should Not Be Weaponized." You can read the letter, in its entirety, here. Additional media coverage of the letter can be found in Axios, and The Robot Report.
Please Don't Give the Robots Guns, Pleads Boston Dynamics
By now, everyone's seen the videos of Boston Dynamics' robot dog, Spot. It can walk, run, hop on two legs and even dance -- it's mighty impressive. But with every video released by the American robotics firm, it felt like we were edging closer to the ultimate goal of four-legged drones that could be equipped for battle and replace soldiers. However, Boston Dynamics has come together with a coalition of other robotics experts to plead with companies across the sector to please never give the robots guns. The letter, which was first reported by Axios, has been signed by Boston Dynamics, Agility Robotics, ANYbotics, Unitree, Clearpath and Open Robotics.
- North America > United States (0.53)
- North America > Mexico (0.06)
- Government > Military > Army (0.37)
- Government > Regional Government > North America Government > United States Government (0.33)
Boston Dynamics Promises Not to Make a Robocop
Boston Dynamics, the DARPA-backed robotics company known for uncomfortable videos where nearly 200-pound humanoid robots perform backflips, uncomfortable dances, and various forms of horrifyingly aggressive parkour, says it isn't interested in weaponizing its robots. In an open letter this week, Boston Dynamics joined five other robotics makers in a pledge not to weaponize their advanced-mobility, general-purpose robots, or the software that makes them tick. The companies said they would carefully review their customers' intended application for the bots "when possible" and pledged to explore features that could somehow mitigate risks. Stating the obvious, the companies wrote that weaponization of advanced robotics "raises new risks of harm and serious ethical issues," and could harm public trust in the technology. The robot makers went on to encourage policymakers to explore ways to promote the safe use of robots and encouraged other researchers and developers to join the pledge. "We are convinced that the benefits for humanity of these technologies strongly outweigh the risk of misuse, and we are excited about a bright future in which humans and robots work side by side to tackle some of the world's challenges," the companies wrote.
- North America > United States > New York (0.06)
- North America > United States > Massachusetts (0.06)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.75)
- Government > Military (0.50)
Boston Dynamics, other companies pledge not to 'weaponize' robots
That was the crux of the message a coalition of robotics companies including the famed Boston Dynamics put out in an open letter with the eye-catching subject line "General Purpose Robots Should Not Be Weaponized." The Waltham-headquartered Boston Dynamics is attempting to, well, terminate the idea that its internet-celebrity robot "dogs" and other automatons will be armed to the teeth, and it, alongside other major robotics companies, is encouraging others to do the same. "We believe that adding weapons to robots that are remotely or autonomously operated, widely available to the public, and capable of navigating to previously inaccessible locations where people live and work, raises new risks of harm and serious ethical issues," the companies wrote in the joint letter posted online on Thursday. "Weaponized applications of these newly-capable robots will also harm public trust in the technology in ways that damage the tremendous benefits they will bring to society. For these reasons, we do not support the weaponization of our advanced-mobility general-purpose robots."