
Artificial Intelligence and Security


CybORG++: An Enhanced Gym for the Development of Autonomous Cyber Agents

Emerson, Harry, Bates, Liz, Hicks, Chris, Mavroudis, Vasilios

arXiv.org Artificial Intelligence

CybORG++ is an advanced toolkit for reinforcement learning research focused on network defence. Building on the CAGE 2 CybORG environment, it introduces key improvements, including enhanced debugging capabilities, refined agent implementation support, and a streamlined environment that enables faster training and easier customisation. Along with addressing several software bugs in its predecessor, CybORG++ introduces MiniCAGE, a lightweight version of CAGE 2 that dramatically improves performance, achieving up to 1000x faster execution when run in parallel, without sacrificing accuracy or core functionality. CybORG++ serves as a robust platform for developing and evaluating defensive agents, making it a valuable resource for advancing enterprise network defence research.
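CybORG++ exposes the standard Gym-style reset/step interface for training defensive agents. As a rough illustration only, the environment, observation fields, actions, and reward logic below are hypothetical stand-ins, not the actual CybORG++ API:

```python
# Toy network-defence environment with a Gym-like reset/step interface.
# Everything here (actions, rewards, observations) is an illustrative
# assumption, not the real CybORG++ environment.
import random

class ToyDefenceEnv:
    """A minimal defence environment following the Gym convention."""
    ACTIONS = ["monitor", "analyse", "remove", "restore"]

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.steps = 0

    def reset(self):
        self.steps = 0
        return {"alerts": 0}  # initial observation

    def step(self, action):
        self.steps += 1
        # Hypothetical reward: active remediation costs less than ignoring.
        reward = -1.0 if action in ("remove", "restore") else -2.0
        obs = {"alerts": self.rng.randint(0, 3)}
        done = self.steps >= 10  # fixed-length episode
        return obs, reward, done, {}

# A simple hand-written defender policy interacting with the environment.
env = ToyDefenceEnv()
obs = env.reset()
total = 0.0
done = False
while not done:
    action = "remove" if obs["alerts"] > 1 else "monitor"
    obs, reward, done, info = env.step(action)
    total += reward
```

The same loop structure is what an RL training harness would wrap around the environment; MiniCAGE's speedups matter because this loop runs millions of times during training.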


Artificial Intelligence and Security: What You Should Know

#artificialintelligence

In March 2019, Norsk Hydro, a Norwegian renewable energy and aluminium manufacturing company, faced a ransomware attack. Rather than paying the ransom, a cybersecurity team used artificial intelligence to identify the corruption in the computer system and rebuild operations in an uncorrupted parallel system. LockerGoga ransomware, which spread via Windows-based systems, was eventually identified as the culprit. While Norsk Hydro avoided paying the ransom, the attack still forced it to operate without computer systems for an extended period (weeks to months) while the security team isolated and scanned thousands of employee accounts for malicious activity. Signature-based detection is an approach in which a unique identifier is established for a known threat so that it can be identified in the future.
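Conceptually, signature-based detection can be sketched as hashing content and checking it against a store of known-threat identifiers. The signatures and helper function below are illustrative assumptions, not drawn from any real threat feed:

```python
# Minimal sketch of signature-based detection: compute a SHA-256 hash
# of the content and look it up in a set of known-bad signatures.
# The "known bad" entry is fabricated purely for illustration.
import hashlib

KNOWN_BAD = {
    hashlib.sha256(b"malicious payload").hexdigest(),  # hypothetical sample
}

def is_known_threat(data: bytes) -> bool:
    """Return True if the data's hash matches a stored signature."""
    return hashlib.sha256(data).hexdigest() in KNOWN_BAD

print(is_known_threat(b"malicious payload"))  # True: signature matches
print(is_known_threat(b"benign document"))    # False: no signature on file
```

The weakness this illustrates is exactly why AI-based approaches are attractive: a single changed byte produces a different hash, so novel or mutated threats slip past a purely signature-based check.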


Artificial Intelligence and Security. How Security is Adopted by Artificial Intelligence

#artificialintelligence

Artificial intelligence is described as having machines perform "smart" or "intelligent" tasks on their own, without human guidance. AI security involves leveraging AI to identify and stop cyber threats with less human intervention than traditional security approaches typically require. AI security tools are often used to distinguish "good" from "bad" by comparing the behaviours of entities across an environment to those in a comparable environment. This approach allows the system to automatically learn about and flag changes. Often known as unsupervised learning or "pattern of life" learning, this technique results in large numbers of false positives and negatives.
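A minimal sketch of the "pattern of life" idea: learn a statistical baseline of normal behaviour, then flag observations that deviate from it. The data, feature choice, and z-score threshold below are assumptions for illustration, and, as the text notes, such flags inevitably include false positives and negatives:

```python
# Toy "pattern of life" anomaly detection: build a baseline from
# observed event counts, then flag values that deviate strongly.
# Threshold and sample data are illustrative assumptions only.
import statistics

def flag_anomalies(baseline, observed, z_threshold=3.0):
    """Flag observations whose z-score against the baseline is extreme."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    flags = []
    for value in observed:
        z = (value - mean) / stdev if stdev else 0.0
        flags.append(abs(z) > z_threshold)
    return flags

# Baseline: typical daily login counts for one user.
baseline = [10, 12, 11, 9, 13, 10, 11]
# Observed: two normal days, then a spike -- possibly a compromise,
# possibly just the false positive the text warns about.
print(flag_anomalies(baseline, [11, 12, 60]))  # [False, False, True]
```

Real deployments model many features at once, but the trade-off is the same: a looser threshold misses attacks, a tighter one drowns analysts in false alarms.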


13th ACM Workshop on Artificial Intelligence and Security (AISec 2020)

#artificialintelligence

A backdoor is a covert functionality in a machine learning model that causes it to produce incorrect outputs on inputs with a certain "trigger" feature. Recent research on data-poisoning and trojaning attacks has shown how backdoors can be introduced into ML models -- but only for backdoors that act as universal adversarial perturbations (UAPs) and in an inferior threat model that requires the attacker to poison the model and then modify the input at inference time. I will describe a new technique for backdooring ML models based on poisoning the loss-value computation, and demonstrate that it can introduce new types of backdoors which are different from and more powerful than UAPs, including (1) single-pixel and physically realizable backdoors in ImageNet; (2) backdoors that switch the model to an entirely different, privacy-violating functionality, e.g., cause a model that counts the number of faces in a photo to covertly recognize specific individuals; and (3) semantic backdoors that do not require the attacker to modify the input at inference time. Oh, and they evade all known defenses, too.
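For context, the conventional data-poisoning backdoor (the "inferior threat model" the abstract contrasts with) can be sketched as stamping a fixed trigger feature onto a fraction of training examples and relabelling them with the attacker's target class. All names, indices, and values below are illustrative assumptions, not the loss-value attack the talk describes:

```python
# Toy data-poisoning backdoor: a fraction of training examples get a
# distinctive trigger value in one feature and are relabelled to the
# attacker's target class. A model trained on the poisoned set learns
# to associate the trigger with that class. Purely illustrative.
TRIGGER_INDEX = 0      # which feature carries the trigger (assumption)
TRIGGER_VALUE = 9.9    # distinctive trigger value (assumption)
TARGET_LABEL = 1       # class the attacker wants triggered inputs to get

def poison(dataset, rate=0.1):
    """Stamp a fraction of (features, label) pairs with the trigger."""
    poisoned = []
    n_poison = int(len(dataset) * rate)
    for i, (features, label) in enumerate(dataset):
        if i < n_poison:
            features = list(features)
            features[TRIGGER_INDEX] = TRIGGER_VALUE
            label = TARGET_LABEL
        poisoned.append((tuple(features), label))
    return poisoned

clean = [((0.1 * i, 0.2 * i), 0) for i in range(10)]
dirty = poison(clean, rate=0.2)
# The first 20% of examples now carry the trigger and the target label;
# the rest are untouched.
```

The loss-poisoning technique in the talk is strictly stronger: it needs no such training-data modification and, for semantic backdoors, no inference-time input modification either.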


Artificial Intelligence and Security

#artificialintelligence

One of the biggest advantages of AI from a Zero Outage perspective is its ability to predict power failures or maintenance requirements so that operations can adapt to them. However, the opacity of communication patterns created by automated provisioning introduces risks and considerable complexity. The safety of AI plays a particularly important role where the availability of products and production is concerned: an erroneous decision by the AI can cause both time delays and machine defects due to signal overlay. In automated processes especially, careful consideration must be given to how such decisions influence production.


Artificial intelligence and security top priority for Google: Sundar Pichai

#artificialintelligence

Internet search giant Google on Tuesday announced artificial intelligence (AI) and security as its top priority among the host of services it offers, as it kicked off its annual event, Next, for the third successive year in San Francisco. The company announced several updates to Google Cloud apps and services in back-to-back addresses by its top executives, including a surprise appearance by Google CEO Sundar Pichai and Cloud chief Diane Greene at the keynote address. The company referred to itself as a "modern enterprise company" and its primary business as "information", saying its hallmark service Google Cloud was built to efficiently take in information, organise it, and give back intelligence in order to drive customers' businesses: supercharged information. "Our mission is to organise world's information. And make it universally accessible and useful. I am always talking of being fortunate as a company as a timeless machine. One that feels as it did 20 years ago," Pichai told an estimated 20,000 attendees at the Moscone Center in San Francisco.


Artificial Intelligence and Security: Current Applications and Tomorrow's Potentials

#artificialintelligence

Security is a broad term, and in industry and government there are a myriad of "security" contexts on a variety of levels, from the individual to the nation-wide. Artificial intelligence and machine learning technologies are being applied and developed across this spectrum. While many of these technologies hold great potential and have already benefited society (helping reduce credit card fraud, for example), the evolving social contexts and applications of these technologies often leave more questions than answers, in terms of rules, regulations and moral judgments, in their wake. Artificial intelligence and security were, in many ways, made for each other, and the modern approaches of machine learning seem to be arriving just in time to fill the gaps left by previous rule-based data security systems. The purpose of this article is to shed light on current trends and applications, in industry and government, at the intersection of artificial intelligence and the security field.