security


Former Michigan CISO: Don't Ignore Security Predictions

#artificialintelligence

It seems like every vendor in the data security industry makes predictions this time of year. Which ones should you pay attention to? All of them, says Dan Lohrmann, who formerly served as the state of Michigan's CISO and CTO. "I really view it as something that professionals need to widen their perspectives," Lohrmann says in an interview with Information Security Media Group.


How AI can help you stay ahead of cybersecurity threats

#artificialintelligence

Since the 2013 Target breach, it's been clear that companies need to respond better to security alerts even as alert volumes have gone up. With this year's fast-spreading ransomware attacks and ever-tightening compliance requirements, response must be much faster. Adding staff is tough with the cybersecurity hiring crunch, so companies are turning to machine learning and artificial intelligence (AI) to automate tasks and better detect bad behavior. In a cybersecurity context, AI is software that perceives its environment well enough to identify events and take action in pursuit of a predefined purpose. AI is particularly good at recognizing patterns and anomalies within them, which makes it an excellent tool for detecting threats.
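As a rough illustration of that pattern-and-anomaly idea (not any specific vendor's product), the sketch below assumes scikit-learn is available and that security events have already been reduced to a few numeric features; it fits an unsupervised model on baseline activity and flags events that fall outside it.

```python
# Hypothetical anomaly detection over security-event features
# (bytes transferred, failed logins, distinct ports contacted).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline "normal" behaviour, generated synthetically for the sketch.
baseline = rng.normal(loc=[500, 1, 3], scale=[50, 1, 1], size=(1000, 3))

# Learn what normal looks like, without labels.
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Score new events; a prediction of -1 is an anomaly worth an analyst's time.
new_events = np.array([
    [510, 0, 3],       # close to baseline
    [90000, 25, 60],   # huge transfer, many failures, many ports
])
for event, label in zip(new_events, model.predict(new_events)):
    print(event, "ALERT" if label == -1 else "ok")
```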


How AI is transforming the future of fintech

#artificialintelligence

WIRED Money takes place in Studio Spaces, London on May 18, 2017. For more details and to purchase your ticket, visit wiredevent.co.uk. "Breaking: Two Explosions in the White House and Barack Obama is injured." At the time of the tweet, AP's account had around two million followers. The post was favourited, retweeted, and spread. At 13:13, AP confirmed the tweet was fake.


How to use machine learning and AI in cyber security

#artificialintelligence

Cyber criminals are constantly seeking new ways to perpetrate a breach, but thanks to artificial intelligence (AI) and its subset machine learning, it's becoming possible to fight off these attacks automatically. The secret is in machine learning's ability to monitor network traffic and learn what's normal within a system, using this information to flag any suspicious activity. As the technology's name suggests, it's able to use the vast amounts of security data collected by businesses every day to become more effective over time. At the moment, when the machine spots an anomaly, it sends an alert to a human – usually a security analyst – who decides whether action needs to be taken. But some machine learning systems are already able to respond themselves, by restricting access for certain users, for example.
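To make the alert-versus-automated-response distinction concrete, here is a minimal sketch (plain Python, not the tooling discussed in the article) that learns a per-user baseline of login counts and either alerts an analyst or restricts access when today's activity deviates sharply; the users, counts, and thresholds are invented for illustration.

```python
# Learn a per-user baseline, then escalate: alert an analyst for moderate
# deviations, automatically restrict access for extreme ones.
from statistics import mean, stdev

history = {  # hypothetical daily login counts per user
    "alice": [12, 10, 11, 13, 12, 9, 11],
    "bob":   [3, 4, 2, 3, 5, 4, 3],
}
today = {"alice": 12, "bob": 40}   # bob suddenly logs in 40 times

restricted = set()
for user, counts in history.items():
    mu, sigma = mean(counts), (stdev(counts) or 1.0)
    z = (today[user] - mu) / sigma
    if z > 4:                      # far outside the learned baseline
        restricted.add(user)       # automated response: restrict access
        print(f"restricting {user}: z-score {z:.1f}")
    elif z > 2:
        print(f"alerting analyst about {user}: z-score {z:.1f}")

print("restricted users:", restricted)
```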


AMD's Radeon Vega GPU is headed everywhere, even to machine learning

#artificialintelligence

While we don't know much about the Radeon Vega Mobile GPU yet, it's not exactly a surprising announcement. Gamers have been waiting eagerly to see when AMD's new graphics hardware would make it into high-powered laptops. In October, the company revealed that Vega was coming to its new Ryzen mobile processors. It was only a matter of time until it had a more powerful dedicated offering. AMD is also positioning it as something you'd find in ultrathin notebooks, and not just chunky gaming machines.


Using Game Theory for Los Angeles Airport Security

AI Magazine

Limited security resources prevent full security coverage at all times, which allows adversaries to observe and exploit patterns in selective patrolling or monitoring; for example, they can plan an attack avoiding existing patrols. Hence, randomized patrolling or monitoring is important, but randomization must provide distinct weights to different actions based on their complex costs and benefits. To this end, this article describes a promising transition of the latest in multiagent algorithms into a deployed application. In particular, it describes a software assistant agent called ARMOR (assistant for randomized monitoring over routes) that casts this patrolling and monitoring problem as a Bayesian Stackelberg game, allowing the agent to appropriately weigh the different actions in randomization, as well as uncertainty over adversary types. ARMOR combines two key features.
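The payoffs and grid search below are invented to illustrate the Stackelberg logic only; the deployed ARMOR system solves a much larger Bayesian Stackelberg game (many checkpoints and adversary types) with a mixed-integer programming solver rather than enumeration. In the sketch, the defender commits to randomized coverage of two targets, the attacker observes that coverage and best-responds, and the defender searches for the coverage that maximizes its own expected payoff.

```python
# Toy two-target Stackelberg security game (illustrative payoffs only).
import numpy as np

def_cov = np.array([1.0, 1.0])     # defender payoff if attacked target is covered
def_unc = np.array([-5.0, -2.0])   # defender payoff if attacked target is uncovered
atk_cov = np.array([-1.0, -1.0])   # attacker payoff if caught at a covered target
atk_unc = np.array([4.0, 2.0])     # attacker payoff if the attack succeeds

best_c, best_val = None, -np.inf
for c1 in np.linspace(0, 1, 1001):          # coverage probability on target 1
    cov = np.array([c1, 1 - c1])            # one patrol split across two targets
    atk_util = cov * atk_cov + (1 - cov) * atk_unc
    target = int(np.argmax(atk_util))       # attacker best-responds to coverage
    def_util = cov[target] * def_cov[target] + (1 - cov[target]) * def_unc[target]
    if def_util > best_val:
        best_c, best_val = c1, def_util

print(f"randomize: cover target 1 with prob {best_c:.2f}, "
      f"expected defender payoff {best_val:.2f}")
```

The key point the toy example shares with ARMOR is that the defender's optimal strategy is a weighted randomization, not a deterministic schedule the adversary could learn and avoid.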


Reports of the AAAI 2012 Spring Symposia

AI Magazine

The six symposia held were AI, the Fundamental Social Aggregation Challenge (cochaired by W. F. Lawless, Don Sofge, Mark Klein, and Laurent Chaudron); Designing Intelligent Robots (cochaired by George Konidaris, Byron Boots, Stephen Hart, Todd Hester, Sarah Osentoski, and David Wingate); Game Theory for Security, Sustainability, and Health (cochaired by Bo An and Manish Jain); Intelligent Web Services Meet Social Computing (cochaired by Tomas Vitvar, Harith Alani, and David Martin); Self-Tracking and Collective Intelligence for Personal Wellness (cochaired by Takashi Kido and Keiki Takadama); and Wisdom of the Crowd (cochaired by Caroline Pantofaru, Sonia Chernova, and Alex Sorokin). The papers of the six symposia were published in the AAAI technical report series. The focus of the AI, The Fundamental Social Aggregation Challenge, and the Autonomy of Hybrid Agent Groups symposium was to explore issues associated with the control of teams of humans, autonomous machines, and robots working together as hybrid agent groups. Bill Lawless of Paine College kicked off the meeting by pointing out the need for a new theory of social dynamics. He showed that majority rule is far better than consensus for group decision processes and proposed a new mathematical model for characterizing social group dynamics based on interdependence.


PROTECT -- A Deployed Game-Theoretic System for Strategic Security Allocation for the United States Coast Guard

AI Magazine

Toward that end, this article presents PROTECT, a game-theoretic system deployed by the United States Coast Guard (USCG) in the Port of Boston for scheduling its patrols. USCG has termed the deployment of PROTECT in Boston a success; PROTECT is currently being tested in the Port of New York, with the potential for nationwide deployment. PROTECT is premised on an attacker-defender Stackelberg game model and offers five key innovations. First, this system is a departure from the assumption of perfect adversary rationality noted in previous work, relying instead on a quantal response (QR) model of the adversary's behavior -- to the best of our knowledge, this is the first real-world deployment of the QR model. Second, to improve PROTECT's efficiency, we generate a compact representation of the defender's strategy space, exploiting equivalence and dominance.
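A minimal sketch of the quantal response idea, with made-up payoffs and a made-up lambda (this is not PROTECT's actual formulation or data): instead of always attacking its single best target, the adversary attacks target t with probability proportional to exp(lambda * U_a(t)), so lower-value or better-patrolled targets still receive some probability mass.

```python
# Quantal response adversary over three hypothetical targets.
import numpy as np

lam = 0.8                              # QR rationality parameter (assumed)
coverage = np.array([0.6, 0.3, 0.1])   # defender patrol coverage per target
atk_cov = np.array([-2.0, -2.0, -2.0]) # attacker payoff if the target is patrolled
atk_unc = np.array([5.0, 3.0, 1.0])    # attacker payoff if it is not

atk_util = coverage * atk_cov + (1 - coverage) * atk_unc
weights = np.exp(lam * atk_util)
attack_prob = weights / weights.sum()  # noisy best response, not perfect rationality

for t, p in enumerate(attack_prob, start=1):
    print(f"target {t}: attack probability {p:.2f}")
```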


Research Priorities for Robust and Beneficial Artificial Intelligence

AI Magazine

This article gives numerous examples (which should by no means be construed as an exhaustive list) of such worthwhile research aimed at ensuring that AI remains robust and beneficial. In this context, the criterion for intelligence is related to statistical and economic notions of rationality -- colloquially, the ability to make good decisions, plans, or inferences. The adoption of probabilistic representations and statistical learning methods has led to a large degree of integration and cross-fertilization between AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems. As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance have significant economic value, prompting greater investments in research.


Identifying Terrorist Activity with AI Plan-Recognition Technology

AI Magazine

We describe the application of plan-recognition techniques to support human intelligence analysts in processing national security alerts. Our approach is designed to take the noisy results of traditional data-mining tools and exploit causal knowledge about attacks to relate activities and uncover the intent underlying them. Identifying intent enables us to both prioritize and explain alert sets to analysts in a readily digestible format. Our empirical evaluation demonstrates that the approach can handle alert sets of as many as 20 elements and can readily distinguish between false and true alarms. We discuss the important opportunities for future work that will increase the cardinality of the alert sets to the level demanded by a deployable application.
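A toy sketch of the plan-recognition step (the plan library, alert names, and scoring rule here are all invented, not the authors' system): noisy alerts from data-mining tools are matched against causal attack templates, and hypotheses are ranked by how many of their steps the alerts explain, in order.

```python
# Rank hypothetical attack plans by how well observed alerts fit their steps.
ATTACK_PLANS = {  # hypothetical causal templates: goal -> ordered steps
    "facility_attack": ["acquire_materials", "surveil_target", "breach_perimeter"],
    "cyber_intrusion": ["scan_network", "phish_credentials", "exfiltrate_data"],
}

def score(plan_steps, alerts):
    """Fraction of plan steps that appear among the alerts, in order."""
    matched, idx = 0, 0
    for step in plan_steps:
        if step in alerts[idx:]:
            matched += 1
            idx += alerts[idx:].index(step) + 1
    return matched / len(plan_steps)

alerts = ["scan_network", "surveil_target", "phish_credentials"]  # from data mining
ranked = sorted(((score(steps, alerts), goal)
                 for goal, steps in ATTACK_PLANS.items()), reverse=True)
for s, goal in ranked:
    print(f"{goal}: {s:.2f} of steps explained")
```

Ranking hypotheses this way is what lets an analyst see not just a pile of alerts but which inferred intent each alert supports.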