How We Feel About Robots That Feel

MIT Technology Review

Octavia, a humanoid robot designed to fight fires on Navy ships, has mastered an impressive range of facial expressions. When she's turned off, she looks like a human-size doll. She has a smooth white face with a snub nose. Her plastic eyebrows sit evenly on her forehead like two little capsized canoes. When she's on, however, her eyelids fly open and she begins to display emotion.


Choose the right AI method for the job

#artificialintelligence

It's hard to remember the days when artificial intelligence seemed like an intangible, futuristic concept. The field has been decades in the making, however, and the past 90 years have seen both renaissances and winters for this area of study. At present, AI is steadily working its way into our personal lives with the rise of self-driving cars and intelligent personal assistants. In the enterprise, we likewise see AI surfacing in adaptive marketing and cybersecurity. The rise of AI is exciting, but people often throw the term around in an attempt to win buzzword bingo rather than to accurately reflect technological capabilities.


IBM built a voice assistant for cybersecurity

#artificialintelligence

In this week's edition of This Feels A Little Like Skynet: IBM has built a new AI-powered voice assistant called Hayvn, focused on cybersecurity. Think of it as Amazon Alexa, but instead of ordering soap, it's helping you manage threats. Sure, this might sound like it's ripped straight out of the plot of Terminator 3: Rise of the Machines, in which the military unleashes a new AI called Skynet to fight a virus that's been disrupting worldwide networks. And yes, that AI then becomes sentient, launches nukes and begins "Judgment Day." Here in the real world, IBM said Hayvn is being used to help cybersecurity pros comb through the hundreds of alerts they receive each day.


Detecting Cyberattack Entities from Audit Data via Multi-View Anomaly Detection with Feedback

AAAI Conferences

In this paper, we consider the problem of detecting unknown cyberattacks from audit data of system-level events. A key challenge is that different cyberattacks will have different suspicion indicators, which are not known beforehand. To address this, we consider a multi-view anomaly detection framework, where multiple expert-designed "views" of the data are created to capture features that may serve as potential indicators. Anomaly detectors are then applied to each view and the results are combined to yield an overall suspiciousness ranking of system entities. Unfortunately, there is often a mismatch between what anomaly detection algorithms find and what is actually malicious, which can result in many false positives. This problem is made even worse in the multi-view setting, where only a small subset of the views may be relevant to detecting a particular cyberattack. To help reduce the false positive rate, a key contribution of this paper is to incorporate feedback from security analysts about whether proposed suspicious entities are of interest or likely benign. This feedback is incorporated into subsequent anomaly detection in order to shift the suspiciousness ranking toward entities that are truly of interest to the analyst. For this purpose, we propose an easy-to-implement variant of the perceptron learning algorithm, which is shown to be quite effective on benchmark datasets. We evaluate our overall approach on real attack data from a DARPA red team exercise, which included multiple attacks on multiple operating systems. The results show that incorporating feedback can significantly reduce the time required to identify malicious system entities.
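The feedback loop the abstract describes can be pictured concretely. The sketch below is a minimal, hypothetical illustration, not the authors' algorithm: it assumes each system entity has one anomaly score per view, combines the per-view scores with a weight vector to produce the suspiciousness ranking, and applies a perceptron-style update to the weights whenever the analyst labels the top-ranked unlabeled entity as malicious or benign. All function names, the query strategy, and the exact update rule are assumptions made for illustration.

```python
# Illustrative sketch of multi-view anomaly ranking with analyst feedback.
# Not the paper's exact method; a toy perceptron-style weight update.
import numpy as np

def rank_entities(scores, weights):
    """Rank entities (rows) by the weighted sum of their per-view anomaly scores."""
    combined = scores @ weights
    return np.argsort(-combined), combined

def perceptron_feedback(scores, labels, n_rounds=20, lr=1.0):
    """Iteratively query the 'analyst' about the top-ranked unlabeled entity.

    scores : (n_entities, n_views) array of anomaly scores, one column per view
    labels : dict entity_index -> True (malicious) / False (benign),
             standing in for interactive analyst feedback
    """
    n_views = scores.shape[1]
    weights = np.ones(n_views)           # start by trusting every view equally
    queried = set()
    for _ in range(n_rounds):
        order, _ = rank_entities(scores, weights)
        top = next((i for i in order if i not in queried), None)
        if top is None:
            break
        queried.add(top)
        # Perceptron-style update: push weights toward views that scored a
        # confirmed-malicious entity highly, away from views that flagged
        # an entity the analyst marked benign.
        direction = 1.0 if labels.get(top, False) else -1.0
        weights += lr * direction * scores[top]
        weights = np.maximum(weights, 0.0)   # keep view weights non-negative
    return weights, queried
```

In this toy version, views that keep surfacing entities the analyst dismisses are gradually down-weighted, so later rankings concentrate on entities that resemble confirmed-malicious ones.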


Tech Advances Make It Easier to Assign Blame for Cyberattacks

WSJ.com: WSJD - Technology

"All you have to do is look at the attacks that have taken place recently--WannaCry, NotPetya and others--and see how quickly the industry and government is coming out and assigning responsibility to nation states such as North Korea, Russia and Iran," said Dmitri Alperovitch, chief technology officer at CrowdStrike Inc., a cybersecurity company that has investigated a number of state-sponsored hacks. The White House and other countries took roughly six months to blame North Korea and Russia for the WannaCry and NotPetya attacks, respectively, while it took about three years for U.S. authorities to indict a North Korean hacker for the 2014 attack against Sony . Forensic systems are gathering and analyzing vast amounts of data from digital databases and registries to glean clues about an attacker's infrastructure. These clues, which may include obfuscation techniques and domain names used for hacking, can add up to what amounts to a unique footprint, said Chris Bell, chief executive of Diskin Advanced Technologies, a startup that uses machine learning to attribute cyberattacks. Additionally, the increasing amount of data related to cyberattacks--including virus signatures, the time of day the attack took place, IP addresses and domain names--makes it easier for investigators to track organized hacking groups and draw conclusions about them.