Adversarial Attacks against Windows PE Malware Detection: A Survey of the State-of-the-Art

arXiv.org Artificial Intelligence

Malware has long been one of the most damaging threats to computers, spanning multiple operating systems and various file formats. To defend against the ever-increasing and ever-evolving threats of malware, tremendous efforts have been made to propose a variety of malware detection methods that attempt to detect malware effectively and efficiently. Recent studies have shown that, on the one hand, existing machine learning (ML) and deep learning (DL) models enable the superior detection of newly emerging and previously unseen malware. On the other hand, however, ML and DL models are inherently vulnerable to adversarial attacks in the form of adversarial examples, which are maliciously generated by slightly and carefully perturbing legitimate inputs to confuse the targeted models. Adversarial attacks were initially studied extensively in the domain of computer vision and quickly expanded to other domains, including NLP, speech recognition, and even malware detection. In this paper, we focus on malware in the portable executable (PE) file format of the Windows family of operating systems, namely Windows PE malware, as a representative case for studying adversarial attack methods in such adversarial settings. Specifically, we first outline the general learning framework of Windows PE malware detection based on ML/DL and then highlight three unique challenges of performing adversarial attacks in the context of PE malware. We then conduct a comprehensive and systematic review to categorize the state-of-the-art adversarial attacks against PE malware detection, as well as corresponding defenses that increase the robustness of PE malware detection. We conclude the paper by presenting other related attacks against Windows PE malware detection beyond adversarial attacks and by shedding light on future research directions and opportunities.
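To make the notion of an adversarial example concrete, below is a minimal PyTorch sketch of the classic fast gradient sign method (FGSM), one of the earliest attacks from the computer-vision literature that work in this area builds on. The `model`, `loss_fn`, and `epsilon` names are illustrative assumptions, not from the paper; note also that attacks on PE malware must additionally preserve the file's format and functionality, a constraint this image-style sketch omits.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon):
    """One signed-gradient step (FGSM): perturb x to increase the loss.

    model: a differentiable classifier; loss_fn: e.g. cross-entropy;
    x: input tensor; y: true label; epsilon: perturbation budget.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Step each feature in the direction that most increases the loss.
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.detach()
```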


Robotics Today latest talks – Raia Hadsell (DeepMind), Koushil Sreenath (UC Berkeley) and Antonio Bicchi (Istituto Italiano di Tecnologia)

Robohub

Bio: Antonio Bicchi is a scientist interested in robotics and intelligent machines. After graduating in Pisa and receiving a Ph.D. from the University of Bologna, he spent a few years at the MIT AI Lab in Cambridge before becoming Professor of Robotics at the University of Pisa. In 2009 he founded the Soft Robotics Laboratory at the Italian Institute of Technology in Genoa. Since 2013 he has been Adjunct Professor at Arizona State University, Tempe, AZ. He has coordinated many international projects, including four grants from the European Research Council (ERC).


Developing a more human-like response is an increasing feature of AI

#artificialintelligence

When an Uber autonomous test car killed pedestrian Elaine Herzberg in Tempe, Arizona, in March 2018, it sent alarm bells around the world of artificial intelligence (AI) and machine learning. Walking her bicycle, Herzberg had strayed onto the road, resulting in a fatal collision with the vehicle. While there were other contributory factors in the accident, the incident highlighted a key flaw in the algorithm powering the car. It was not trained to cope with jaywalkers, nor could it recognise whether it was dealing with a bicycle or a pedestrian. Confused, it ultimately failed to default quickly to the safety option of slowing the vehicle and potentially saving Herzberg's life.


Artificial Intelligence and Ethics

#artificialintelligence

On March 18, 2018, at around 10 p.m., Elaine Herzberg was wheeling her bicycle across a street in Tempe, Arizona, when she was struck and killed by a self-driving car. Although there was a human operator behind the wheel, an autonomous system--artificial intelligence--was in full control. This incident, like others involving interactions between people and AI technologies, raises a host of ethical and proto-legal questions. What moral obligations did the system's programmers have to prevent their creation from taking a human life? And who was responsible for Herzberg's death? "Artificial intelligence" refers to systems that can be designed to take cues from their environment and, based on those inputs, proceed to solve problems, assess risks, make predictions, and take actions. In the era predating powerful computers and big data, such systems were programmed by humans and followed rules of human invention, but advances in technology have led to the development of new approaches.


Inside the lab where Waymo is building the brains for its driverless cars

#artificialintelligence

Right now, a minivan with no one behind the steering wheel is driving through a suburb of Phoenix, Arizona. And while that may seem alarming, the company that built the "brain" powering the car's autonomy wants to assure you that it's totally safe. Waymo, the self-driving unit of Alphabet, is the only company in the world to have fully driverless vehicles on public roads today. That was made possible by a sophisticated set of neural networks powered by machine learning, about which very little is known -- until now. For the first time, Waymo is lifting the curtain on what is arguably the most important (and most difficult-to-understand) piece of its technology stack. The company, which is ahead in the self-driving car race by most metrics, confidently asserts that its cars have the most advanced brains on the road today. Anyone can buy a bunch of cameras and LIDAR sensors, slap them on a car, and call it autonomous. But training a self-driving car to behave like a human driver, or, more importantly, to drive better than a human, is on the bleeding edge of artificial intelligence research. Waymo's engineers are modeling not only how cars recognize objects in the road, for example, but also how human behavior affects how cars should behave. And they're using deep learning to interpret, predict, and respond to data accrued from its 6 million miles driven on public roads and 5 billion driven in simulation.


Knowledge Graphs

arXiv.org Artificial Intelligence

In this paper we provide a comprehensive introduction to knowledge graphs, which have recently garnered significant attention from both industry and academia in scenarios that require exploiting diverse, dynamic, large-scale collections of data. After a general introduction, we motivate and contrast various graph-based data models and query languages that are used for knowledge graphs. We discuss the roles of schema, identity, and context in knowledge graphs. We explain how knowledge can be represented and extracted using a combination of deductive and inductive techniques. We summarise methods for the creation, enrichment, quality assessment, refinement, and publication of knowledge graphs. We provide an overview of prominent open knowledge graphs and enterprise knowledge graphs, their applications, and how they use the aforementioned techniques. We conclude with high-level future research directions for knowledge graphs.
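As a toy illustration of the graph-based data models and deductive techniques the paper surveys, here is a minimal sketch using only the Python standard library: a knowledge graph stored as subject-predicate-object triples, with a simple rule that derives new edges by closing one predicate under transitivity. The entities and the `locatedIn` predicate are illustrative assumptions, not an API from the paper.

```python
# A toy knowledge graph as subject-predicate-object triples.
triples = {
    ("Santiago", "locatedIn", "Chile"),
    ("Chile", "locatedIn", "SouthAmerica"),
}

def infer_transitive(triples, predicate):
    """Deductively close the graph under transitivity of one predicate."""
    derived = set(triples)
    changed = True
    while changed:
        changed = False
        for (s1, p1, o1) in list(derived):
            for (s2, p2, o2) in list(derived):
                if p1 == p2 == predicate and o1 == s2:
                    new_triple = (s1, predicate, o2)
                    if new_triple not in derived:
                        derived.add(new_triple)
                        changed = True
    return derived

closed = infer_transitive(triples, "locatedIn")
# The fact below was never stated explicitly; it is entailed by the rule.
print(("Santiago", "locatedIn", "SouthAmerica") in closed)  # True
```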


Deep Residual Dense U-Net for Resolution Enhancement in Accelerated MRI Acquisition

arXiv.org Machine Learning

A typical Magnetic Resonance Imaging (MRI) scan may take 20 to 60 minutes. Reducing MRI scan time is beneficial for both patient experience and cost considerations. Accelerated MRI scans may be achieved by acquiring less k-space data (down-sampling in k-space). However, this leads to lower resolution and aliasing artifacts in the reconstructed images. Many existing approaches attempt to reconstruct high-quality images from down-sampled k-space data, with varying complexity and performance. In recent years, deep-learning approaches have been proposed for this task, and promising results have been reported. Still, the problem remains challenging, especially because of the high-fidelity requirement in most medical applications employing reconstructed MRI images. In this work, we propose a deep-learning approach aimed at reconstructing high-quality images from accelerated MRI acquisition. Specifically, we use a Convolutional Neural Network (CNN) to learn the differences between the aliased images and the original images, employing a U-Net-like architecture. Further, a micro-architecture termed Residual Dense Block (RDB) is introduced to learn a better feature representation than the plain U-Net. Considering the peculiarity of the down-sampled k-space data, we introduce a new term to the loss function, which effectively employs the given k-space data during training to provide additional regularization on the update of the network weights. To evaluate the proposed approach, we compare it with other state-of-the-art methods. In both visual inspection and evaluation using standard metrics, the proposed approach delivers improved performance, demonstrating its potential for providing an effective solution.
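To see why down-sampling k-space produces the aliasing artifacts described above, here is a minimal NumPy sketch under stated assumptions: the synthetic image and the every-other-line sampling mask are illustrative, and the paper's actual sampling pattern, RDB architecture, and k-space loss term are not reproduced.

```python
import numpy as np

# Synthetic 2-D "image" standing in for an MRI slice.
rng = np.random.default_rng(0)
image = rng.random((128, 128))

# Full k-space: 2-D Fourier transform of the image.
kspace = np.fft.fftshift(np.fft.fft2(image))

# Accelerate by keeping only every other phase-encoding line (2x undersampling).
mask = np.zeros_like(kspace, dtype=bool)
mask[::2, :] = True
kspace_under = np.where(mask, kspace, 0)

# Zero-filled reconstruction: inverse FFT of the undersampled k-space.
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace_under)))

# The missing lines fold the image onto itself (aliasing); a network like
# the paper's residual dense U-Net would learn to remove such artifacts.
print("relative error:", np.linalg.norm(recon - image) / np.linalg.norm(image))
```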


System 2 deep learning: The next step toward artificial general intelligence

#artificialintelligence

Say you've been driving on the roads of Phoenix, Arizona, all your life, and then you move to New York. Do you need to learn driving all over again? You just have to drive a bit more cautiously and adapt yourself to the new environment. The same can't be said about deep learning algorithms, the cutting edge of artificial intelligence, which are also one of the main components of autonomous driving. Despite having propelled the field of AI forward in recent years, deep learning, and its underlying technology, deep neural networks, suffer from fundamental problems that prevent them from replicating some of the most basic functions of the human brain.


Machine Learning and VR Are Driving Prosthetics Research

#artificialintelligence

Fitting a patient for a prosthetic limb is normally a painstaking and time-consuming process. In some cases, even determining how capable a patient may be of operating a prosthetic limb before fitting one has been a problem. However, using virtual reality and reinforcement learning, researchers in North Carolina and Arizona are revealing new technologies and techniques to make prosthetic fitting more convenient for both patients and clinicians. In Charlotte, surgeons at OrthoCarolina used VR to demonstrate that patients born without hands had inborn abilities to control prosthetic hands without the prerequisite targeted muscle re-innervation surgery often required for traumatic amputee patients. In Raleigh, Chapel Hill, and Tempe, AZ, engineering professors demonstrated that a tuning algorithm based on reinforcement learning could reduce the time needed to fit a robotic knee from hours to about 10 minutes. The researchers say the breakthroughs indicate a new era of convenience and optimism may be in the offing for amputees.