AI distinguishes living eyeballs from dead ones

#artificialintelligence

It's a plot straight out of science fiction: bad guys dispose of an unlucky security guard, scoop out one of the victim's eyeballs, and hold it up to an iris scanner, fooling it into disarming a security system. As it turns out, studies show that post-mortem eyes can be used for biometric identification hours or even days after death. But if researchers at Warsaw University of Technology in Poland have their way, that might not be the case for much longer. In a paper ("Presentation Attack Detection for Cadaver Irises") published on the preprint server arXiv.org, the team proposed a neural network that can tell the difference between living irises and dead ones with 99 percent accuracy. "With increasing importance that biometric authentication gains in our daily lives, fears are increasingly common among users, regarding the possibility of unauthorized access to our data, identity, or assets after our demise," the researchers wrote.
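The summary doesn't describe the network the Warsaw team actually used, so as a rough illustration only, here is a minimal sketch of how a small convolutional classifier could be set up to separate live irises from cadaver ones. PyTorch, grayscale 128x128 iris crops, and the layer sizes are assumptions for the example, not details from the article or the paper.

```python
# Minimal sketch of a binary liveness classifier for iris images.
# NOT the Warsaw team's architecture -- an illustrative stand-in only.
import torch
import torch.nn as nn

class IrisLivenessNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # pool to a 64-dim feature vector
        )
        self.classifier = nn.Linear(64, 2)    # logits: [live, cadaver]

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

if __name__ == "__main__":
    model = IrisLivenessNet()
    dummy_batch = torch.randn(4, 1, 128, 128)  # 4 hypothetical grayscale iris crops
    print(model(dummy_batch).shape)            # torch.Size([4, 2])
```

In practice such a model would be trained on labeled live and post-mortem iris images with a standard cross-entropy loss; the 99 percent figure quoted above refers to the authors' own network and data, not to this sketch.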


Adversarial Perturbations Against Real-Time Video Classification Systems

arXiv.org Machine Learning

Recent research has demonstrated the brittleness of machine learning systems to adversarial perturbations. However, these studies have been mostly limited to perturbations of static images and, more generally, to classification tasks that do not involve temporally varying inputs. In this paper we ask: "Are adversarial perturbations possible in real-time video classification systems, and if so, what properties must they satisfy?" Such systems are used in surveillance, smart vehicles, and smart elderly care, so misclassification could be particularly harmful (e.g., a mishap at an elderly care facility may be missed). We show that accounting for temporal structure is key to generating adversarial examples in such systems. We exploit recent advances in generative adversarial network (GAN) architectures to account for temporal correlations and generate adversarial samples that can cause misclassification rates of over 80% for targeted activities. More importantly, the samples leave other activities largely unaffected, making them extremely stealthy. Finally, we find, surprisingly, that in many scenarios the same perturbation can be applied to every frame in a video clip, which makes it relatively easy for the adversary to achieve misclassification.
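The paper's actual attack is GAN-based, so the sketch below is only a simplified illustration of the final observation: a single, magnitude-bounded perturbation added identically to every frame can be optimized to push a video classifier toward a chosen label. The `video_model` callable, the clip tensor layout, and the hyperparameters are assumptions for the example, not the authors' method.

```python
# Hedged, simplified sketch of a frame-agnostic (per-clip) adversarial
# perturbation; not the paper's GAN-based generator.
import torch
import torch.nn.functional as F

def frame_agnostic_perturbation(video_model, clip, target_class,
                                epsilon=0.03, steps=50, lr=0.01):
    """clip: (1, T, C, H, W) tensor in [0, 1]; returns a single (1, 1, C, H, W)
    perturbation that is broadcast onto all T frames."""
    delta = torch.zeros(1, 1, *clip.shape[2:], requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv_clip = (clip + delta).clamp(0, 1)     # same delta added to every frame
        logits = video_model(adv_clip)            # assumed: (1, num_classes) logits
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)       # keep the perturbation small
    return delta.detach()
```

The point of the construction is the broadcast: one perturbation tensor is reused across the whole clip, which is what makes such an attack practical against a real-time stream where the adversary cannot tailor a perturbation to each frame.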


Five Providers of Computer Vision Software Named IDC Innovators

#artificialintelligence

International Data Corporation (IDC) recently published an IDC Innovators report profiling five companies that offer compelling and differentiated computer vision software. The five companies are Algolux, Deep Vision AI, Sighthound, ViSenze, and Umbo CV. Computer vision is an AI technology that allows computers to understand and label images. Use cases include video surveillance, driverless car testing, daily medical diagnostics, and monitoring the health of crops and livestock. The technology relies on pattern recognition and learning techniques, driven largely by machine learning (ML) and deep learning (DL) algorithms, that bring visual understanding capabilities to a growing variety of hardware and software applications.
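As a concrete, vendor-neutral illustration of "understanding and labeling images", the snippet below runs an off-the-shelf classifier from torchvision over a single photo. The model choice and the file name photo.jpg are assumptions for the example and have nothing to do with the five profiled companies.

```python
# Tiny example of image labeling with a pretrained classifier.
# Assumes torchvision >= 0.13 is installed and photo.jpg exists locally.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()                 # resize, crop, normalize

image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)            # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

top = probs.argmax().item()
print(weights.meta["categories"][top], float(probs[top]))  # label and confidence
```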


Global Study Finds Artificial Intelligence is Key Cybersecurity Weapon in the IoT Era

#artificialintelligence

As businesses struggle to combat increasingly sophisticated cybersecurity attacks, made harder to contain both by the vanishing IT perimeters of today's mobile and IoT era and by an acute shortage of skilled security professionals, IT security teams need both a new approach and powerful new tools to protect data and other high-value assets. Increasingly, they are looking to artificial intelligence (AI) as a key weapon to win the battle against stealthy threats inside their IT infrastructures, according to a new global research study conducted by the Ponemon Institute on behalf of Aruba, a Hewlett Packard Enterprise company. The Ponemon Institute study, entitled "Closing the IT Security Gap with Automation & AI in the Era of IoT," surveyed 4,000 security and IT professionals across the Americas, Europe, and Asia to understand what makes security deficiencies so hard to fix and what types of technologies and processes are needed to stay a step ahead of bad actors within the new threat landscape. The research revealed that, in the quest to protect data and other high-value assets, security systems incorporating machine learning and other AI-based technologies are essential for detecting and stopping attacks that target users and IoT devices.


Crime fighting robots could soon replace security guards

Daily Mail - Science & tech

At just 5ft tall, they may not seem like the most imposing security guards, but they could soon be patrolling shopping centres around the world. The Knightscope K5 robots are the creation of a Silicon Valley startup and have been specially designed for fighting crime. The company says it has just signed a deal that will see the droids roll out to shopping malls across 16 cities in the United States. The K5 crime-fighting robots come with GPS, lasers, and heat-detecting technology.