Researchers demonstrate that malware can be hidden inside AI models

#artificialintelligence

Researchers Zhi Wang, Chaoge Liu, and Xiang Cui published a paper last Monday demonstrating a new technique for slipping malware past automated detection tools--in this case, by hiding it inside a neural network. The three researchers embedded 36.9 MiB of malware into a 178 MiB AlexNet model without significantly altering the function of the model itself: the malware-embedded model classified images with near-identical accuracy, within 1% of the malware-free model. Just as importantly, squirreling the malware away inside the model broke it up in ways that prevented detection by standard antivirus engines. VirusTotal, a service that "inspects items with over 70 antivirus scanners and URL/domain blocklisting services, in addition to a myriad of tools to extract signals from the studied content," did not raise any suspicions about the malware-embedded model.
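The excerpt doesn't include the authors' embedding code, but the general idea of weight steganography, overwriting the low-order bytes of a model's float32 parameters with payload bytes so that accuracy barely moves, can be sketched in a few lines. The NumPy helpers below, the one-byte-per-weight layout, and the little-endian assumption are illustrative choices, not the paper's exact scheme.

```python
# Illustrative sketch only: hide arbitrary bytes in the least significant byte
# of each float32 weight (assumes little-endian layout). Not the paper's method.
import numpy as np

def embed_bytes(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide one payload byte in the lowest byte of each float32 weight."""
    flat = weights.astype(np.float32).ravel().copy()
    if len(payload) > flat.size:
        raise ValueError("payload too large for this weight tensor")
    raw = flat.view(np.uint8).reshape(-1, 4)                         # 4 bytes per float32
    raw[:len(payload), 0] = np.frombuffer(payload, dtype=np.uint8)   # overwrite LSBs
    return raw.reshape(-1).view(np.float32).reshape(weights.shape)

def extract_bytes(weights: np.ndarray, length: int) -> bytes:
    """Recover the hidden payload from the lowest bytes of the weights."""
    raw = weights.astype(np.float32).ravel().view(np.uint8).reshape(-1, 4)
    return raw[:length, 0].tobytes()

# Round-trip check on a random weight tensor.
w = np.random.randn(1000).astype(np.float32)
secret = b"not-actually-malware"
w_stego = embed_bytes(w, secret)
assert extract_bytes(w_stego, len(secret)) == secret
print(np.max(np.abs(w_stego - w)))  # per-weight change is tiny
```

Because only the lowest byte of each mantissa is touched, the relative change to any single weight is on the order of 1e-5, which is consistent with the reported near-identical classification accuracy.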


Researchers Demonstrate Less-than-One Shot Machine Learning

#artificialintelligence

We're accustomed to thinking that bigger is better in machine learning: if 10 samples are good, then 100 samples must be even better. However, researchers from the University of Waterloo recently demonstrated the feasibility of "less than one-shot" learning, in which a model learns to identify a class even though it has never seen a single example of it. In their September paper, titled "'Less Than One'-Shot Learning: Learning N Classes From M < N Samples," researchers Ilia Sucholutsky and Matthias Schonlau explain how they created a machine learning model that can learn to classify when trained with fewer than one example per class. For example, consider an alien zoologist who lands on Earth and is instructed to capture a unicorn.
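The trick relies on soft labels: a handful of carefully placed prototype points, each labeled with a probability distribution over all classes rather than a single hard label, can induce more decision regions than there are prototypes. Below is a minimal sketch of that idea with two prototypes and three classes; the prototype positions, soft labels, and distance-weighted rule are illustrative assumptions, not the paper's exact construction.

```python
# Illustrative sketch: two soft-labeled prototypes yield THREE decision regions.
import numpy as np

# Two prototypes on a line, each with a soft label over classes {A, B, C}.
prototypes = np.array([[0.0], [1.0]])
soft_labels = np.array([
    [0.6, 0.4, 0.0],   # prototype 0: mostly class A, some B
    [0.0, 0.4, 0.6],   # prototype 1: mostly class C, some B
])

def predict(x, prototypes, soft_labels, eps=1e-9):
    """Distance-weighted combination of the prototypes' soft labels."""
    d = np.linalg.norm(prototypes - x, axis=1) + eps
    w = 1.0 / d                                        # closer prototypes weigh more
    probs = (w[:, None] * soft_labels).sum(axis=0) / w.sum()
    return "ABC"[int(np.argmax(probs))]

for x in ([-0.2], [0.5], [1.2]):
    print(x[0], "->", predict(np.array(x), prototypes, soft_labels))
# Points near 0 come out as A, points near 1 as C, and the region in the
# middle as B: three classes learned from only two soft-labeled examples.
```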


Researchers demonstrate that Google's cloud video AI is easily duped

#artificialintelligence

It's not yet possible for an artificial intelligence to properly classify videos based only on their content, and so we need to keep using our own brains. While artificial intelligence is an incredibly important field that's growing by leaps and bounds, perhaps its most interesting lesson concerns just how good the human brain is at performing certain functions. While computers might be better at performing math and looking dozens of chess moves into the future, they can't yet compete with the human brain at figuring out things like a video's topic. A recent research project demonstrated just that by feeding videos to Google's Cloud Video Intelligence API and seeing whether it could determine exactly what a given video was about. Apparently, this seemingly simple task is a challenge for Google's AI and points out the difficulty of building automatic systems to categorize video, as Motherboard reports. The research team in question works at the University of Washington, and it used some trickery to see how smart the Google API really is.
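The excerpt cuts off before describing the trickery, but the attack Motherboard covered reportedly worked by splicing an unrelated still image into the video at regular intervals, which was enough to make the API describe the video as being about the inserted image. A rough sketch of that kind of frame insertion is below; the file names, the two-second interval, and the use of OpenCV are assumptions for illustration, not the researchers' actual tooling.

```python
# Illustrative sketch: periodically replace frames of a video with an unrelated
# still image so a content classifier keys on the inserted image.
import cv2

def insert_image_periodically(src_path, img_path, dst_path, period_s=2.0):
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    decoy = cv2.resize(cv2.imread(img_path), (w, h))   # unrelated still image

    step = max(int(round(fps * period_s)), 1)          # one decoy frame per period
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        out.write(decoy if i % step == 0 else frame)
        i += 1
    cap.release()
    out.release()

# e.g. insert_image_periodically("cat_video.mp4", "car.jpg", "doctored.mp4")
```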