Software startup Z Advanced Computing, Inc. (ZAC) has received funding from the U.S. Air Force to incorporate the company's 3D image recognition technology into unmanned aerial vehicles (UAVs) and drones for aerial image and object recognition. ZAC's in-house image recognition software is based on Explainable AI (XAI), in which computer-generated image results can be understood by human experts. ZAC, based in Potomac, Maryland, is the first to demonstrate XAI in which various attributes and details of 3D objects can be recognized from any view or angle. "With our superior approach, complex 3D objects can be recognized from any direction, using only a small number of training samples," says Dr. Saied Tadayon, CTO of ZAC. "You cannot do this with the other techniques, such as deep Convolutional Neural Networks (CNNs), even with an extremely large number of training samples. That's basically hitting the limits of the CNNs," adds Dr. Bijan Tadayon, CEO of ZAC.
It's hard to remember the days when artificial intelligence seemed like an intangible, futuristic concept. This has been decades in the making, however, and the past 90 years have seen both renaissances and winters for the field of study. At present, AI is launching a persistent infiltration into our personal lives with the rise of self-driving cars and intelligent personal assistants. In the enterprise, we likewise see AI rearing its head in adaptive marketing and cybersecurity. The rise of AI is exciting, but people often throw the term around in an attempt to win buzzword bingo, rather than to accurately reflect technological capabilities.
Cybersecurity was the virtual elephant in the showroom at this month's Consumer Electronics Show in Las Vegas. Attendees of the annual tech trade show, organized by the Consumer Technology Association, relished the opportunity to experience a future filled with delivery drones, autonomous vehicles, virtual and augmented reality, and a plethora of "Internet of things" devices, including fridges, wearables, televisions, routers, speakers, washing machines and even robot home assistants. Given the proliferation of connected devices (already estimated at 6.4 billion or more), there remains the critical question of how to ensure their security. The cybersecurity challenge posed by the internet of things is unique: the scale of connected devices magnifies the consequences of insecurity.
"All you have to do is look at the attacks that have taken place recently, WannaCry, NotPetya and others, and see how quickly the industry and government are coming out and assigning responsibility to nation states such as North Korea, Russia and Iran," said Dmitri Alperovitch, chief technology officer at CrowdStrike Inc., a cybersecurity company that has investigated a number of state-sponsored hacks. The White House and other countries took roughly six months to blame North Korea and Russia for the WannaCry and NotPetya attacks, respectively, while it took about three years for U.S. authorities to indict a North Korean hacker for the 2014 attack against Sony. Forensic systems are gathering and analyzing vast amounts of data from digital databases and registries to glean clues about an attacker's infrastructure. These clues, which may include obfuscation techniques and domain names used for hacking, can add up to what amounts to a unique footprint, said Chris Bell, chief executive of Diskin Advanced Technologies, a startup that uses machine learning to attribute cyberattacks. Additionally, the increasing amount of data related to cyberattacks, including virus signatures, the time of day the attack took place, IP addresses and domain names, makes it easier for investigators to track organized hacking groups and draw conclusions about them.
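To make the attribution idea concrete, here is a minimal, purely illustrative sketch of how indicators such as domain names, tooling fingerprints and working hours could be matched against profiles of known groups. The group names, indicator strings and similarity threshold are all hypothetical assumptions, not details from the firms mentioned above, and a real system would use far richer features and models.

```python
# Hypothetical sketch: attribute an attack by comparing its observed
# indicators against indicator profiles of known hacking groups.
# All names and indicator values below are made up for illustration.

def jaccard(a, b):
    """Jaccard similarity between two indicator sets (0.0 to 1.0)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Assumed indicator profiles for two fictional groups.
KNOWN_GROUPS = {
    "group_a": {"evil-cdn.example", "10.0.0.0/8", "packer_x", "utc+9_hours"},
    "group_b": {"drop-zone.example", "198.51.100.0/24", "loader_y", "utc+3_hours"},
}

def attribute(observed, threshold=0.3):
    """Return the best-matching group, or None if no match clears the threshold."""
    scores = {name: jaccard(observed, profile)
              for name, profile in KNOWN_GROUPS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

# A new attack sharing three indicators with group_a's profile.
new_attack = {"evil-cdn.example", "packer_x", "utc+9_hours", "new-domain.example"}
print(attribute(new_attack))  # group_a
```

The point of the sketch is the footprint idea from the article: no single indicator is conclusive, but overlapping sets of indicators accumulate into a signature that can be scored against known actors.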
Several years ago, there were reports that an IBM artificial intelligence (AI) project had mimicked the brain of a cat. Being the smartass that I am, I responded on Twitter with, "You mean it spends 18 hours a day in sleep mode?" That report was later debunked, but the effort to simulate the brain continues, using new types of processors far faster and more brain-like than your standard x86 processor. IBM and the U.S. Air Force have announced one such project, while Google has its own. What Google is proposing is a template for how to create a single machine learning model that can address multiple tasks.
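As a rough illustration of the "single model, multiple tasks" template, one common pattern is a shared trunk that produces a common representation, with a small task-specific head per task. The sketch below is an assumption about the general pattern, not Google's actual architecture; the task names, layer sizes and weights are invented.

```python
# Illustrative sketch of a single model serving multiple tasks:
# a shared trunk plus one small output head per task.
# Architecture and numbers are hypothetical, not Google's design.
import numpy as np

rng = np.random.default_rng(0)

SHARED_W = rng.normal(size=(8, 4))          # trunk weights shared by all tasks
HEADS = {
    "sentiment": rng.normal(size=(4, 2)),   # 2-class head
    "topic":     rng.normal(size=(4, 5)),   # 5-class head
}

def forward(x, task):
    """Run input through the shared trunk, then the chosen task's head."""
    hidden = np.tanh(x @ SHARED_W)          # shared representation
    return hidden @ HEADS[task]             # task-specific logits

x = rng.normal(size=(8,))
print(forward(x, "sentiment").shape)  # (2,)
print(forward(x, "topic").shape)      # (5,)
```

The design choice being illustrated is parameter sharing: the trunk learns features useful across tasks, while each head stays cheap, so adding a task does not mean training a whole new model.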