Z Advanced Computing, Inc. (ZAC) of Potomac, MD announced on August 27 that it has received funding from the U.S. Air Force to apply ZAC's detailed 3D image recognition technology, based on Explainable AI (XAI), to drones (unmanned aerial vehicles, or UAVs) for aerial image and object recognition. With XAI, computer-generated image results can be understood by human experts. ZAC is the first to demonstrate XAI in which various attributes and details of 3D (three-dimensional) objects can be recognized from any view or angle. "With our superior approach, complex 3D objects can be recognized from any direction, using only a small number of training samples," said Dr. Saied Tadayon, CTO of ZAC. "For complex tasks, such as drone vision, you need ZAC's superior technology to handle detailed 3D image recognition." "You cannot do this with the other techniques, such as deep Convolutional Neural Networks (CNNs), even with an extremely large number of training samples. That's basically hitting the limits of the CNNs," continued Dr. Bijan Tadayon, CEO of ZAC.
Our brains are wired so that we can differentiate between objects, both living and non-living, simply by looking at them. In fact, recognizing objects and situations visually is the fastest way to gather, as well as to relate, information. This is a far bigger challenge for computers, which must be fed vast amounts of data before they can perform such an operation on their own. Meanwhile, with each passing day it is becoming more essential for machines to identify objects and recognize faces, so that humans can take the next big step toward a more scientifically advanced society. So, what progress have we really made in that respect?
More than two billion images are shared daily on social networks alone. Research suggests it would take a person ten years to look at all the photos shared on Snapchat in the last hour. Media buyers and providers struggle to organize relevant content into groups, parse the components of images and videos, and efficiently measure the return on investment from generated content. NVIDIA has many customers and ecosystem partners tackling that problem, using NVIDIA DGX as their preferred platform for deep learning (DL) powered image recognition. One notable name in that ecosystem is Imagga, a pioneer in deep learning powered image recognition and image processing solutions, built on NVIDIA DGX Station, the world's first personal AI supercomputer.
This blog post discusses how to turn your images into text describing what is in them so you can later perform analysis on their contents and topics, all right out of a Jupyter Notebook. An example of when this would be useful is if you are given thousands of tweets, and want to know if the image media has any effect on engagement. Lucky for us, instead of writing our own image recognition tool, the engineers at Amazon, Google, and Microsoft completed this task and made their APIs accessible. Here we'll be using Rekognition, Amazon's deep learning-based image and video analysis tool.
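As a minimal sketch of that workflow, the snippet below sends an image stored in S3 to Rekognition's `detect_labels` API via boto3 and flattens the response into a list of label strings ready for text analysis. It assumes AWS credentials are already configured; the bucket and key names are placeholders, and the confidence threshold is an illustrative choice, not a Rekognition default.

```python
def labels_to_text(response, min_confidence=80.0):
    """Flatten a Rekognition detect_labels response into a list of
    label names, keeping only labels above a confidence threshold."""
    return [
        label["Name"]
        for label in response.get("Labels", [])
        if label["Confidence"] >= min_confidence
    ]

def describe_image(bucket, key, max_labels=10):
    """Ask Rekognition to label an image stored in S3 and return the
    label names as plain text tokens."""
    import boto3  # imported here so the parsing helper works without boto3 installed

    client = boto3.client("rekognition")
    response = client.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MaxLabels=max_labels,
    )
    return labels_to_text(response)

# Hypothetical usage (placeholder bucket/key):
# print(describe_image("my-tweet-media-bucket", "images/tweet_001.jpg"))
```

From there, the returned label lists can be joined into documents and analyzed like any other text, e.g. to test whether certain image topics correlate with tweet engagement.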