Z Advanced Computing, Inc. (ZAC), a pioneer startup in Explainable AI (XAI), is developing its Smart Home product line through a paid pilot on Smart Appliances for BSH Home Appliances (a subsidiary of the Bosch Group, originally a joint venture between Bosch and Siemens), the largest manufacturer of home appliances in Europe and one of the largest in the world. ZAC has just successfully finished Phase 1 of the pilot program. "Our cognitive-based algorithm is more robust, resilient, consistent, and reproducible, with higher accuracy, than the Convolutional Neural Nets (CNNs) or GANs that others are using now. It also requires a much smaller number of training samples than CNNs, which is a huge advantage," said Dr. Saied Tadayon, CTO of ZAC. "We did the entire work on a regular laptop, for both training and recognition, without any dedicated GPU. So, our computing requirement is much smaller than that of a typical Neural Net, which requires a dedicated GPU," continued Dr. Bijan Tadayon, CEO of ZAC.
Face and Image Recognition is not only about security and surveillance or controlling the quality of industrial production processes. The technology is proving increasingly impactful in the fashion and beauty industries, generating exciting opportunities for manufacturers and consumers alike. While Face and Image Recognition is an AI frontrunner in security, agriculture, and industrial QA, the technology's business uses beyond these three realms remain far less known. As a result, many businesses in other industries have barely given any thought to employing Image Recognition as a means of attaining better capabilities and achieving higher levels of quality and profitability. Meanwhile, the Image Recognition-inspired and -enabled opportunities that have been cropping up of late elsewhere can hardly be ignored, and deserve the attention of a much wider audience.
When we are faced with challenging image classification tasks, we often explain our reasoning by dissecting the image and pointing out prototypical aspects of one class or another. The mounting evidence for each of the classes helps us make our final decision. In this work, we introduce a deep network architecture, the prototypical part network (ProtoPNet), that reasons in a similar way: the network dissects the image by finding prototypical parts, and combines evidence from the prototypes to make a final classification. The model thus reasons in a way that is qualitatively similar to the way ornithologists, physicians, and others would explain how to solve challenging image classification tasks. The network uses only image-level labels for training, without any annotations for parts of images.
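The scoring scheme described above, prototypes matched against image patches with the evidence combined linearly into class scores, can be sketched in a few lines of NumPy. This is a minimal illustration under simplifying assumptions, not the paper's implementation; the function name and the tiny shapes are ours:

```python
import numpy as np

def prototype_logits(feature_map, prototypes, class_weights):
    """Score an image the way a ProtoPNet-style model does, in miniature.

    feature_map   : (H, W, D) conv features for one image
    prototypes    : (P, D) learned prototypical part vectors
    class_weights : (C, P) weights connecting prototype evidence to classes

    For each prototype, find the patch in the feature map it matches best
    (smallest squared L2 distance), turn that distance into a similarity
    score, then combine the per-prototype evidence linearly into logits.
    """
    H, W, D = feature_map.shape
    patches = feature_map.reshape(H * W, D)          # every location is a candidate part
    # (P, H*W) squared distances between each prototype and each patch
    d2 = ((prototypes[:, None, :] - patches[None, :, :]) ** 2).sum(-1)
    min_d2 = d2.min(axis=1)                          # best-matching patch per prototype
    similarity = np.log((min_d2 + 1.0) / (min_d2 + 1e-4))
    return class_weights @ similarity                # (C,) class logits
```

A prototype that matches a patch exactly yields a near-zero distance and hence a large similarity, so the evidence for its class dominates the logits.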
Parametric spatial transformation models have been successfully applied to image registration tasks. In such models, the transformation of interest is parameterized by a fixed set of basis functions, such as B-splines. Each basis function is placed at a fixed position on a regular grid spanning the image domain, because the transformation of interest is not known in advance. As a consequence, not all basis functions will necessarily contribute to the final transformation, which results in a non-compact representation of the transformation. Our method instead builds the transformation as a sequence of local deformations: for each element in the sequence, a local deformation defined by its position, shape, and weight is computed by our recurrent registration neural network.
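As an illustration of such a parametric model, here is a minimal 1D sketch (our own toy code, not from the paper) of a displacement field parameterized by cubic B-spline basis functions on a fixed regular grid. Note that a coefficient is stored for every grid node even when it contributes nothing, which is the non-compactness described above:

```python
import numpy as np

def cubic_bspline(t):
    """Cubic B-spline kernel; nonzero only on |t| < 2."""
    t = np.abs(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    m1 = t < 1
    m2 = (t >= 1) & (t < 2)
    out[m1] = (4 - 6 * t[m1] ** 2 + 3 * t[m1] ** 3) / 6
    out[m2] = (2 - t[m2]) ** 3 / 6
    return out

def displacement(x, grid, coeffs):
    """Evaluate a B-spline-parameterized 1D displacement field.

    x      : query points
    grid   : fixed, regularly spaced control-point positions
    coeffs : one coefficient per control point; zero coefficients still
             occupy storage, hence the non-compact representation.
    """
    h = grid[1] - grid[0]  # grid spacing
    # superpose the basis functions centered on each grid node
    return sum(c * cubic_bspline((x - g) / h) for g, c in zip(grid, coeffs))
```

Because each basis function has compact support, a single nonzero coefficient deforms only its local neighborhood, while distant points are unaffected.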
This paper concerns the underdetermined problem of estimating the geometric transformation between image pairs. Recent methods introduce deep neural networks to predict the controlling parameters of hand-crafted geometric transformation models. However, such low-dimensional parametric models have limited flexibility and are incapable of estimating the highly complex geometric deformations found in real image pairs. To address this issue, we present an end-to-end trainable deep neural network, named Arbitrary Continuous Geometric Transformation Networks (Arbicon-Net), to directly predict the dense displacement field for pairwise image alignment. Arbicon-Net generalizes from training data to predict the desired arbitrary continuous geometric transformation, in a data-driven manner, for unseen pairs of images.
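The output representation here is a dense displacement field: one 2D vector per pixel rather than a handful of global model parameters. A minimal sketch (our own toy code, not the authors') of applying such a field to warp an image, using nearest-neighbor sampling for simplicity:

```python
import numpy as np

def warp(image, flow):
    """Warp an image with a dense displacement field.

    image : (H, W) source image
    flow  : (2, H, W) per-pixel displacements (dy, dx); a dense field
            like this is what a network such as Arbicon-Net predicts,
            instead of a few global transformation parameters.
    """
    H, W = image.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # sample the source image at the displaced coordinates,
    # clamping to the image border
    sy = np.clip(np.round(ys + flow[0]).astype(int), 0, H - 1)
    sx = np.clip(np.round(xs + flow[1]).astype(int), 0, W - 1)
    return image[sy, sx]
```

A constant field reproduces a global translation, but because every pixel carries its own vector, the same representation can express arbitrarily complex local deformations.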
Researchers at TU Wien (Vienna) have developed an ultra-fast image sensor with a built-in neural network; the sensor can be trained to recognize certain objects. They describe their work on ultrafast machine vision in a paper in Nature. Machine vision technology has taken huge leaps in recent years, and is now becoming an integral part of various intelligent systems, including autonomous vehicles and robotics. Usually, visual information is captured by a frame-based camera, converted into a digital format and processed afterwards using a machine-learning algorithm such as an artificial neural network (ANN). The large amount of (mostly redundant) data passed through the entire signal chain, however, results in low frame rates and high power consumption.
Waymo, the self-driving technology company, just came out with the modestly named Content Search, but it could have huge implications for advancing autonomous vehicle technology. Waymo's new Content Search tool allows engineers to catalogue and find billions of images. As explained on its blog, Waymo and Google Research, both divisions of parent company Alphabet, collaborated to create Content Search. By leveraging search technology similar to what powers Google Photos and Google Images, Waymo engineers can now quickly locate just about any object stored in Waymo's driving history and logs, gathered over 20 million miles of on-road data collection. In essence, Content Search turns all these objects into a searchable catalogue, accurately tracking billions of images.
Over the years, the basic retail experience has remained more or less the same for consumers. You go to a store, you look for the right product, and you make a purchase. For retailers, however, it is ever-changing. Analyzing consumer behavior is one of the biggest challenges that consumer packaged goods (CPG) companies all around the world face. With increasing complexity, traditional auditing methods have proved inefficient.
The news: A new type of artificial eye, made by combining light-sensing electronics with a neural network on a single tiny chip, can make sense of what it's seeing in just a few nanoseconds, far faster than existing image sensors. Why it matters: Computer vision is integral to many applications of AI, from driverless cars to industrial robots to smart sensors that act as our eyes in remote locations, and machines have become very good at responding to what they see. But most image recognition needs a lot of computing power to work. Part of the problem is a bottleneck at the heart of traditional sensors, which capture a huge amount of visual data, regardless of whether or not it is useful for classifying an image. Crunching all that data slows things down.
Recent advancements in artificial intelligence and machine learning have contributed hugely to the growth of Image Recognition and Object Detection in retail. While the terms Image Recognition and Object Detection are often used interchangeably, they are two different techniques. Image Recognition is the process of analyzing an input image and predicting its category (also called a class label) from a set of categories. For instance, consider an automatic store checkout scenario: the user displays an SKU in front of a camera that is powered by Image Recognition software.
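As a toy illustration of that checkout scenario (entirely hypothetical, not any vendor's software), the classification step can be sketched as a nearest-centroid classifier over image feature vectors: each SKU class is summarized by the mean feature vector of its training images, and a new image is assigned the label of the closest centroid:

```python
import numpy as np

class SKUClassifier:
    """Toy image-recognition step for an automatic checkout.

    Hypothetical sketch: the class name, the feature vectors, and the
    nearest-centroid rule are all simplifying assumptions, standing in
    for a real image-recognition model.
    """

    def __init__(self):
        self.centroids = {}

    def fit(self, features, labels):
        # one centroid per SKU class: the mean of its training features
        for label in set(labels):
            rows = [f for f, l in zip(features, labels) if l == label]
            self.centroids[label] = np.mean(rows, axis=0)

    def predict(self, feature):
        # class label of the nearest centroid
        return min(self.centroids,
                   key=lambda l: np.linalg.norm(feature - self.centroids[l]))
```

In a real deployment the feature vectors would come from a trained network rather than being hand-supplied, but the final step, mapping an input image to one class label out of a fixed set, is exactly the Image Recognition task described above.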