

Object Detection with No Data Thanks to Template Matching

#artificialintelligence

How to implement custom object detection with template matching. Today, state-of-the-art object detection algorithms (algorithms that aim to detect objects in pictures) use neural networks such as Yolov4. Template matching is a technique in digital image processing for finding the small parts of an image that match a template image. It is a much simpler way to conduct object detection than a neural network. In my experience, combining a neural network like Yolov4 with template-matching-based object detection is a good way to considerably improve your neural network's performance! When you use OpenCV template matching, your template slides pixel by pixel across your image.
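The sliding-window scoring the article describes can be sketched in plain NumPy. This is a toy reimplementation for illustration only; in practice you would call OpenCV's `cv2.matchTemplate`, which computes the same normalized cross-correlation far faster. The scene and patch below are synthetic assumptions, not data from the article.

```python
import numpy as np

def match_template(image, template):
    """Score every top-left placement of the template with normalized
    cross-correlation -- the same idea OpenCV's cv2.matchTemplate
    implements (its TM_CCOEFF_NORMED method), just without the speed."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    scores = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            window = image[y:y + th, x:x + tw]
            wc = window - window.mean()
            denom = t_norm * np.sqrt((wc ** 2).sum())
            scores[y, x] = (wc * t).sum() / denom if denom > 0 else 0.0
    return scores

# Synthetic scene: a patterned patch pasted at row 10, column 22.
scene = np.zeros((40, 40))
patch = np.arange(64, dtype=float).reshape(8, 8)
scene[10:18, 22:30] = patch
scores = match_template(scene, patch)
best = np.unravel_index(scores.argmax(), scores.shape)  # (10, 22)
```

Thresholding the score map instead of taking only the maximum gives every location that matches the template, which is how template matching detects multiple object instances.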


Here is why Face and Image Recognition Gaining Prominence

#artificialintelligence

Do you remember watching crime shows where investigating teams hired sketch artists to draw the face of a criminal described by witnesses? They would then hunt for the person to lock him up. But one might wonder: are these tactics still common in detecting crime or criminals today? With the rise of Artificial Intelligence-enabled face and image recognition technologies, the days of sketching criminals are long gone. The process of identifying or verifying a person's identity from their face has made investigations a lot easier today.


Computer Vision and Image Analytics

#artificialintelligence

Over the past few months, I've been working on a fascinating project with one of the world's largest pharmaceutical companies, applying SAS Viya computer vision to help identify potential quality issues on the production line as part of the validated inspection process. Since the application of these types of AI and ML techniques is of real interest to many high-tech manufacturing organisations as part of their Manufacturing 4.0 initiatives, I thought I'd take the opportunity to share my experiences with a wider audience, so I hope you enjoy this blog post. For obvious reasons, I can't share specifics of the organisation or product, so please don't ask me to. But I hope you find this article interesting and informative, and if you would like to know more about the techniques, please feel free to contact me. Quality inspections are a key part of the manufacturing process, and while many of these inspections can be automated using a range of techniques, tests and measurements, some issues are still best identified by the human eye.


Deep Dive into Computer Vision with Neural Networks – Part 2

#artificialintelligence

Machine vision, or computer vision, is a popular research topic in artificial intelligence (AI) that has been around for many years. However, machine vision remains one of the biggest challenges in AI. In this article, we will explore the use of deep neural networks to address some of the fundamental challenges of computer vision. In particular, we will look at applications such as network compression, fine-grained image classification, captioning, texture synthesis, image search, and object tracking. Texture synthesis is used to generate a larger image that contains the same texture as a given sample.
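A standard building block in neural texture synthesis (as in Gatys-style methods, named here as general background rather than anything this article specifies) is the Gram matrix of feature maps, which summarizes a texture by the pairwise correlations between channels while discarding spatial layout. A minimal sketch, with a made-up activation tensor:

```python
import numpy as np

def gram_matrix(features):
    """Channel-by-channel correlation matrix of a (channels, height, width)
    activation tensor; matching Gram matrices across network layers is one
    common objective for synthesizing a larger image with the same texture."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return (flat @ flat.T) / (h * w)

rng = np.random.default_rng(1)
tex = rng.normal(size=(3, 8, 8))  # stand-in for CNN feature maps
g = gram_matrix(tex)              # 3 x 3 symmetric texture descriptor
```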


Interpretable Image Recognition with Hierarchical Prototypes

arXiv.org Machine Learning

Vision models are interpretable when they classify objects on the basis of features that a person can directly understand. Recently, methods relying on visual feature prototypes have been developed for this purpose. However, in contrast to how humans categorize objects, these approaches have not yet made use of any taxonomical organization of class labels. With such an approach, for instance, we may see why a chimpanzee is classified as a chimpanzee, but not why it was considered to be a primate or even an animal. In this work we introduce a model that uses hierarchically organized prototypes to classify objects at every level in a predefined taxonomy. Hence, we may find distinct explanations for the prediction an image receives at each level of the taxonomy. The hierarchical prototypes enable the model to perform another important task: interpretably classifying images from previously unseen classes at the level of the taxonomy to which they correctly relate, e.g., classifying a handgun as a weapon when the only weapons in the training data are rifles. With a subset of ImageNet, we test our model against its counterpart black-box model on two tasks: 1) classification of data from familiar classes, and 2) classification of data from previously unseen classes at the appropriate level in the taxonomy. We find that our model performs approximately as well as its counterpart black-box model while allowing for each classification to be interpreted.
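The core idea of per-level prototype classification can be sketched as follows. This is a toy illustration, not the paper's model: the taxonomy, prototype vectors, and feature vector are all made up, and a real system would learn prototypes in a deep feature space.

```python
import numpy as np

# One prototype vector per taxonomy node (hypothetical values).
taxonomy = {"animal": ["primate", "bird"], "primate": ["chimpanzee", "gorilla"]}
prototypes = {
    "primate": np.array([1.0, 1.0]),
    "bird": np.array([-1.0, 1.0]),
    "chimpanzee": np.array([1.0, 2.0]),
    "gorilla": np.array([2.0, 1.0]),
}

def classify_levels(feature, root="animal"):
    """Return one label per taxonomy level by walking to the nearest child
    prototype at each step, so every level's decision can be explained
    ("it is a primate because it lies nearest the primate prototype")."""
    path = [root]
    node = root
    while node in taxonomy:
        node = min(taxonomy[node],
                   key=lambda c: np.linalg.norm(feature - prototypes[c]))
        path.append(node)
    return path

path = classify_levels(np.array([1.0, 1.9]))  # ["animal", "primate", "chimpanzee"]
```

An unseen leaf class would still travel the upper levels correctly whenever it lies near the right mid-level prototype, which is the mechanism behind classifying a handgun as a weapon without handguns in the training data.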


Kitov.ai: Smart Visual Inspection to Enhance Manufacturing...

#artificialintelligence

To assist manufacturers in performing automated visual inspection, Kitov.ai has developed a smart visual inspection technology for a broad range of production lines. Israel-based Kitov.ai has built an end-to-end, fully automated 3D inspection system powered by artificial intelligence and deep learning that enables manufacturers to rapidly produce quality products at a low cost. In an interview with CIO Applications, Hanan Gino, CEO of Kitov.ai, gives an overview of the company: Kitov.ai was founded in late 2014 by CTO and Founder Dr. Yossi Rubner as a spin-off of RTC Vision, a company that has been developing advanced computer vision algorithms for leading companies for over a decade.


Zero-shot Image Recognition Using Relational Matching, Adaptation and Calibration

arXiv.org Artificial Intelligence

Zero-shot learning (ZSL) for image classification focuses on recognizing novel categories that have no labeled data available for training. The learning is generally carried out with the help of mid-level semantic descriptors associated with each class. This semantic-descriptor space is generally shared by both seen and unseen categories. However, ZSL suffers from hubness, domain discrepancy and bias towards seen classes. To tackle these problems, we propose a three-step approach to zero-shot learning. Firstly, a mapping is learned from the semantic-descriptor space to the image-feature space. This mapping learns to minimize both one-to-one and pairwise distances between semantic embeddings and the image features of the corresponding classes. Secondly, we propose test-time domain adaptation to adapt the semantic embedding of the unseen classes to the test data. This is achieved by finding correspondences between the semantic descriptors and the image features. Thirdly, we propose scaled calibration on the classification scores of the seen classes. This is necessary because the ZSL model is biased towards seen classes, as the unseen classes are not used in the training. Finally, to validate the proposed three-step approach, we performed experiments on four benchmark datasets where the proposed method outperformed previous results. We also studied and analyzed the performance of each component of our proposed ZSL framework.
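The first step, mapping semantic descriptors to image features, can be sketched with a simple ridge-regularized linear map. This is a hedged illustration of that one step only, on synthetic data; the paper's full method additionally minimizes pairwise distances and adds test-time adaptation and score calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Seen classes: semantic descriptors S (5 classes x 4 attributes) and the
# mean image feature of each class F (5 classes x 6 dims). The "true" map
# W_true exists only to generate consistent synthetic data.
S = rng.normal(size=(5, 4))
W_true = rng.normal(size=(4, 6))
F = S @ W_true

# Ridge-regularized least squares: fit the semantic-to-feature mapping.
lam = 1e-3
W = np.linalg.solve(S.T @ S + lam * np.eye(4), S.T @ F)

# Unseen classes: project their descriptors into image-feature space...
S_unseen = rng.normal(size=(3, 4))
F_unseen = S_unseen @ W

# ...and label a test image by its nearest unseen-class embedding.
test_feature = S_unseen[1] @ W_true  # an image from unseen class 1
pred = np.argmin(np.linalg.norm(F_unseen - test_feature, axis=1))
```

Because no unseen-class images are used to fit `W`, scores for unseen classes tend to be systematically lower than for seen ones, which is the bias the paper's calibration step corrects.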


Graph-RISE: Graph-Regularized Image Semantic Embedding

arXiv.org Machine Learning

Learning image representations to capture fine-grained semantics has been a challenging and important task enabling many applications such as image search and clustering. In this paper, we present Graph-Regularized Image Semantic Embedding (Graph-RISE), a large-scale neural graph learning framework that allows us to train embeddings to discriminate among an unprecedented O(40M) ultra-fine-grained semantic labels. Graph-RISE outperforms state-of-the-art image embedding algorithms on several evaluation tasks, including image classification and triplet ranking. We provide case studies to demonstrate that, qualitatively, image retrieval based on Graph-RISE effectively captures semantics and, compared to the state-of-the-art, differentiates nuances at levels that are closer to human perception.
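The triplet-ranking evaluation mentioned above rests on a standard objective: an anchor embedding should sit closer to a positive (semantically similar) image than to a negative one, by at least a margin. A minimal sketch with made-up vectors, not tied to Graph-RISE's actual training setup:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge on the gap between anchor-positive and anchor-negative
    distances; zero once the ranking is satisfied by the margin."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([1.0, 0.0])
p = np.array([0.9, 0.1])   # semantically similar image
n = np.array([-1.0, 0.5])  # unrelated image
loss = triplet_loss(a, p, n)  # 0.0: ranking already satisfied
```

Swapping the positive and negative produces a large loss, which is the gradient signal that pulls same-label images together in embedding space.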


IBM created software using NYPD images that can search for people by SKIN COLOR, report claims

Daily Mail - Science & tech

From 2012 to 2016, the New York City Police Department supplied IBM with thousands of surveillance images of unaware New Yorkers for the development of software that could help track down people 'of interest,' a shocking report claims. IBM's technology was designed to match stills of individuals with specific physical characteristics, including clothing color, age, gender, hair color, and even skin tone, according to The Intercept. Internal documents and sources involved with the program cited by the report reveal that IBM released an early iteration of its video analytics software by 2013, before improving its capabilities over the following years. The report adds to growing concerns about the potential for racial profiling with advanced surveillance technology. According to the investigation by The Intercept and the Investigative Fund, the NYPD did not end up using IBM's analytics program as part of its larger surveillance system, and discontinued it by 2016.


A beginner's guide to AI: Computer vision and image recognition

#artificialintelligence

This is the second story in our continuing series covering the basics of artificial intelligence. While it isn't necessary to read the first article, which covers neural networks, doing so may add to your understanding of the topics covered in this one. Teaching a computer how to 'see' is no small feat. You can slap a camera on a PC, but that won't give it sight. For a machine to actually view the world as people or animals do, it relies on computer vision and image recognition.