Pattern Recognition: Overviews


The FBI Says Its Photo Analysis is Scientific Evidence. Scientists Disagree.

Mother Jones

This story was originally published by ProPublica. At the FBI Laboratory in Quantico, Virginia, a team of about a half-dozen technicians analyzes pictures down to their pixels, trying to determine if the faces, hands, clothes or cars of suspects match images collected by investigators from cameras at crime scenes. The unit specializes in visual evidence and facial identification, and its examiners can aid investigations by making images sharper, revealing key details in a crime or ruling out potential suspects. But the work of image examiners has never had a strong scientific foundation, and the FBI's endorsement of the unit's findings as trial evidence troubles many experts and raises anew questions about the role of the FBI Laboratory as a standard-setter in forensic science. FBI examiners have tied defendants to crime pictures in thousands of cases over the past half-century using unproven techniques, at times giving jurors baseless statistics to say the risk of error was vanishingly small. Much of the legal foundation for the unit's work is rooted in a 22-year-old comparison of bluejeans. Studies on several photo comparison techniques, conducted over the last decade by the FBI and outside scientists, have found they are not reliable. Since those studies were published, there's no indication that lab officials have checked past casework for errors or inaccurate testimony. Image examiners continue to use disputed methods in an array of cases to bolster prosecutions against people accused of robberies, murder, sex crimes and terrorism. The work of image examiners is a type of pattern analysis, a category of forensic science that has repeatedly led to misidentifications at the FBI and other crime laboratories. Before the discovery of DNA identification methods in the 1980s, most of the bureau's lab worked in pattern matching, which involves comparing features from items of evidence to the suspect's body and belongings. Examiners had long testified in court that they could determine what fingertip left a print, what gun fired a bullet, which scalp grew a hair "to the exclusion of all others." Research and exonerations by DNA analysis have repeatedly disproved these claims, and the U.S. Department of Justice no longer allows technicians and scientists from the FBI and other agencies to make such unequivocal statements, according to new testimony guidelines released last year. Though image examiners rely on similarly flawed methods, they have continued to testify to and defend their exactitude, according to a review of court records and examiners' written reports and published articles.


Democrats call for a review of face recognition tech

#artificialintelligence

US lawmakers have asked the Government Accountability Office to examine how face recognition technology is being used by companies and law enforcement agencies. The questioners: A group of Democrats from both the House of Representatives and the Senate sent a letter to the GAO asking it to examine which agencies are using the technology and what safeguards the industry has in place. Some form of government regulation could eventually be imposed. Eye spies: There is growing concern that unfettered use of facial recognition could enable greater government surveillance and automate discrimination. Some companies also appear concerned.


Unified Hypersphere Embedding for Speaker Recognition

arXiv.org Artificial Intelligence

Incremental improvements in the accuracy of convolutional neural networks are usually achieved through deeper and more complex models trained on larger datasets. However, enlarging datasets and models increases computation and storage costs and cannot be done indefinitely. In this work, we seek to improve the identification and verification accuracy of a text-independent speaker recognition system without extra data or deeper and more complex models, by augmenting the training and testing data, finding the optimal dimensionality of the embedding space, and using more discriminative loss functions. Results of experiments on the VoxCeleb dataset suggest that: (i) simple repetition and random time-reversion of utterances can reduce prediction errors by up to 18%; (ii) lower-dimensional embeddings are more suitable for verification; and (iii) the proposed logistic margin loss function leads to unified embeddings with state-of-the-art identification and competitive verification accuracies.
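To make (i) concrete, here is a minimal sketch of the two augmentations the abstract names, repetition and random time-reversion, applied to a raw waveform. The function name, the single repetition, and the 50% reversal probability are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def augment_utterance(waveform: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Repeat the utterance once and, with some probability, reverse it
    in time. The repetition count and the 0.5 reversal probability are
    assumptions for illustration, not values from the paper."""
    augmented = np.concatenate([waveform, waveform])  # simple repetition
    if rng.random() < 0.5:                            # random time-reversion
        augmented = augmented[::-1]
    return augmented

rng = np.random.default_rng(0)
utterance = rng.standard_normal(16000)          # 1 second of synthetic 16 kHz audio
print(augment_utterance(utterance, rng).shape)  # (32000,)
```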


OpenCV Tutorial: A Guide to Learn OpenCV - PyImageSearch

#artificialintelligence

Whether you're interested in learning how to apply facial recognition to video streams, building a complete deep learning pipeline for image classification, or simply want to tinker with your Raspberry Pi and add image recognition to a hobby project, you'll need to learn OpenCV somewhere along the way. The truth is that learning OpenCV used to be quite challenging. The documentation was hard to navigate. The tutorials were hard to follow and incomplete. And even some of the books were a bit tedious to work through. The good news is learning OpenCV isn't as hard as it used to be. And in fact, I'll go as far as to say studying OpenCV has become significantly easier. And to prove it to you (and help you learn OpenCV), I've put together this complete guide to learning the fundamentals of the OpenCV library using the Python programming language. Let's go ahead and get started learning the basics of OpenCV and image processing. By the end of today's blog post, you'll understand the fundamentals of OpenCV.
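As a taste of those fundamentals, here is a short sketch of the kind of first steps such a tutorial covers: loading an image, inspecting its dimensions, resizing, and running the classic grayscale/blur/edge-detection chain. The file names are placeholders, and the parameter values are common defaults rather than anything prescribed by the post.

```python
import cv2

# Load an image from disk (the path is a placeholder).
image = cv2.imread("example.jpg")
if image is None:
    raise FileNotFoundError("example.jpg not found")

(h, w) = image.shape[:2]
print(f"width={w}, height={h}, channels={image.shape[2]}")

# Resize to a 300-pixel width while preserving the aspect ratio.
resized = cv2.resize(image, (300, int(h * 300 / w)))

# Convert to grayscale, blur, and detect edges -- the classic
# preprocessing chain covered in most OpenCV introductions.
gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blurred, 30, 150)

cv2.imwrite("edges.jpg", edges)
```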


Mining Rank Data

arXiv.org Machine Learning

The problem of frequent pattern mining has been studied quite extensively for various types of data, including sets, sequences, and graphs. Somewhat surprisingly, another important type of data, namely rank data, has received very little attention in data mining so far. In this paper, we therefore address the problem of mining rank data, that is, data in the form of rankings (total orders) of an underlying set of items. More specifically, two types of patterns are considered: frequent rankings and dependencies between such rankings in the form of association rules. Algorithms for mining frequent rankings and frequent closed rankings are proposed and tested experimentally, using both synthetic and real data.
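As a rough illustration of what mining rank data means, the sketch below counts frequent complete rankings and frequent pairwise preferences (the smallest subrankings) in a toy dataset. The data and support thresholds are invented, and the paper's algorithms handle general subrankings and association rules rather than only these two cases.

```python
from collections import Counter
from itertools import combinations

# Toy rank data: each observation is a total order over items A, B, C.
rankings = [
    ("A", "B", "C"), ("A", "B", "C"), ("B", "A", "C"),
    ("A", "C", "B"), ("A", "B", "C"),
]
n = len(rankings)

# Frequent complete rankings: those appearing in >= 40% of the data.
counts = Counter(rankings)
print({r: c / n for r, c in counts.items() if c / n >= 0.4})
# {('A', 'B', 'C'): 0.6}

# Frequent pairwise preferences, the smallest informative subrankings.
pair_counts = Counter()
for r in rankings:
    pos = {item: i for i, item in enumerate(r)}
    for a, b in combinations(sorted(pos), 2):
        pair_counts[(a, b) if pos[a] < pos[b] else (b, a)] += 1
print({p: c / n for p, c in pair_counts.items() if c / n >= 0.6})
# {('A', 'B'): 0.8, ('A', 'C'): 1.0, ('B', 'C'): 0.8}
```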


NegPSpan: efficient extraction of negative sequential patterns with embedding constraints

arXiv.org Machine Learning

Mining frequent sequential patterns consists of extracting recurrent behaviors, modeled as patterns, from a large sequence dataset. Such patterns indicate which events are frequently observed in sequences, i.e., what really happens. Sometimes, knowing that a specific event does not happen is more informative than extracting many observed events. Negative sequential patterns (NSPs) formulate recurrent behaviors as patterns containing both observed events and absent events. Few approaches have been proposed to mine such NSPs. In addition, the syntax and semantics of NSPs differ across methods, which makes them difficult to compare. This article provides a unified framework for the formulation of the syntax and semantics of NSPs. We then introduce a new algorithm, NegPSpan, which extracts NSPs using a PrefixSpan depth-first scheme and enables maxgap constraints that other approaches do not take into account. The formal framework highlights the differences between the proposed approach and methods from the literature, especially the state-of-the-art approach eNSP. Extensive experiments on synthetic and real datasets show that NegPSpan can extract meaningful NSPs and can process larger datasets than eNSP thanks to significantly lower memory requirements and better computation times.
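To show what a negative sequential pattern with a maxgap constraint means operationally, here is a small matcher for one simple pattern shape, "first, then last, with no absent event in between". This is only an illustrative checker under assumed semantics (the gap counted in positions), not the NegPSpan algorithm itself.

```python
def matches_negative_pattern(sequence, first, absent, last, maxgap):
    """Check whether `sequence` (a list of events) contains an occurrence
    of the negative pattern <first, not-absent, last>: an occurrence of
    `first` followed by `last` within `maxgap` positions, with no `absent`
    event between them. Illustrative only, not the NegPSpan algorithm."""
    for i, event in enumerate(sequence):
        if event != first:
            continue
        # candidate positions for `last`, respecting the maxgap constraint
        for j in range(i + 1, min(i + 1 + maxgap, len(sequence))):
            if sequence[j] == last and absent not in sequence[i + 1 : j]:
                return True
    return False

seq = ["a", "d", "c", "b", "a", "c"]
print(matches_negative_pattern(seq, "a", "b", "c", maxgap=3))  # True: a@0 .. c@2, no b between
```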


An introduction to frequent pattern mining - The Data Mining Blog

#artificialintelligence

In this blog post, I will give a brief overview of an important subfield of data mining called pattern mining. Pattern mining consists of using or developing data mining algorithms to discover interesting, unexpected, and useful patterns in databases. Pattern mining algorithms can be applied to various types of data such as transaction databases, sequence databases, streams, strings, spatial data, graphs, etc. They can be designed to discover various types of patterns: subgraphs, associations, indirect associations, trends, periodic patterns, sequential rules, lattices, sequential patterns, high-utility patterns, etc. But what is an interesting pattern? For example, some researchers define an interesting pattern as a pattern that appears frequently in a database.
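Under that frequency-based definition, the simplest instance is frequent itemset mining over a transaction database. The sketch below counts itemsets of sizes 1 and 2 against a minimum support threshold; the toy data and the 60% threshold are made up for illustration.

```python
from collections import Counter
from itertools import combinations

# Toy transaction database: each transaction is a set of items bought together.
transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer"},
    {"milk", "diapers", "beer"},
    {"bread", "milk", "diapers"},
    {"bread", "milk", "beer"},
]

min_support = 0.6  # "frequent" = appears in at least 60% of transactions
n = len(transactions)

def frequent_itemsets(size):
    """Count all itemsets of the given size and keep the frequent ones."""
    counts = Counter()
    for t in transactions:
        for combo in combinations(sorted(t), size):
            counts[combo] += 1
    return {c: v / n for c, v in counts.items() if v / n >= min_support}

print(frequent_itemsets(1))  # all four single items are frequent
print(frequent_itemsets(2))  # {('bread', 'milk'): 0.6}
```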


The IBM Speaker Recognition System: Recent Advances and Error Analysis

arXiv.org Machine Learning

We present the recent advances, along with an error analysis, of the IBM speaker recognition system for conversational speech. Some of the key advancements that contribute to our system include: a nearest-neighbor discriminant analysis (NDA) approach (as opposed to LDA) for intersession variability compensation in the i-vector space, the application of speaker- and channel-adapted features derived from an automatic speech recognition (ASR) system for speaker recognition, and the use of a DNN acoustic model with a very large number of output units (~10k senones) to compute the frame-level soft alignments required in the i-vector estimation process. We evaluate these techniques on the NIST 2010 SRE extended core conditions (C1-C9), as well as the 10sec-10sec condition. To our knowledge, the results achieved by our system represent the best performances published to date on these conditions. For example, on the extended tel-tel condition (C5) the system achieves an EER of 0.59%. To better understand the remaining errors on C5, we examine the recordings associated with the low-scoring target trials, where various issues are identified in the problematic recordings/trials. Interestingly, we observe that correcting the pathological recordings improves the scores not only for the target trials but also for the nontarget trials.
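For readers unfamiliar with the metric, the EER quoted above is the operating point where the false rejection rate on target trials equals the false acceptance rate on nontarget trials. Below is a simple reference implementation using a naive threshold sweep over synthetic scores; the score distributions are invented, and this is not IBM's scoring code.

```python
import numpy as np

def equal_error_rate(target_scores, nontarget_scores):
    """Sweep a decision threshold and return the point where the false
    rejection rate (misses on target trials) most closely equals the
    false acceptance rate (false alarms on nontarget trials)."""
    best = (float("inf"), 1.0)
    for t in np.sort(np.concatenate([target_scores, nontarget_scores])):
        frr = np.mean(target_scores < t)      # targets wrongly rejected
        far = np.mean(nontarget_scores >= t)  # nontargets wrongly accepted
        if abs(frr - far) < best[0]:
            best = (abs(frr - far), (frr + far) / 2)
    return best[1]

rng = np.random.default_rng(0)
targets = rng.normal(2.0, 1.0, 1000)      # synthetic target-trial scores
nontargets = rng.normal(0.0, 1.0, 10000)  # synthetic nontarget-trial scores
print(f"EER ~ {equal_error_rate(targets, nontargets):.2%}")
```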


Excuse me, do you speak fraud? Network graph analysis for fraud detection and mitigation

@machinelearnbot

Network analysis offers a new set of techniques to tackle the persistent and growing problem of complex fraud. Network analysis supplements traditional techniques by providing a mechanism to bridge investigative and analytics methods. Beyond base visualization, network analysis provides a standardized platform for complex fraud pattern storage and retrieval, pattern discovery and detection, statistical analysis, and risk scoring. This article gives an overview of the main challenges and demonstrates a promising approach using a hands-on example. With swelling globalization, advanced digital communication technology, and international financial deregulation, fraud investigators face a daunting battle against increasingly sophisticated fraudsters.
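As a toy illustration of the kind of signals network analysis can surface, the sketch below builds a small transfer graph with networkx and flags two classic fraud indicators: short transfer cycles (possible round-tripping of funds) and accounts that concentrate incoming transfers. The graph, the thresholds, and the choice of indicators are assumptions for illustration, not the article's own example.

```python
import networkx as nx

# Toy transaction graph: nodes are accounts, edges are money transfers.
# The accounts and amounts are invented for illustration.
G = nx.DiGraph()
transfers = [
    ("acct_A", "acct_B", 500), ("acct_B", "acct_C", 480),
    ("acct_C", "acct_A", 460),                     # a suspicious cycle
    ("acct_D", "acct_E", 120), ("acct_F", "acct_E", 90),
]
for src, dst, amount in transfers:
    G.add_edge(src, dst, amount=amount)

# Signal 1: short cycles of transfers, which can indicate money
# being routed back to its origin.
cycles = [c for c in nx.simple_cycles(G) if len(c) <= 3]
print("possible round-tripping:", cycles)

# Signal 2: accounts that concentrate incoming transfers (high in-degree).
hubs = [node for node, deg in G.in_degree() if deg >= 2]
print("collection accounts:", hubs)
```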


The IBM 2016 Speaker Recognition System

arXiv.org Machine Learning

In this paper we describe the recent advancements made in the IBM i-vector speaker recognition system for conversational speech. In particular, we identify key techniques that contribute to significant improvements in performance of our system, and quantify their contributions. The techniques include: 1) a nearest-neighbor discriminant analysis (NDA) approach that is formulated to alleviate some of the limitations associated with the conventional linear discriminant analysis (LDA) that assumes Gaussian class-conditional distributions, 2) the application of speaker- and channel-adapted features, which are derived from an automatic speech recognition (ASR) system, for speaker recognition, and 3) the use of a deep neural network (DNN) acoustic model with a large number of output units (~10k senones) to compute the frame-level soft alignments required in the i-vector estimation process. We evaluate these techniques on the NIST 2010 speaker recognition evaluation (SRE) extended core conditions involving telephone and microphone trials. Experimental results indicate that: 1) the NDA is more effective (up to 35% relative improvement in terms of EER) than the traditional parametric LDA for speaker recognition, 2) when compared to raw acoustic features (e.g., MFCCs), the ASR speaker-adapted features provide gains in speaker recognition performance, and 3) increasing the number of output units in the DNN acoustic model (i.e., increasing the senone set size from 2k to 10k) provides consistent improvements in performance (for example from 37% to 57% relative EER gains over our baseline GMM i-vector system). To our knowledge, results reported in this paper represent the best performances published to date on the NIST SRE 2010 extended core tasks.
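To give a feel for point 1, here is a deliberately simplified sketch of the NDA idea: replacing LDA's global between-class scatter (which implicitly assumes Gaussian class-conditional distributions) with scatter computed toward each sample's nearest neighbors from the other class. The choice k=3, the two-class toy data, and the overall simplifications are assumptions; this is not the formulation used in the paper.

```python
import numpy as np

def nda_directions(X, y, k=3):
    """Simplified two-class sketch of nearest-neighbor discriminant
    analysis: the between-class scatter is accumulated toward each
    sample's k-nearest-neighbor mean from the other class, rather than
    toward global class means as in LDA."""
    d = X.shape[1]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc, Xo = X[y == c], X[y != c]
        mc = Xc.mean(axis=0)
        for x in Xc:
            # local between-class scatter: toward nearby rival samples
            nn = Xo[np.argsort(np.linalg.norm(Xo - x, axis=1))[:k]]
            b = x - nn.mean(axis=0)
            Sb += np.outer(b, b)
            # within-class scatter: as in LDA
            w = x - mc
            Sw += np.outer(w, w)
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(1, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
print(nda_directions(X, y)[:, 0])  # leading discriminant direction
```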