Supervised Learning


Why Sprint is paying a record $330 million settlement in New York

USATODAY

ALBANY – Sprint has agreed to pay a $330 million settlement after the company skirted New York tax law for nearly a decade, New York's attorney general announced Friday. The record-breaking settlement came in the wake of a false claims lawsuit filed by Attorney General Barbara Underwood alleging the cellular provider failed to collect and remit over $100 million in state and local taxes on flat-rate calling plans. The $330 million settlement is the largest recovery by a single state in a false claims lawsuit, according to the attorney general's office. "Sprint knew exactly how New York sales tax law applied to its plans – yet for years the company flagrantly broke the law, cheating the state and its localities out of tax dollars that should have been invested in our communities," Underwood said in a statement.


Conditional Graph Neural Processes: A Functional Autoencoder Approach

arXiv.org Artificial Intelligence

We introduce a novel encoder-decoder architecture to embed functional processes into latent vector spaces. This embedding can then be decoded to sample the encoded functions over any arbitrary domain. This autoencoder generalizes the recently introduced Conditional Neural Process (CNP) model of random processes. Our architecture employs the latest advances in graph neural networks to process irregularly sampled functions; thus, we refer to our model as the Conditional Graph Neural Process (CGNP). Graph neural networks can effectively exploit the 'local' structure of the metric spaces over which the functions/processes are defined. The contributions of this paper are twofold: (i) a novel graph-based encoder-decoder architecture for functional and process embeddings, and (ii) a demonstration of the importance of using the structure of metric spaces for this type of representation.
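
As a rough illustration of the encoder-decoder idea that CGNP builds on, the sketch below implements a CNP-style forward pass in plain NumPy: context pairs are embedded, aggregated into a single latent code, and the decoder is queried at arbitrary target locations. The weights are random and untrained, and the mean aggregation stands in for the graph neural network the paper actually uses; all names here are illustrative assumptions, not the authors' code.

import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random-weight MLP returning a forward function (illustrative only)."""
    Ws = [rng.normal(0, 0.5, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
    def forward(x):
        for W in Ws[:-1]:
            x = np.tanh(x @ W)
        return x @ Ws[-1]
    return forward

encoder = mlp([2, 64, 64, 32])        # maps a (x_i, y_i) context pair to an embedding
decoder = mlp([33, 64, 64, 2])        # maps (latent code, x_target) to (mean, log-variance)

x_ctx = rng.uniform(-2, 2, (10, 1))                   # irregularly sampled inputs
y_ctx = np.sin(x_ctx)                                 # observed function values
r = encoder(np.concatenate([x_ctx, y_ctx], axis=1)).mean(axis=0)   # aggregated latent code

x_tgt = np.linspace(-2, 2, 100)[:, None]              # arbitrary query domain
out = decoder(np.concatenate([np.tile(r, (100, 1)), x_tgt], axis=1))
mean, log_var = out[:, 0], out[:, 1]                  # predictive distribution over the queries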


Lawyers in South Korean wartime labor case set deadline for response from Nippon Steel & Sumitomo Metal

The Japan Times

Lawyers representing South Korean plaintiffs in a World War II labor court case against Japan's Nippon Steel & Sumitomo Metal Corp. have set a Dec. 24 deadline for the firm to show willingness to discuss a court verdict on compensation. If the firm fails to respond, the lawyers, who spoke after being denied a meeting with company officials for a second time on Tuesday, said they would start procedures to seize its South Korean assets. The dispute stems from a ruling by South Korea's Supreme Court in late October that Nippon Steel must pay 100 million won ($90,500) to each of four South Koreans for forced labor during the war. The Japanese government has denounced the verdict, saying all wartime reparations were dealt with in a 1965 treaty that normalized ties between the two nations. At the time of the ruling, Nippon Steel called it "extremely regrettable" but added that it would review the decision carefully in considering further steps.


Machine Learning Reductions & Mother Algorithms, Part II: Multiclass to Binary Classification

#artificialintelligence

Following our introductory Part I on ML reductions & mother algorithms, let's talk about a classic reduction: one-against-all (OAA) -- also known as one-vs-all (OVA) and one-vs-rest (OVR). Unfortunately, it's seen in some circles as too simple, with critics pointing to its problems with class imbalance. In fact, this issue can be mitigated with a neat trick (more on that later), leaving us with a general-purpose solution to almost any multiclass problem you can think of. Classification algorithms aim to learn an optimal decision boundary that separates different inputs from each other. At prediction time, inputs are classified into different classes using this boundary.
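
As a rough sketch of the reduction (using scikit-learn's logistic regression purely as a convenient binary learner; any binary classifier with a real-valued score works), one-against-all fits one binary classifier per class and predicts the class whose classifier scores highest. The class-imbalance mitigation mentioned above is omitted here.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
classes = np.unique(y)

# Train one "class c vs. everything else" binary classifier per class.
binary_models = []
for c in classes:
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, (y == c).astype(int))
    binary_models.append(clf)

# At prediction time, pick the class with the highest binary score.
scores = np.column_stack([m.decision_function(X) for m in binary_models])
y_pred = classes[np.argmax(scores, axis=1)]
print("training accuracy:", (y_pred == y).mean())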


Partial Evaluation of Logic Programs in Vector Spaces

arXiv.org Artificial Intelligence

In this paper, we introduce methods of encoding propositional logic programs in vector spaces. Interpretations are represented by vectors and programs are represented by matrices. The least model of a definite program is computed by multiplying an interpretation vector by a program matrix. To optimize computation in vector spaces, we provide a method of partial evaluation of programs using linear algebra. Partial evaluation is done by unfolding rules in a program, and it is realized in a vector space by multiplying program matrices. We perform experiments using randomly generated programs and show that partial evaluation has the potential to enable efficient computation in large-scale programs.
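
As a hedged sketch of the general idea (a simplified encoding, not necessarily the paper's exact construction), the snippet below represents a small definite program as a matrix, computes the least model by repeatedly multiplying an interpretation vector by the matrix and thresholding, and notes where unfolding by matrix multiplication would enter.

import numpy as np

# Sketch: atoms are indexed; "true" is a constant atom that always stays 1.
# For a rule  head :- b1, ..., bk  we set M[head, bi] = 1/k, and a fact
# "head."  is encoded as  head :- true.  One application of the immediate
# consequence operator T_P is then a matrix-vector product followed by
# thresholding at 1 (a head fires only if its whole body is true).

atoms = ["true", "p", "q", "r", "s"]      # program: q.  s.  r :- s.  p :- q, r.
idx = {a: i for i, a in enumerate(atoms)}
n = len(atoms)

M = np.zeros((n, n))
M[idx["true"], idx["true"]] = 1.0         # keep the constant true
M[idx["q"], idx["true"]] = 1.0            # q.
M[idx["s"], idx["true"]] = 1.0            # s.
M[idx["r"], idx["s"]] = 1.0               # r :- s.
M[idx["p"], idx["q"]] = 0.5               # p :- q, r.   (1/k with body size k = 2)
M[idx["p"], idx["r"]] = 0.5

def t_p(v):
    """One T_P step: an atom becomes true iff its rule body is fully satisfied."""
    return (M @ v >= 1.0 - 1e-9).astype(float)

v = np.zeros(n)
v[idx["true"]] = 1.0
while True:                               # iterate T_P to the least fixpoint
    v_next = t_p(v)
    if np.array_equal(v_next, v):
        break
    v = v_next

print("least model:", [a for a in atoms if v[idx[a]] == 1.0 and a != "true"])
# Partial evaluation (unfolding) corresponds, roughly, to replacing M with a
# product of program matrices so that several T_P steps collapse into one.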


DONUT: CTC-based Query-by-Example Keyword Spotting

arXiv.org Machine Learning

Keyword spotting--or wakeword detection--is an essential feature for hands-free operation of modern voice-controlled devices. With such devices becoming ubiquitous, users might want to choose a personalized custom wakeword. In this work, we present DONUT, a CTC-based algorithm for online query-by-example keyword spotting that enables custom wakeword detection. The algorithm works by recording a small number of training examples from the user, generating a set of label sequence hypotheses from these training examples, and detecting the wakeword by aggregating the scores of all the hypotheses given a new audio recording. Our method combines the generalization and interpretability of CTC-based keyword spotting with the user-adaptation and convenience of a conventional query-by-example system. DONUT has low computational requirements and is well-suited for both learning and inference on embedded systems without requiring private user data to be uploaded to the cloud.
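
To make the query-by-example scoring concrete, here is a small, hedged sketch (illustrative names and simplifications, not the authors' DONUT implementation): the standard CTC forward pass scores one label-sequence hypothesis against the per-frame posteriors of a new recording, and the scores of the hypotheses generated at enrollment are then aggregated and compared to a detection threshold.

import numpy as np
from scipy.special import logsumexp

def ctc_log_score(log_probs, labels, blank=0):
    """Log-probability of a label sequence under CTC, via the standard forward pass.

    log_probs: (T, V) array of per-frame log posteriors from the acoustic model.
    labels:    list of label ids (no blanks), e.g. one phone-sequence hypothesis.
    """
    T = log_probs.shape[0]
    ext = [blank]
    for l in labels:
        ext += [l, blank]                          # interleave blanks: 2L+1 states
    S = len(ext)

    alpha = np.full((T, S), -np.inf)
    alpha[0, 0] = log_probs[0, blank]
    if S > 1:
        alpha[0, 1] = log_probs[0, ext[1]]

    for t in range(1, T):
        for s in range(S):
            terms = [alpha[t - 1, s]]
            if s >= 1:
                terms.append(alpha[t - 1, s - 1])
            if s >= 2 and ext[s] != blank and ext[s] != ext[s - 2]:
                terms.append(alpha[t - 1, s - 2])
            alpha[t, s] = logsumexp(terms) + log_probs[t, ext[s]]

    if S == 1:
        return alpha[T - 1, 0]
    return logsumexp([alpha[T - 1, S - 1], alpha[T - 1, S - 2]])

def detect(log_probs, hypotheses, threshold):
    """Aggregate the CTC scores of all hypotheses (one simple choice: logsumexp)."""
    scores = [ctc_log_score(log_probs, h) for h in hypotheses]
    return logsumexp(scores) > threshold, scores

# Example usage with random posteriors (purely illustrative):
T, V = 50, 30
logits = np.random.randn(T, V)
log_probs = logits - logsumexp(logits, axis=1, keepdims=True)
hypotheses = [[3, 7, 7, 12], [3, 7, 12]]           # enrolled label-sequence guesses
fired, scores = detect(log_probs, hypotheses, threshold=-50.0)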


Proceedings of the 2018 Workshop on Compositional Approaches in Physics, NLP, and Social Sciences

arXiv.org Artificial Intelligence

The ability to compose parts to form a more complex whole, and to analyze a whole as a combination of elements, is desirable across disciplines. This workshop brings together researchers applying compositional approaches to physics, NLP, cognitive science, and game theory. Within NLP, a long-standing aim is to represent how words can combine to form phrases and sentences. Within the framework of distributional semantics, words are represented as vectors in vector spaces. The categorical model of Coecke et al. [2010], inspired by quantum protocols, has provided a convincing account of compositionality in vector space models of NLP. There is furthermore a history of vector space models in cognitive science. Theories of categorization such as those developed by Nosofsky [1986] and Smith et al. [1988] utilise notions of distance between feature vectors. More recently Gärdenfors [2004, 2014] has developed a model of concepts in which conceptual spaces provide geometric structures, and information is represented by points, vectors and regions in vector spaces. The same compositional approach has been applied to this formalism, giving conceptual spaces theory a richer model of compositionality than it previously had [Bolt et al., 2018]. Compositional approaches have also been applied in the study of strategic games and Nash equilibria. In contrast to classical game theory, where games are studied monolithically as one global object, compositional game theory works bottom-up by building large and complex games from smaller components. Such an approach is inherently difficult since the interaction between games has to be considered. Research into categorical compositional methods for this field has recently begun [Ghani et al., 2018]. Moreover, the interaction between the three disciplines of cognitive science, linguistics and game theory is a fertile ground for research. Game theory in cognitive science is a well-established area [Camerer, 2011]. Similarly, game theoretic approaches have been applied in linguistics [Jäger, 2008]. Lastly, the study of linguistics and cognitive science is intimately intertwined [Smolensky and Legendre, 2006, Jackendoff, 2007]. Physics supplies compositional approaches via vector spaces and categorical quantum theory, allowing the interplay between the three disciplines to be examined.
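
As a toy illustration of the kind of vector-space composition discussed above (in the spirit of the categorical model of Coecke et al., with entirely made-up numbers and feature names), a noun can be modelled as a vector and an adjective as a linear map applied to it:

import numpy as np

noun_space = ["furry", "barks", "meows"]          # hypothetical basis features
dog = np.array([0.8, 0.9, 0.0])
cat = np.array([0.9, 0.0, 0.9])
big = np.diag([1.0, 1.2, 1.2])                    # "big" as a linear map on noun space

big_dog = big @ dog                                # composed phrase meaning

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print("big dog ~ dog:", cosine(big_dog, dog))     # the phrase stays close to "dog"
print("big dog ~ cat:", cosine(big_dog, cat))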


Boosting for Comparison-Based Learning

arXiv.org Machine Learning

We consider the problem of classification in a comparison-based setting: given a set of objects, we only have access to triplet comparisons of the form "object $x_i$ is closer to object $x_j$ than to object $x_k$." In this paper we introduce TripletBoost, a new method that can learn a classifier just from such triplet comparisons. The main idea is to aggregate the triplet information into weak classifiers, which can subsequently be boosted to a strong classifier. Our method has two main advantages: (i) it is applicable to data from any metric space, and (ii) it can deal with large-scale problems using only passively obtained and noisy triplets. We derive theoretical generalization guarantees and a lower bound on the number of necessary triplets, and we empirically show that our method is both competitive with state-of-the-art approaches and resistant to noise.
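
Below is a minimal sketch of the comparison-based boosting idea (a simplified stand-in, not the paper's exact TripletBoost procedure): each weak learner is a pair of labeled reference objects and classifies a query from a single triplet answer, and AdaBoost-style weighting combines many such learners. Triplet answers are simulated from Euclidean distances here; in the comparison-based setting they would come from an oracle, and the data, labels, and helper names are illustrative.

import numpy as np

rng = np.random.default_rng(0)

X = rng.normal(0, 1, (200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)        # binary labels in {-1, +1}

def triplet(x, i, j):
    """Oracle answer: +1 if x is closer to X[i] than to X[j], else -1."""
    return 1 if np.linalg.norm(x - X[i]) < np.linalg.norm(x - X[j]) else -1

def weak_predict(i, j, x):
    """Predict the label of whichever reference object x is closer to."""
    return y[i] if triplet(x, i, j) == 1 else y[j]

n, T = len(X), 30
w = np.full(n, 1.0 / n)                            # AdaBoost sample weights
learners = []
for _ in range(T):
    # Pick the best of a few random reference pairs under the current weights.
    candidates = [tuple(rng.choice(n, 2, replace=False)) for _ in range(20)]
    best, best_err, best_pred = None, 1.0, None
    for (i, j) in candidates:
        pred = np.array([weak_predict(i, j, x) for x in X])
        err = w[pred != y].sum()
        if err < best_err:
            best, best_err, best_pred = (i, j), err, pred
    eps = np.clip(best_err, 1e-10, 1 - 1e-10)
    alpha = 0.5 * np.log((1 - eps) / eps)          # standard AdaBoost weight
    w *= np.exp(-alpha * y * best_pred)
    w /= w.sum()
    learners.append((alpha, best))

def strong_predict(x):
    return np.sign(sum(a * weak_predict(i, j, x) for a, (i, j) in learners))

acc = np.mean([strong_predict(x) == yi for x, yi in zip(X, y)])
print("training accuracy:", acc)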


Wisenet SmartCam N2 review: Solid facial detection tops this security camera's list of features

PCWorld

Of all the AI features that put the "smarts" in smart security cameras, facial detection is undoubtedly the most complex and frustrating. Thanks to the variable quality of the algorithms behind them, three different cameras with facial detection can give you maddeningly different results. My expectations, then, for Wisenet's SmartCam N2 with facial recognition were modest. But after using it for a week or so, I'm ready to say the N2 is one of the better facial-recognition cameras out there. The N2's capsule-style body comes mounted to a metal base you can set on a table or shelf.