Every Untrue Label is Untrue in its Own Way: Controlling Error Type with the Log Bilinear Loss

arXiv.org Machine Learning

Deep learning has become the method of choice in many application domains of machine learning in recent years, especially for multi-class classification tasks. The most common loss function used in this context is the cross-entropy loss, which reduces to the log loss in the typical case where there is a single correct response label. While this loss is insensitive to the identity of the assigned class in the case of misclassification, in practice some errors may be more detrimental than others. Here we present the bilinear loss (and the related log-bilinear loss), which differentially penalizes the model's different wrong assignments. We thoroughly test this method using standard models and benchmark image datasets. As one application, we show the ability of this method to better contain errors within the correct super-class in the hierarchically labeled CIFAR100 dataset, without affecting the overall performance of the classifier.
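The core idea lends itself to a short sketch. Below is a minimal NumPy illustration, not the authors' exact formulation: a cost matrix A (values here are hypothetical) assigns a penalty to each (true class, predicted class) pair, the bilinear loss weights the probability mass placed on each wrong class by its cost, and the log variant shown is an assumed analogue of the log loss that penalizes confident costly mistakes more sharply.

```python
import numpy as np

def bilinear_loss(probs, true_idx, A):
    """Cost-weighted loss: probability mass on each wrong class k
    is weighted by the application-specific cost A[true_idx, k].
    probs: (n_classes,) softmax output; A: (n_classes, n_classes)
    cost matrix with zeros on the diagonal."""
    return float(A[true_idx] @ probs)

def log_bilinear_loss(probs, true_idx, A, eps=1e-12):
    """Log variant (an assumption, by analogy with log loss):
    costs scale -log(1 - p_k) for each wrong class k, so placing
    confident mass on a costly class is penalized sharply."""
    mask = np.ones_like(probs)
    mask[true_idx] = 0.0  # never penalize the correct class
    return float(np.sum(A[true_idx] * mask * -np.log(1.0 - probs + eps)))

# Hypothetical 3-class example: confusing class 0 with class 2
# costs 5x more than confusing it with class 1.
A = np.array([[0.0, 1.0, 5.0],
              [1.0, 0.0, 1.0],
              [5.0, 1.0, 0.0]])
probs = np.array([0.6, 0.3, 0.1])
print(bilinear_loss(probs, 0, A))      # 0.3*1 + 0.1*5 = 0.8
print(log_bilinear_loss(probs, 0, A))
```

The ordinary log loss would score these predictions identically no matter how the wrong-class mass is split; the cost matrix is what makes the two kinds of error distinguishable to the optimizer.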


Context-modulation of hippocampal dynamics and deep convolutional networks

arXiv.org Machine Learning

Complex architectures of biological neural circuits, such as parallel processing pathways, have been behaviorally implicated in many cognitive studies. However, the theoretical consequences of circuit complexity for neural computation have only been explored in limited cases. Here, we introduce a mechanism by which direct and indirect pathways from cortex to the CA3 region of the hippocampus can balance both contextual gating of memory formation and driving network activity. We implement this concept in a deep artificial neural network by enabling a context-sensitive bias. The motivation is to improve the performance of a size-constrained network. Using direct knowledge of the superclass information in the CIFAR-100 and Fashion-MNIST datasets, we show a dramatic increase in performance without an increase in network size.
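Read as a mechanism, the "context-sensitive bias" suggests a layer whose additive bias is selected by the known superclass of each input. The sketch below is one plausible PyTorch rendering under that assumption; the layer, names, and sizes are hypothetical, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ContextBiasedLayer(nn.Module):
    """Linear layer whose additive bias is chosen by a context
    (e.g., superclass) index, so shared weights are modulated
    differently per context. Sizes here are hypothetical."""
    def __init__(self, in_dim, out_dim, n_contexts):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)
        # One learned bias vector per context.
        self.context_bias = nn.Embedding(n_contexts, out_dim)

    def forward(self, x, context_idx):
        return torch.relu(self.linear(x) + self.context_bias(context_idx))

# Usage: features for a batch of 8 CIFAR-100 images, with each
# image's known superclass (one of 20) selecting the bias.
layer = ContextBiasedLayer(in_dim=256, out_dim=128, n_contexts=20)
x = torch.randn(8, 256)
superclass = torch.randint(0, 20, (8,))
h = layer(x, superclass)   # shape: (8, 128)
```

The design point is that contextual gating adds only n_contexts * out_dim parameters, which is how a size-constrained network can gain capacity without growing its weight matrices.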


Metaprogramming Ruby 2: Program Like the Ruby Pros [PDF] - Programmer Books

#artificialintelligence

Ruby inherits characteristics from various languages--Lisp, Smalltalk, C, and Perl, to name a few. Metaprogramming comes from Lisp (and Smalltalk). It's a bit like magic: it makes astonishing things possible. There are two kinds of magic: white magic, which does good things, and black magic, which can do nasty things. If you discipline yourself, you can do good things, such as enhancing the language without tweaking its syntax, using macros, or enabling internal domain-specific languages.
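For a concrete taste of what the book means by metaprogramming, here is a small Python analogue of Ruby's method_missing-style "ghost methods", where attribute access is synthesized at runtime rather than defined up front (the Record class is illustrative, not from the book):

```python
class Record:
    """Object whose attributes are conjured on demand from stored
    fields, instead of being declared one by one."""
    def __init__(self, **fields):
        self._fields = dict(fields)

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails:
        # synthesize access for any stored field.
        if name in self._fields:
            return self._fields[name]
        raise AttributeError(name)

r = Record(title="Metaprogramming Ruby 2", topic="metaprogramming")
print(r.title)   # works, though 'title' was never defined as an attribute
```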


Collaborative Reasoning and Collaborative Ontology Development in CRAFT

AAAI Conferences

Analysts use CRAFT to represent their collective knowledge and reasoning via interconnected graphical models built upon a shared evolving ontology. These semantic models help connect analysts to digital information sources and to each other, and the aggregated knowledge and findings of many analysts may be analyzed and visualized. We also summarize the results of a preliminary user study of collaborative, implicit ontology evolution using this tool.


Inheritance in Object-Oriented Knowledge Representation

arXiv.org Artificial Intelligence

This paper considers the inheritance mechanism in knowledge representation models such as object-oriented programming, frames, and object-oriented dynamic networks. Inheritance within the representation of vague and imprecise knowledge is also discussed. The paper introduces new types of inheritance, a general classification of all known inheritance types, and an approach that in many cases avoids problems with exceptions, redundancy, and ambiguity within object-oriented dynamic networks and their fuzzy extension. The proposed approach is based on the notion of homogeneous and inhomogeneous (heterogeneous) classes of objects, which allows inheritance hierarchies to be built more flexibly and efficiently.
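To make the problems of exceptions and ambiguity concrete, the short Python example below (an illustration of the standard difficulty, not the paper's object-oriented dynamic networks) shows a subclass inheriting conflicting values along two paths, with the conflict silently resolved by a fixed linearization order rather than flagged:

```python
class Bird:
    can_fly = True          # default for the class

class Penguin(Bird):
    can_fly = False         # exception: overrides the default

class GlidingAnimal:
    can_fly = True

class OddAnimal(Penguin, GlidingAnimal):
    """Inherits can_fly = False via Penguin and can_fly = True
    via GlidingAnimal; Python picks whichever comes first in its
    method resolution order instead of reporting the conflict."""
    pass

print(OddAnimal.can_fly)    # False: Penguin wins by MRO
print([c.__name__ for c in OddAnimal.__mro__])
# ['OddAnimal', 'Penguin', 'Bird', 'GlidingAnimal', 'object']
```

It is exactly this silent, order-dependent resolution of exceptions and multiple-path conflicts that a principled classification of inheritance types aims to replace.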