

What's coming up at IJCAI-PRICAI 2020?

AIHub

IJCAI-PRICAI 2020, the 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence, starts today and will run until 15 January. Find out what's happening during the event. The conference schedule is here and includes tutorials, workshops, invited talks and technical sessions. There are also competitions, early career spotlight talks, panel discussions and social events. There will be eight invited talks on a wide variety of topics.


Top Machine Learning Research Papers Released In 2020

#artificialintelligence

We are only two weeks into the last month of the year, and arxiv.org, the popular repository for ML research papers, has already witnessed close to 600 uploads. This should give one an idea of the pace at which machine learning research is proceeding; keeping track of all this research work is almost impossible. Every year, the research that gets the most attention usually comes from companies like Google and Facebook, from top universities like MIT, from research labs and, most importantly, from conferences like NeurIPS or ACL. In this article, we have compiled a list of interesting machine learning research work that has made some noise this year. This is the seminal paper that introduced the most popular ML model of the year -- GPT-3.


In 2020, Indie Games Were A Well-Deserved Distraction

NPR Technology

This is not a representation of what 2020 felt like -- it's a screenshot from Dead Cells. And thank goodness for that, right? Amid worldwide shutdowns, strenuous conversations about police reform, and an endless election cycle, we could all use a break. Do what I do: pick up your Switch (or whatever console you use) and give yourself a well-deserved, news-free distraction.


Insights for AI from the Human Mind

Communications of the ACM

What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle. Artificial intelligence has recently beaten world champions in Go and poker and made extraordinary progress in domains such as machine translation, object classification, and speech recognition. However, most AI systems are extremely narrowly focused. AlphaGo, the champion Go player, does not know that the game is played by putting stones onto a board; it has no idea what a "stone" or a "board" is, and would need to be retrained from scratch if you presented it with a rectangular board rather than a square grid.


Computational principles of intelligence: learning and reasoning with neural networks

#artificialintelligence

Despite significant achievements and current interest in machine learning and artificial intelligence, the quest for a theory of intelligence, allowing general and efficient problem solving, has made little progress. This work tries to contribute in this direction by proposing a novel framework of intelligence based on three principles. First, the generative and mirroring nature of learned representations of inputs. Second, a grounded, intrinsically motivated and iterative process for learning, problem solving and imagination. Third, an ad hoc tuning of the reasoning mechanism over causal compositional representations using inhibition rules. Together, those principles create a systems approach offering interpretability, continuous learning, common sense and more.


Introduction to Deep Learning

#artificialintelligence

Neural networks (short for artificial neural networks) are models inspired by the structure of neurons in our brains (biological neural networks). Each unit in a neural network is called a neuron and is connected to multiple other neurons. Neurons in human (and mammalian) brains communicate by sending electrical signals to each other. But these are about the only similarities between biological neural networks and artificial neural networks. A deep neural network is a specific type of neural network that excels at capturing nonlinear relationships in data.
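To make the picture concrete, here is a minimal sketch of a tiny feed-forward network in plain NumPy (the architecture, sizes and weights are illustrative, not from the article). The nonlinear activation between the layers is what lets a deep network capture nonlinear relationships; without it, stacked layers collapse into a single linear map.

```python
import numpy as np

def relu(x):
    # Nonlinear activation: without it, stacked layers would
    # collapse into one linear transformation.
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# A tiny two-layer network: 3 inputs -> 4 hidden neurons -> 1 output.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def forward(x):
    h = relu(x @ W1 + b1)  # each hidden "neuron" sums its weighted inputs
    return h @ W2 + b2     # linear readout of the hidden layer

x = rng.normal(size=(5, 3))  # a batch of 5 input vectors
print(forward(x).shape)      # (5, 1): one output per input vector
```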


Computational principles of intelligence: learning and reasoning with neural networks

arXiv.org Artificial Intelligence

Despite significant achievements and current interest in machine learning and artificial intelligence, the quest for a theory of intelligence, allowing general and efficient problem solving, has made little progress. This work tries to contribute in this direction by proposing a novel framework of intelligence based on three principles. First, the generative and mirroring nature of learned representations of inputs. Second, a grounded, intrinsically motivated and iterative process for learning, problem solving and imagination. Third, an ad hoc tuning of the reasoning mechanism over causal compositional representations using inhibition rules. Together, those principles create a systems approach offering interpretability, continuous learning, common sense and more. This framework is being developed from the following perspectives: as a general problem solving method, as a human-oriented tool and, finally, as a model of information processing in the brain.


Sparse encoding for more-interpretable feature-selecting representations in probabilistic matrix factorization

arXiv.org Machine Learning

Dimensionality reduction methods for count data are critical to a wide range of applications in medical informatics and other fields where model interpretability is paramount. For such data, hierarchical Poisson matrix factorization (HPF) and other sparse probabilistic non-negative matrix factorization (NMF) methods are considered to be interpretable generative models. They consist of sparse transformations for decoding their learned representations into predictions. However, sparsity in representation decoding does not necessarily imply sparsity in the encoding of representations from the original data features. HPF is often incorrectly interpreted in the literature as if it possesses encoder sparsity. The distinction between decoder sparsity and encoder sparsity is subtle but important. Due to the lack of encoder sparsity, HPF does not possess the column-clustering property of classical NMF -- the factor loading matrix does not sufficiently define how each factor is formed from the original features. We address this deficiency by self-consistently enforcing encoder sparsity, using a generalized additive model (GAM), thereby allowing one to relate each representation coordinate to a subset of the original data features. In doing so, the method also gains the ability to perform feature selection. We demonstrate our method on simulated data and give an example of how encoder sparsity is of practical use in a concrete application of representing inpatient comorbidities in Medicare patients.
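The decoder/encoder distinction can be seen in a toy linear factorization; the NumPy sketch below illustrates that distinction only, not the paper's HPF model or its GAM-based encoder. Even when the decoder (loading matrix) is sparse, the least-squares encoder, its pseudo-inverse, is generally dense, so each latent coordinate still mixes all original features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy factorization X ~ Z @ W with a *sparse decoder* W:
# each latent factor decodes into only a few features.
n, d, k = 200, 10, 3
W = rng.random((k, d)) * (rng.random((k, d)) < 0.3)  # sparse loadings
Z = rng.random((n, k))
X = Z @ W + 0.01 * rng.normal(size=(n, d))

# The least-squares *encoder* is the pseudo-inverse of W, and it is
# generally dense: every latent coordinate depends on all original
# features, even though the decoder is sparse.
encoder = np.linalg.pinv(W)             # shape (d, k)
print(np.mean(np.abs(W) > 1e-8))        # fraction of nonzeros: small
print(np.mean(np.abs(encoder) > 1e-8))  # fraction of nonzeros: ~1.0
```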


Worldsheet: Wrapping the World in a 3D Sheet for View Synthesis from a Single Image

arXiv.org Machine Learning

We present Worldsheet, a method for novel view synthesis using just a single RGB image as input. This is a challenging problem, as it requires an understanding of the 3D geometry of the scene as well as texture mapping to generate both visible and occluded regions from new viewpoints. Our main insight is that simply shrink-wrapping a planar mesh sheet onto the input image, consistent with the learned intermediate depth, captures the underlying geometry well enough to generate photorealistic unseen views with arbitrarily large viewpoint changes. To operationalize this, we propose a novel differentiable texture sampler that allows our wrapped mesh sheet to be textured, which is then transformed into a target image via differentiable rendering. Our approach is category-agnostic, is end-to-end trainable without any 3D supervision, and requires only a single image at test time. Worldsheet consistently outperforms prior state-of-the-art methods on single-image view synthesis across several datasets. Furthermore, this simple idea captures novel views surprisingly well on a wide range of high-resolution in-the-wild images, converting them into a navigable 3D pop-up. Video results and code are at https://worldsheet.github.io
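As a rough illustration of the differentiable-texturing idea (not the paper's actual sampler), the PyTorch sketch below bilinearly samples an image at a warped lattice of coordinates, a stand-in for the projected mesh-sheet vertices, and shows that gradients flow back to the warp, which is what makes end-to-end training possible.

```python
import torch
import torch.nn.functional as F

# Input image as a (batch, channels, height, width) tensor.
image = torch.rand(1, 3, 64, 64)

# A regular lattice of sampling coordinates in [-1, 1]^2; in the real
# method the warp would come from learned depth and mesh offsets.
ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, 64), torch.linspace(-1, 1, 64), indexing="ij")
grid = torch.stack([xs, ys], dim=-1).unsqueeze(0)     # shape 1xHxWx2
offsets = torch.zeros_like(grid, requires_grad=True)  # learnable warp

# Differentiable (bilinear) texture sampling.
warped = F.grid_sample(image, grid + offsets, align_corners=True)

# Gradients flow through the sampler back to the warp parameters.
warped.sum().backward()
print(offsets.grad.shape)  # torch.Size([1, 64, 64, 2])
```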


DenseHMM: Learning Hidden Markov Models by Learning Dense Representations

arXiv.org Machine Learning

We propose DenseHMM, a modification of Hidden Markov Models (HMMs) that makes it possible to learn dense representations of both the hidden states and the observables. Compared to the standard HMM, transition probabilities are not atomic but composed of these representations via kernelization. Our approach enables constraint-free and gradient-based optimization. We propose two optimization schemes that make use of this: a modification of the Baum-Welch algorithm and a direct co-occurrence optimization. The latter is highly scalable and, empirically, comes without loss of performance compared to standard HMMs. We show that the non-linearity of the kernelization is crucial for the expressiveness of the representations. Properties of the DenseHMM, such as learned co-occurrences and log-likelihoods, are studied empirically on synthetic and biomedical datasets.
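One plausible reading of the kernelized transitions (a guess at the flavor, not necessarily the paper's exact parameterization) is a softmax over dot products of dense state embeddings; the exp non-linearity is then what gives the representations expressiveness beyond a plain low-rank factorization. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, dim = 4, 2
U = rng.normal(size=(n_states, dim))  # embedding of state i (outgoing)
V = rng.normal(size=(n_states, dim))  # embedding of state j (incoming)

def transition_matrix(U, V):
    # Kernelized transitions: embedding dot products pushed through a
    # softmax, so every row is a valid probability distribution and
    # the parameters U, V stay unconstrained (gradient-friendly).
    scores = U @ V.T
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(scores)
    return probs / probs.sum(axis=1, keepdims=True)

A = transition_matrix(U, V)
print(A.shape)        # (4, 4)
print(A.sum(axis=1))  # each row sums to 1
```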