Scaling Laws in Natural Scenes and the Inference of 3D Shape

Neural Information Processing Systems

This paper explores the statistical relationship between natural images and their underlying range (depth) images. We look at how this relationship changes over scale, and how this information can be used to enhance low-resolution range data using a full-resolution intensity image. Based on our findings, we propose an extension to an existing technique known as shape recipes [3], and the success of the two methods is compared using images and laser scans of real scenes. Our extension is shown to provide a twofold improvement over the current method. Furthermore, we demonstrate that ideal linear shape-from-shading filters, when learned from natural scenes, may derive even more strength from shadow cues than from the traditional linear-Lambertian shading cues.
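To make the idea concrete, here is a minimal, single-band sketch of a shape-recipe-style enhancement in Python. It is our illustrative simplification, not the paper's method: the Gaussian band split, the single scalar gain, and the function name are all assumptions, whereas the actual recipes are learned per subband and transferred across scales according to the measured scaling laws.

```python
import numpy as np
from scipy import ndimage

def enhance_depth(intensity_hi, depth_lo, upscale=2):
    """Illustrative sketch: sharpen a low-resolution depth map using a
    full-resolution intensity image (assumes intensity_hi is exactly
    `upscale` times the size of depth_lo in each dimension)."""
    # match the intensity image to the depth map's coarse resolution
    intensity_lo = ndimage.zoom(intensity_hi, 1.0 / upscale, order=1)
    # detail (high-frequency) bands at the coarse scale
    i_detail = intensity_lo - ndimage.gaussian_filter(intensity_lo, 2)
    z_detail = depth_lo - ndimage.gaussian_filter(depth_lo, 2)
    # a one-parameter "recipe": least-squares gain from image detail
    # to depth detail, learned where both signals are observed
    a = (i_detail * z_detail).sum() / (i_detail ** 2).sum()
    # apply the recipe at the fine scale: smooth upsampled depth plus
    # the detail predicted from the full-resolution intensity image
    depth_base = ndimage.zoom(depth_lo, upscale, order=3)
    i_detail_hi = intensity_hi - ndimage.gaussian_filter(intensity_hi, 2)
    return depth_base + a * i_detail_hi
```

Reusing the same pixel-domain filter at the fine scale encodes the scale-invariance assumption that the paper's scaling-law analysis is designed to test.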


New algorithm helps turn low-resolution images into detailed photos, 'CSI'-style

#artificialintelligence

The EnhanceNet-PAT algorithm could help with everything from restoring old photos to improving image recognition for self-driving cars. Anyone who has ever worked with image files knows that, unlike the fictional world of shows like CSI, there's no easy way to take a low-resolution image and magically transform it into a high-resolution picture using some fancy "enhance" tool. Fortunately, some brilliant computer scientists at the Max Planck Institute for Intelligent Systems in Germany are working on the problem -- and they've come up with a pretty nifty algorithm to address it. What they have developed is a tool called EnhanceNet-PAT, which uses artificial intelligence to create high-definition versions of low-res images. While the solution is not a miracle fix, it does produce a noticeably better result than previous attempts, thanks to some smart machine-learning algorithms.


Anomaly Detection via Graphical Lasso

arXiv.org Machine Learning

Anomalies and outliers are common in real-world data, and they can arise from many sources, such as sensor faults. Accordingly, anomaly detection is important both for analyzing the anomalies themselves and for cleaning the data for further analysis of its ambient structure. A precise definition of anomalies is nonetheless important for automated detection, and herein we approach such problems from the perspective of detecting sparse latent effects embedded in large collections of noisy data. Standard Graphical Lasso-based techniques can identify the conditional dependency structure of a collection of random variables based on their sample covariance matrix. However, classic Graphical Lasso is sensitive to outliers in the sample covariance matrix. In particular, several outliers in a sample covariance matrix can destroy the sparsity of its inverse. We therefore propose a novel optimization problem that is similar in spirit to Robust Principal Component Analysis (RPCA) and splits the sample covariance matrix $M$ into two parts, $M=F+S$, where $F$ is the cleaned sample covariance whose inverse is sparse and computable by Graphical Lasso, and $S$ contains the outliers in $M$. We accomplish this decomposition by adding an additional $\ell_1$ penalty to classic Graphical Lasso, and name the result "Robust Graphical Lasso (Rglasso)". Moreover, we propose an Alternating Direction Method of Multipliers (ADMM) solution to the optimization problem that scales to large numbers of unknowns. We evaluate our algorithm on both real and synthetic datasets, obtaining interpretable results and outperforming the standard robust Minimum Covariance Determinant (MCD) method and Robust Principal Component Analysis (RPCA) in both accuracy and speed.
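Spelled out from the description above, one natural reading of the Rglasso objective (our reconstruction, with $\Theta = F^{-1}$ the sparse precision matrix and $\lambda_1, \lambda_2$ the two penalty weights; the paper's exact formulation may differ) is

$$ \min_{\Theta \succ 0,\; S} \; \operatorname{tr}\!\big((M - S)\,\Theta\big) \;-\; \log\det\Theta \;+\; \lambda_1 \lVert \Theta \rVert_1 \;+\; \lambda_2 \lVert S \rVert_1 . $$

Setting $\lambda_2 \to \infty$ forces $S = 0$ and recovers classic Graphical Lasso, while a finite $\lambda_2$ lets the sparse term absorb the corrupted entries of $M$.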


A Nonlinear Dimensionality Reduction Framework Using Smooth Geodesics

arXiv.org Machine Learning

Existing dimensionality reduction methods are adept at revealing hidden underlying manifolds arising from high-dimensional data and thereby producing a low-dimensional representation. However, the smoothness of the manifolds produced by classic techniques in the presence of noise is not guaranteed. In fact, an embedding generated from such non-smooth, noisy measurements may distort the geometry of the manifold and thereby produce an unfaithful embedding. Herein, we propose a framework for nonlinear dimensionality reduction that constructs the manifold from smooth geodesics and is designed for problems in which the manifold measurements have been corrupted by noise. Our method builds a network structure over the given high-dimensional data using a neighborhood search and then produces piecewise-linear shortest paths that serve as geodesics. We then fit a smoothing spline through the points of each geodesic to enforce smoothness. The robustness of this approach for noisy and sparse datasets is demonstrated by applying the method to synthetic and real-world datasets.
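The graph-then-spline pipeline is straightforward to prototype with standard scientific-Python tools. The sketch below is ours, not the authors' code: the function name, the neighborhood size, and the smoothing factor s are illustrative choices.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path
from scipy.interpolate import splprep, splev

def smooth_geodesic(X, i, j, n_neighbors=10, s=1.0):
    """Piecewise-linear shortest path from point i to point j on a
    kNN graph over the rows of X, smoothed by a spline fit."""
    # neighborhood search: kNN graph with Euclidean edge weights
    G = kneighbors_graph(X, n_neighbors=n_neighbors, mode="distance")
    # all-pairs shortest paths; predecessors let us recover the path itself
    _, pred = shortest_path(G, directed=False, return_predecessors=True)
    # walk the predecessor chain back from j to i
    # (assumes i and j lie in the same connected component)
    path = [j]
    while path[-1] != i:
        path.append(pred[i, path[-1]])
    path.reverse()
    P = X[path]  # the piecewise-linear geodesic through the data
    # smoothing spline through the path; s trades smoothness vs. fidelity
    # (needs more path points than the spline degree, 3 by default)
    tck, u = splprep(P.T, s=s)
    return np.asarray(splev(u, tck)).T
```

Increasing s yields smoother geodesics at the cost of fidelity to the noisy samples, which is exactly the trade-off the framework is designed to manage.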


ML Super Resolution - Pixelmator Blog

#artificialintelligence

It's no secret that we're pretty big fans of machine learning, and we love thinking of new and exciting ways to use it in Pixelmator Pro. Our latest ML-powered feature, released in today's update, is called ML Super Resolution, and it makes it possible to increase the resolution of images while keeping them stunningly sharp and detailed. Yes, zooming and enhancing images like they do in all those cheesy police dramas is now a reality! Before we get into the nitty-gritty technical stuff, let's get right to the point and take a look at some examples of what ML Super Resolution can do. Until now, if you had opened the Image menu and chosen Image Size, you would've found three image scaling algorithms -- Bilinear, Lanczos (lan-tsosh, for anyone curious), and Nearest Neighbor. We'll compare our new algorithm to those three.
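For reference, the baseline side of that comparison is easy to reproduce with Pillow. This is just a sketch of the classic resamplers named above, not Pixelmator's ML model; the filename and the 3x factor are placeholders.

```python
from PIL import Image

img = Image.open("photo.png")  # placeholder input image
w, h = img.size
# the three classic resamplers mentioned above, at a 3x upscale
for name, method in [("nearest", Image.NEAREST),
                     ("bilinear", Image.BILINEAR),
                     ("lanczos", Image.LANCZOS)]:
    img.resize((3 * w, 3 * h), resample=method).save(f"upscaled_{name}.png")
```

Nearest Neighbor simply repeats pixels, Bilinear averages the four nearest source pixels, and Lanczos uses a wider windowed-sinc kernel, which is why it tends to be the sharpest of the three.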