graph


Beyond Worst-Case Analysis

Communications of the ACM

Comparing different algorithms is hard. For almost any pair of algorithms and measure of algorithm performance like running time or solution quality, each algorithm will perform better than the other on some inputs. For example, the insertion sort algorithm is faster than merge sort on already-sorted arrays but slower on many other inputs. When two algorithms have incomparable performance, how can we deem one of them "better than" the other? Worst-case analysis is a specific modeling choice in the analysis of algorithms, where the overall performance of an algorithm is summarized by its worst performance on any input of a given size. The "better" algorithm is then the one with superior worst-case performance. Merge sort, with its worst-case asymptotic running time of Θ(n log n) for arrays of length n, is better in this sense than insertion sort, which has a worst-case running time of Θ(n²). While crude, worst-case analysis can be tremendously useful, and it is the dominant paradigm for algorithm analysis in theoretical computer science. A good worst-case guarantee is the best-case scenario for an algorithm, certifying its general-purpose utility and absolving its users from understanding which inputs are relevant to their applications. Remarkably, for many fundamental computational problems, there are algorithms with excellent worst-case performance guarantees. The lion's share of an undergraduate algorithms course comprises algorithms that run in linear or near-linear time in the worst case. Here, I review three classical examples where worst-case analysis gives misleading or useless advice about how to solve a problem; further examples in modern machine learning are described later.
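To make the contrast concrete, here is a minimal sketch (my own, not from the article) that runs insertion sort on an already-sorted array and on a reverse-sorted array of the same length; the first case takes roughly n steps, the second roughly n²/2.

```python
# Minimal sketch: insertion sort's best case (already-sorted input) versus
# its worst case (reverse-sorted input) on arrays of the same length.
import time


def insertion_sort(a):
    a = list(a)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:   # shift larger elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a


n = 5000
inputs = {"sorted": list(range(n)), "reversed": list(range(n, 0, -1))}

for name, data in inputs.items():
    start = time.perf_counter()
    insertion_sort(data)
    print(f"{name:>8}: {time.perf_counter() - start:.3f}s")
```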


Learning representations of irregular particle-detector geometry with distance-weighted graph networks

arXiv.org Machine Learning

We explore the use of graph networks to deal with irregular-geometry detectors in the context of particle reconstruction. Thanks to their representation-learning capabilities, graph networks can exploit the full detector granularity, while natively managing the event sparsity and arbitrarily complex detector geometries. We introduce two distance-weighted graph network architectures, dubbed GarNet and GravNet layers, and apply them to a typical particle reconstruction task. The performance of the new architectures is evaluated on a data set of simulated particle interactions on a toy model of a highly granular calorimeter, loosely inspired by the endcap calorimeter to be installed in the CMS detector for the High-Luminosity LHC phase. We study the clustering of energy depositions, which is the basis for calorimetric particle reconstruction, and provide a quantitative comparison to alternative approaches. The proposed algorithms outperform existing methods or reach competitive performance with lower computing-resource consumption. Being geometry-agnostic, the new architectures are not restricted to calorimetry and can be easily adapted to other use cases, such as tracking in silicon detectors.
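As a rough illustration of the distance-weighted idea (my assumptions based on the abstract, not the authors' GarNet/GravNet code): each node is given coordinates in a learned space, and the features of its nearest neighbours are aggregated with weights that decay with distance in that space.

```python
# Hedged sketch of distance-weighted neighbour aggregation: features of the
# k nearest neighbours in a (learned) coordinate space are averaged with
# weights that decay with the squared distance. Details are assumptions.
import numpy as np


def distance_weighted_aggregate(coords, feats, k=4):
    """coords: (N, S) learned coordinates; feats: (N, F) node features."""
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    np.fill_diagonal(d2, np.inf)                       # exclude self
    idx = np.argsort(d2, axis=1)[:, :k]                # k nearest neighbours per node
    w = np.exp(-np.take_along_axis(d2, idx, axis=1))   # distance-decaying weights
    neigh = feats[idx]                                 # (N, k, F)
    return (w[..., None] * neigh).mean(axis=1)         # weighted mean over neighbours


coords = np.random.randn(100, 2)   # stand-in for a learned low-dimensional projection
feats = np.random.randn(100, 8)
print(distance_weighted_aggregate(coords, feats).shape)  # (100, 8)
```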


Topology of Learning in Artificial Neural Networks

arXiv.org Machine Learning

Understanding how neural networks learn remains one of the central challenges in machine learning research. Starting from random values, the weights of a neural network evolve during training in such a way that the network can perform a variety of tasks, like classifying images. Here we study the emergence of structure in the weights by applying methods from topological data analysis. We train simple feedforward neural networks on the MNIST dataset and monitor the evolution of the weights. When initialized to zero, the weights follow trajectories that branch off recurrently, thus generating trees that describe the growth of the effective capacity of each layer. When initialized to tiny random values, the weights evolve smoothly along two-dimensional surfaces. We show that natural coordinates on these learning surfaces correspond to important factors of variation.
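A minimal sketch of the monitoring step, under my own assumptions (random data stands in for MNIST, and the topological analysis itself is not shown): snapshot the weights of a small feedforward network at every optimization step, producing the trajectories that methods from topological data analysis would then be applied to.

```python
# Sketch: record first-layer weight snapshots of a small feedforward network
# during training; the stacked snapshots form the weight trajectories.
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(512, 784)                  # stand-in for flattened MNIST images
y = torch.randint(0, 10, (512,))

model = nn.Sequential(nn.Linear(784, 32), nn.ReLU(), nn.Linear(32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

snapshots = []                             # one flattened weight vector per step
for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    snapshots.append(model[0].weight.detach().flatten().clone())

trajectory = torch.stack(snapshots)        # (steps, n_weights), input for TDA tools
print(trajectory.shape)
```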


Tensors: The building block of TensorFlow (MarkTechPost)

#artificialintelligence

TensorFlow uses tensors to define its framework and to process data. Mathematically, a tensor is a geometric object that maps geometric vectors, scalars, and other tensors to a resulting tensor in a multi-linear manner. These tensor objects are used to build a Graph object, coordinating with one another to produce the desired result. A tensor (tf.Tensor) object has two basic properties: a data type and a shape. Every element in a tensor has the same data type, and the data type is always known.
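A short example of these two properties using the tf.Tensor API:

```python
# Every tf.Tensor has a dtype shared by all its elements and a shape.
import tensorflow as tf

t = tf.constant([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0]])

print(t.dtype)   # <dtype: 'float32'>  -- same data type for every element
print(t.shape)   # (2, 3)
print(tf.matmul(t, tf.transpose(t)))   # operations on tensors produce new tensors
```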


Learning with Inadequate and Incorrect Supervision

arXiv.org Machine Learning

In practice, we often face the dilemma that the labeled data at hand are inadequate to train a reliable classifier and, more seriously, some of these labeled data may be mislabeled due to various human factors. Therefore, this paper proposes a novel semi-supervised learning paradigm that can handle both label insufficiency and label inaccuracy. To address label insufficiency, we use a graph to bridge the data points so that the label information can be propagated from the scarce labeled examples to unlabeled examples along the graph edges. To address label inaccuracy, Graph Trend Filtering (GTF) and Smooth Eigenbase Pursuit (SEP) are adopted to filter out the initial noisy labels. GTF penalizes the l_0 norm of the label difference between connected examples in the graph and exhibits better local adaptivity than the traditional l_2 norm-based Laplacian smoother. SEP reconstructs the correct labels by emphasizing the leading eigenvectors of the Laplacian matrix associated with small eigenvalues, as these eigenvectors reflect real label smoothness and carry rich class-separation cues. We term our algorithm 'Semi-supervised learning under Inadequate and Incorrect Supervision' (SIIS). Thorough experimental results on image classification, text categorization, and speech recognition demonstrate that SIIS is effective at label error correction, leading to performance superior to state-of-the-art methods in the presence of label noise and label scarcity.
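The GTF and SEP steps are specific to the paper, but the underlying idea of propagating label information along graph edges can be sketched with plain iterative label propagation (a simplification, not the authors' method):

```python
# Hedged sketch: labelled nodes keep their labels, unlabelled nodes repeatedly
# average the label distributions of their neighbours along the graph edges.
import numpy as np


def label_propagation(W, y, labeled_mask, n_classes, iters=50):
    """W: (n, n) symmetric adjacency; y: integer labels; labeled_mask: bool (n,)."""
    n = W.shape[0]
    D_inv = 1.0 / np.maximum(W.sum(axis=1), 1e-12)
    F = np.zeros((n, n_classes))
    F[labeled_mask, y[labeled_mask]] = 1.0          # one-hot seed labels
    seeds = F.copy()
    for _ in range(iters):
        F = D_inv[:, None] * (W @ F)                # average neighbour label distributions
        F[labeled_mask] = seeds[labeled_mask]       # clamp the known labels
    return F.argmax(axis=1)


# toy graph: two cliques joined by one weak edge, one seed label per clique
W = np.zeros((6, 6))
W[:3, :3] = 1; W[3:, 3:] = 1; W[2, 3] = W[3, 2] = 0.1
np.fill_diagonal(W, 0)
y = np.array([0, -1, -1, -1, -1, 1])
mask = np.array([True, False, False, False, False, True])
print(label_propagation(W, y, mask, n_classes=2))   # expect [0 0 0 1 1 1]
```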


futureofwork _2019-02-19_06-03-48.xlsx

#artificialintelligence

The graph represents a network of 3,408 Twitter users whose tweets in the requested range contained "futureofwork", or who were replied to or mentioned in those tweets. The network was obtained from the NodeXL Graph Server on Tuesday, 19 February 2019 at 14:05 UTC. The requested start date was Tuesday, 19 February 2019 at 01:01 UTC and the maximum number of days (going backward) was 14. The maximum number of tweets collected was 5,000. The tweets in the network were tweeted over the 1-day, 11-hour, 23-minute period from Sunday, 17 February 2019 at 13:37 UTC to Tuesday, 19 February 2019 at 01:00 UTC.


Simplifying Graph Convolutional Networks

arXiv.org Machine Learning

Graph Convolutional Networks (GCNs) and their variants have experienced significant attention and have become the de facto methods for learning graph representations. GCNs derive inspiration primarily from recent deep learning approaches, and as a result, may inherit unnecessary complexity and redundant computation. In this paper, we reduce this excess complexity through successively removing nonlinearities and collapsing weight matrices between consecutive layers. We theoretically analyze the resulting linear model and show that it corresponds to a fixed low-pass filter followed by a linear classifier. Notably, our experimental evaluation demonstrates that these simplifications do not negatively impact accuracy in many downstream applications. Moreover, the resulting model scales to larger datasets, is naturally interpretable, and yields up to two orders of magnitude speedup over FastGCN.
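A hedged sketch of the simplified model described in the abstract, reconstructed from its description rather than from the released code: propagate features with a fixed normalized adjacency matrix K times, then train an ordinary linear classifier on the result.

```python
# Sketch: collapse a K-layer GCN into fixed feature propagation S^K X
# (S = normalized adjacency with self-loops) followed by a linear classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression


def sgc_features(A, X, k=2):
    """A: (n, n) adjacency; X: (n, d) node features; returns S^k X."""
    A_hat = A + np.eye(A.shape[0])                          # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    S = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]   # D^-1/2 A_hat D^-1/2
    for _ in range(k):
        X = S @ X                                           # fixed low-pass propagation
    return X


# toy graph: two loosely connected clusters with noisy node features
A = np.zeros((6, 6))
A[:3, :3] = 1; A[3:, 3:] = 1; A[2, 3] = A[3, 2] = 1
np.fill_diagonal(A, 0)
X = np.random.randn(6, 4) + np.array([0, 0, 0, 1, 1, 1])[:, None]
y = np.array([0, 0, 0, 1, 1, 1])

clf = LogisticRegression().fit(sgc_features(A, X, k=2), y)  # linear classifier on S^k X
print(clf.score(sgc_features(A, X, k=2), y))
```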


Accelerated Gossip in Networks of Given Dimension using Jacobi Polynomial Iterations

arXiv.org Machine Learning

Consider a network of agents connected by communication links, where each agent holds a real value. The gossip problem consists in estimating the average of the values diffused in the network in a distributed manner. We develop a method solving the gossip problem that depends only on the spectral dimension of the network, that is, in the communication network set-up, the dimension of the space in which the agents live. This contrasts with previous work that required the spectral gap of the network as a parameter, or suffered from slow mixing. Our method shows an important improvement over existing algorithms in the non-asymptotic regime, i.e., when the values are far from being fully mixed in the network. Our approach stems from a polynomial-based point of view on gossip algorithms, as well as an approximation of the spectral measure of the graphs with a Jacobi measure. We show the power of the approach with simulations on various graphs, and with performance guarantees on graphs of known spectral dimension, such as grids and random percolation bonds. An extension of this work to distributed Laplacian solvers is discussed. As a side result, we also use the polynomial-based point of view to show the convergence of the message passing algorithm for gossip of Moallemi & Van Roy on regular graphs. The explicit computation of the rate of the convergence shows that message passing has a slow rate of convergence on graphs with small spectral gap.
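For context, here is a sketch of the plain gossip iteration that such methods accelerate (the Jacobi-polynomial acceleration itself is not reproduced): each agent repeatedly replaces its value with a weighted average of its neighbours' values, and all values converge to the network-wide mean.

```python
# Sketch of plain gossip averaging: x_{t+1} = W x_t with a symmetric, doubly
# stochastic gossip matrix built from local (Metropolis) weights.
import numpy as np


def simple_gossip(A, x, iters=200):
    """A: (n, n) adjacency of the communication graph; x: (n,) initial values."""
    deg = A.sum(axis=1)
    W = np.zeros_like(A, dtype=float)
    nz = A > 0
    W[nz] = 1.0 / (1.0 + np.maximum.outer(deg, deg))[nz]   # Metropolis edge weights
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))               # rows (and columns) sum to 1
    for _ in range(iters):
        x = W @ x                                           # one round of local averaging
    return x


# ring of 10 agents: every agent should converge to the average of the values
A = np.zeros((10, 10))
for i in range(10):
    A[i, (i + 1) % 10] = A[(i + 1) % 10, i] = 1
x0 = np.arange(10, dtype=float)
print(x0.mean(), simple_gossip(A, x0)[:3])
```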


3 Practical Ways to Think about AI in Healthcare

#artificialintelligence

There is no shortage of advances in AI these days, especially as they relate to deep learning. From "Everybody Dance Now," where AI-based motion transfer can make you appear to dance like a star, to an AI-based news anchor in China that reads the daily news with impressive facial expressions and voice inflection, much like a human. For healthcare, there have been many advances in medical imaging analysis, from diagnostic imaging to diabetic retinopathy, to name a few. This is great news, but I believe more can be done, specifically as it relates to the use of AI in physician and hospital settings. The following are three practical ways to think about AI in healthcare.


Graph neural networks: a review of methods and applications

#artificialintelligence

It's another graph neural networks survey paper today! Clearly, this covers much of the same territory as the survey we looked at earlier in the week, but when we're lucky enough to get two surveys published in quick succession, comparing the two different perspectives can add a lot and give a sense of what's important. In particular, here Zhou et al. have a different formulation for describing the core GNN problem, and a nice approach to splitting out the various components. Rather than make this a standalone write-up, I'm going to lean heavily on the graph neural network survey we looked at on Wednesday and try to enrich my understanding starting from there. For this survey, the GNN problem is framed based on the formulation in the original GNN paper, 'The graph neural network model,' Scarselli et al. 2009.