Davison, Andrew
A Novel Approach to Balance Convenience and Nutrition in Meals With Long-Term Group Recommendations and Reasoning on Multimodal Recipes and its Implementation in BEACON
Nagpal, Vansh, Valluru, Siva Likitha, Lakkaraju, Kausik, Gupta, Nitin, Abdulrahman, Zach, Davison, Andrew, Srivastava, Biplav
In fact, according to a recent meta-survey (Leme et al. 2021), almost 40% of the population across high-, low-, and medium-income countries do not adhere to their national food-based dietary guidelines, often prioritizing convenience over nutritional needs. Previous studies have shown that adhering to a provided meal plan instead of a self-selected one reduces the risk of adverse health conditions (Metz et al.

…background in automated recommendations of personalized meals, and then discuss our problem formulation, key solution components including data (recipe representation and format conversion) and meal recommendation, and their evaluation. We then describe a prototype implementation of the solution in the BEACON system along with the supported use cases, and conclude with a discussion of practical considerations and avenues for future extensions.
PixRO: Pixel-Distributed Rotational Odometry with Gaussian Belief Propagation
Alzugaray, Ignacio, Murai, Riku, Davison, Andrew
Visual sensors are not only becoming better at capturing high-quality images but are also steadily increasing their ability to process data on-chip. Yet the majority of Visual Odometry (VO) pipelines rely on the transmission and processing of full images in a centralized unit (e.g. CPU or GPU), images which often contain substantial redundant and low-quality information for the task. In this paper, we address the task of frame-to-frame rotational estimation but, instead of reasoning about relative motion between frames using the full images, distribute the estimation at the pixel level. In this paradigm, each pixel produces an estimate of the global motion by relying only on local information and local message-passing with neighbouring pixels. The resulting per-pixel estimates can then be communicated to downstream tasks, yielding higher-level, informative cues instead of the original raw pixel readings. We evaluate the proposed approach on real public datasets, where we offer detailed insights about this novel technique and open-source our implementation for the future benefit of the community.
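The distributed estimation idea above can be illustrated with a toy sketch: each pixel holds a Gaussian belief (mean, precision) over a single scalar motion parameter and repeatedly re-fuses its own noisy local measurement with its 4-connected neighbours' current beliefs via precision-weighted averaging. This is a simplified, synchronous stand-in for the Gaussian belief propagation scheme the abstract describes, not the paper's actual algorithm; all function names and the damping constant are illustrative.

```python
import numpy as np

def per_pixel_consensus(meas_mu, meas_prec, n_iters=100):
    """Each pixel refines a scalar motion estimate by fusing its own noisy
    measurement with its 4-connected neighbours' current beliefs using a
    precision-weighted average (a toy stand-in for Gaussian belief
    propagation; the damping factor below is an illustrative choice)."""
    mu = meas_mu.copy()
    prec = meas_prec.copy()
    for _ in range(n_iters):
        # replicate border values so every pixel has four neighbours
        pm = np.pad(mu, 1, mode="edge")
        pp = np.pad(prec, 1, mode="edge")
        n_mu = np.stack([pm[:-2, 1:-1], pm[2:, 1:-1],
                         pm[1:-1, :-2], pm[1:-1, 2:]])
        n_prec = np.stack([pp[:-2, 1:-1], pp[2:, 1:-1],
                           pp[1:-1, :-2], pp[1:-1, 2:]])
        # precision-weighted fusion of own measurement and neighbour beliefs
        den = meas_prec + n_prec.sum(axis=0)
        mu = (meas_prec * meas_mu + (n_prec * n_mu).sum(axis=0)) / den
        prec = den / 5.0  # damp so precisions stay bounded
    return mu
```

Because each pixel only ever touches its own measurement and its immediate neighbours, the update is trivially parallel, which is the point of pushing estimation onto the sensor itself.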
Community Detection and Classification Guarantees Using Embeddings Learned by Node2Vec
Davison, Andrew, Morgan, S. Carlyle, Ward, Owen G.
Within network science, a widely applicable and important inference task is to understand how the behavior of interactions between different units (nodes) within the network depends on their latent characteristics. This occurs within a wide array of disciplines, from sociological (Freeman, 2004) to biological (Luo et al., 2007) networks. One simple and interpretable model for such a task is the stochastic block model (SBM) (Holland et al., 1983), which assumes that nodes within the network are assigned a discrete community label. Edges between nodes in the network are then formed independently across all pairs of nodes, conditional on these community assignments. While such a model is simplistic, it and various extensions, such as the degree-corrected SBM (DCSBM), used to handle degree heterogeneity (Karrer and Newman, 2011), and mixed-membership SBMs, which allow for more complex community structures (Airoldi, Blei, Fienberg, and Xing, 2008), have seen wide empirical success (Latouche et al., 2011; Legramanti et al., 2022; Airoldi, Blei, Fienberg, Xing, and Jaakkola, 2006). One restriction of the stochastic block model and its generalizations is the requirement for a discrete community assignment as a latent representation of the units within the network. While the statistical community has previously considered more flexible latent representations (Hoff et al., 2002), over the past decade there have been significant advancements in general embedding methods for networks, which produce general vector representations of units within a network and typically achieve state-of-the-art performance in downstream tasks such as node classification and link prediction. An early example of such a method is spectral clustering (Ng et al., 2001), which constructs an embedding of the nodes in the network from an eigendecomposition of the graph Laplacian. The eigenvectors corresponding to the k smallest non-zero eigenvalues provide a k-dimensional representation of each of the nodes in the network.
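The SBM-plus-spectral-clustering pipeline described above can be sketched in a few lines: sample a two-block SBM, form the unnormalized graph Laplacian L = D - A, and split communities on the sign of the Fiedler vector (the eigenvector of the second-smallest eigenvalue). This is a minimal illustration under the assumption of two well-separated, connected blocks; the function names are my own.

```python
import numpy as np

def sbm_adjacency(sizes, p_in, p_out, rng):
    """Sample a symmetric adjacency matrix from a stochastic block model:
    within-block edge probability p_in, between-block probability p_out."""
    n = sum(sizes)
    labels = np.repeat(np.arange(len(sizes)), sizes)
    probs = np.where(labels[:, None] == labels[None, :], p_in, p_out)
    upper = np.triu(rng.random((n, n)) < probs, 1)
    return (upper | upper.T).astype(float), labels

def spectral_bipartition(A):
    """Embed nodes via the graph Laplacian L = D - A and split on the sign
    of the Fiedler vector (assumes a connected graph, so eigenvalue 0 is
    simple and belongs to the constant eigenvector)."""
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)       # eigenvalues in ascending order
    return (vecs[:, 1] > 0).astype(int)
```

With strong separation (p_in well above p_out), the sign split recovers the planted labels up to a relabeling of the two communities.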
Asymptotics of $\ell_2$ Regularized Network Embeddings
Davison, Andrew
A common approach to solving tasks such as node classification or link prediction on a large network begins by learning a Euclidean embedding of the nodes of the network, to which regular machine learning methods can be applied. For unsupervised random walk methods such as DeepWalk and node2vec, adding an $\ell_2$ penalty on the embedding vectors to the loss leads to improved downstream task performance. In this paper we study the effects of this regularization and prove that, under exchangeability assumptions on the graph, it asymptotically leads to learning a nuclear-norm-type penalized graphon. In particular, the exact form of the penalty depends on the choice of subsampling method used within stochastic gradient descent to learn the embeddings. We also illustrate empirically that concatenating node covariates to $\ell_2$ regularized node2vec embeddings leads to comparable, if not superior, performance to methods which incorporate node covariates and the network structure in a non-linear manner.
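To make the regularized objective concrete, here is a sketch of one SGD step for a node2vec-style loss with an $\ell_2$ penalty added: positive (co-occurring) pairs contribute $-\log\sigma(\langle u_i, u_j\rangle)$, negative samples contribute $-\log\sigma(-\langle u_i, u_j\rangle)$, and the penalty adds $(\lambda/2)\sum_i \|u_i\|^2$. This is an illustrative simplification, not the paper's exact setup or subsampling scheme.

```python
import numpy as np

def penalized_sgd_step(U, pos_pairs, neg_pairs, lam, lr):
    """One full-gradient SGD step on a node2vec-style skip-gram loss with
    an l2 penalty on the embedding matrix U (rows are node embeddings).
    Illustrative sketch; real implementations subsample pairs via random
    walks and negative sampling."""
    sigma = lambda x: 1.0 / (1.0 + np.exp(-x))
    grad = lam * U                          # gradient of (lam/2)*sum ||u_i||^2
    for pairs, sign in ((pos_pairs, 1.0), (neg_pairs, -1.0)):
        for i, j in pairs:
            s = sigma(sign * (U[i] @ U[j]))
            # d/du of -log sigma(sign * <u_i, u_j>) = -(1 - s) * sign * partner
            coef = -(1.0 - s) * sign
            grad[i] += coef * U[j]
            grad[j] += coef * U[i]
    return U - lr * grad
```

With no training pairs the step is pure weight decay, shrinking every embedding vector toward zero by a factor of (1 - lr * lam), which is where the nuclear-norm-type effect studied in the paper originates.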
Asymptotics of Network Embeddings Learned via Subsampling
Davison, Andrew, Austern, Morgane
Network data are ubiquitous in modern machine learning, with tasks of interest including node classification, node clustering and link prediction. A frequent approach begins by learning a Euclidean embedding of the network, to which algorithms developed for vector-valued data are applied. For large networks, embeddings are learned using stochastic gradient methods where the subsampling scheme can be freely chosen. Despite the strong empirical performance of such methods, they are not well understood theoretically. Our work encapsulates representation methods using a subsampling approach, such as node2vec, into a single unifying framework. We prove, under the assumption that the graph is exchangeable, that the distribution of the learned embedding vectors asymptotically decouples. Moreover, we characterize the asymptotic distribution and provide rates of convergence in terms of the latent parameters, including the choice of loss function and the embedding dimension. This provides a theoretical foundation to understand what the embedding vectors represent and how well these methods perform on downstream tasks. Notably, we observe that typically used loss functions may lead to shortcomings, such as a lack of Fisher consistency.
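The "freely chosen subsampling scheme" mentioned above can be made concrete with the most common instance: generating (centre, context) training pairs from uniform random walks, as DeepWalk-style methods do before feeding pairs to SGD. This is a minimal sketch; node2vec's biased second-order walks and negative sampling are omitted, and the function name is my own.

```python
import numpy as np

def random_walk_pairs(adj_list, walk_len, window, n_walks, rng):
    """Generate (centre, context) training pairs by uniform random walks
    over an adjacency-list graph: for each position in a walk, every node
    within `window` steps on either side becomes a context node."""
    pairs = []
    nodes = list(adj_list)
    for _ in range(n_walks):
        walk = [nodes[rng.integers(len(nodes))]]   # uniform start node
        for _ in range(walk_len - 1):
            nbrs = adj_list[walk[-1]]
            if not nbrs:
                break                              # dead end: stop the walk
            walk.append(nbrs[rng.integers(len(nbrs))])
        for t, centre in enumerate(walk):
            context = walk[max(0, t - window):t] + walk[t + 1:t + 1 + window]
            pairs.extend((centre, u) for u in context)
    return pairs
```

Swapping this sampler for, say, uniform edge sampling changes the distribution of pairs seen by SGD, which is exactly the degree of freedom whose asymptotic effect the paper characterizes.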
Next Waves in Veridical Network Embedding
Ward, Owen G., Huang, Zhen, Davison, Andrew, Zheng, Tian
Embedding nodes of a large network into a metric (e.g., Euclidean) space has become an area of active research in statistical machine learning, which has found applications in the natural and social sciences. Generally, a representation of a network object is learned in a Euclidean geometry and is then used for subsequent tasks regarding the nodes and/or edges of the network, such as community detection, node classification and link prediction. Network embedding algorithms have been proposed in multiple disciplines, often with domain-specific notations and details. In addition, different measures and tools have been adopted to evaluate and compare the methods proposed under different settings, often dependent on the downstream tasks. As a result, it is challenging to systematically study these algorithms in the literature. Motivated by the recently proposed Veridical Data Science (VDS) framework, we propose a framework for network embedding algorithms and discuss how the principles of predictability, computability and stability apply in this context. The utilization of this framework in network embedding holds the potential to motivate and point to new directions for future research.
Event-based Vision: A Survey
Gallego, Guillermo, Delbruck, Tobi, Orchard, Garrick, Bartolozzi, Chiara, Taba, Brian, Censi, Andrea, Leutenegger, Stefan, Davison, Andrew, Conradt, Joerg, Daniilidis, Kostas, Scaramuzza, Davide
Event cameras are bio-inspired sensors that work radically differently from traditional cameras. Instead of capturing images at a fixed rate, they measure per-pixel brightness changes asynchronously. This results in a stream of events, which encode the time, location and sign of the brightness changes. Event cameras possess outstanding properties compared to traditional cameras: very high dynamic range (140 dB vs. 60 dB), high temporal resolution (on the order of microseconds), low power consumption, and no motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as high speed and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
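The event stream described above — tuples encoding time, location and polarity — can be bridged to frame-based algorithms by one of the simplest event representations: accumulating signed event counts into an image. A minimal sketch, assuming events are (t, x, y, polarity) tuples with polarity +1/-1 for brightness increase/decrease; the function name is illustrative.

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate a stream of (t, x, y, polarity) events into a signed
    event-count image. Pixels with many positive events become bright,
    pixels with many negative events become dark, untouched pixels stay 0.
    Timestamps are ignored here; time-surface representations would use them."""
    frame = np.zeros((height, width))
    for t, x, y, p in events:
        frame[y, x] += p
    return frame
```

More elaborate representations (time surfaces, voxel grids, learned encodings) follow the same pattern of mapping the asynchronous stream into a tensor that downstream vision algorithms can consume.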