

Giving robots a better feel for object manipulation

#artificialintelligence

A new learning system developed by MIT researchers improves robots' abilities to mold materials into target shapes and to make predictions about interacting with solid objects and liquids. The system, known as a learning-based particle simulator, could give industrial robots a more refined touch -- and it may have fun applications in personal robotics, such as modelling clay shapes or rolling sticky rice for sushi. In robotic planning, physical simulators are models that capture how different materials respond to force. Robots are "trained" using these models to predict the outcomes of their interactions with objects, such as pushing a solid box or poking deformable clay. But traditional learning-based simulators mainly focus on rigid objects and are unable to handle fluids or softer objects.
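The article stops short of implementation detail, but the core idea of a learning-based particle simulator can be sketched: represent the material as particles and learn a function that maps current particle states to future positions. The network shape, sizes, and random data below are placeholder assumptions, not MIT's actual system; systems like this typically use graph networks over particle-interaction edges rather than the per-particle MLP shown here.

```python
# Minimal sketch of a learned particle simulator step (assumed setup,
# not the MIT system): particles carry position + velocity, and a
# learned model predicts each particle's position update.
import torch
import torch.nn as nn

n_particles, dim = 64, 3
state = torch.randn(n_particles, 2 * dim)   # position + velocity per particle

# Placeholder dynamics model; real systems pass messages between
# interacting particles instead of treating each one independently.
dynamics = nn.Sequential(nn.Linear(2 * dim, 128), nn.ReLU(), nn.Linear(128, dim))

next_positions = state[:, :dim] + dynamics(state)   # predicted position update
print(next_positions.shape)  # torch.Size([64, 3])
```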


Facebook AI Open-Sources PyTorch-BigGraph Tool for 'Extremely Large' Graphs

#artificialintelligence

Facebook AI Research has announced it is open-sourcing PyTorch-BigGraph (PBG), a tool that can easily process and produce graph embeddings for extremely large graphs. PBG can also produce multi-relation graph embeddings in cases where the model is too large to fit in memory. Facebook says that PBG not only performs faster than commonly used embedding software but also produces embeddings of comparable quality to state-of-the-art models on standard benchmarks. The company says PBG will allow users to quickly produce high-quality embeddings from a large graph using either a single machine or multiple machines in parallel. Graphs are widely used in almost all types of programming for representing data.


PyTorch-BigGraph: Faster embeddings of large graphs

#artificialintelligence

Working effectively with large graphs is crucial to advancing both the research and applications of artificial intelligence. So Facebook AI has created and is now open-sourcing PyTorch-BigGraph (PBG), a tool that makes it much faster and easier to produce graph embeddings for extremely large graphs -- in particular, multi-relation graph embeddings for graphs where the model is too large to fit in memory. PBG is faster than commonly used embedding software and produces embeddings of comparable quality to state-of-the-art models on standard benchmarks. With this new tool, anyone can take a large graph and quickly produce high-quality embeddings using a single machine or multiple machines in parallel. As an example, we are also releasing the first published embeddings of the full Wikidata graph of 50 million Wikipedia concepts, which serves as structured data for use in the AI research community.
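For intuition, here is a toy sketch of the kind of multi-relation embedding training that PBG scales up, using a TransE-style score (source + relation should land near destination). The graph size, margin, and hyperparameters below are invented; PBG's actual interface is a configuration file plus a training command, and its contribution is partitioning and distributed training for graphs too large for memory, not the basic loop shown here.

```python
# Toy multi-relation graph embedding step (TransE-style score); all
# sizes and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn

n_entities, n_relations, d = 100, 4, 16
ent = nn.Embedding(n_entities, d)
rel = nn.Embedding(n_relations, d)
opt = torch.optim.Adam(list(ent.parameters()) + list(rel.parameters()), lr=0.01)

def score(src, r, dst):
    # Lower distance = more plausible edge.
    return (ent(src) + rel(r) - ent(dst)).norm(dim=-1)

# One training step on a single (src, rel, dst) edge with a random negative.
src, r, dst = torch.tensor([3]), torch.tensor([1]), torch.tensor([7])
neg_dst = torch.randint(n_entities, (1,))
loss = torch.relu(1.0 + score(src, r, dst) - score(src, r, neg_dst)).mean()
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```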


Efficient Search-Based Weighted Model Integration

arXiv.org Artificial Intelligence

Weighted model integration (WMI) extends weighted model counting (WMC) to the integration of functions over mixed discrete-continuous domains. It has shown tremendous promise for solving inference problems in graphical models and probabilistic programming. Yet state-of-the-art tools for WMI are limited in performance and ignore the independence structure that is crucial to improving efficiency. To address this limitation, we propose an efficient model integration algorithm for theories with tree primal graphs. We exploit the sparse graph structure by using search to perform integration. Our algorithm greatly improves computational efficiency on such problems and exploits context-specific independence between variables. Experimental results show dramatic speedups compared to existing WMI solvers on problems with tree-shaped dependencies.
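To make the setting concrete, here is a toy weighted model integration: we sum over assignments of a Boolean variable while integrating a density over the continuous region each assignment admits. The theory, weights, and density are invented, and this brute-force enumeration is exactly the kind of work the paper's search-based algorithm avoids by exploiting tree structure.

```python
# Toy WMI example (invented theory): b constrains x to [0, 1],
# not-b constrains x to [1, 2]; each literal carries a weight and
# the continuous variable carries an unnormalised density.
from sympy import Symbol, integrate

x = Symbol("x")
weight_b = {True: 0.7, False: 0.3}
density = x  # assumed weight function over the continuous variable

wmi = 0
for b in (True, False):
    lo, hi = (0, 1) if b else (1, 2)
    wmi += weight_b[b] * integrate(density, (x, lo, hi))

print(wmi)  # 0.7 * 1/2 + 0.3 * 3/2 = 0.8
```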


Salesforce Research: Knowledge graphs and machine learning to power Einstein

ZDNet

A super geeky topic, which could have super important repercussions in the real world. That description could very well fit anything from cold fusion to knowledge graphs, so a bit of unpacking is in order. If you're into science, chances are you know arXiv.org; it's where cutting-edge research often appears first. Some months back, a publication from Salesforce researchers appeared on arXiv, titled "Multi-Hop Knowledge Graph Reasoning with Reward Shaping."


A study of problems with multiple interdependent components - Part I

arXiv.org Artificial Intelligence

Recognising that real-world optimisation problems have multiple interdependent components can be quite easy. However, providing a generic and formal model for dependencies between components can be a tricky task. In fact, a problem with multiple interdependent components (PMIC) can be considered simply as a single optimisation problem, and the dependencies between components could be investigated by studying the decomposability of the problem and the correlations between the sub-problems. In this work, we attempt to define PMICs by reasoning from a reverse perspective. Instead of considering a decomposable problem, we model multiple problems (the components) and define how these components could be connected. In this document, we introduce notions related to problems with multiple interdependent components. We start by introducing realistic examples from logistics and supply chain management to illustrate the composite nature of these problems and the dependencies within them. Afterwards, we present our attempt to formalise and classify dependency in multi-component problems.
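As a hedged illustration of interdependence (not the paper's formalism), consider two toy components where the second's cost depends on the first's solution. All the data below is invented, in the spirit of the logistics examples the abstract mentions.

```python
# Two toy components: A picks a warehouse, B picks a route whose cost
# depends on A's choice. All costs are made up for illustration.
from itertools import product

warehouses = {"w1": 5, "w2": 3}                       # opening cost
route_cost = {("w1", "r1"): 2, ("w1", "r2"): 6,
              ("w2", "r1"): 7, ("w2", "r2"): 5}       # coupled cost

# Solving components independently ignores the dependency: A alone would
# pick w2 (cost 3), leaving a best total of 8. The composite problem must
# search the joint space, where the optimum is ('w1', 'r1') at cost 7.
best = min(product(warehouses, ["r1", "r2"]),
           key=lambda s: warehouses[s[0]] + route_cost[s])
print(best)  # ('w1', 'r1')
```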


Autoregressive Models for Sequences of Graphs

arXiv.org Artificial Intelligence

This paper proposes an autoregressive (AR) model for sequences of graphs, which generalises traditional AR models. A first novelty consists in formalising the AR model for a very general family of graphs, characterised by a variable topology and by attributes associated with nodes and edges. A graph neural network (GNN) is also proposed to learn the AR function associated with the graph-generating process (GGP) and subsequently predict the next graph in a sequence. The proposed method is compared with four baselines on synthetic GGPs, showing significantly better performance on all considered problems.
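The abstract is compact, so here is a minimal sketch of the autoregressive idea in the much simpler case of a fixed node set, with each graph represented by its adjacency matrix. The AR coefficients and threshold are made up, and the paper's actual model handles variable topology and node/edge attributes with a learned GNN rather than fixed linear coefficients.

```python
# Order-p autoregressive step over graphs with a fixed node set
# (simplified stand-in for the paper's setting; coefficients assumed).
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 2                      # nodes, AR order
history = [rng.random((n, n)) for _ in range(p)]
phi = [0.6, 0.3]                 # AR coefficients, most recent graph first

# Next-graph prediction: weighted sum of past adjacencies, thresholded.
pred_weights = sum(c * A for c, A in zip(phi, reversed(history)))
next_graph = (pred_weights > 0.5).astype(int)
print(next_graph)
```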


Machine Learning Algorithms In Layman's Terms, Part 1

#artificialintelligence

As a recent graduate of the Flatiron School's Data Science Bootcamp, I've been inundated with advice on how to ace technical interviews. A soft skill that keeps coming to the forefront is the ability to explain complex machine learning algorithms to a non-technical person. This series of posts is me sharing with the world how I would explain all the machine learning topics I come across on a regular basis...to my grandma. Some get a bit in-depth, others less so, but all, I believe, are useful to a non-Data Scientist. In the upcoming parts of this series, I'll be going over several of these algorithms. To summarize, an algorithm is the mathematical life force behind a model.



Discovering Options for Exploration by Minimizing Cover Time

arXiv.org Artificial Intelligence

One of the main challenges in reinforcement learning is solving tasks with sparse reward. We show that the difficulty of discovering a distant rewarding state in an MDP is bounded by the expected cover time of a random walk over the graph induced by the MDP's transition dynamics. We therefore propose to accelerate exploration by constructing options that minimize cover time. Finding a set of edges that minimizes expected cover time is an extremely hard combinatorial optimization problem (Braess, 1968; Braess et al., 2005). Thus, our algorithm instead seeks to minimize an upper bound on the expected cover time, given as a function of the algebraic connectivity of the graph Laplacian (Fiedler, 1973; Broder & Karlin, 1989; Chung, 1996), using the heuristic method of Ghosh & Boyd (2006) that improves the upper bound on the expected cover time of a uniform random walk.
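The quantity driving the method, the algebraic connectivity (the second-smallest eigenvalue of the graph Laplacian), is easy to compute for a small example; the 4-node path graph below is an arbitrary illustration, not data from the paper.

```python
# Algebraic connectivity of a small graph: second-smallest eigenvalue
# of the Laplacian L = D - A. Larger values mean faster expected cover.
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # adjacency of a 4-node path

L = np.diag(A.sum(axis=1)) - A               # graph Laplacian
eigvals = np.sort(np.linalg.eigvalsh(L))
print(eigvals[1])                            # algebraic connectivity, ~0.586
```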