Quantum Algorithms Conquer a New Kind of Problem

#artificialintelligence

In 1994, the mathematician Peter Shor figured out how to make a quantum computer do something that no ordinary classical computer could. The work revealed that, in principle, a machine based on the rules of quantum mechanics could efficiently break a large number into its prime factors -- a task so difficult for a classical computer that it forms the basis for much of today's internet security. A surge of optimism followed. Perhaps, researchers thought, we'll be able to invent quantum algorithms that can solve a huge range of different problems. That hope has largely gone unfulfilled. "It's been a bit of a bummer trajectory," said Ryan O'Donnell of Carnegie Mellon University.


Factoring out prior knowledge from low-dimensional embeddings

Heiter, Edith, Fischer, Jonas, Vreeken, Jilles

arXiv.org Machine Learning

Embedding high-dimensional data into low-dimensional spaces, for example with tSNE [van der Maaten and Hinton, 2008] or UMAP [McInnes et al., 2018], allows us to visually inspect and discover meaningful structure in data that would otherwise be difficult or impossible to see. These methods are as popular as they are useful, but at the same time limited in that they are one-shot only: they embed the data as is, and that is that. If the resulting embedding reveals novel knowledge, all is well. But what if the structure that dominates it is something we already know and are no longer interested in, or what if we want to discover whether the data has meaningful structure beyond what the first result revealed? In word embeddings, for example, we may already know that certain words are synonyms, while in single-cell sequencing we may want to discover structure other than known cell types, or factor out family relationships. The question at hand is therefore: how can we obtain low-dimensional embeddings that reveal structure beyond what we already know, i.e., how can we factor out prior knowledge from low-dimensional embeddings? Research on conditional embeddings has so far mostly focused on emphasizing rather than factoring out prior knowledge [Barshan et al., 2011, De Ridder et al., 2003, Hanhijärvi et al., 2009], with conditional tSNE as a notable exception, which, however, can only factor out label information [Kang et al., 2019]. Here, we propose two techniques for factoring out a more general form of prior knowledge from low-dimensional embeddings of arbitrary data types. In particular, we consider background knowledge in the form of pairwise distances between samples. This formulation covers a plethora of practical instances, including labels, clustering structure, family trees, user-defined distances, and, especially important for unstructured data, kernel matrices.
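As a rough illustration of the idea (not the authors' proposed techniques), one naive way to factor pairwise prior distances out of an embedding is to residualize the observed distance matrix against the background distance matrix and then embed the residual, here with classical MDS. All data, names, and the regression step are invented for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the dominant structure is a known two-group split (the "prior"),
# plus a weaker secondary pattern we would like the embedding to reveal.
labels = np.repeat([0, 1], 10)
X = rng.normal(size=(20, 2))
X[:, 0] += 5.0 * labels              # prior structure dominates axis 0
X[:, 1] += np.tile([0.0, 2.0], 10)   # secondary structure on axis 1

def pairwise_sq_dists(Z):
    G = Z @ Z.T
    d = np.diag(G)
    return d[:, None] + d[None, :] - 2.0 * G

D = pairwise_sq_dists(X)                                 # observed distances
P = (labels[:, None] != labels[None, :]).astype(float)   # prior distances

# Regress the prior out of the observed distances (entrywise least squares)
# and keep only the residual structure the prior cannot explain.
mask = ~np.eye(20, dtype=bool)
A = np.stack([P[mask], np.ones(mask.sum())], axis=1)
coef, *_ = np.linalg.lstsq(A, D[mask], rcond=None)
R = D - (coef[0] * P + coef[1])
np.fill_diagonal(R, 0.0)

# Classical MDS on the residual to obtain a 1-D "conditional" embedding.
J = np.eye(20) - np.ones((20, 20)) / 20.0
B = -0.5 * J @ R @ J
vals, vecs = np.linalg.eigh(B)           # eigenvalues in ascending order
emb = vecs[:, -1] * np.sqrt(max(vals[-1], 0.0))
```

By construction the residual distances are uncorrelated with the prior, so the resulting embedding can only reflect structure the prior does not already explain.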


Strong Stubborn Set Pruning for Star-Topology Decoupled State Space Search

Gnad, Daniel, Hoffmann, Jörg, Wehrle, Martin

Journal of Artificial Intelligence Research

Analyzing reachability in large discrete transition systems is an important sub-problem in several areas of AI, and of CS in general. State space search is a basic method for conducting such an analysis. A wealth of techniques have been proposed to reduce the search space without affecting the existence of (optimal) solution paths. In particular, strong stubborn set (SSS) pruning is a prominent such method, analyzing action dependencies to prune commutative parts of the search space. We herein show how to apply this idea to star-topology decoupled state space search, a recent search reformulation method invented in the context of classical AI planning. Star-topology decoupled state space search, short decoupled search, addresses planning tasks where a single center component interacts with several leaf components. The search exploits a form of conditional independence arising in this setting: given a fixed path p of transitions by the center, the possible leaf moves compliant with p are independent across the leaves. Decoupled search thus searches over center paths only, maintaining the compliant paths for each leaf separately. This avoids the enumeration of combined states across leaves. Just like standard search, decoupled search is adversely affected by commutative parts of its search space. The adaptation of strong stubborn set pruning is challenging due to the more complex structure of the search space, and the resulting ways in which action dependencies may affect the search. We spell out how to address this challenge, designing optimality-preserving decoupled strong stubborn set (DSSS) pruning methods. We introduce a design for star topologies in full generality, as well as simpler design variants for the practically relevant fork and inverted fork special cases. 
We show that there are cases where DSSS pruning is exponentially more effective than both decoupled search and SSS pruning, exhibiting true synergy where the whole is more than the sum of its parts. Empirically, DSSS pruning reliably inherits the best of its components, and sometimes outperforms both.
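To make the baseline concrete, here is a heavily simplified sketch of standard (non-decoupled) strong stubborn set pruning on a two-action STRIPS fragment; the interference test, the seeding rule, and all names are invented for illustration and omit many details of the real method:

```python
# Toy STRIPS actions: (name, preconditions, add effects, delete effects).
ACTIONS = [
    ("a1", frozenset(), frozenset({"g1"}), frozenset()),
    ("a2", frozenset(), frozenset({"g2"}), frozenset()),
]
GOAL = frozenset({"g1", "g2"})

def interfere(a, b):
    """Two actions interfere if either can disable or clobber the other."""
    _, pre_a, add_a, del_a = a
    _, pre_b, add_b, del_b = b
    return bool(del_a & (pre_b | add_b)) or bool(del_b & (pre_a | add_a))

def achievers(fact):
    return [a for a in ACTIONS if fact in a[2]]

def strong_stubborn_set(state):
    """Fixpoint computation of a (simplified) strong stubborn set at `state`."""
    unsat = sorted(GOAL - state)
    if not unsat:
        return set()
    T = set(achievers(unsat[0]))   # seed: achievers of one open goal fact
    changed = True
    while changed:
        changed = False
        for a in list(T):
            _, pre, _, _ = a
            if pre <= state:
                # Applicable: close T under interference with a.
                new = {b for b in ACTIONS if b is not a and interfere(a, b)}
            else:
                # Not applicable: add achievers of one unsatisfied precondition.
                p = sorted(pre - state)[0]
                new = set(achievers(p))
            if not new <= T:
                T |= new
                changed = True
    return T
```

Because a1 and a2 are commutative, expanding only the applicable actions inside the stubborn set generates one successor where plain search would generate two; this is the effect the paper's DSSS methods transfer to decoupled search.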


"Broken" SME payments industry "makes no sense" in 2019 » PaymentEye

#artificialintelligence

SME payments are "broken" and the industry's reliance on old-world processes "staggering", according to Paul Christensen, CEO and founder of machine learning firm Previse. "If I walk into Starbucks and ask for a coffee, I get my coffee and Starbucks gets paid instantly," says Christensen. "If this were a B2B scenario, I would walk in, ask for a coffee and then send an invoice in three months. That sounds ridiculous, and yet that is how the entire world of B2B transactions works."


Integrating Human-Provided Information Into Belief State Representation Using Dynamic Factorization

Chitnis, Rohan, Kaelbling, Leslie Pack, Lozano-Pérez, Tomás

arXiv.org Artificial Intelligence

In partially observed environments, it can be useful for a human to provide the robot with declarative information that represents probabilistic relational constraints on properties of objects in the world, augmenting the robot's sensory observations. For instance, a robot tasked with a search-and-rescue mission may be informed by the human that two victims are probably in the same room. An important question arises: how should we represent the robot's internal knowledge so that this information is correctly processed and combined with raw sensory information? In this paper, we provide an efficient belief state representation that dynamically selects an appropriate factoring, combining aspects of the belief when they are correlated through information and separating them when they are not. This strategy works in open domains, in which the set of possible objects is not known in advance, and provides significant improvements in inference time over a static factoring, leading to more efficient planning for complex partially observed tasks.
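A minimal sketch of the merging half of dynamic factoring, assuming a toy discrete belief (the rooms, victims, probabilities, and function names are all invented; the paper's representation is far more general): each victim starts with an independent factor, and a human-provided soft constraint that correlates two victims triggers a merge of their factors into a joint one.

```python
# Start from a fully factored belief: one independent factor per victim's room.
ROOMS = ["A", "B", "C"]
factors = {
    ("v1",): {(r,): 1 / 3 for r in ROOMS},
    ("v2",): {(r,): 1 / 3 for r in ROOMS},
}

def merge(vars_a, vars_b):
    """Join two factors into one over the union of their variables."""
    fa, fb = factors.pop(vars_a), factors.pop(vars_b)
    factors[vars_a + vars_b] = {
        xa + xb: pa * pb for xa, pa in fa.items() for xb, pb in fb.items()
    }

def apply_same_room(vars_key, i, j, p_same=0.9):
    """Soft constraint: variables i and j share a room with probability p_same."""
    f = factors[vars_key]
    for x in f:
        f[x] *= p_same if x[i] == x[j] else 1.0 - p_same
    z = sum(f.values())
    for x in f:
        f[x] /= z

# "The two victims are probably in the same room" correlates v1 and v2,
# so their factors are merged before the constraint reweights the joint.
merge(("v1",), ("v2",))
apply_same_room(("v1", "v2"), 0, 1)
```

Keeping uncorrelated victims in separate factors is what makes inference cheap; only the variables a constraint actually couples pay the cost of a joint representation.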


Data Science Method to Discover Large Prime Numbers

@machinelearnbot

Large prime numbers have been a topic of considerable research, both for their intrinsic mathematical beauty and for developing more powerful cryptographic applications and random number generators. In this article, we show how big data, statistical science (more specifically, pattern recognition) and new efficient, distributed algorithms could open an original research path to discovering large primes. We also discuss new mathematical conjectures related to our methodology. Much of the focus so far has been on discovering raw large primes: any time a new one, bigger than all predecessors, is found, it gets a lot of attention even beyond the mathematical community. Here we explore a different path: finding numbers (usually not primes) that have a very large prime factor.
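To pin down the latter notion (not the article's statistical method, just the underlying arithmetic), the largest prime factor of a modest number can be found by plain trial division:

```python
def largest_prime_factor(n):
    """Largest prime factor of n > 1, by trial division up to sqrt(n)."""
    best = 1
    d = 2
    while d * d <= n:
        while n % d == 0:   # strip out every copy of the divisor d
            best = d
            n //= d
        d += 1
    # Whatever remains above 1 is itself a prime larger than any divisor found.
    return n if n > 1 else best
```

For example, 13195 = 5 × 7 × 13 × 29, so its largest prime factor is 29. Trial division is only viable for small inputs, which is exactly why the article looks for statistical shortcuts.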

