Demystifying Orthogonal Monte Carlo and Beyond
Orthogonal Monte Carlo (OMC) is a very effective sampling algorithm that imposes structural geometric conditions (orthogonality) on samples for variance reduction. Due to its simplicity and superior performance compared to its Quasi Monte Carlo counterparts, OMC is used in a wide spectrum of challenging machine learning applications, ranging from scalable kernel methods to predictive recurrent neural networks, generative models, and reinforcement learning. However, theoretical understanding of the method remains very limited. In this paper we shed new light on the theoretical principles behind OMC, applying the theory of negatively dependent random variables to obtain several new concentration results. As a corollary, we obtain the first uniform convergence results for OMCs and consequently substantially strengthen the best known downstream guarantees for kernel ridge regression via OMCs. We also propose novel extensions of the method, called Near-Orthogonal Monte Carlo (NOMC), which leverage the theory of algebraic varieties over finite fields and particle algorithms. We show that NOMC is the first algorithm to consistently outperform OMC in applications ranging from kernel methods to approximating distances in probabilistic metric spaces.
- North America > United States > California > Los Angeles County > Long Beach (0.14)
- North America > Canada > British Columbia > Vancouver (0.04)
- Asia > Japan > Kyūshū & Okinawa > Okinawa (0.04)
- (6 more...)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- North America > United States > California > Orange County > Irvine (0.04)
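The orthogonality idea behind OMC can be illustrated with a minimal sketch: replace i.i.d. Gaussian sample directions with the rows of an orthogonalized Gaussian matrix (via QR), resampling row norms from the chi distribution so each row keeps its N(0, I) marginal, and compare the two estimators on a standard random-Fourier-feature task. All names here are illustrative; this is a sketch of the general construction, not the paper's algorithm.

```python
# Sketch of orthogonal vs. i.i.d. Monte Carlo sampling for random Fourier
# features approximating the Gaussian kernel exp(-||x - y||^2 / 2).
import numpy as np

def orthogonal_gaussian(d, rng):
    """d x d matrix with mutually orthogonal rows and N(0, I) row marginals."""
    g = rng.standard_normal((d, d))
    q, _ = np.linalg.qr(g)                      # orthonormal rows/columns
    norms = np.sqrt(rng.chisquare(d, size=d))   # chi-distributed row norms
    return q * norms[:, None]

def rff_kernel(x, y, w):
    """Random-feature estimate of the Gaussian kernel at (x, y)."""
    return np.mean(np.cos(w @ (x - y)))         # E[cos(w.(x-y))] = k(x, y)

rng = np.random.default_rng(0)
d = 16
x, y = rng.standard_normal(d) * 0.3, rng.standard_normal(d) * 0.3
exact = np.exp(-0.5 * np.sum((x - y) ** 2))

iid_est = [rff_kernel(x, y, rng.standard_normal((d, d))) for _ in range(2000)]
omc_est = [rff_kernel(x, y, orthogonal_gaussian(d, rng)) for _ in range(2000)]
print("exact:", exact)
print("iid MSE:", np.mean((np.array(iid_est) - exact) ** 2))
print("OMC MSE:", np.mean((np.array(omc_est) - exact) ** 2))
```

Both estimators are unbiased (each row is marginally Gaussian); the orthogonality constraint couples the rows so that their errors partially cancel, which is the variance-reduction effect the abstract refers to.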
Stereotypical gender actions can be extracted from Web text
Herdağdelen, Amaç, Baroni, Marco
Online social networks and micro-blogging services are no longer limited to the followers of the latest technologies or teenagers, as might once have been expected. Such technology and services are becoming widely adopted by the mainstream population as an integral part of their daily lives (Fox et al., 2009). A very prominent example of such an application is Twitter, a micro-blogging service. Twitter lets its users post very short (at most 140-character) messages - which are called tweets - about what they have been doing or thinking, or what they want to share with their friends and other people. Every day, tens of millions of tweets are posted by users worldwide. The proliferation of publicly available, user-generated content is a vast source of social data and is already shaping the field of computational social science (Lazer et al., 2009; Thelwall et al., 2010a). Another field that enjoys the abundance of Web-based text is knowledge extraction and automated ontology building. An example application is KNEXT (Knowledge Extraction from Text) - a system proposed for extracting "general world knowledge from miscellaneous texts, including fiction" (Schubert and Tong, 2003). Web-based text is increasingly used as a source for everyday knowledge (frequently referred to as commonsense knowledge).
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > Hawaii > Honolulu County > Honolulu (0.04)
- (3 more...)
- Research Report (1.00)
- Instructional Material (0.94)
Optimization Monte Carlo: Efficient and Embarrassingly Parallel Likelihood-Free Inference
We describe an embarrassingly parallel, anytime Monte Carlo method for likelihood-free models. The algorithm starts with the view that the stochasticity of the pseudo-samples generated by the simulator can be controlled externally by a vector of random numbers u, in such a way that the outcome, knowing u, is deterministic. For each instantiation of u we run an optimization procedure to minimize the distance between summary statistics of the simulator and the data. After reweighting these samples using the prior and the Jacobian (accounting for the change of volume in transforming from the space of summary statistics to the space of parameters), we show that this weighted ensemble represents a Monte Carlo estimate of the posterior distribution. The procedure can be run in an embarrassingly parallel fashion (each node handling one sample) and anytime (by allocating resources to the worst-performing sample). The procedure is validated on six experiments.
- North America > Canada > Ontario > Toronto (0.14)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- North America > United States > California > Orange County > Irvine (0.04)
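The fix-u, optimize, then reweight procedure in the abstract above can be sketched on a toy one-dimensional simulator x = theta + sigma * u with u ~ N(0, 1), where the summary statistic is x itself, so the Jacobian is 1 and the weight reduces to the prior density at the matched parameter. The simulator, the N(0, 1) prior, and all names are illustrative assumptions, not the paper's experiments.

```python
# Toy sketch of Optimization Monte Carlo for likelihood-free inference.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
sigma, x_obs = 1.0, 2.0
prior = lambda th: np.exp(-0.5 * th ** 2) / np.sqrt(2 * np.pi)  # N(0, 1) prior

def simulate(theta, u):
    return theta + sigma * u            # deterministic once u is fixed

samples, weights = [], []
for _ in range(4000):
    u = rng.standard_normal()           # externally controlled randomness
    # optimize theta so the simulated summary matches the observed one
    res = minimize_scalar(lambda th: (simulate(th, u) - x_obs) ** 2)
    theta_star = res.x
    # Jacobian d(summary)/d(theta) = 1 here, so weight = prior density
    samples.append(theta_star)
    weights.append(prior(theta_star))

w = np.array(weights) / np.sum(weights)
post_mean = np.sum(w * np.array(samples))
# this conjugate model has analytic posterior N(x_obs / 2, 1 / 2)
print("OMC posterior mean:", post_mean, "analytic:", x_obs / 2)
```

Each loop iteration is independent given its u, which is what makes the method embarrassingly parallel: every node can run one optimization and report back its (theta_star, weight) pair.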
Demystifying Orthogonal Monte Carlo and Beyond
Lin, Han, Chen, Haoxian, Zhang, Tianyi, Laroche, Clement, Choromanski, Krzysztof
Orthogonal Monte Carlo (OMC) is a very effective sampling algorithm that imposes structural geometric conditions (orthogonality) on samples for variance reduction. Due to its simplicity and superior performance compared to its Quasi Monte Carlo counterparts, OMC is used in a wide spectrum of challenging machine learning applications, ranging from scalable kernel methods to predictive recurrent neural networks, generative models, and reinforcement learning. However, theoretical understanding of the method remains very limited. In this paper we shed new light on the theoretical principles behind OMC, applying the theory of negatively dependent random variables to obtain several new concentration results. We also propose novel extensions of the method, called Near-Orthogonal Monte Carlo (NOMC), which leverage number theory techniques and particle algorithms. We show that NOMC is the first algorithm to consistently outperform OMC in applications ranging from kernel methods to approximating distances in probabilistic metric spaces.
- North America > United States > California > Los Angeles County > Long Beach (0.14)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.14)
- Asia > Japan > Kyūshū & Okinawa > Okinawa (0.04)
- (12 more...)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.34)
- Information Technology > Artificial Intelligence > Machine Learning > Supervised Learning > Representation Of Examples (0.34)
Can AI Achieve Common Sense to Make Machines More Intelligent?
Today, machines with artificial intelligence (AI) are becoming more prevalent in society. Across many fields, AI has taken over numerous tasks that humans used to perform. Because human intelligence is the reference point, artificial intelligence is being shaped toward what humans can do. However, the technology has not yet matched the level of wisdom possessed by humans, and it seems unlikely to achieve that milestone any time soon. To replace human beings at most jobs, machines need to exhibit what we intuitively call "common sense".
- Information Technology > Artificial Intelligence > Representation & Reasoning > Ontologies (0.80)
- Information Technology > Artificial Intelligence > Systems & Languages > Problem-Independent Architectures (0.57)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Commonsense Reasoning (0.57)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.32)