Computational Learning Theory
Probably Approximately Correct
The best we can hope for when it comes to most decisions is to be probably approximately correct: a high probability of being about right. In finance, analysts compare proposed capital costs with discounted anticipated future cash flows to calculate a net present value, a bundle of assumptions made in the hope of being probably approximately correct. Insurance is a hedge against a big loss; it's priced on the probability of bad things happening, and the insurance company makes a little money if its calculations are probably approximately correct. A doctor takes a few data points and makes a diagnosis, hoping she is probably approximately correct. School facilities planners estimate future enrollment trends, and school boards estimate the likelihood of community support for a construction bond; both hope to be probably approximately correct.
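Valiant's formalization makes "probably" and "approximately" precise. A learner sees a sample S of m examples drawn from an unknown distribution D and outputs a hypothesis h_S; writing err_D(h) for the probability that h mislabels a fresh example from D, the standard PAC requirement is

$$ \Pr_{S \sim D^m}\big[\, \operatorname{err}_D(h_S) \le \varepsilon \,\big] \;\ge\; 1 - \delta. $$

Here ε is the "approximately" and δ the "probably," and both can be driven as small as desired at the price of more examples.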
Optimal Best Arm Identification with Fixed Confidence
Aurélien Garivier, Emilie Kaufmann
We give a complete characterization of the complexity of best-arm identification in one-parameter bandit problems. We prove a new, tight lower bound on the sample complexity. We propose the 'Track-and-Stop' strategy, which we prove to be asymptotically optimal. It consists of a new sampling rule (which tracks the optimal proportions of arm draws highlighted by the lower bound) and a stopping rule named after Chernoff, for which we give a new analysis.
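For context, the lower bound in question (as I understand the paper's main theorem) states that any strategy that is correct with probability at least 1 − δ must satisfy

$$ \mathbb{E}_{\mu}[\tau_\delta] \;\ge\; T^*(\mu)\,\operatorname{kl}(\delta, 1-\delta), \qquad \frac{1}{T^*(\mu)} \;=\; \sup_{w \in \Sigma_K}\; \inf_{\lambda \in \operatorname{Alt}(\mu)}\; \sum_{a=1}^{K} w_a\, d(\mu_a, \lambda_a), $$

where τ_δ is the stopping time, Σ_K is the simplex of arm-draw proportions, Alt(μ) is the set of bandit models whose best arm differs from that of μ, kl is the binary relative entropy, and d is the KL divergence between arm distributions. The maximizing w is exactly the vector of optimal proportions that the Track-and-Stop sampling rule tracks.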
Computational Learning Theory and Machine Learning for Understanding Cells
In episode ten of season two, we talk about Computational Learning Theory and Probably Approximately Correct learning, originated by Professor Leslie Valiant of SEAS at Harvard. We take a listener question about generative systems, and we talk with Aviv Regev, Chair of the Faculty and Director of the Klarman Cell Observatory and the Cell Circuits Program at the Broad Institute.
Image and Signal Processing Group
The Image and Signal Processing (ISP) Group at the IPL develops data analysis techniques and vision algorithms. We focus on methods that can extract knowledge from empirical data acquired by sensory (mostly imaging) systems. These measurements depend on the properties of the scenes and on the physics of the imaging process. Our approach to signal, image and vision processing combines machine learning theory with an understanding of the underlying physics and of biological vision. Applications focus mainly on computational visual neuroscience and remote sensing data analysis.
A Theory of Formal Synthesis via Inductive Learning
Susmit Jha, Sanjit A. Seshia
Formal synthesis is the process of generating a program satisfying a high-level formal specification. In recent times, effective formal synthesis methods have been proposed based on the use of inductive learning. We refer to this class of methods that learn programs from examples as formal inductive synthesis. In this paper, we present a theoretical framework for formal inductive synthesis. We discuss how formal inductive synthesis differs from traditional machine learning. We then describe oracle-guided inductive synthesis (OGIS), a framework that captures a family of synthesizers that operate by iteratively querying an oracle. An instance of OGIS that has had much practical impact is counterexample-guided inductive synthesis (CEGIS). We present a theoretical characterization of CEGIS for learning any program that computes a recursive language. In particular, we analyze the relative power of CEGIS variants where the types of counterexamples generated by the oracle vary. We also consider the impact of bounded versus unbounded memory available to the learning algorithm. In the special case where the universe of candidate programs is finite, we relate the speed of convergence to the notion of teaching dimension studied in machine learning theory. Altogether, the results of the paper take a first step towards a theoretical foundation for the emerging field of formal inductive synthesis.
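To make the CEGIS loop concrete, here is a minimal, self-contained Python sketch; the toy specification, verifier, and synthesizer are our own illustrations, not the paper's formalism. The synthesizer proposes any candidate consistent with the examples seen so far, and the verification oracle returns a counterexample until none exists.

```python
# Minimal CEGIS sketch (illustrative only): learn a threshold program
# f(x) = (x >= c) for an unknown integer c in [0, 10].

def verify(candidate_c, true_c=7, domain=range(11)):
    """Oracle: return an input where the candidate disagrees with the
    specification, or None if the candidate is correct everywhere."""
    for x in domain:
        if (x >= candidate_c) != (x >= true_c):
            return x
    return None

def synthesize(examples, domain=range(11)):
    """Propose any threshold consistent with the labeled examples so far."""
    for c in domain:
        if all((x >= c) == label for x, label in examples):
            return c
    return None

def cegis(true_c=7):
    examples = []                  # accumulated (input, expected output) pairs
    candidate = synthesize(examples)
    while True:
        cex = verify(candidate, true_c)
        if cex is None:            # verifier found no counterexample: done
            return candidate
        examples.append((cex, cex >= true_c))  # label the counterexample
        candidate = synthesize(examples)

print(cegis())  # converges to 7
```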
IoT with Machine Learning
Machine learning is a subfield of computer science that evolved from the study of pattern recognition and computational learning theory in artificial intelligence. Machine learning is very useful in IoT, since it can learn hidden relationships in the Big Data flowing through the system and make real-time, complex classifications that drive actions. There are many machine learning packages, such as Apache Spark, Mahout, and Weka, each with its advantages and disadvantages. This blog post shows how to use the easy-to-use, powerful Java Statistical Analysis Tool library (JSAT) for a courier parcel pick-up web app integrated with RAPIFIRE. The example illustrates how a user can get the estimated waiting time for a courier parcel pick-up based on the GPS positions of the trucks.
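The post itself works in Java with JSAT; purely as a sketch of the underlying idea, here is a Python analogue that fits a simple distance-to-wait-time regression. The numbers and the linear model are illustrative assumptions, not taken from the post.

```python
# Rough Python analogue of the post's idea (all data here is made up).
import numpy as np

# Synthetic training data: straight-line distance from truck to pick-up
# point (km) vs. observed waiting time (minutes).
distance_km = np.array([0.5, 1.2, 2.0, 3.5, 5.0, 7.5])
wait_min    = np.array([4.0, 7.0, 11.0, 17.0, 24.0, 35.0])

# Fit wait ~ a * distance + b with ordinary least squares.
A = np.vstack([distance_km, np.ones_like(distance_km)]).T
(a, b), *_ = np.linalg.lstsq(A, wait_min, rcond=None)

def estimated_wait(km):
    """Predict waiting time for a truck currently `km` away."""
    return a * km + b

print(f"Truck 4.0 km away -> ~{estimated_wait(4.0):.0f} min wait")
```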
How To Become A Machine Learning Expert In One Simple Step
This post looks at perhaps the most important, and often overlooked, step in learning machine learning: the aspect that can make the biggest difference in one's skill set. The web is full of good explanations of machine learning algorithms, and every second applicant for a data science position has finished the Coursera course on machine learning. But theory will not help you choose good values for the 16 parameters a standard implementation of a random forest takes. The default values are good to get started, but which parameters should you modify depending on your data?
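For illustration (the post names no particular library), here is what that tuning question looks like with scikit-learn's random forest, whose constructor does expose over a dozen parameters; the commented ones are the usual first candidates to adjust.

```python
# Illustrative only: a few of the random-forest knobs the post alludes to.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Defaults are a fine starting point...
baseline = RandomForestClassifier(random_state=0)

# ...but these are the parameters most worth tuning for your data:
tuned = RandomForestClassifier(
    n_estimators=300,       # more trees: slower, but lower variance
    max_features="sqrt",    # features tried per split; key bias/variance knob
    max_depth=None,         # tree depth limit; cap it for noisy data
    min_samples_leaf=5,     # larger leaves smooth out overfitting
    random_state=0,
)

for name, model in [("default", baseline), ("tuned", tuned)]:
    print(name, cross_val_score(model, X, y, cv=5).mean().round(3))
```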
Ultimate Intelligence Part II: Physical Measure and Complexity of Intelligence
We continue our analysis of volume and energy measures that are appropriate for quantifying inductive inference systems. We extend the logical depth and conceptual jump size measures of AIT to stochastic problems, and to physical measures that involve volume and energy. We introduce a graphical model of computational complexity that we believe to be appropriate for intelligent machines. We show several asymptotic relations between energy, logical depth and volume of computation for inductive inference. In particular, we arrive at a "black-hole equation" of inductive inference, which relates energy, volume, space, and algorithmic information for an optimal inductive inference solution. We introduce energy-bounded algorithmic entropy. We briefly apply our ideas to the physical limits of intelligent computation in our universe.
The NIPS experiment « Machine Learning (Theory)
Corinna Cortes and Neil Lawrence ran the NIPS experiment where 1/10th of papers submitted to NIPS went through the NIPS review process twice, and then the accept/reject decision was compared. This was a great experiment, so kudos to NIPS for being willing to do it and to Corinna & Neil for doing it. The 26% disagreement rate presented at the conference understates the meaning in my opinion, given the 22% acceptance rate. The immediate implication is that between 1/2 and 2/3 of papers accepted at NIPS would have been rejected if reviewed a second time. For analysis details and discussion about that, see here.
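The back-of-envelope arithmetic behind that implication (assuming symmetric committees and that every disagreement is an accept/reject split) can be reproduced in a few lines:

```python
# Reproducing the blog's rough arithmetic (model assumptions: symmetric
# committees, every disagreement is one accept vs. one reject).
acceptance_rate = 0.22      # fraction of submissions accepted
disagreement_rate = 0.26    # fraction of duplicated papers with opposite decisions

# Each disagreement has exactly one accepting committee, so half of the
# disagreeing papers were accepted by any given committee.
accepted_then_rejected = disagreement_rate / 2           # 0.13 of all papers
fraction = accepted_then_rejected / acceptance_rate      # ~0.59

print(f"~{fraction:.0%} of accepted papers would be rejected on re-review")
```

The resulting ~59% falls squarely in the 1/2-to-2/3 range the post cites.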
AlphaGo is not the solution to AI « Machine Learning (Theory)
Congratulations are in order for the folks at Google DeepMind, who have mastered Go. However, some of the discussion around this seems like giddy overstatement. Wired says "Machines have conquered the last games" and Slashdot says "We know now that we don't need any big new breakthroughs to get to true AI." The truth is nowhere close. For Go itself, it's been well known for a decade that Monte Carlo tree search (i.e. valuation by randomized playout) is remarkably effective.
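For readers unfamiliar with the technique, "valuation by randomized playout" is simple to sketch. This toy Python example (a take-1-to-3-stones game of Nim, our own illustration, with none of MCTS's tree machinery) estimates the value of a position by finishing the game at random many times:

```python
# Playout valuation in miniature: Nim where each move takes 1-3 stones
# and whoever takes the last stone wins.
import random

def playout(stones, my_turn=True):
    """Play random moves to the end; return True if the side to move now wins."""
    while stones > 0:
        stones -= random.randint(1, min(3, stones))  # random legal move
        my_turn = not my_turn
    return not my_turn   # the player who took the last stone wins

def value(stones, n_playouts=10_000):
    """Estimate the win probability of the current position via random playouts."""
    return sum(playout(stones) for _ in range(n_playouts)) / n_playouts

# Under optimal play, positions with stones % 4 == 0 are losses for the
# side to move; random playouts reflect this as lower estimated values.
for s in (4, 5, 8, 9):
    print(s, round(value(s), 2))
```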