A probability on its own is often an uninteresting thing. But when we can compare probabilities, that is when their full splendour is revealed. By comparing probabilities we are able to form judgements; by comparing probabilities we can exploit the elements of our world that are probable; by comparing probabilities we can see the value of objects that are rare. In their own ways, all machine learning tricks help us make better probabilistic comparisons. Comparison is the theme of this post, one not discussed in this series before, and the right start to this second sprint of machine learning tricks.
There is no doubt that machine learning and artificial intelligence have gained increasing popularity in the past couple of years. With Big Data the hottest trend in the tech industry at the moment, machine learning is incredibly powerful for making predictions or calculated suggestions from large amounts of data. Some of the most common examples are Netflix's algorithms, which suggest movies based on ones you have watched in the past, and Amazon's algorithms, which recommend books based on ones you have bought before. So if you want to learn more about machine learning, how do you start? For me, my first introduction was an Artificial Intelligence class I took while studying abroad in Copenhagen.
About this course: Bayesian methods are used in lots of fields, from game development to drug discovery. They give superpowers to many machine learning algorithms: handling missing data, extracting much more information from small datasets. Bayesian methods also allow us to estimate uncertainty in predictions, which is a really desirable feature for fields like medicine. When Bayesian methods are applied to deep learning, it turns out that they allow you to compress your models 100-fold and automatically tune hyperparameters, saving you time and money. In six weeks we will discuss the basics of Bayesian methods: from how to define a probabilistic model to how to make predictions from it.
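To give a feel for that workflow (define a model, update it with data, predict), here is a minimal toy example, not taken from the course itself: a coin-flip model with a Beta prior, whose conjugate update can be done by hand.

```python
# Toy Bayesian workflow: Beta prior + binomial data -> Beta posterior,
# then a posterior-predictive probability for the next flip.

def update_beta(alpha, beta, heads, tails):
    """Conjugate update of a Beta(alpha, beta) prior with coin-flip data."""
    return alpha + heads, beta + tails

def predictive_prob_heads(alpha, beta):
    """Posterior predictive probability that the next flip lands heads."""
    return alpha / (alpha + beta)

# Start from a uniform prior Beta(1, 1) and observe 7 heads, 3 tails.
a, b = update_beta(1, 1, heads=7, tails=3)
print(a, b)                          # posterior is Beta(8, 4)
print(predictive_prob_heads(a, b))   # 8 / 12
```

The same three steps (model, update, predict) carry over to far richer models, where the update is no longer available in closed form.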
In this post I'll explain what the maximum likelihood method for parameter estimation is and go through a simple example to demonstrate it. Some of the content requires knowledge of fundamental probability concepts such as the definition of joint probability and the independence of events. I've written a blog post covering these prerequisites, so feel free to read it if you need a refresher. Often in machine learning we use a model to describe the process that generates the observed data. For example, we may use a random forest model to classify whether customers will cancel a subscription to a service (known as churn modelling), or we may use a linear model to predict the revenue a company will generate depending on how much it spends on advertising (an example of linear regression).
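As a small sketch of the idea (with made-up data, not the post's own example): for i.i.d. Gaussian data with known variance, the sample mean maximizes the likelihood, and we can check numerically that the log-likelihood is higher there than at nearby candidate values.

```python
import math

def gaussian_log_likelihood(mu, sigma, data):
    """Log-likelihood of i.i.d. data under a Normal(mu, sigma^2) model."""
    n = len(data)
    return (-n / 2 * math.log(2 * math.pi * sigma ** 2)
            - sum((x - mu) ** 2 for x in data) / (2 * sigma ** 2))

data = [2.1, 1.9, 2.4, 2.0, 1.6]
mu_hat = sum(data) / len(data)   # MLE of the mean is the sample mean

# The log-likelihood at mu_hat beats nearby candidate values of mu.
for mu in (mu_hat - 0.5, mu_hat, mu_hat + 0.5):
    print(mu, gaussian_log_likelihood(mu, 1.0, data))
```

In richer models the maximizer has no closed form and is found by numerical optimization, but the principle is the same: pick the parameters under which the observed data are most probable.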
Today, mobile robotics is an increasingly important bridge between the two areas. It is advancing the theory and practice of cooperative cognition, perception, and action and serving to reunite planning techniques with sensing and real-world performance. Further, developments in mobile robotics can have important practical economic and military consequences. For some time now, amateurs, hobbyists, students, and researchers have had access to how-to books on the low-level mechanical and electronic aspects of mobile-robot construction (Everett 1995; McComb 1987). The famous Massachusetts Institute of Technology (MIT) 6.270 robot-building course has contributed course notes and hardware kits that are now available commercially and in the form of an influential book (Jones 1998; Jones and Flynn 1993).
Homeless youth are prone to human immunodeficiency virus (HIV) due to their engagement in high-risk behavior such as unprotected sex, sex under the influence of drugs, and so on. Many nonprofit agencies conduct interventions to educate and train a select group of homeless youth about HIV prevention and treatment practices, relying on word-of-mouth spread of information through their social network. Previous work on the strategic selection of intervention participants does not handle uncertainties in the social network's structure and evolving state, potentially causing significant shortcomings in the spread of information. Thus, we developed PSINET, a decision-support system to aid the agencies in this task. PSINET includes the following key novelties: (1) it handles uncertainties in network structure and evolving network state; (2) it addresses these uncertainties by using POMDPs in influence maximization; and (3) it provides algorithmic advances to allow high-quality approximate solutions for such POMDPs. Simulations show that PSINET achieves around 60 percent more information spread than the current state of the art.
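To make the underlying influence-maximization problem concrete, here is a deliberately simplified sketch: greedy seed selection over a tiny hypothetical network under the independent-cascade model, with spread estimated by Monte Carlo. This is not PSINET's algorithm (PSINET uses POMDPs precisely because the network structure itself is uncertain, which this sketch does not model); the graph and propagation probability are invented for illustration.

```python
import random

# Hypothetical network: node -> list of neighbors it can pass information to.
EDGES = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
PROB = 0.5  # assumed per-edge propagation probability

def simulate_spread(seeds, trials=2000, rng=random.Random(0)):
    """Average number of informed nodes under the independent-cascade model."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            node = frontier.pop()
            for nbr in EDGES[node]:
                if nbr not in active and rng.random() < PROB:
                    active.add(nbr)
                    frontier.append(nbr)
        total += len(active)
    return total / trials

def greedy_seeds(k):
    """Pick k intervention participants, each maximizing marginal spread."""
    seeds = []
    for _ in range(k):
        best = max((n for n in EDGES if n not in seeds),
                   key=lambda n: simulate_spread(seeds + [n]))
        seeds.append(best)
    return seeds

print(greedy_seeds(2))
```

The hard part that PSINET addresses is that in practice the agencies do not know `EDGES` exactly, and the network state evolves between interventions.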
Such problems are generally notoriously difficult. In this article, we review the recently discovered notion of adaptive submodularity, an intuitive diminishing-returns condition that generalizes the classical notion of submodular set functions to sequential decision problems. Problems exhibiting the adaptive submodularity property can be efficiently and provably near-optimally solved using simple myopic policies. We illustrate this concept in several case studies of interest in computational sustainability: first, we demonstrate how it can be used to efficiently plan for resolving uncertainty in adaptive management scenarios; then, we show how it applies to dynamic conservation planning for protecting endangered species, a case study carried out in collaboration with the U.S. Geological Survey and the U.S. Fish and Wildlife Service.
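The "simple myopic policy" in the non-adaptive setting is just the greedy algorithm: repeatedly pick the item with the largest marginal gain. A minimal sketch on a coverage objective (a classic submodular function), with a made-up sensor-placement instance, illustrates the diminishing-returns structure that adaptive submodularity extends to sequential decisions under uncertainty:

```python
# Greedy maximization of a submodular coverage function: at each step,
# add the sensor covering the most not-yet-covered areas. For submodular
# objectives this myopic policy enjoys a (1 - 1/e) approximation guarantee.

SENSORS = {                 # hypothetical sensor -> set of areas it covers
    "A": {1, 2, 3},
    "B": {3, 4},
    "C": {4, 5, 6},
    "D": {1, 6},
}

def greedy_cover(budget):
    covered, chosen = set(), []
    for _ in range(budget):
        gain, best = max(
            (len(area - covered), name)
            for name, area in SENSORS.items() if name not in chosen
        )
        if gain == 0:
            break           # marginal gains have shrunk to zero
        chosen.append(best)
        covered |= SENSORS[best]
    return chosen, covered

print(greedy_cover(2))      # two sensors suffice to cover all six areas
```

In the adaptive setting the same one-step-lookahead idea applies, except each "gain" is an expected gain conditioned on the observations made so far.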
This article takes on a few machine learning algorithms for people who aim to gain knowledge of important machine learning concepts using freely available materials and resources along the way. The prime objective of this outline is to help you wade through the numerous free options available. There are many, to be sure, but which are the best? In what order should you use the selected resources? Below, the common machine learning algorithms are briefly explained with Python and R code.
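As a taste of the format, here is one of the classic entries on such lists, simple linear regression, fit from scratch with ordinary least squares (Python version only; the data are invented for illustration):

```python
# Simple linear regression via ordinary least squares, no libraries.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing the sum of squared errors."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]
slope, intercept = fit_line(xs, ys)
print(slope, intercept)     # close to the line y = 2x
```

In practice you would reach for a library (scikit-learn in Python, `lm()` in R), but writing the closed form once makes the library output much easier to interpret.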
Bayesian Computational Analyses with R is an introductory course on the use and implementation of Bayesian modeling using R software. The Bayesian approach is an alternative to the "frequentist" approach, in which one simply takes a sample of data and makes inferences about the likely parameters of the population. In contrast, the Bayesian approach combines the likelihood function of the observed data with an initial belief about the parameters (the 'prior') to estimate the most likely values and distributions for the population parameters (the 'posterior'). The course is useful to anyone who wishes to learn about Bayesian concepts and is suited to novice and intermediate Bayesian students and practitioners alike. It is a practical, "hands-on" course with many examples using R scripts and software, and it is also conceptual, explaining the Bayesian ideas behind them.
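The prior-to-posterior update the course describes can be seen in miniature with a conjugate model: estimating a normal mean with known variance, where the posterior is available in closed form. (A sketch in Python rather than the course's R, with invented numbers.)

```python
# Conjugate update for a normal mean: N(mu0, tau0_sq) prior,
# N(mu, sigma_sq) likelihood with known sigma_sq.

def posterior_normal_mean(mu0, tau0_sq, sigma_sq, data):
    """Return the posterior mean and variance of mu given the data."""
    n = len(data)
    xbar = sum(data) / n
    prec = 1 / tau0_sq + n / sigma_sq               # posterior precision
    mu_post = (mu0 / tau0_sq + n * xbar / sigma_sq) / prec
    return mu_post, 1 / prec

# A vague prior centered at 0; five observations pull the posterior
# almost all the way to the sample mean of 5.0.
mu_post, var_post = posterior_normal_mean(
    mu0=0.0, tau0_sq=100.0, sigma_sq=1.0, data=[4.8, 5.1, 5.0, 4.9, 5.2])
print(mu_post, var_post)
```

The posterior mean is a precision-weighted average of the prior mean and the sample mean, which is exactly the prior-versus-data trade-off the course formalizes.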
No, this is not about whether you want your virtual agent to understand English slang, the subjunctive mood in Spanish, or even the dozens of ways to say "I" in Japanese. In fact, the programming language you build your bot with is as important as the human language it understands. But how do you choose between them? Of course, the caveat should always be to veer toward the language you are most comfortable with, but for those dipping their toe into the programming pond for the first time, a clear winner emerges: Python is the language of choice.