Overview of Udacity Artificial Intelligence Engineer Nanodegree, Term 1

#artificialintelligence

After finishing the Udacity Deep Learning Foundation, I felt I had gotten a good introduction to Deep Learning, but to really understand things I needed to dig deeper. Besides, I had guaranteed admission to the Self-Driving Car Engineer, Artificial Intelligence, or Robotics Nanodegree programs.


SS99-01-023.pdf

AAAI Conferences

Rule induction methods are classified into two categories, induction of deterministic rules and induction of probabilistic ones (Michalski 1986; Pawlak 1991; Tsumoto and Tanaka 1996). While deterministic rules are supported only by positive examples, probabilistic ones are supported by many positive examples and a few negative examples. That is, both kinds of rules positively select one decision if a case satisfies their conditional parts. However, domain experts use not only positive reasoning but also negative reasoning, since a domain is not always deterministic. For example, when a patient does not have a headache, migraine should not be suspected: negative reasoning plays an important role in cutting the search space of a differential diagnosis (Tsumoto and Tanaka 1996).¹ Therefore, negative rules should be induced from databases in order to obtain rules that will be easier for domain experts to interpret.

¹ The essential point is that if extracted patterns do not reflect experts' reasoning process, domain experts have difficulties in interpreting them. Without interpretation by domain experts, a discovery procedure cannot proceed, which also means that interaction between human experts and computers is indispensable to computer-assisted discovery.
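The headache/migraine example can be made concrete. Below is a minimal sketch of exclusion ("negative") rule induction in the spirit of the abstract, not the paper's actual method: the toy data, symptom names, and the delta threshold are all illustrative assumptions. The idea is that if nearly every case of a diagnosis satisfies a condition (high coverage), then the absence of that condition is grounds for excluding the diagnosis.

```python
# Sketch: induce exclusion rules "if NOT symptom then NOT diagnosis"
# whenever P(symptom | diagnosis) >= delta on the data.
# Toy data and threshold are illustrative assumptions.

from collections import Counter

# Toy clinical cases: (symptoms present, diagnosis)
cases = [
    ({"headache", "nausea"}, "migraine"),
    ({"headache"}, "migraine"),
    ({"headache", "photophobia"}, "migraine"),
    ({"nausea"}, "gastritis"),
    ({"fever", "headache"}, "flu"),
    ({"fever"}, "flu"),
]

def exclusion_rules(cases, delta=0.95):
    """Yield (symptom, diagnosis, coverage) triples where the symptom's
    coverage among cases of the diagnosis meets the threshold."""
    by_dx = {}
    for symptoms, dx in cases:
        by_dx.setdefault(dx, []).append(symptoms)
    for dx, rows in by_dx.items():
        counts = Counter(s for row in rows for s in row)
        for symptom, n in counts.items():
            coverage = n / len(rows)
            if coverage >= delta:
                yield symptom, dx, coverage

for symptom, dx, cov in exclusion_rules(cases):
    print(f"if NOT {symptom} then exclude {dx}  (coverage={cov:.2f})")
# Every migraine case above has a headache, so "no headache" excludes migraine.
```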


Flexible Models for Microclustering with Application to Entity Resolution

Neural Information Processing Systems

Most generative models for clustering implicitly assume that the number of data points in each cluster grows linearly with the total number of data points. Finite mixture models, Dirichlet process mixture models, and Pitman-Yor process mixture models make this assumption, as do all other infinitely exchangeable clustering models. However, for some applications, this assumption is inappropriate. For example, when performing entity resolution, the size of each cluster should be unrelated to the size of the data set, and each cluster should contain a negligible fraction of the total number of data points. These applications require models that yield clusters whose sizes grow sublinearly with the size of the data set. We address this requirement by defining the microclustering property and introducing a new class of models that can exhibit this property. We compare models within this class to two commonly used clustering models using four entity-resolution data sets.
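To see what "grows linearly" means in practice, here is a small hypothetical simulation (the Chinese restaurant process stand-in, parameters, and seed are my assumptions, not the models the paper compares): under a CRP, the predictive rule behind Dirichlet process mixtures, the largest cluster's share of the data stabilizes at a nonzero fraction as n grows, which is exactly what entity resolution cannot tolerate.

```python
# Sketch: simulate a Chinese restaurant process and watch the largest
# cluster's *fraction* of the data. For microclustering we would want this
# fraction to shrink toward zero; for the CRP it stays roughly constant.

import random

def crp_cluster_sizes(n, alpha=1.0, seed=0):
    """Sample cluster sizes for n points from a CRP with concentration alpha."""
    rng = random.Random(seed)
    sizes = []
    for i in range(n):
        # New cluster with prob alpha/(i+alpha); otherwise join an
        # existing cluster with prob proportional to its current size.
        r = rng.uniform(0, i + alpha)
        if r < alpha:
            sizes.append(1)
        else:
            r -= alpha
            for k, s in enumerate(sizes):
                if r < s:
                    sizes[k] += 1
                    break
                r -= s
    return sizes

for n in (1_000, 10_000, 100_000):
    sizes = crp_cluster_sizes(n)
    print(f"n={n:>6}: largest cluster fraction = {max(sizes) / n:.2f}")
# The fraction does not vanish as n grows: the largest cluster is O(n).
# Entity resolution instead needs each entity to contribute O(1) records.
```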


AI: Man vs machine, or man AND machine?

#artificialintelligence

WITH the recent triumph of the Google AlphaGo program over Go master Lee Se-dol in Seoul, the doomsayers are in full chorus again over the spectre of Hollywood-style artificial intelligence (AI) taking over humanity. It was the same fear in the late 1990s when IBM's Deep Blue supercomputer beat then-reigning world chess champion Garry Kasparov. There are a few differences, however: Go is considered a much more intricate game than chess, and AI technology has improved considerably since then, with breakthroughs such as Google's self-driving cars, virtual assistants like Apple's Siri and Microsoft's Cortana, and even IBM Watson's win in the popular trivia quiz Jeopardy!. Enough that even sober scientists are taking note. In a December 2014 interview with the British Broadcasting Corporation (BBC), renowned physicist Stephen Hawking expressed his concerns, saying that AI poses a threat to humanity's existence, despite its usefulness.

