Collaborating Authors

Claude Shannon


The brief history of artificial intelligence: The world has changed fast – what might be next? - Big Think

#artificialintelligence

To see what the future might look like, it is often helpful to study our history. This is what I will do in this article. I retrace the brief history of computers and artificial intelligence to see what we can expect for the future. How rapidly the world has changed becomes clear when even quite recent computer technology feels ancient to us today. Mobile phones in the '90s were big bricks with tiny green displays.


How Claude Shannon's Concept of Entropy Quantifies Information

#artificialintelligence

If someone tells you a fact you already know, they've essentially told you nothing at all. Whereas if they impart a secret, it's fair to say something has really been communicated. This distinction is at the heart of Claude Shannon's theory of information. Introduced in an epochal 1948 paper, "A Mathematical Theory of Communication," it provides a rigorous mathematical framework for quantifying the amount of information needed to accurately send and receive a message, as determined by the degree of uncertainty around what the intended message could be saying. Which is to say, it's time for an example.
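Shannon's measure can be sketched in a few lines: for a source whose symbols occur with probabilities p_i, the entropy is -Σ p_i log2 p_i bits per symbol. The `shannon_entropy` helper below is my own illustrative sketch, estimating the probabilities from symbol frequencies in a string, not code from the article:

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Average bits of information per symbol, estimated from symbol frequencies."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A message you already know carries no information: entropy is 0 bits.
certain = shannon_entropy("aaaa")
# Two equally likely symbols are maximally uncertain: 1 bit per symbol.
surprising = shannon_entropy("abab")
```

This mirrors the article's point: the fact you already knew ("aaaa") communicates nothing, while the genuinely uncertain message requires a full bit per symbol to transmit.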


The English Opening.

#artificialintelligence

The one thing I tell most people upon meeting me is that I play chess. I had played chess long before I got into the data science career field; it has always been a pastime for me. The reason I mention it so consistently is that those who play chess have a very analytical way of thinking. It is a natural adaptation from the game itself, depending on how consistently you play.


Markov models and Markov chains explained in real life: probabilistic workout routine

#artificialintelligence

Andrei Markov didn't agree with Pavel Nekrasov, when he said independence between variables was necessary for the Weak Law of Large Numbers to be applied. When you collect independent samples, as the number of samples gets bigger, the mean of those samples converges to the true mean of the population. But Markov believed independence was not a necessary condition for the mean to converge. So he set out to define how the average of the outcomes from a process involving dependent random variables could converge over time. Thanks to this intellectual disagreement, Markov created a way to describe how random, also called stochastic, systems or processes evolve over time.
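The workout-routine idea in the title can be sketched as a Markov chain: the probability of the next activity depends only on the current one, not on the full history. The states and transition probabilities below are my own illustrative assumptions, not numbers from the article:

```python
import random

# Each row gives P(next activity | current activity); rows sum to 1.
transitions = {
    "cardio":  {"cardio": 0.2, "weights": 0.6, "rest": 0.2},
    "weights": {"cardio": 0.3, "weights": 0.3, "rest": 0.4},
    "rest":    {"cardio": 0.5, "weights": 0.4, "rest": 0.1},
}

def simulate(start, steps, seed=0):
    """Walk the chain: each step depends only on the current state."""
    rng = random.Random(seed)
    state, path = start, [start]
    for _ in range(steps):
        row = transitions[state]
        state = rng.choices(list(row), weights=list(row.values()))[0]
        path.append(state)
    return path

routine = simulate("rest", steps=7)
```

Markov's insight is visible here: even though consecutive days are dependent, the long-run fraction of days spent on each activity still converges, which is exactly the convergence-without-independence he set out to demonstrate against Nekrasov.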



When Bayes, Ockham, and Shannon come together to define machine learning

#artificialintelligence

Thanks to my CS7641 class at Georgia Tech in my MS Analytics program, where I discovered this concept and was inspired to write about it. It is somewhat surprising that among all the high-flying buzzwords of machine learning, we don't hear much about the one phrase that fuses some of the core concepts of statistical learning, information theory, and natural philosophy into a single three-word combo. Moreover, it is not just an obscure and pedantic phrase meant for machine learning (ML) Ph.D.s. It has a precise and easily accessible meaning for anyone interested in exploring it, and a practical pay-off for practitioners of ML and data science. I am talking about the Minimum Description Length.
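The Minimum Description Length idea can be sketched with a toy two-part code: prefer the model that minimizes the bits needed to describe the model itself plus the bits needed to describe the data given the model (Shannon's code length, -log2 of the probability). The Bernoulli setup and the 10-bit parameter cost below are my own illustrative assumptions, not the article's example:

```python
import math

def data_bits(seq, p_one):
    """Ideal code length (bits) for a 0/1 string under a Bernoulli(p_one) model."""
    ones = seq.count("1")
    zeros = len(seq) - ones
    # A model assigning zero probability to an observed symbol cannot encode it.
    if (ones and p_one == 0) or (zeros and p_one == 1):
        return float("inf")
    bits = 0.0
    if ones:
        bits -= ones * math.log2(p_one)
    if zeros:
        bits -= zeros * math.log2(1 - p_one)
    return bits

def mdl_score(seq, p_one, param_bits):
    """Two-part MDL: bits to state the model plus bits to encode the data."""
    return param_bits + data_bits(seq, p_one)

seq = "1110111011111011" * 4          # a mostly-ones sequence, 64 symbols
fair = mdl_score(seq, 0.5, param_bits=0)               # fixed model, no parameter to send
biased = mdl_score(seq, seq.count("1") / len(seq), param_bits=10)  # assumed 10-bit parameter
```

Here the biased model "pays" 10 bits for its parameter (Ockham's cost of complexity) but compresses the skewed data enough to win overall, which is the Bayes-Ockham-Shannon trade-off the phrase packages together.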


Nervous System: Claude Shannon's Magic Mouse and the Beginnings of Artificial Intelligence Legaltech News

#artificialintelligence

More than 60 years ago, when digital computers that could do rote and automated tasks were still gaining acceptance, pioneering information theorist Claude Shannon announced he had successfully built a machine capable of learning from its mistakes and teaching itself how to improve.


To Understand The Future of AI, Study Its Past

#artificialintelligence

Dr. Claude Shannon, one of the pioneers of the field of artificial intelligence, with an electronic mouse designed to navigate its way around a maze after only one 'training' run.

A schism lies at the heart of the field of artificial intelligence. Since its inception, the field has been defined by an intellectual tug-of-war between two opposing philosophies: connectionism and symbolism. These two camps have deeply divergent visions as to how to "solve" intelligence, with differing research agendas and sometimes bitter relations. Today, connectionism dominates the world of AI. The emergence of deep learning, which is a quintessentially connectionist technique, has driven the worldwide explosion in AI activity and funding over the past decade.



The Birthplace of AI

#artificialintelligence

Prior to the conference, John McCarthy, Assistant Professor of Mathematics at Dartmouth, and Claude Shannon from MIT had been co-editing the then-forthcoming Volume 34 of the Annals of Mathematics Studies journal, on Automata Studies (Shannon & McCarthy, 1956). Automata are self-operating machines designed to automatically follow predetermined sequences of operations or respond to predetermined instructions. As engineering mechanisms they appear in a wide variety of everyday applications, such as mechanical clocks where a hammer strikes a bell or a cuckoo appears to sing.

"At the time I believed if only we could get everyone who was interested in the subject together to devote time to it and avoid distractions, we could make real progress" -- John McCarthy

The initial group McCarthy had in mind included Marvin Minsky, whom he had known since they were graduate students together at Fine Hall in the early 1950s. The two had talked about artificial intelligence then, and Minsky's PhD dissertation in mathematics had been on neural nets (Moor, 2006) and the structure of the human brain (Nasar, 1998).