
A Tour of The Top 10 Algorithms for Machine Learning Newbies

#artificialintelligence

In machine learning, there's something called the "No Free Lunch" theorem. In a nutshell, it states that no one algorithm works best for every problem, and this is especially relevant for supervised learning (i.e. predictive modeling). For example, you can't say that neural networks are always better than decision trees or vice versa. There are many factors at play, such as the size and structure of your dataset. As a result, you should try many different algorithms for your problem, while using a hold-out "test set" of data to evaluate performance and select the winner.
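
The "try several algorithms, pick the winner on a hold-out test set" workflow described above can be sketched in a few lines of scikit-learn. The dataset and the three candidate models below are illustrative assumptions, not taken from the article:

```python
# Minimal sketch: compare several algorithms on a held-out test set and keep the best one.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Candidate algorithms (illustrative choices).
candidates = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "neural_net": MLPClassifier(max_iter=1000, random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
}

# Fit each candidate on the training split and score it on the hold-out test set.
scores = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    scores[name] = accuracy_score(y_test, model.predict(X_test))

# Select whichever algorithm happens to work best on this particular dataset.
best = max(scores, key=scores.get)
print(scores, "-> winner:", best)
```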


Transfer Learning: A Beginner's Guide

#artificialintelligence

This blog post will introduce the concept of 'transfer learning' and how it is used in machine learning applications. Transfer learning is not a machine learning model or technique; it is rather a 'design methodology' within machine learning. Another type of 'design methodology' is, for example, active learning. A follow-up blog post will explain how you can use active learning in conjunction with transfer learning to optimally leverage existing (and new) data. In a broad sense, machine learning applications that leverage external information to improve performance or generalisation capabilities use transfer learning.
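
As a rough illustration of this design methodology, a common pattern is to reuse a model pretrained on an external dataset and adapt only a small part of it to a new task. The sketch below assumes a PyTorch/torchvision setup and a hypothetical 10-class target task; none of these specifics come from the blog post:

```python
# Minimal transfer-learning sketch: reuse knowledge from an ImageNet-pretrained
# network and retrain only a new classification head for the target task.
import torch
import torch.nn as nn
from torchvision import models

# Load a network pretrained on ImageNet (the "external information").
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor so its knowledge is preserved.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer with a new head for the (hypothetical) 10-class task.
backbone.fc = nn.Linear(backbone.fc.in_features, 10)

# Only the new head's parameters are trained on the (typically smaller) target dataset.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```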


#Open #IoT with #Blockchain #AI and #BigData – Paradigm Interactions

#artificialintelligence

There will be many people who will say it does exist and has working technologies, hardware and software. It is an interesting error in thinking to treat closed-system devices and products as what Ubiquity (IoT3) is. Devices are useful for getting across the point that various types of connections and networks are being accessed. But more importantly, in a full implementation of the concept of Ubiquity (often described as the IoT), devices may not even be owned anymore. The ownership of devices ceases to be important if you can own your digital identity, verify it, and establish your own ecosystem of assets in Blockchain.


23 terms to help understand artificial intelligence (AI)

#artificialintelligence

AI winters – Moments in the history of AI, in which doubts overshadowed previous enthusiasm. API – (Application Programming Interface), a standardized set of methods by which a software program provides services for other software programs. Artifact – Object made by a human. Further reading: What is big data? Connectionism – Paradigm of cognitive science based on neural networks.


AI and machine learning bias has dangerous implications

#artificialintelligence

Algorithms are everywhere in our world, and so is bias. From social media news feeds to streaming service recommendations to online shopping, computer algorithms--specifically, machine learning algorithms--have permeated our day-to-day world. As for bias, we need only examine the 2016 American election to understand how deeply--both implicitly and explicitly--it permeates our society as well. What's often overlooked, however, is the intersection between these two: bias in computer algorithms themselves. Contrary to what many of us might think, technology is not objective.


Generative Adversarial Networks (GANs): Engine and Applications

@machinelearnbot

GANs were introduced by Ian Goodfellow in 2014. They aren't the only approach to unsupervised learning with neural networks: there are also Boltzmann machines (Geoffrey Hinton and Terry Sejnowski, 1985) and autoencoders (Dana H. Ballard, 1987). Both are dedicated to extracting features from data by learning the identity function f(x) = x, and both rely on Markov chains to train or to generate samples. Generative adversarial networks were designed to avoid using Markov chains because of their high computational cost.
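
To make the autoencoder idea concrete, the sketch below trains a small network to reproduce its input, i.e. to approximate the identity function f(x) = x through a compressed hidden representation. The layer sizes and synthetic batch are illustrative assumptions:

```python
# Minimal autoencoder sketch: learn to reconstruct the input through a bottleneck.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(64, 784)   # a batch of synthetic inputs
x_hat = model(x)          # reconstruction
loss = loss_fn(x_hat, x)  # penalize deviation from f(x) = x
loss.backward()
optimizer.step()
```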


Social place-cells in the bat hippocampus

Science

Social animals have to know the spatial positions of conspecifics. However, it is unknown how the position of others is represented in the brain. We designed a spatial observational-learning task, in which an observer bat mimicked a demonstrator bat while we recorded hippocampal dorsal-CA1 neurons from the observer bat. A neuronal subpopulation represented the position of the other bat in allocentric coordinates. About half of these "social place-cells" also represented the observer's own position--that is, they were place cells.


Spatial representations of self and other in the hippocampus

Science

An animal's awareness of its location in space depends on the activity of place cells in the hippocampus. How the brain encodes the spatial position of others has not yet been identified. We investigated neuronal representations of other animals' locations in the dorsal CA1 region of the hippocampus with an observational T-maze task in which one rat was required to observe another rat's trajectory to successfully retrieve a reward. Information reflecting the spatial location of both the self and the other was jointly and discretely encoded by CA1 pyramidal cells in the observer rat. A subset of CA1 pyramidal cells exhibited spatial receptive fields that were identical for the self and the other.


Thoughts on Gary Marcus' Critique of Deep Learning – Intuition Machine – Medium

#artificialintelligence

Gary Marcus has recently published a detailed, rather extensive critique of Deep Learning. While many of Dr. Marcus's points are well known among those deeply familiar with the field and have been somewhat well publicized for years, these discussions haven't yet reached many who are newly involved in decision-making in this space. Overall, the discussion the critique has generated seems clarifying and useful. I have decided to write up my thoughts because, while I think Dr. Marcus's critique is thoughtful, necessary and often justified, I disagree with some of the conclusions. To start, Dr. Marcus's assessment that Deep Learning, as originally defined, is merely a statistical technique for classifying patterns is spot on in my opinion.


Turning brain signals into useful information

#artificialintelligence

For those who reckon that brain-computer interfaces will never catch on, there is a simple answer: they already have. Well over 300,000 people worldwide have had cochlear implants fitted in their ears. Strictly speaking, this hearing device does not interact directly with neural tissue, but the effect is not dissimilar. A processor captures sound, which is converted into electrical signals and sent to an electrode in the inner ear, stimulating the cochlear nerve so that sound is heard in the brain. Michael Merzenich, a neuroscientist who helped develop them, explains that the implants provide only a crude representation of speech, "like playing Chopin with your fist".