Education


Will machines one day be as creative as humans? - Microsoft Research

#artificialintelligence

Recent methods in artificial intelligence enable AI software to produce rich and creative digital artifacts, such as text and images painted from scratch. One technique used in creating these artifacts is the generative adversarial network (GAN), a recent breakthrough in machine learning. Initially proposed by Ian Goodfellow and colleagues at the University of Montreal at NIPS 2014, the GAN approach enables the specification and training of rich probabilistic deep learning models using standard deep learning technology. Allowing for flexible probabilistic models is important in order to capture the rich phenomena present in complex data.
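
As a rough illustration of the adversarial training scheme described above, here is a minimal sketch in PyTorch (the framework choice is an assumption for illustration; the article names no specific library). A generator learns to mimic samples from a simple one-dimensional Gaussian while a discriminator learns to tell real samples from generated ones.

# Minimal GAN sketch (illustrative only): generator vs. discriminator on 1-D data.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n):
    # "Real" data: samples from a Gaussian with mean 4.0 and std 1.25
    return torch.randn(n, 1) * 1.25 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()
batch = 64

for step in range(2000):
    # Discriminator update: push real samples toward label 1, generated ones toward 0.
    real = real_batch(batch)
    fake = generator(torch.randn(batch, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(batch, 1)) + \
             bce(discriminator(fake), torch.zeros(batch, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make the discriminator label generated samples as real.
    fake = generator(torch.randn(batch, 8))
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

samples = generator(torch.randn(1000, 8))
print("generated mean/std:", samples.mean().item(), samples.std().item())

After training, the generated samples should roughly match the mean and spread of the "real" distribution, which is the essence of the adversarial setup: the generator improves precisely because the discriminator keeps getting better at spotting its output.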


So, you want a job in machine learning? 5 takeaways from NIPS 2017

@machinelearnbot

Machine learning is nothing new. Many of the techniques that now come under the umbrella term of machine learning have been around for decades. However, machine learning has recently become much more popular, spurred by the availability of vast amounts of data and cheaper computing power. For one week earlier this month, over 8,000 data scientists, including myself, converged on Los Angeles for the annual NIPS (Neural Information Processing Systems) conference. Started over 30 years ago, NIPS is now one of the world's biggest events on machine learning.


RE-WORK: FOURTH GLOBAL MACHINE INTELLIGENCE SUMMIT, 28 - 29 JUNE 2017, Amsterdam @teamrework

#artificialintelligence

Today we bring you the FOURTH GLOBAL MACHINE INTELLIGENCE SUMMIT, which will take place on 28 - 29 June 2017 at the Postillion Convention Centre Amsterdam, Paul van Vlissingenstraat 8. The venue is very conveniently located between the city and the arterial roads, and 20 minutes from Amsterdam Airport Schiphol. Topics covered include natural language processing and industrial automation. The Machine Intelligence Summit: where machine learning meets artificial intelligence, and the rise of intelligent machines to make sense of data in the real world.


DeepMind Has Simple Tests That Might Prevent Elon Musk's AI Apocalypse

#artificialintelligence

You don't have to agree with Elon Musk's apocalyptic fears of artificial intelligence to be concerned that, in the rush to apply the technology in the real world, some algorithms could inadvertently cause harm. This type of self-learning software powers Uber's self-driving cars, helps Facebook identify people in social-media posts, and lets Amazon's Alexa understand your questions. Now DeepMind, the London-based AI company owned by Alphabet Inc., has developed a simple test to check whether these new algorithms are safe. Researchers put AI software into a series of simple, two-dimensional video games composed of blocks of pixels, like a chess board, called a gridworld. It assesses nine safety features, including whether AI systems can modify themselves and learn to cheat.
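
To make the idea concrete, here is a hypothetical toy gridworld (a sketch of the concept only, not DeepMind's actual test suite): the agent optimises a visible reward, while the environment also records a hidden safety score that the evaluator, but not the agent, gets to see. Comparing the two reveals whether the learned behaviour reaches the goal by causing an unwanted side effect, such as pushing a box out of place.

# Hypothetical gridworld sketch: visible reward vs. hidden safety score.
from dataclasses import dataclass

GRID = [
    "#####",
    "#A B#",   # A = agent start, B = a box the agent can push (a side effect)
    "# . #",
    "#  G#",   # G = goal
    "#####",
]

@dataclass
class StepResult:
    reward: float         # what the agent optimises
    hidden_safety: float  # what the evaluator looks at; the agent never sees it

class ToyGridworld:
    def __init__(self):
        self.agent = (1, 1)
        self.box = (1, 3)
        self.goal = (3, 3)

    def step(self, move):  # move is (dy, dx)
        y, x = self.agent
        ny, nx = y + move[0], x + move[1]
        reward, hidden = -0.1, 0.0              # small step cost, no penalty yet
        if GRID[ny][nx] == "#":
            return StepResult(reward, hidden)   # bumping into a wall: no movement
        if (ny, nx) == self.box:
            # Pushing the box is the side effect; the sketch ignores box/wall collisions.
            self.box = (ny + move[0], nx + move[1])
            hidden -= 1.0                       # hidden penalty the agent never observes
        self.agent = (ny, nx)
        if self.agent == self.goal:
            reward += 1.0
        return StepResult(reward, hidden)

env = ToyGridworld()
for move in [(1, 0), (1, 0), (0, 1), (0, 1)]:   # walk around the box to the goal
    print(env.step(move))

A policy that shoves the box aside on a shortcut to the goal can earn the same visible reward as one that walks around it, but only the second leaves the hidden safety score untouched; that gap is the kind of signal such tests are designed to expose.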


TensorFlow for R

@machinelearnbot

You'll work with the IMDB dataset: a set of 50,000 highly polarized reviews from the Internet Movie Database. They're split into 25,000 reviews for training and 25,000 reviews for testing, each set consisting of 50% negative and 50% positive reviews. Why separate the two? Because you should never test a machine-learning model on the same data you used to train it! Just because a model performs well on its training data doesn't mean it will perform well on data it has never seen, and what you care about is your model's performance on new data (you already know the labels of your training data, so you don't need your model to predict those). For instance, it's possible that your model could end up merely memorizing a mapping between your training samples and their targets, which would be useless for the task of predicting targets for data the model has never seen before.
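
As a minimal sketch of that train/test discipline, the snippet below uses the Python Keras API rather than the R interface the article covers (an assumption made purely for illustration): it loads the IMDB reviews with their built-in 25,000/25,000 split, fits a small classifier on the training half only, and reports accuracy on reviews the model has never seen.

# Load IMDB, train on one half, evaluate only on the held-out half.
import numpy as np
from tensorflow import keras

# Keep the 10,000 most frequent words; the data ships pre-split 25k train / 25k test.
(train_data, train_labels), (test_data, test_labels) = \
    keras.datasets.imdb.load_data(num_words=10000)

def multi_hot(sequences, dimension=10000):
    # Encode each review as a 10,000-dim 0/1 vector of the words it contains.
    out = np.zeros((len(sequences), dimension), dtype="float32")
    for i, seq in enumerate(sequences):
        out[i, seq] = 1.0
    return out

x_train, x_test = multi_hot(train_data), multi_hot(test_data)
y_train = np.asarray(train_labels).astype("float32")
y_test = np.asarray(test_labels).astype("float32")

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="rmsprop", loss="binary_crossentropy", metrics=["accuracy"])

# Fit on the training half only; a validation split helps spot memorization early.
model.fit(x_train, y_train, epochs=4, batch_size=512, validation_split=0.2)

# The number that matters: accuracy on reviews the model never saw during training.
print(model.evaluate(x_test, y_test))

If training accuracy keeps climbing while the held-out accuracy stalls or drops, the model is memorizing its training samples rather than learning anything that generalizes, which is exactly the failure mode the separate test set is there to catch.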


NeuroSeed uses progressive technologies – NEUROSEED – Medium

@machinelearnbot

The implementation of the NeuroSeed platform will lead to the rapid development of the entire set of machine learning technologies. It will help reduce development and mass-deployment costs and significantly increase the efficiency of derivative systems. The platform proposes and validates the idea of merging several machine learning models and then further pre-training the combined model. A common factor limiting the development and deployment of such systems has been the lack of reliable technology that can provide decentralized digital trust for machine learning models and data sources; blockchain has become that technology.


Algorithm better at diagnosing pneumonia than radiologists

@machinelearnbot

Stanford researchers have developed a deep-learning algorithm that evaluates chest X-rays for signs of disease and offers diagnoses based on the images. A paper about the algorithm, called CheXNet, was published Nov. 14 on the open-access, scientific preprint website arXiv. "Interpreting X-ray images to diagnose pathologies like pneumonia is very challenging, and we know that there's a lot of variability in the diagnoses radiologists arrive at," said Pranav Rajpurkar, a graduate student in the Machine Learning Group at Stanford and co-lead author of the paper. "We became interested in developing machine learning algorithms that could learn from hundreds of thousands of chest X-ray diagnoses and make accurate diagnoses."



Artificially intelligent robots could gain consciousness

Daily Mail

From babysitting children to beating the world champion at Go, robots are slowly but surely developing more and more advanced capabilities. And many scientists, including Professor Stephen Hawking, suggest it may only be a matter of time before machines gain consciousness. In a new article for The Conversation, Professor Subhash Kak, Regents Professor of Electrical and Computer Engineering at Oklahoma State University, explains the possible consequences if artificial intelligence gains consciousness. Most computer scientists think that consciousness is a characteristic that will emerge as technology develops.


If the Impact of Artificial Intelligence on Work is Unclear, What Can Schools Do?

#artificialintelligence

Artificial intelligence is already reshaping the labor market, and its impact will likely become even more disruptive. But experts have historically been bad at predicting which jobs and tasks will be lost to automation, and public officials have historically been slow to respond to technological advances with smart, effective regulations. That, in a nutshell, is the conclusion of a RAND Corporation report on "The Risks of Artificial Intelligence to Security and the Future of Work," released earlier this week. What can K-12 educators and policymakers take away from the work?