"The field of Machine Learning seeks to answer these questions: How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?"
– from The Discipline of Machine Learning by Tom Mitchell. CMU-ML-06-108, 2006.
AI (artificial intelligence) has experienced several periods of severe funding cuts and waning interest, notably during the 1970s and 1980s. These were called "AI winters," a reference to the concept of nuclear winter, in which the sun is blocked by a layer of smoke and dust. Things are much different nowadays, of course: AI is one of the hottest areas of tech and a strategic priority for companies like Facebook, Google, Microsoft, and many others. Yet could we be facing another winter?
While it covers advanced topics, this article is accessible to professionals with a limited background in statistics or mathematics. It introduces new material not covered in my recent book (available here) on applied stochastic processes. You don't need to read the book to understand this article, but it is a useful complement and introduction to the concepts discussed here. None of the material presented here is covered in standard textbooks on stochastic processes or dynamical systems. In particular, it has nothing to do with the classical logistic map or Brownian motion, though the systems investigated here exhibit very similar behavior and are related to those classical models.
Last June, a team at Harvard Medical School and MIT showed that it's pretty darn easy to fool an artificial intelligence system that analyzes medical images. The researchers modified a few pixels in eye images, skin photos, and chest X-rays to trick deep learning systems into confidently classifying perfectly benign images as malignant. These so-called "adversarial attacks" make small, carefully designed changes to data, in this case pixel changes imperceptible to human vision, to nudge an algorithm into making a mistake. That's not great news at a time when medical AI systems are just reaching the clinic, with the first AI-based medical device approved in April and AI systems besting doctors at diagnosis across healthcare sectors. Now, in collaboration with a Harvard lawyer and ethicist, the same team has published an article in the journal Science offering suggestions on when and how the medical industry might intervene against adversarial attacks.
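To make the idea concrete, here is a minimal sketch of a gradient-sign ("FGSM-style") perturbation against a toy logistic classifier. It is not the Harvard/MIT attack on real medical models; the weights, input, and step size `eps` are all illustrative values chosen so that a tiny, uniform per-pixel change flips the prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "image": 100 pixels, scored by a fixed logistic model.
# w and x are illustrative values, not taken from any real study.
d = 100
w = np.full(d, 0.1)      # model weights
x = np.full(d, -0.05)    # benign input: score w @ x = -0.5

def predict(inp):
    """Return 0 ("benign") or 1 ("malignant")."""
    return int(sigmoid(w @ inp) > 0.5)

# Gradient-sign step: for this linear score, moving each pixel by
# eps in the direction of sign(w) maximally increases the score.
eps = 0.06
x_adv = x + eps * np.sign(w)

print(predict(x))                  # benign image -> class 0
print(predict(x_adv))              # perturbed image -> class 1
print(np.max(np.abs(x_adv - x)))   # each pixel moved by only eps
```

The perturbation changes every pixel by at most 0.06 on a unit scale, yet the classifier's decision flips; real attacks on deep networks follow the same principle, using the network's gradient to choose the per-pixel direction.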
The conventional model of oncogenic RAS-MAPK pathway signaling in cancer suggests that mutations in the pathway render downstream signaling largely independent of regulation (autonomous). However, the emerging model of a semiautonomous state, in which pathological RAS signaling remains under some control, suggests a potential therapeutic opportunity to target upstream regulators such as SHP2, SOS, and GRB2.

Mass spectrometry is a predominant experimental technique in metabolomics and related fields, but metabolite structural elucidation remains highly challenging. Researchers report SIRIUS 4 (https://bio.informatik.uni-jena.de/sirius/), a tool for identifying molecular structures from tandem mass spectrometry data.

Amazon SageMaker is an end-to-end machine learning platform that enables users to prepare training data and build machine learning models quickly using pre-built Jupyter notebooks and built-in algorithms.
You have just learned how to build and train five deep learning models for classification problems using TensorFlow. One more note about pooling layers: each pooling operation gradually shrinks the spatial size of the image. Early convolutional weights often learn to detect simple edges, while successive convolutional layers combine those edges into progressively more complex shapes such as faces, cars, and even dogs. Human learning is the beginning of deep learning!
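The shrinking effect of pooling can be sketched without any deep learning framework. Below is a small NumPy illustration, assuming non-overlapping 2x2 max pooling on a hypothetical 8x8 feature map; `max_pool2d` is a helper written for this sketch, not a library function.

```python
import numpy as np

def max_pool2d(img, k=2):
    """Non-overlapping k x k max pooling on a 2-D array."""
    h, w = img.shape
    # Group pixels into k x k windows, then take the max of each window.
    return img.reshape(h // k, k, w // k, k).max(axis=(1, 3))

# A hypothetical 8x8 feature map: two rounds of 2x2 pooling shrink it
# to 4x4 and then 2x2 -- the gradual shrinking described above.
fmap = np.arange(64, dtype=float).reshape(8, 8)
p1 = max_pool2d(fmap)
p2 = max_pool2d(p1)
print(fmap.shape, p1.shape, p2.shape)  # (8, 8) (4, 4) (2, 2)
```

Each 2x2 pooling step halves both spatial dimensions, which is why deep convolutional stacks end with feature maps far smaller than the input image.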
CS 229 ― Machine Learning

My twin brother Afshine and I created this set of illustrated Machine Learning cheatsheets covering the content of the CS 229 class, which I TA-ed in Fall 2018 at Stanford. They can (hopefully!) be useful to all future students of this course as well as to anyone else interested in Machine Learning.
Today, AI can design machine learning systems known as neural networks in a process called neural architecture search (NAS). But this technique requires a considerable amount of resources: time, processing power, and money. Even for Google, producing a single convolutional neural network, often used for image classification, takes 48,000 GPU hours. Now, MIT researchers have developed a NAS algorithm that automatically learns a convolutional neural network in a fraction of the time: just 200 GPU hours. Speeding up the process by which AI designs neural networks could enable more people to use and experiment with NAS, and that could advance the adoption of AI.
To my readers it may appear as though I am writing an article on ancient Greek mythology, but you will soon realize that the more the world changes, the more it stays the same. Recently, Ali Rahimi, an artificial intelligence researcher at Google, compared machine learning to alchemy. Since then, technology journalists have written about the relationship between technology and alchemy more than ever before. Alchemy relies on trial and error to arrive at a formula, usually kept secret or impossible to deconstruct. Similarly, in machine learning a model is built from data; it continually learns and produces outputs, but nobody knows how its decisions are made.
To celebrate the German composer's March 21, 1685 birthday, the Doodle lets users compose a melody in Bach's style. The interactive Doodle is a collaboration between Google's Magenta, which helps people make their own music and art through machine learning, and Google's PAIR, which builds tools that make machine learning accessible to everyone. A machine learning model called Coconet made it all possible. Developed by Google, Coconet was trained on 306 of Bach's chorale harmonizations. "His chorales always have four voices: each carries their own melodic line, creating a rich harmonic progression when played together," writes Google.
Walking around without being constantly identified by AI could soon be a thing of the past, legal experts have warned. The use of facial recognition software could signal the end of civil liberties if the law doesn't change as quickly as the technology advances, they say. Software already being trialled around the world could soon be adopted by companies and governments to constantly track you wherever you go. Shop owners are already using facial recognition to track shoplifters, and they could soon be sharing that information across a broad, potentially global, network of databases. Previous research has found that the technology isn't always accurate, more often misidentifying women and people with darker skin.