I recently started a new newsletter focused on AI education. TheSequence is a no-BS (meaning no hype, no news, etc.) AI-focused newsletter that takes five minutes to read. The goal is to keep you up to date with machine learning projects, research papers and concepts. Creating agents that resemble the cognitive abilities of the human brain has been one of the most elusive goals of the artificial intelligence (AI) space. Recently, I've been spending time on a couple of scenarios that relate to imagination in deep learning systems, which reminded me of a very influential paper Alphabet's subsidiary DeepMind published last year on this subject.
A brain mechanism referred to as "replay" inspired researchers at Baylor College of Medicine to develop a new method to protect deep neural networks, the workhorses of artificial intelligence (AI), from forgetting what they have previously learned. The study, in the current edition of Nature Communications, has implications for both neuroscience and deep learning. Deep neural networks are the main drivers behind the recent fast progress in AI. These networks are extremely good at learning to solve individual tasks. However, when they are trained on a new task, they typically lose, almost completely, the ability to solve previously learned tasks, a problem known as catastrophic forgetting.
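As a rough illustration of the replay idea (not the paper's actual method), the sketch below keeps a small buffer of examples from earlier tasks and mixes them into each new-task training batch, so the network keeps rehearsing what it learned before. The `ReplayBuffer` class and its reservoir-sampling eviction policy are my own illustrative choices:

```python
import random

class ReplayBuffer:
    """Fixed-size store of examples from earlier tasks."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Reservoir sampling: every example seen so far has an
            # equal chance of remaining in the buffer.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

def make_batch(new_task_batch, buffer, replay_k):
    """Interleave fresh examples with replayed ones from earlier tasks."""
    return new_task_batch + buffer.sample(replay_k)
```

Training on such mixed batches is the simplest form of rehearsal; the Baylor work builds on the same intuition of replaying old experience while learning something new.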
The Cambridge Dictionary defines "bootstrap" as: "to improve your situation or become more successful, without help from others or without advantages that others have." While a machine learning algorithm's strength depends heavily on the quality of data it is fed, an algorithm that can do the work required to improve itself should become even stronger. A team of researchers from DeepMind and Imperial College recently set out to prove that in the arena of computer vision. In the updated paper Bootstrap Your Own Latent – A New Approach to Self-Supervised Learning, the researchers release the source code and checkpoint for their new "BYOL" approach to self-supervised image representation learning along with new theoretical and experimental insights. In computer vision, learning good image representations is critical as it allows for efficient training on downstream tasks. Image representation learning trains a neural network to map images to feature vectors that transfer well to such downstream tasks.
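To make the "bootstrapping" mechanics concrete, here is a heavily simplified numpy sketch of the BYOL-style update: an online network plus predictor tries to match the representation produced by a target network whose weights are an exponential moving average of the online weights. The toy linear "networks", names, and dimensions are all illustrative stand-ins for the paper's deep encoders:

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(v):
    return v / (np.linalg.norm(v) + 1e-12)

# Toy linear "networks": an online encoder, a predictor head,
# and a target encoder that starts as a copy of the online weights.
dim_in, dim_out = 8, 4
online = rng.normal(size=(dim_out, dim_in))
predictor = np.eye(dim_out)
target = online.copy()

def byol_loss(view1, view2):
    """The online branch must predict the target's projection of the other view."""
    p = l2_normalize(predictor @ (online @ view1))
    z = l2_normalize(target @ view2)        # no gradient flows here in real BYOL
    return float(np.sum((p - z) ** 2))      # equals 2 - 2 * cosine similarity

def ema_update(tau=0.99):
    """Target weights trail the online weights via an exponential moving average."""
    global target
    target = tau * target + (1 - tau) * online
```

Because both branches' outputs are L2-normalized, the loss always lies in [0, 4]; BYOL's key twist is that no negative pairs are needed, only this online-predicts-target game plus the slow EMA update.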
One study estimated that pharmaceutical companies spent US$2.6 billion in 2015, up from $802 million in 2003, to develop a new chemical entity approved by the US Food and Drug Administration (FDA) (N Engl J Med. 2015; 372: 1877-1879). The increasing cost of drug development is due to the large volume of compounds to be tested in preclinical stages and the high proportion of randomised controlled trials (RCTs) that fail to find clinical benefit or run into toxicity issues. Given the high attrition rates, substantial costs, and slow pace of de-novo drug discovery, repurposing known drugs can help improve their efficacy while minimising side-effects in clinical trials. As Nobel Prize-winning pharmacologist Sir James Black said, "The most fruitful basis for the discovery of a new drug is to start with an old drug": in short, new uses for old drugs.
Machine learning leads as Verdict lists the top five terms tweeted on artificial intelligence in August 2020, based on data from GlobalData's Influencer Platform. The top tweeted terms are the trending industry discussions happening on Twitter by key individuals (influencers) as tracked by the platform. Techniques to leverage machine learning algorithms, application of the technology across sectors such as agriculture, and data protection techniques in machine learning were some of the popularly discussed topics in August. According to an article shared by Dr Ganapathi Pulipaka, a chief data scientist, machine learning is one of the most popular techniques for analysing images. In other news, Jason Brownlee, a machine learning specialist, shared an article on a framework of data preparation techniques in machine learning. In some cases, the data itself may suggest the preparation a machine learning model requires.
For many people who are struggling to conceive, in-vitro fertilization (IVF) can offer a life-changing solution. But the average success rate for IVF is only about 30 percent. Investigators from Brigham and Women's Hospital and Massachusetts General Hospital are developing an artificial intelligence system with the goal of improving IVF success by helping embryologists objectively select embryos most likely to result in a healthy birth. Using thousands of embryo image examples and deep-learning artificial intelligence (AI), the team developed a system that was able to differentiate and identify embryos with the highest potential for success significantly better than 15 experienced embryologists from five different fertility centers across the United States. Results of their study are published in eLife.
This part of the series looks at the future of AI, with much of the focus on the period after 2025. The leading AI researcher Geoff Hinton stated that it is very hard to predict what advances AI will bring beyond five years, noting that exponential progress makes the uncertainty too great. This article will therefore consider both the opportunities and the challenges that we will face along the way across different sectors of the economy. It is not intended to be exhaustive.

AI deals with developing computing systems capable of performing tasks that humans are very good at, for example recognising objects, recognising and making sense of speech, and decision making in a constrained environment. Classical approaches to AI include (non-exhaustively) search algorithms such as Breadth-First, Depth-First and Iterative Deepening Search, the A* algorithm, and the field of Logic, including Predicate Calculus and Propositional Calculus. Local Search approaches were also developed, for example Simulated Annealing, Hill Climbing (see also Greedy), Beam Search and Genetic Algorithms (see below).

Machine Learning is defined as the field of AI that applies statistical methods to enable computer systems to learn from data towards an end goal. The term was introduced by Arthur Samuel in 1959. A non-exhaustive list of example techniques includes Linear Regression, Logistic Regression, K-Means, k-Nearest Neighbour (kNN), Naive Bayes, Support Vector Machines (SVM), Decision Trees, Random Forests, XGBoost, Light Gradient Boosting Machine (LightGBM) and CatBoost.

Deep Learning refers to the field of Neural Networks with several hidden layers; such a neural network is often referred to as a deep neural network. Neural Networks are biologically inspired networks that extract abstract features from the data in a hierarchical fashion.
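As a concrete instance of one of the classical search algorithms mentioned above, here is a minimal Breadth-First Search that returns a shortest path (by edge count) between two nodes; the toy graph and function name are illustrative:

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Return a shortest path from start to goal by edge count, or None."""
    frontier = deque([[start]])   # FIFO queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

# Example: adjacency-list graph with two routes from 'A' to 'E'.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D', 'E'], 'D': ['E']}
```

Swapping the FIFO queue for a stack gives Depth-First Search, and replacing it with a priority queue ordered by cost-plus-heuristic turns the same skeleton into A*.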
The Association of Data Scientists (ADaSci) recently announced Deep Learning DEVCON, or DLDC 2020, a two-day virtual conference that aims to bring machine learning and deep learning practitioners and experts from the industry onto a single platform to share and discuss recent developments in the field. Scheduled for 29th and 30th October, the conference comes at a time when deep learning, a subset of machine learning, has become one of the fastest-advancing technologies in the world. From natural language processing to self-driving cars, it has come a long way. In fact, reports suggest that the deep learning market is expected to grow at a CAGR of 25% through 2024. It can thus be argued that advancements in deep learning have only just begun and have a long road ahead.