"Many researchers … speculate that the information-processing abilities of biological neural systems must follow from highly parallel processes operating on representations that are distributed over many neurons. [Artificial neural networks] capture this kind of highly parallel computation based on distributed representations"
– from Machine Learning (Section 4.1.1; page 82) by Tom M. Mitchell, McGraw Hill Companies, Inc. (1997).
Starfleet's star android, Lt. Commander Data, has been enlisted by his renegade android "brother" Lore to join a rebellion against humankind -- much to the consternation of Jean-Luc Picard, captain of the USS Enterprise. "The reign of biological life-forms is coming to an end," Lore tells Picard. "You, Picard, and those like you, are obsolete." In real life, the era of smart machines has already arrived. They haven't completely taken over the world yet, but they're off to a good start.
Healthcare is an important industry: it delivers value-based care to millions of people while also being a top revenue earner for many countries. Today, the healthcare industry in the US alone generates $1.668 trillion in revenue, and the US spends more on healthcare per capita than most other developed or developing nations. Quality, value, and outcomes are buzzwords that always accompany healthcare and promise a lot, and today, healthcare specialists and stakeholders around the globe are looking for innovative ways to deliver on that promise. Technology-enabled smart healthcare is no longer a flight of fancy: Internet-connected medical devices are helping keep the health system as we know it from falling apart under the population burden.
Academics from the Massachusetts Institute of Technology (MIT) and Massachusetts General Hospital have demonstrated how neural networks can be trained to administer anesthetic during surgery. Over the past decade, machine learning (ML), artificial intelligence (AI), and deep learning algorithms have been developed and applied across a range of sectors and applications, including the medical field. In healthcare, the potential of neural networks and deep learning has been demonstrated in the automatic analysis of large medical datasets to detect patterns and trends, in improved diagnostic procedures, in tumor detection based on radiology images, and, more recently, in explorations of robotic surgery. Now, neural networks may have new, previously unexplored applications in surgery and drug administration. As reported by Tech Xplore, a team of MIT and Mass General scientists has developed and trained a neural network to administer propofol, a drug commonly used as a general anesthetic when patients undergo medical procedures.
A way of monitoring household appliances by using machine learning to analyse vibrations on a wall or ceiling has been developed by researchers in the US. Their system could be used to create centralized smart home systems without the need for individual sensors in each object. What is more, the technology could help track energy use, identify electrical faults and even remind people to empty the dishwasher. "Recognizing home activities can help computers better understand human behaviours and needs, with the hope of developing a better human-machine interface," says team member and information scientist Cheng Zhang of Cornell University. The system, dubbed VibroSense, comprises two core parts: a laser Doppler vibrometer and a deep learning model, which is a type of machine learning system.
LSTMs are one of the most important breakthroughs in machine learning: giving machine learning algorithms the ability to recall past information allows them to pick up temporal patterns. And what better way to understand a concept than to build it from scratch? LSTM stands for Long Short-Term Memory, denoting its ability to use past information to make predictions. The mechanism behind the LSTM is quite simple: instead of a single feedforward pass for the data to propagate through, an LSTM also receives processed information from earlier timesteps as input to the network, and can therefore access time-related patterns within the data.
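To make that mechanism concrete, here is a minimal from-scratch sketch of a single LSTM timestep in NumPy. The function name `lstm_step` and the weight layout (all four gates stacked into one matrix) are illustrative choices for this sketch, not a reference implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM timestep: gates are computed from the current input x and
    the previous hidden state h_prev, then used to update the cell state."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b      # pre-activations for all four gates, stacked
    i = sigmoid(z[0:H])             # input gate: how much new information to write
    f = sigmoid(z[H:2*H])           # forget gate: how much old memory to keep
    o = sigmoid(z[2*H:3*H])         # output gate: how much memory to expose
    g = np.tanh(z[3*H:4*H])         # candidate cell update
    c = f * c_prev + i * g          # new cell state (the "long-term" memory)
    h = o * np.tanh(c)              # new hidden state (the "short-term" output)
    return h, c

# Run a short random sequence through the cell with random weights.
rng = np.random.default_rng(0)
D, H = 3, 4                         # input and hidden sizes (illustrative)
W = rng.normal(scale=0.1, size=(4*H, D))
U = rng.normal(scale=0.1, size=(4*H, H))
b = np.zeros(4*H)
h, c = np.zeros(H), np.zeros(H)
for t in range(5):
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
print(h.shape)  # hidden state carried across all 5 timesteps
```

Because `h` and `c` are threaded through the loop, information from earlier timesteps can influence later outputs, which is exactly the "recall" ability described above.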
If you have ever delved into the world of data science, then it will not be an absurdity for me to assume that by now you have encountered the term Neural Networks somewhere on your journey through machine learning, artificial intelligence, or data science in general. A widely accepted definition of a Neural Network (NN) is that it is a brain-inspired computer architecture containing varying network topologies of functions, where the nodes are interconnected in a specific fashion and a series of algorithms is applied iteratively to unmask the underlying patterns in a dataset (Figure 1). With a little mathematical maturity and a slight knowledge of optimization theory, it is criminally simple to call a Neural Network a function approximator, or a regressor. In my previous article, we saw through a practical example how high-level APIs such as Keras and TensorFlow make it really simple to build and train a neural network. From my articles you can probably tell that I am a very visual person, and when it comes to learning, I think engaging visuals facilitate deeper comprehension.
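The "function approximator / regressor" view can be sketched in a few lines of plain NumPy, without any high-level API. This is a minimal illustration under assumed hyperparameters (16 tanh units, learning rate 0.1, target y = x²), not code from the article:

```python
import numpy as np

# A one-hidden-layer network trained by gradient descent to approximate
# y = x^2 on [-1, 1]: the net is literally being used as a regressor.
rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(256, 1))
y = X**2

W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

lr = 0.1
for _ in range(3000):
    # forward pass: tanh hidden layer, linear output
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    # backward pass: gradient descent on mean squared error
    dh = err @ W2.T * (1 - h**2)
    W2 -= lr * h.T @ err / len(X); b2 -= lr * err.mean(0)
    W1 -= lr * X.T @ dh / len(X);  b1 -= lr * dh.mean(0)

mse = float((err**2).mean())
print(f"final MSE: {mse:.4f}")
```

A constant predictor would score an MSE of roughly 0.09 on this target; the trained network lands well below that, which is the whole point of calling it a function approximator.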
Recently, researchers affiliated with the Baylor College of Medicine, the University of Cambridge, the University of Massachusetts Amherst, and Rice University created a new way of adapting a neuroscience concept called "brain replay" to the digital realm of artificial neural networks to enable continual learning. From a neuroscience perspective, the concept of brain replay is analogous to a streaming service that activates repeat showings from its vast archives of stored pre-recorded content. The brain can replay memories by reactivating the neural activity patterns that represent prior experiences, whether asleep or awake. This ability for memory replay starts in the hippocampus, then continues in the cortex. The research trio of Hava Siegelmann, Andreas Tolias, and Gido van de Ven published a study in Nature Communications on August 13, 2020, that shows state-of-the-art performance from neural networks by deploying a new twist on mimicking brain replay.
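The study's own brain-inspired mechanism is not detailed here, but the general idea of rehearsal can be sketched with a generic replay buffer: keep a small sample of past experiences and mix them back into training so old knowledge is "replayed" alongside new data. This is an illustrative analogue of rehearsal, not the authors' algorithm; the class name and sizes are assumptions:

```python
import random

class ReplayBuffer:
    """A minimal rehearsal buffer: retain a small, uniform sample of
    everything seen so far, so a learner can replay old experiences
    while training on new ones (a rough software analogue of brain replay)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.seen = 0

    def add(self, example):
        # Reservoir sampling keeps a uniform sample over the whole stream.
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = example

    def sample(self, k):
        # Draw a replay mini-batch from the stored memories.
        return random.sample(self.items, min(k, len(self.items)))

random.seed(0)
buf = ReplayBuffer(capacity=10)
for i in range(1000):          # a stream of 1000 "experiences"
    buf.add(i)
replayed = buf.sample(5)
print(len(buf.items), len(replayed))
```

In continual-learning setups, batches sampled this way are interleaved with new-task data so the network does not catastrophically forget earlier tasks.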
I recently started a new newsletter focused on AI education. TheSequence is a no-BS (meaning no hype, no news, etc.) AI-focused newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers, and concepts. Humans build knowledge in images: every time we are presented with an idea or an experience, our brain immediately formulates visual representations of it.
In the first part of our tutorial on neural networks, we explained the basic concepts behind neural networks, from the math underlying them to implementing neural networks in Python without any hidden layers. We showed how to make satisfactory predictions even in scenarios where we did not use any hidden layers. However, single-layer neural networks have several limitations. In this tutorial, we will dive into the limitations and advantages of using neural networks in machine learning. We will show how to implement neural nets with hidden layers and how these lead to higher accuracy in our predictions, along with implementation samples in Python on Google Colab.
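As a preview of why hidden layers matter, here is a minimal NumPy sketch (with illustrative hyperparameters, not the tutorial's Colab code) of a one-hidden-layer network learning XOR, the classic function that a network without hidden layers cannot represent because the classes are not linearly separable:

```python
import numpy as np

# XOR truth table: no single line separates the 1s from the 0s,
# so a network with no hidden layer cannot learn this mapping.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer: 8 tanh units
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.5
for _ in range(20000):
    h = np.tanh(X @ W1 + b1)        # hidden layer re-maps inputs nonlinearly
    p = sigmoid(h @ W2 + b2)        # output probability
    grad = (p - y) / 4              # gradient of mean cross-entropy w.r.t. logits
    dh = grad @ W2.T * (1 - h**2)   # backprop through the tanh hidden layer
    W2 -= lr * h.T @ grad; b2 -= lr * grad.sum(0)
    W1 -= lr * X.T @ dh;   b1 -= lr * dh.sum(0)

preds = (p > 0.5).astype(int).ravel()
print(preds)  # expected to recover XOR: [0 1 1 0]
```

The hidden layer gives the network an intermediate representation in which XOR becomes linearly separable, which is exactly the accuracy advantage this tutorial explores.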