Neural Networks


How to Create a Simple Neural Network in Python

#artificialintelligence

Neural networks are great at learning trends in both large and small data sets. However, data scientists have to be aware of the dangers of overfitting, which are more evident in projects that use small data sets. Overfitting occurs when an algorithm is trained to fit a set of data points so closely that it does not generalize well to new data points. Overfitted machine learning models often have very high accuracy on the data they were trained on, but the goal of a data scientist is usually to predict new data points as accurately as possible. To make sure the model is evaluated on how well it predicts new data points, rather than how closely it fits the current ones, it is common to split the data into a training set and a test set (and sometimes a validation set).
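
The post's own code isn't reproduced in this excerpt, but the hold-out idea looks roughly like this minimal sketch; the toy data, the 80/20 ratio, and the use of scikit-learn are assumptions made for illustration:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy data standing in for a real dataset: 100 samples with 3 features each.
X = np.random.rand(100, 3)
y = (X.sum(axis=1) > 1.5).astype(int)

# Hold out 20% of the points so the model is scored on data it never saw
# during training; random_state fixes the seed so the split is reproducible.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
print(X_train.shape, X_test.shape)  # (80, 3) (20, 3)
```

Overfitting then shows up as a large gap between accuracy on the training set and accuracy on the held-out test set.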


How artificial intelligence can democratise education after Covid-19

#artificialintelligence

New forms of AI, based on deep neural networks, can now uncover patterns about how the students perform and help teachers optimise their strategies …


Build Natural Flower Classifier using Amazon Rekognition Custom Labels

#artificialintelligence

Building your own computer vision model from scratch can be fun and fulfilling. You get to decide your preferred machine learning framework and platform for training and deployment, design your data pipeline and neural network architecture, write custom training and inference scripts, and fine-tune your model's hyperparameters for optimal performance. On the other hand, this can be a daunting task for someone with little or no computer vision and machine learning expertise. This post is a step-by-step guide to building a natural flower classifier using Amazon Rekognition Custom Labels, following AWS best practices. Amazon Rekognition Custom Labels is a feature of Amazon Rekognition, one of the AWS AI services for automated image and video analysis with machine learning. It provides automated machine learning (AutoML) capabilities for end-to-end custom computer vision workflows.
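
As a rough illustration of what inference against such a model looks like (this is not the post's code; the project version ARN, bucket name, and object key below are placeholders, and the model version must already be started before it can serve requests), a boto3 call might be:

```python
import boto3

# Placeholder ARN of a trained and started Custom Labels model version.
PROJECT_VERSION_ARN = (
    "arn:aws:rekognition:us-east-1:111122223333:project/flowers/version/flowers.1/1234567890123"
)

client = boto3.client("rekognition")

response = client.detect_custom_labels(
    ProjectVersionArn=PROJECT_VERSION_ARN,
    Image={"S3Object": {"Bucket": "my-flower-images", "Name": "test/rose.jpg"}},
    MinConfidence=50,  # only return labels with at least 50% confidence
)

# Each detected label carries the predicted flower name and a confidence score.
for label in response["CustomLabels"]:
    print(label["Name"], round(label["Confidence"], 1))
```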


Machine Learning in Medicine

#artificialintelligence

In perinatal medicine, fetal weight is a sort of Goldilocks problem: it has to be just right. If the fetus is too small, it may not be developing properly; too large, and the mother faces much greater risks in childbirth. The trouble is, there is no way to measure fetal weight directly. Instead, doctors must rely on estimates calculated from a formula based on the fetus's head circumference, abdominal circumference, and femur length. Unfortunately, this formula isn't always as accurate as doctors would like.
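
To make the idea of such an estimate concrete, here is a purely illustrative sketch; the log-linear form and the coefficient values are hypothetical placeholders, not a clinically validated formula:

```python
def estimated_fetal_weight_grams(head_circ_cm, abdominal_circ_cm, femur_len_cm,
                                 coeffs=(1.30, 0.010, 0.045, 0.015)):
    """Illustrative log-linear estimate of fetal weight from the three
    measurements mentioned above. The default coefficients are hypothetical
    placeholders chosen for demonstration only."""
    intercept, b_hc, b_ac, b_fl = coeffs
    log10_weight = (intercept
                    + b_hc * head_circ_cm
                    + b_ac * abdominal_circ_cm
                    + b_fl * femur_len_cm)
    return 10 ** log10_weight

# Example measurements in centimetres; the output is an estimate, not a measurement.
print(round(estimated_fetal_weight_grams(33.0, 34.0, 7.2)))
```

The appeal of a learned model is that the mapping from measurements to weight does not have to be fixed in advance as a single formula.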


5 Emerging AI And Machine Learning Trends To Watch In 2021

#artificialintelligence

Artificial intelligence and machine learning have been hot topics in 2020, as AI and ML technologies increasingly find their way into everything from advanced quantum computing systems and leading-edge medical diagnostic systems to consumer electronics and "smart" personal assistants. Revenue generated by AI hardware, software and services is expected to reach $156.5 billion worldwide this year, according to market researcher IDC, up 12.3 percent from 2019. But it can be easy to lose sight of the forest for the trees when it comes to trends in the development and use of AI and ML technologies. As we approach the end of a turbulent 2020, here's a big-picture look at five key AI and machine learning trends: not just the types of applications they are finding their way into, but also how they are being developed and the ways they are being used. Hyperautomation, an IT mega-trend identified by market research firm Gartner, is the idea that almost anything within an organization that can be automated, such as legacy business processes, should be automated.


What is a Neural Network?

#artificialintelligence

Think back to the first time you heard the phrase "neural networks" or "neural nets" -- perhaps it's right now -- and try to remember what your first impression was. As an Applied Math and Economics major with a newfound interest in data science and machine learning, I remember thinking that whatever neural networks are, they must be extremely important, really cool, and very complicated. I also remember thinking that a true understanding of neural networks must lie on the other side of a thick wall of prerequisite knowledge, including neuroscience and graduate mathematics. Through taking a machine learning course with Professor Samuel Watson at Brown, I have learned that three of those four statements are true in most cases -- neural nets are extremely important, really cool, and they can be very complicated depending on the architecture of the model. Most importantly, though, I learned that understanding neural networks requires minimal prerequisite knowledge as long as the information is presented in a logical and digestible way.


Research shows the intrinsically nonlinear nature of receptive fields in vision

#artificialintelligence

The receptive field (RF) of a neuron is the term applied to the space in which the presence of a stimulus alters the response of that neuron. The responses of visual neurons, as well as visual perception phenomena in general, are highly nonlinear functions of the visual input (in mathematics, nonlinear systems represent phenomena whose behavior cannot be expressed as the sum of the behaviors of their descriptors). Yet vision models used in science are based on the notion of a linear receptive field, and in artificial intelligence and machine learning, artificial neural networks, which are based on classical models of vision, also use linear receptive fields. "Modeling vision based on a linear receptive field poses several inherent problems: it changes with each input, it presupposes a set of basis functions for the visual system, and it conflicts with recent studies on dendritic computations," asserts Marcelo Bertalmío, first author of a study recently published in the journal Scientific Reports. The paper proposes modeling the receptive field in a nonlinear manner, introducing the intrinsically nonlinear receptive field, or INRF.
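
To make the linear/nonlinear distinction concrete, the toy NumPy sketch below (an illustration of nonlinearity in general, not of the INRF model itself) shows that a linear receptive field obeys superposition, meaning its response to the sum of two stimuli equals the sum of its responses, while a nonlinear response function generally does not:

```python
import numpy as np

rng = np.random.default_rng(0)
stimulus_a = rng.normal(size=16)   # toy 1-D "visual inputs"
stimulus_b = rng.normal(size=16)
weights = rng.normal(size=16)      # receptive-field weights

def linear_rf(x, w):
    # Classical linear receptive field: a weighted sum of the input.
    return w @ x

def nonlinear_rf(x, w):
    # Toy nonlinearity (a saturating function applied to each input value).
    return w @ np.tanh(x)

# Linearity: response to a sum of stimuli equals the sum of responses.
print(np.isclose(linear_rf(stimulus_a + stimulus_b, weights),
                 linear_rf(stimulus_a, weights) + linear_rf(stimulus_b, weights)))      # True
print(np.isclose(nonlinear_rf(stimulus_a + stimulus_b, weights),
                 nonlinear_rf(stimulus_a, weights) + nonlinear_rf(stimulus_b, weights)))  # False, in general
```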


Kubeflow is your perfect Machine Learning workstation

#artificialintelligence

It's (mostly) true that Data Scientists do not care about infrastructure. Indeed, even though DevOps is a very interesting field, most of them are not exactly eager to start a VM, allocate the needed resources, configure the network, SSH into the machine, build a Docker image, and launch a Jupyter Notebook server. To cut to the chase, in this story we create a ready-to-use, GPU-accelerated deep learning environment that already has TensorFlow and PyTorch installed. To do that, we need to create the Dockerfile that describes the environment, build it, and use it as the image of the Notebook server inside a Kubeflow instance. So, without further ado, let's see the Dockerfile and walk through it step by step.
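
A minimal sketch of such a Dockerfile might look like the following; the base image tag, package versions, and start-up command are assumptions, and a real Kubeflow notebook image may need extra conventions (for example, a non-root user or specific launch arguments):

```dockerfile
# GPU-enabled base image with CUDA and cuDNN (tag chosen for illustration).
FROM nvidia/cuda:11.0.3-cudnn8-runtime-ubuntu20.04

# Python and pip for installing the deep learning stack.
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

# TensorFlow, PyTorch and JupyterLab, as described above.
RUN pip3 install --no-cache-dir tensorflow torch torchvision jupyterlab

# Serve JupyterLab on the port the notebook controller will proxy to.
EXPOSE 8888
CMD ["jupyter", "lab", "--ip=0.0.0.0", "--port=8888", "--no-browser", "--allow-root"]
```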


How I built a Face Mask Detector for COVID-19 using PyTorch Lightning

#artificialintelligence

Our dataset is imbalanced (5,000 masked faces vs. 90,000 non-masked faces). Therefore, when splitting the dataset into training and validation sets, we need to keep the same class proportions in each split as in the whole dataset. We're going to use 70% of the dataset for training and 30% for validation. When dealing with imbalanced data, we also need to pass this information to the loss function to avoid disproportionate optimizer step sizes. We do this by assigning a weight to each class according to how well it is represented in the dataset: classes with few samples get more weight, so the network is penalized more when it mispredicts their labels.
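
The post's exact code isn't shown in this excerpt, but a minimal PyTorch sketch of both ideas, a stratified 70/30 split and inverse-frequency class weights passed to the loss, could look like this (the toy label tensor and the use of scikit-learn's train_test_split are assumptions):

```python
import torch
import torch.nn as nn
from sklearn.model_selection import train_test_split

# Toy labels mimicking the imbalance described above (class 1 = masked, the rare class).
labels = torch.cat([torch.zeros(90_000, dtype=torch.long),
                    torch.ones(5_000, dtype=torch.long)])
indices = torch.arange(len(labels))

# Stratified 70/30 split keeps the class proportions identical in both subsets.
train_idx, val_idx = train_test_split(
    indices.numpy(), test_size=0.3, stratify=labels.numpy(), random_state=42
)
print(labels[train_idx].float().mean(), labels[val_idx].float().mean())  # same masked fraction

# Inverse-frequency class weights: the rare class gets a larger weight so the
# loss (and hence the gradient) penalizes its errors more heavily.
counts = torch.bincount(labels)
class_weights = counts.sum() / (len(counts) * counts.float())
criterion = nn.CrossEntropyLoss(weight=class_weights)
print(class_weights)  # larger weight for the 5,000-sample masked class
```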


Neural Networks -- the Basics

#artificialintelligence

What if we used 100% of the brain? Or better yet: what if we could teach computers to learn like our brains do? This is the fundamental concept behind neural networks (NN), a crucial subset of machine learning (ML) and artificial intelligence (AI) that emulates the human brain. In this article, I'll explain the how and why behind neural networks and look at some specific applications. To discuss the very basics of neural networks, we have to define some very basic terms; I'll explain more complex vocabulary as it becomes useful.