Deep Learning


Loss Functions in Deep Learning

#artificialintelligence

For a more in-depth explanation of Forward Propagation and Backpropagation in neural networks, please refer to my other article What is Deep Learning and How does it work? For a given input vector x, the neural network predicts an output, which is generally called a prediction vector y. We must compute a dot product between the input vector x and the weight matrix W1 that connects the first layer with the second. After that, we apply a non-linear activation function to the result of the dot product. Depending on the task we want the network to perform, this prediction vector represents different things.
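
A minimal sketch of the forward pass described above, assuming a toy two-layer network in NumPy (the layer sizes, the ReLU activation, and the softmax output head are illustrative choices of mine, not details from the article):

```python
import numpy as np

def forward(x, W1, W2):
    # Dot product between the input vector x and the weight matrix W1
    z1 = W1 @ x
    # Non-linear activation applied to the result (ReLU, as one example)
    a1 = np.maximum(0.0, z1)
    # Second layer produces the raw scores for the prediction vector y
    z2 = W2 @ a1
    # Softmax turns the scores into class probabilities (one possible output head)
    exp_scores = np.exp(z2 - z2.max())
    return exp_scores / exp_scores.sum()

# Toy usage: 4 input features, 8 hidden units, 3 output classes
rng = np.random.default_rng(0)
x = rng.normal(size=4)
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(3, 8))
print(forward(x, W1, W2))
```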


Deep learning has found two exoplanets that human astronomers missed

#artificialintelligence

The search for planets orbiting other stars has reached industrial scale. Astronomers have discovered over 4,000 of them, more than half using data from the Kepler space telescope, an orbiting observatory designed for this purpose. Launched in 2009, Kepler observed a fixed field of view for many months, looking for the tiny periodic changes in stars' brightness caused by planets moving in front of them. But in 2012 the mission ran into trouble when one of the spacecraft's four reaction wheels failed. These wheels stabilize the craft, allowing it to point accurately in a specific direction.


Best 6 Python libraries for Machine Learning

#artificialintelligence

Artificial Intelligence (AI) and machine learning (ML) are gaining increasing traction in today's digital world. Machine learning (ML) is a subset of AI involving the study of computer algorithms that allow computers to learn and grow from experience without human intervention. Python has been the go-to choice for Machine Learning and Artificial Intelligence developers for a long time. Python offers developers flexibility and features that increase not only their productivity but also the quality of their code, not to mention extensive libraries that help ease the workload. Arthur Samuel said -- "Machine Learning is the field of study that gives computers the ability to learn without being explicitly programmed." The NumPy library for Python concentrates on handling large multi-dimensional data and the intricate mathematical functions that operate on it.
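
A quick illustration of the kind of multi-dimensional array handling and vectorized math NumPy is known for (the array contents and the normalization step are just an example, not drawn from the article):

```python
import numpy as np

# A small batch of 3 samples with 4 features each
data = np.array([[1.0, 2.0, 3.0, 4.0],
                 [2.0, 4.0, 6.0, 8.0],
                 [0.5, 1.5, 2.5, 3.5]])

# Vectorized math: normalize each feature column to zero mean, unit variance
mean = data.mean(axis=0)
std = data.std(axis=0)
normalized = (data - mean) / std

print(normalized.shape)                   # (3, 4)
print(normalized.mean(axis=0).round(6))   # approximately zero per column
```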


Deep Learning Pipeline PDF

#artificialintelligence

Build your own pipeline based on modern TensorFlow approaches rather than outdated engineering concepts. This book shows you how to build a deep learning pipeline for real-life TensorFlow projects. You'll learn what a pipeline is and how it works so you can build a full application easily and rapidly. Then troubleshoot and overcome basic TensorFlow obstacles to easily create functional apps and deploy well-trained models. Step-by-step and example-oriented instructions help you understand each step of the deep learning pipeline while you apply the most straightforward and effective tools to demonstrative problems and datasets.
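
For orientation, a minimal sketch of what a simple TensorFlow data-plus-training pipeline can look like (the synthetic data, model size, and hyperparameters are placeholders of mine, not examples from the book):

```python
import numpy as np
import tensorflow as tf

# Placeholder data standing in for a real dataset
features = np.random.rand(1000, 10).astype("float32")
labels = np.random.randint(0, 2, size=(1000,)).astype("float32")

# Input pipeline: shuffle, batch, and prefetch with tf.data
dataset = (tf.data.Dataset.from_tensor_slices((features, labels))
           .shuffle(1000)
           .batch(32)
           .prefetch(tf.data.AUTOTUNE))

# A small model and a standard training loop via Keras
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(dataset, epochs=3)
```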


How To Improve Programming Skills, For Data Scientists And Machine Learning Practitioners

#artificialintelligence

Algorithms tend to scare a lot of ML practitioners away, including me. The field of machine learning arose as a method to eliminate the need to implement heuristic algorithms to detect patterns; feature detection was left to neural networks. Still, algorithms have their place in the software and computing domain, and certainly within the machine learning field. Practising the implementation of algorithms is one of the recommended ways to sharpen your programming skills. Apart from the apparent benefit of building intuition for implementing memory-efficient code, there's another benefit to tackling algorithms: the development of a problem-solving mindset.
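
As one concrete example of the kind of practice being recommended, here is a classic algorithm implemented from scratch (binary search is my choice of exercise; the article does not name a specific algorithm):

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1
```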


MeInGame: A deep learning method to create videogame characters that look like real people

#artificialintelligence

In recent years, videogame developers and computer scientists have been trying to devise techniques that can make gaming experiences increasingly immersive, engaging and realistic. These include methods to automatically create videogame characters inspired by real people. Most existing methods to create and customize videogame characters require players to adjust the features of their character's face manually, in order to recreate their own face or the faces of other people. More recently, some developers have tried to develop methods that can automatically customize a character's face by analyzing images of real people's faces. However, these methods are not always effective and do not always reproduce the faces they analyze in realistic ways.


Video Highlights: Deep Learning for Probabilistic Time Series Forecasting - insideBIGDATA

#artificialintelligence

In this Data Science Salon talk, Kashif Rasul, Principal Research Scientist at Zalando, presents some modern probabilistic time series forecasting methods using deep learning. The Data Science Salon is a unique vertically focused conference that has grown into the most diverse community of senior data science, machine learning and other technical specialists in the space.


Deep Learning Chip Market Breaking New Ground and Touching New Levels in the Upcoming Year

#artificialintelligence

Reports And Markets is part of Algoro Research Consultants Pvt. Ltd. and offers premium progressive statistical surveying, market research reports, analysis & forecast data for industries and governments around the globe. Are you mastering your market? Do you know what the market potential is for your product, who the market players are and what the growth forecast is? We offer standard global, regional or country-specific market research studies for almost every market you can imagine.


'Self-trained' deep learning to improve disease diagnosis

#artificialintelligence

New work by computer scientists at Lawrence Livermore National Laboratory (LLNL) and IBM Research on deep learning models that accurately diagnose diseases from X-ray images with less labeled data won the Best Paper award for Computer-Aided Diagnosis at the SPIE Medical Imaging Conference on Feb. 19. The technique, which includes novel regularization and "self-training" strategies, addresses some well-known challenges in the adoption of artificial intelligence (AI) for disease diagnosis, namely the difficulty of obtaining abundant labeled data due to cost, effort or privacy issues, and the inherent sampling biases in the collected data, researchers said. AI algorithms are also not currently able to effectively diagnose conditions that are not sufficiently represented in the training data. LLNL computer scientist Jay Thiagarajan said the team's approach demonstrates that accurate models can be created with limited labeled data and can perform as well as or even better than neural networks trained on much larger labeled datasets. The paper, published by SPIE, included co-authors at IBM Research Almaden in San Jose.
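
As a rough illustration of the general idea behind self-training, here is a generic pseudo-labeling loop using scikit-learn (a common textbook recipe sketched for illustration; it is not the LLNL/IBM model or the regularization strategy described in the paper):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_labeled, y_labeled, X_unlabeled, threshold=0.95, rounds=3):
    """Generic self-training: iteratively add confident pseudo-labels to the training set."""
    model = LogisticRegression(max_iter=1000)
    X, y = X_labeled.copy(), y_labeled.copy()
    for _ in range(rounds):
        model.fit(X, y)
        if len(X_unlabeled) == 0:
            break
        probs = model.predict_proba(X_unlabeled)
        confident = probs.max(axis=1) >= threshold
        if not confident.any():
            break
        # Treat the model's most confident predictions as pseudo-labels
        pseudo_labels = model.classes_[probs[confident].argmax(axis=1)]
        X = np.vstack([X, X_unlabeled[confident]])
        y = np.concatenate([y, pseudo_labels])
        X_unlabeled = X_unlabeled[~confident]
    return model

# Example usage with synthetic data (small labeled set, larger unlabeled pool)
rng = np.random.default_rng(0)
X_lab = rng.normal(size=(50, 5))
y_lab = (X_lab[:, 0] > 0).astype(int)
X_unlab = rng.normal(size=(500, 5))
clf = self_train(X_lab, y_lab, X_unlab)
```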


Robust artificial intelligence tools to predict future cancer

#artificialintelligence

To catch cancer earlier, we need to predict who is going to get it in the future. The complex task of forecasting risk has been aided by artificial intelligence (AI) tools, but the adoption of AI in medicine has been limited by poor performance on new patient populations and by neglect of racial minorities. Two years ago, a team of scientists from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and Jameel Clinic demonstrated a deep learning system to predict cancer risk using just a patient's mammogram. The model showed significant promise and even improved inclusivity: It was equally accurate for both white and Black women, which is especially important given that Black women are 43 percent more likely to die from breast cancer. But to integrate image-based risk models into clinical care and make them widely available, the researchers say the models need both algorithmic improvements and large-scale validation across several hospitals to prove their robustness.