Deep Learning


Deep Learning with TensorFlow 2.0 [2019]

#artificialintelligence

But what is that one special thing they have in common? They are all masters of deep learning. We often hear about AI, or self-driving cars, or the 'algorithmic magic' at Google, Facebook, and Amazon. But it is not magic - it is deep learning. And more specifically, it is usually deep neural networks – the one algorithm to rule them all.
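For readers who want something concrete, here is a minimal sketch of the kind of deep neural network the course is about, written against the TensorFlow 2.0 Keras API. The dataset and layer sizes are my own illustrative choices, not material from the course.

    # Minimal deep neural network in TensorFlow 2.0 (illustrative sketch)
    import tensorflow as tf

    # MNIST stands in here for any real classification task.
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

    # A plain feed-forward "deep" network: stacked fully connected layers.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))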


Deep Neural Networks Improve Radiologists' Performance in Breast Cancer Screening

#artificialintelligence

Breast cancer is the second leading cancer-related cause of death among women in the US. Early detection, through routine annual screening mammography, is the best first line of defense against breast cancer. However, these screening mammograms require interpretation by expert radiologists. A radiologist can spend up to 10 hours a day working through these mammograms, experiencing both eye strain and mental fatigue in the process. Modern computer vision models, built principally on Convolutional Neural Networks (CNNs), have seen incredible progress in recent years.
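To make the CNN reference concrete, here is a minimal sketch of a binary image classifier in Keras. It is not the screening model from the paper; the 256x256 grayscale input shape and the benign-vs-suspicious label are assumptions made purely for illustration.

    # Illustrative CNN for binary image classification (not the paper's model)
    import tensorflow as tf
    from tensorflow.keras import layers

    # Assumed input: 256x256 grayscale patches; real mammograms are far larger.
    model = tf.keras.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(256, 256, 1)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(1, activation="sigmoid"),  # hypothetical benign/suspicious label
    ])

    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    model.summary()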


Text Analytics with Python: A Practitioner's Guide to Natural Language Processing, by Dipanjan Sarkar (ISBN 9781484243534)

#artificialintelligence

Leverage Natural Language Processing (NLP) in Python and learn how to set up your own robust environment for performing text analytics. The second edition of this book will show you how to use the latest state-of-the-art frameworks in NLP, coupled with machine learning and deep learning, to solve real-world case studies leveraging the power of Python. This edition has gone through a major revamp, introducing several major changes and new topics based on recent trends in NLP. We have a dedicated chapter on Python for NLP, covering the fundamentals of working with strings and text data and introducing the current state-of-the-art open-source frameworks in NLP. We have a dedicated chapter on feature engineering representation methods for text data, including both traditional statistical models and newer deep-learning-based embedding models.
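As a concrete taste of the "traditional statistical" side of text feature engineering that the book contrasts with embedding models, here is a minimal TF-IDF sketch using scikit-learn; the toy corpus is invented for illustration and is not from the book.

    # Traditional statistical text features: TF-IDF with scikit-learn
    from sklearn.feature_extraction.text import TfidfVectorizer

    corpus = [  # toy corpus, purely illustrative
        "Natural language processing with Python is fun.",
        "Deep learning models learn dense text embeddings.",
        "TF-IDF weighs terms by in-document frequency and corpus rarity.",
    ]

    vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
    X = vectorizer.fit_transform(corpus)  # sparse document-term matrix

    print(X.shape)  # (3 documents, N distinct terms)
    print(vectorizer.get_feature_names_out())  # requires scikit-learn >= 1.0

A deep-learning embedding approach would replace this sparse document-term matrix with dense, learned vectors per word or document.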


ABCI Adopts NGC for Easy Access to Deep Learning Frameworks | NVIDIA Blog

#artificialintelligence

From discovering drugs, to locating black holes, to finding safer nuclear energy sources, high performance computing systems around the world have enabled breakthroughs across all scientific domains. Japan's fastest supercomputer, ABCI, powered by NVIDIA Tensor Core GPUs, enables similar breakthroughs by taking advantage of AI. The system is the world's first large-scale, open AI infrastructure serving researchers, engineers and industrial users to advance their science. The software used to drive these advances is as critical as the servers the software runs on. However, installing an application on an HPC cluster is complex and time-consuming.


NVIDIA Builds Supercomputer to Build Self-Driving Cars | NVIDIA Blog

#artificialintelligence

In a clear demonstration of why AI leadership demands the best compute capabilities, NVIDIA today unveiled the world's 22nd fastest supercomputer -- DGX SuperPOD -- which provides AI infrastructure that meets the massive demands of the company's autonomous-vehicle deployment program. The system was built in just three weeks with 96 NVIDIA DGX-2H supercomputers and Mellanox interconnect technology. Delivering 9.4 petaflops of processing capability, it has the muscle for training the vast number of deep neural networks required for safe self-driving vehicles. Customers can buy this system in whole or in part from any DGX-2 partner based on our DGX SuperPOD design. AI training of self-driving cars is the ultimate compute-intensive challenge.


rasbt/python-machine-learning-book-2nd-edition

#artificialintelligence

Helpful installation and setup instructions can be found in the README.md. To access the code materials for a given chapter, simply click on the open dir links next to the chapter headlines to navigate to the chapter subdirectories located in the code/ subdirectory. You can also click on the ipynb links below to open and view the Jupyter notebook of each chapter directly on GitHub. In addition, the code/ subdirectories also contain .py script files. However, I highly recommend working with the Jupyter notebook if possible in your computing environment.


Why we should focus on weak artificial intelligence for the moment

#artificialintelligence

Every few decades, a technological development leads us to believe that artificial general intelligence (aka strong AI), the brand of AI that can think and decide like humans, is just around the corner. The excitement that follows is accompanied by fears of a dystopian near future and an arms race between companies and states to be the first to create general AI. However, every time we thought we were closing in on strong AI, we have been disappointed. Every time, we spent a lot of time, resources, money and the energy of our most brilliant scientists on accomplishing something that turned out to be a pipe dream. And every time, what ensued was a period of disappointment and disinterest in the field, which lasted decades.


Get a grip on neural networks, R, Python, TensorFlow, deployment of AI, and much more, at our MCubed workshops

#artificialintelligence

Event You know that you could achieve great things if only you had time to get to grips with TensorFlow, or mine a vast pile of text, or simply introduce machine learning into your existing workflow. That's why at our artificial-intelligence conference MCubed, which runs from September 30 to October 2, we have a quartet of all-day workshops that will take you deep into key technologies and show you how to apply them in your own organisation. Prof Mark Whitehorn and Kate Kilgour will dive deep into machine learning and neural networks, from perceptrons through convolutional neural networks (CNNs) and autoencoders to generative adversarial networks. If you want to get more specific, Oliver Zeigermann returns to MCubed with his workshop on Deep Learning with TensorFlow 2. This session will cover neural networks, CNNs and recurrent neural networks, using TensorFlow 2 and Python to show you how to develop and train your own networks. One problem many of us face is making sense of a mountain of text.
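As a small taste of one workshop topic, here is a minimal autoencoder sketch in TensorFlow 2/Keras; the 784-dimensional input (a flattened 28x28 image) and the 32-unit bottleneck are arbitrary illustrative choices, not workshop material.

    # Minimal autoencoder sketch in TensorFlow 2 / Keras
    import tensorflow as tf
    from tensorflow.keras import layers

    inputs = tf.keras.Input(shape=(784,))  # e.g. a flattened 28x28 image
    encoded = layers.Dense(32, activation="relu")(inputs)        # bottleneck code
    decoded = layers.Dense(784, activation="sigmoid")(encoded)   # reconstruction

    autoencoder = tf.keras.Model(inputs, decoded)
    autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

    # An autoencoder is trained to reproduce its own input:
    # autoencoder.fit(x, x, epochs=...)
    autoencoder.summary()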


Advanced Topics in Deep Convolutional Neural Networks

#artificialintelligence

Throughout this article, I will discuss some of the more complex aspects of convolutional neural networks and how they relate to specific tasks such as object detection and facial recognition. This article is a natural extension of my article titled Simple Introductions to Neural Networks. I recommend reading it before tackling the rest of this article if you are not well-versed in the idea and function of convolutional neural networks. Due to the excessive length of the original article, I have decided to leave out several topics related to object detection and facial recognition systems, as well as some of the more esoteric network architectures and practices currently being trialed in the research literature. I will likely discuss these in a future article related more specifically to the application of deep learning for computer vision.


r/MachineLearning - [P] Clickstream based user intent prediction with LSTMs and CNNs

#artificialintelligence

I also did some experimentation with GRUs and LSTMs in an NLP context, where I saw LSTMs perform better than GRUs, though they need more training time. Honestly, I never tried fully variable-length sequences, because of the restriction that each batch must be the same length, and because some layers are not usable if you have variable-length sequences. I don't think the difference would be huge, at least on my data. I experimented with different sequence lengths (100, 200, 250, 400, 500), and 400 and 500 performed no better than 250. I did indeed achieve a noticeable performance improvement by using embeddings instead of one-hot encoding.
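A minimal sketch of the setup the poster describes, assuming a Keras-style pipeline (no framework is named in the post): integer-encoded clickstream events are padded to one fixed length per batch, and an Embedding layer with masking replaces one-hot encoding. The vocabulary size, the toy sequences, and the binary intent label are my assumptions.

    # Fixed-length padded sequences + embeddings instead of one-hot encoding
    import tensorflow as tf
    from tensorflow.keras import layers

    MAX_LEN = 250   # the sequence length the poster found sufficient
    VOCAB = 1000    # assumed number of distinct clickstream events (ids 1..999)

    # Pad/truncate variable-length event sequences to one fixed length.
    seqs = [[12, 7, 93], [5, 42, 42, 8, 19]]  # toy integer-encoded clickstreams
    x = tf.keras.preprocessing.sequence.pad_sequences(seqs, maxlen=MAX_LEN)

    model = tf.keras.Sequential([
        # mask_zero=True reserves id 0 for padding and lets the LSTM skip it.
        layers.Embedding(input_dim=VOCAB, output_dim=64, mask_zero=True),
        layers.LSTM(128),
        layers.Dense(1, activation="sigmoid"),  # e.g. a binary intent label
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

    print(model(x).shape)  # forward pass on the padded batch -> (2, 1)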