The Dark Secret at the Heart of AI

#artificialintelligence

The car's underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries. But this won't happen--or shouldn't happen--unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur--and it's inevitable they will. That's one reason Nvidia's car is still experimental.


The case against investing in machine learning: Seven reasons not to and what to do instead

#artificialintelligence

The word on the street is that if you don't invest in ML as a company or become an ML specialist, the industry will leave you behind. The hype has caught on at all levels, catching everyone from undergrads to VCs. Words like "revolutionary," "innovative," "disruptive," and "lucrative" are frequently used to describe ML. Allow me to share some perspective from my experiences that will hopefully temper this enthusiasm, at least a tiny bit. This essay materialized from having the same conversation several times over with interlocutors who hope ML can unlock a bright future for them. I'm here to convince you that investing in an ML department or ML specialists might not be in your best interest. That is not always true, of course, so read this with a critical eye. The names invoke a sense of extraordinary success, and for good reason. Yet these companies dominated their industries before Andrew Ng launched his first ML lectures on Coursera. The difference between "good enough" and "state-of-the-art" machine learning is significant in academic publications but not in the real world. About once or twice a year, something pops into my newsfeed informing me that someone improved the top-1 ImageNet accuracy from 86% to 87% or so. Our community enshrines state-of-the-art with almost religious significance, so this score's systematic improvement creates an impression that our field is racing towards unlocking the singularity. No one outside of academia cares if you can distinguish between a guitar and a ukulele 1% better. Sit back and think for a minute.


Advancing Artificial Intelligence Research - Liwaiwai

#artificialintelligence

As part of a new collaboration to advance and support AI research, the MIT Stephen A. Schwarzman College of Computing and the Defense Science and Technology Agency in Singapore are awarding funding to 13 projects led by researchers within the college that target one or more of the following themes: trustworthy AI, enhancing human cognition in complex environments, and AI for everyone. The 13 research projects selected are highlighted below. Emerging machine learning technology has the potential to significantly help with, and even fully automate, many tasks that until now have been entrusted only to humans. Leveraging recent advances in realistic graphics rendering, data modeling, and inference, Madry's team is building a radically new toolbox to fuel streamlined development and deployment of trustworthy machine learning solutions. In natural language technologies, most of the world's languages lack rich annotation.


Classify text with BERT

#artificialintelligence

This tutorial contains complete code to fine-tune BERT to perform sentiment analysis on a dataset of plain-text IMDB movie reviews. In addition to training a model, you will learn how to preprocess text into an appropriate format. If you're new to working with the IMDB dataset, please see Basic text classification for more details. BERT and other Transformer encoder architectures have been wildly successful on a variety of tasks in NLP (natural language processing). They compute vector-space representations of natural language that are suitable for use in deep learning models.
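Before fine-tuning, BERT expects each review to be converted into fixed-length token ids plus an attention mask. The toy encoder below illustrates that input format only; the vocabulary and ids are made up for the example, whereas real BERT uses a WordPiece vocabulary of roughly 30,000 entries shipped with the pretrained checkpoint (the tutorial's preprocessing model handles this for you).

```python
# Toy illustration of the input format a BERT encoder expects:
# token ids bracketed by [CLS]/[SEP], padded to a fixed length,
# plus an attention mask marking which positions are real tokens.
# Vocabulary and ids are illustrative, not BERT's actual WordPiece table.

VOCAB = {"[PAD]": 0, "[UNK]": 100, "[CLS]": 101, "[SEP]": 102,
         "this": 7, "movie": 8, "was": 9, "great": 10, "awful": 11}

def encode(text, max_len=8):
    tokens = ["[CLS]"] + text.lower().split() + ["[SEP]"]
    ids = [VOCAB.get(t, VOCAB["[UNK]"]) for t in tokens][:max_len]
    mask = [1] * len(ids)
    pad = max_len - len(ids)
    return ids + [0] * pad, mask + [0] * pad

input_ids, attention_mask = encode("this movie was great")
print(input_ids)       # [101, 7, 8, 9, 10, 102, 0, 0]
print(attention_mask)  # [1, 1, 1, 1, 1, 1, 0, 0]
```

The mask lets the encoder ignore padding positions, which is why both tensors must always be produced together.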


Vision-based fire detection facilities work better under new deep learning model

#artificialintelligence

Fast and accurate fire detection is vital to the sustainable development of human society and Earth's ecology. Objects with visual characteristics similar to fire increase the difficulty of vision-based fire detection, and improving detection accuracy by extracting deeper visual features of fire remains challenging. Recently, researchers from the Institute of Acoustics of the Chinese Academy of Sciences (IACAS) proposed an efficient deep learning model for fast and accurate vision-based fire detection. The model is based on multiscale feature extraction, implicit deep supervision, and a channel attention mechanism. The researchers used images acquired in real time as the model's input and normalized each image.
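The article does not publish the IACAS model, but the channel attention idea it cites can be sketched in the squeeze-and-excitation style: summarize each channel by its global average (squeeze), pass the summaries through a gate, and rescale each channel by its gate value (excitation). The sigmoid over centered averages below is a stand-in for the learned two-layer bottleneck a real network would use.

```python
import math

# Channel attention sketch: a C x H x W feature map (nested lists) is
# reweighted per channel. Strongly activated channels (e.g. fire-like
# regions) get gates near 1; weak channels are suppressed toward 0.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(fmap):
    # Squeeze: one global-average statistic per channel.
    means = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
             for ch in fmap]
    mu = sum(means) / len(means)                # center the statistics
    # Excitation: per-channel gate in (0, 1); a trained model would
    # compute this with a small fully connected bottleneck instead.
    weights = [sigmoid(m - mu) for m in means]
    scaled = [[[v * w for v in row] for row in ch]
              for ch, w in zip(fmap, weights)]
    return scaled, weights

fmap = [[[1.0, 1.0], [1.0, 1.0]],   # low-activation channel
        [[3.0, 3.0], [3.0, 3.0]]]   # high-activation channel
scaled, weights = channel_attention(fmap)
print(weights)  # the second channel receives the larger gate
```

The point of the mechanism is exactly this reweighting: channels whose features correlate with fire are amplified relative to distractor channels.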


Building a One-shot Learning Network with PyTorch

#artificialintelligence

Deep learning has been quite popular for image recognition and classification tasks in recent years due to its high performance. However, traditional deep learning approaches usually require a large training dataset to distinguish even a few classes, which is drastically different from how humans can learn from very few examples. Few-shot or one-shot learning is a classification problem that aims to categorize objects given only a limited number of samples, with the ultimate goal of creating a more human-like learning algorithm. In this article, we will dive into the deep learning approach to solving the one-shot learning problem using a special network structure: the Siamese network. We will build the network using PyTorch, test it on the Omniglot handwritten character dataset, and perform several experiments to compare the results of different network structures and hyperparameters, using a one-shot learning evaluation metric.
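The one-shot evaluation step the blurb mentions can be sketched independently of the network: a trained Siamese encoder maps images to embeddings, and a query is assigned the label of the single support example whose embedding is closest (L1 distance is a common choice in Siamese one-shot work). The embeddings below are made-up stand-ins for the output of a trained PyTorch encoder.

```python
# One-shot classification sketch: given exactly one labeled embedding
# per class (the "support set"), classify a query embedding by nearest
# L1 distance. Embedding values are illustrative placeholders.

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def one_shot_classify(query, support):
    # support: {label: embedding}, one example per class
    return min(support, key=lambda label: l1(query, support[label]))

support = {"alpha": [0.9, 0.1, 0.0],
           "beta":  [0.1, 0.8, 0.1],
           "gamma": [0.0, 0.1, 0.9]}
print(one_shot_classify([0.8, 0.2, 0.1], support))  # alpha
```

N-way one-shot accuracy, the metric used on Omniglot, is simply the fraction of queries this procedure labels correctly over many randomly drawn support sets.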


Machine Learning & Deep Learning in Python & R

#artificialintelligence

Machine Learning & Deep Learning in Python & R covers Regression, Decision Trees, SVM, Neural Networks, CNN, Time Series Forecasting, and more, using both Python & R. Created by Start-Tech Academy. You're looking for a complete Machine Learning and Deep Learning course that can help you launch a flourishing career in the field of Data Science & Machine Learning, right? You've found the right Machine Learning course! After completing this course you will be able to: · Confidently build predictive Machine Learning and Deep Learning models to solve business problems and create business strategy · Answer Machine Learning-related interview questions · Participate and perform in online Data Analytics competitions such as Kaggle competitions. Check out the table of contents below to see which Machine Learning and Deep Learning models you are going to learn. How will this course help you? A Verifiable Certificate of Completion is presented to all students who undertake this Machine Learning basics course.


NLP 101: Towards Natural Language Processing

#artificialintelligence

Under the umbrella of data science fields, natural language processing (NLP) is one of the best-known and most important subfields. Natural language processing is a computer science field that gives computers the ability to understand human -- natural -- languages. Although the field has gained a lot of traction recently, it is -- in fact -- as old as computers themselves. However, advances in technology and computing power have led to incredible progress in NLP. Today, speech technologies are becoming as prominent as written-text technologies.


Transfer Learning: A Shortcut for Training Deep Learning Models

#artificialintelligence

When should you use transfer learning? In this approach, the last few fully connected layers of the pre-trained model are removed and replaced with a shallow neural network. The layers of the pre-trained model are frozen, and only the shallow neural network is trained on the available target dataset. The features extracted by the pre-trained model help the shallow network learn and perform well on the target task. The benefit of this approach is a low chance of overfitting, as we are only training the last few layers of the model while keeping the initial layers fixed.
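The freeze-and-replace recipe can be sketched in a framework-agnostic way: drop the pretrained head, mark the remaining layers as frozen, attach a small new head, and verify that only the new head's parameters would receive gradient updates. Layer names and parameter counts below are illustrative, not from a real model; in PyTorch the freezing step corresponds to setting `requires_grad = False` on each backbone parameter.

```python
# Framework-agnostic sketch of transfer learning by feature extraction.
# Layer names and parameter counts are made up for illustration.

class Layer:
    def __init__(self, name, n_params, trainable=True):
        self.name, self.n_params, self.trainable = name, n_params, trainable

pretrained = [Layer("conv1", 9_408), Layer("conv2", 73_728),
              Layer("conv3", 294_912), Layer("old_head", 512_000)]

# 1. Remove the original classification head.
backbone = [l for l in pretrained if l.name != "old_head"]
# 2. Freeze everything kept from the pretrained model.
for layer in backbone:
    layer.trainable = False   # in PyTorch: param.requires_grad = False
# 3. Attach a shallow head trained from scratch on the target dataset.
model = backbone + [Layer("new_head", 10_250, trainable=True)]

trainable = sum(l.n_params for l in model if l.trainable)
print(trainable)  # 10250 -- only the new head is updated
```

Because only ~10k of the ~388k parameters are trainable here, the optimizer touches a tiny slice of the model, which is exactly why overfitting risk stays low on a small target dataset.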


Adversarial Examples in Deep Learning – A Primer - KDnuggets

#artificialintelligence

We have seen the advent of state-of-the-art (SOTA) deep learning models for computer vision ever since we started getting bigger and better compute (GPUs and TPUs), more data (ImageNet, etc.), and easy-to-use open-source software and tools (TensorFlow and PyTorch). Every year (and now every few months!) we see the next SOTA deep learning model dethrone the previous one in terms of top-k accuracy on benchmark datasets. The following figure depicts some of the latest SOTA deep learning vision models (and omits some, like Google's BigTransfer!). However, most of these SOTA deep learning models are brought to their knees when they try to make predictions on a specific class of images, called adversarial images. An adversarial example can be a natural image or a synthetically crafted one.
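The best-known recipe for crafting synthetic adversarial examples is the fast gradient sign method (FGSM): perturb the input in the direction that increases the loss, x_adv = x + eps * sign(dL/dx). The sketch below applies it to a one-feature logistic model, where the input gradient of the cross-entropy loss has the closed form (p - y) * w; the weight, input, and eps values are illustrative.

```python
import math

# FGSM sketch on a one-feature logistic model p = sigmoid(w * x).
# For cross-entropy loss with true label y, d(loss)/dx = (p - y) * w,
# so the attack x_adv = x + eps * sign(grad) fits in a few lines.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, eps):
    p = sigmoid(w * x)
    grad_x = (p - y) * w               # input gradient of the loss
    sign = 1.0 if grad_x > 0 else -1.0
    return x + eps * sign

w, x, y = 2.0, 1.0, 1.0                # model confidently correct: p ~ 0.88
x_adv = fgsm(x, y, w, eps=0.5)
print(sigmoid(w * x), sigmoid(w * x_adv))  # confidence drops after the attack
```

The same one-line update, applied pixel-wise to an image with a small eps, is what produces perturbations that are imperceptible to humans yet flip a SOTA classifier's prediction.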