deep learning


BERT for Individual: Tutorial+Baseline

#artificialintelligence

So if you're like me, just starting out in NLP after spending a few months building Computer Vision models as a beginner, then this story surely has something in store for you. BERT is a deep learning model that has given state-of-the-art results on a wide variety of natural language processing tasks. It stands for Bidirectional Encoder Representations from Transformers. It has been pre-trained on Wikipedia and BooksCorpus and requires (only) task-specific fine-tuning. It has caused a stir in the Machine Learning community by presenting state-of-the-art results on a wide variety of NLP tasks, including Question Answering (SQuAD v1.1), Natural Language Inference (MNLI), and others.
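
As a rough illustration of the fine-tuning step the article refers to, the sketch below uses the Hugging Face `transformers` library (my choice, not prescribed by the article); the model name, toy texts, and hyperparameters are placeholders.

```python
# A minimal sketch of task-specific BERT fine-tuning with Hugging Face
# `transformers` (assumed here; the article does not prescribe a toolkit).
from transformers import BertTokenizerFast, BertForSequenceClassification
import torch

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Toy sentiment-style examples standing in for a real labelled dataset.
texts = ["the movie was great", "the plot made no sense"]
labels = torch.tensor([1, 0])

# Tokenize into the input IDs and attention masks BERT expects.
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**batch, labels=labels)   # forward pass returns the loss
outputs.loss.backward()                   # backpropagate through all BERT layers
optimizer.step()                          # one fine-tuning step
```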


Addition and Subtraction using Recurrent Neural Networks.

#artificialintelligence

How does Google know how to translate '今日はどうですか?' to 'How are you doing today?' or vice versa? How do we predict the spread of a disease such as COVID-19 well into the future? How do automatic text generation or text summarization mechanisms work? The answer is Recurrent Neural Networks. RNNs have been the go-to solution for most problems in Natural Language Processing, and not only in NLP but also in bioinformatics, financial forecasting, sequence modelling, and more.
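
To make the article's addition example concrete, here is a minimal character-level RNN sketch in PyTorch (my own framing, not code from the article); the vocabulary, sizes, and `AdderRNN` class are illustrative assumptions, and training is omitted.

```python
# A minimal sketch of the sequence idea behind "Addition and Subtraction using
# RNNs": encode "12+34" character by character and predict the answer's digits.
import torch
import torch.nn as nn

vocab = "0123456789+- "                       # characters the model can see
char_to_idx = {c: i for i, c in enumerate(vocab)}

def encode(s, width=7):
    """Right-pad a string like '12+34' and map it to integer indices."""
    return torch.tensor([char_to_idx[c] for c in s.ljust(width)])

class AdderRNN(nn.Module):
    def __init__(self, hidden=128, answer_len=4):
        super().__init__()
        self.embed = nn.Embedding(len(vocab), 16)
        self.encoder = nn.LSTM(16, hidden, batch_first=True)
        # One classification head per output digit position.
        self.head = nn.Linear(hidden, answer_len * len(vocab))
        self.answer_len = answer_len

    def forward(self, x):
        _, (h, _) = self.encoder(self.embed(x))   # summary of the whole input
        logits = self.head(h[-1])                  # predict all answer characters
        return logits.view(-1, self.answer_len, len(vocab))

model = AdderRNN()
batch = torch.stack([encode("12+34"), encode("70-8")])
print(model(batch).shape)                          # (2, 4, len(vocab))
```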


Supervised vs Unsupervised & Discriminative vs Generative

#artificialintelligence

Highlights: GANs and classical deep learning methods (classification, object detection) are similar in some respects, but they are also fundamentally different in nature. Reviewing their properties will be the topic of this post. Therefore, before we proceed further with the GANs series, it will be useful to refresh and recap what supervised and unsupervised learning are. In addition, we will explain the difference between discriminative and generative models. Finally, we will introduce latent variables, since they are an important concept in GANs.
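
A small sketch of the discriminative-vs-generative contrast the post promises to explain, using scikit-learn (my own illustration, not code from the post): logistic regression learns p(y | x) directly, while Gaussian Naive Bayes models p(x | y) and p(y) and applies Bayes' rule.

```python
# Discriminative vs generative classifiers on the same synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression   # discriminative
from sklearn.naive_bayes import GaussianNB             # generative

X, y = make_classification(n_samples=500, n_features=4, random_state=0)

discriminative = LogisticRegression().fit(X, y)   # learns the decision boundary only
generative = GaussianNB().fit(X, y)                # fits per-class Gaussians over X

print(discriminative.predict_proba(X[:3]))  # p(y | x) learned directly
print(generative.predict_proba(X[:3]))      # p(y | x) derived via Bayes' rule
```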


Deepmind Researchers Propose 'ReLICv2': Pushing The Limits of Self-Supervised ResNets

#artificialintelligence

Supervised learning architectures generally require a massive amount of labeled data, and acquiring this vast amount of high-quality labeled data can be a very costly and time-consuming task. The main idea behind self-supervised methods in deep learning is to learn patterns from a given set of unlabelled data and then fine-tune the model with a small amount of labeled data. Self-supervised learning with residual networks has recently made progress, but such models still underperform supervised residual network models by a large margin on ImageNet classification benchmarks. This performance gap has so far limited the use of self-supervised models in performance-critical scenarios.
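
For intuition, here is a condensed sketch of a contrastive self-supervised objective (InfoNCE-style) in PyTorch. It is in the general spirit of self-supervised representation learning, not DeepMind's exact ReLICv2 formulation, and the toy encoder and shapes are placeholders.

```python
# Contrastive loss on two augmented views of the same unlabelled images.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: embeddings of two augmented views of the same batch, shape (N, D)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature        # similarity of every pair
    targets = torch.arange(z1.size(0))        # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Placeholder encoder; in practice this would be a ResNet backbone.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
view1 = torch.randn(8, 3, 32, 32)             # e.g. random crops / colour jitter
view2 = torch.randn(8, 3, 32, 32)
loss = info_nce(encoder(view1), encoder(view2))
loss.backward()                                # trains the encoder with no labels at all
```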


TinyML is bringing deep learning models to microcontrollers

#artificialintelligence

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence. Deep learning models owe their initial success to large servers with large amounts of memory and clusters of GPUs. The promise of deep learning gave rise to an entire industry of cloud computing services for deep neural networks. Consequently, very large neural networks running on virtually unlimited cloud resources became very popular, especially among wealthy tech companies that can foot the bill. At the same time, recent years have also seen a reverse trend: a concerted effort to create machine learning models for edge devices.
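
The typical TinyML workflow the article alludes to looks roughly like the sketch below: define a deliberately small Keras model, then shrink it with TensorFlow Lite post-training quantization so it can fit on a microcontroller. The model shape and sizes here are illustrative assumptions, not a specific recipe from the article.

```python
# Shrinking a small Keras model for microcontroller deployment with TFLite.
import tensorflow as tf

# A deliberately tiny model, e.g. for keyword spotting or sensor data
# (in practice it would be trained before conversion).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(49, 10)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # post-training quantization
tflite_bytes = converter.convert()

# The resulting byte array is what gets compiled into firmware,
# e.g. as a C array consumed by TensorFlow Lite Micro.
print(f"model size: {len(tflite_bytes) / 1024:.1f} KB")
```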


Insurance 2030--The impact of AI on the future of insurance

#artificialintelligence

Welcome to the future of insurance, as seen through the eyes of Scott, a customer in the year 2030. Upon hopping into the arriving car, Scott decides he wants to drive today and moves the car into "active" mode. Scott's personal assistant maps out a potential route and shares it with his mobility insurer, which immediately responds with an alternate route that has a much lower likelihood of accidents and auto damage as well as the calculated adjustment to his monthly premium. Scott's assistant notifies him that his mobility insurance premium will increase by 4 to 8 percent based on the route he selects and the volume and distribution of other cars on the road. It also alerts him that his life insurance policy, which is now priced on a "pay-as-you-live" basis, will increase by 2 percent for this quarter. The additional amounts are automatically debited from his bank account. When Scott pulls into his destination's parking lot, his car bumps into one of several parking signs.


How to Regulate Artificial Intelligence the Right Way: State of AI and Ethical Issues

#artificialintelligence

It is critical for governments, leaders, and decision makers to develop a firm understanding of the fundamental differences between artificial intelligence, machine learning, and deep learning. Artificial intelligence (AI) applies to computing systems designed to perform tasks usually reserved for human intelligence using logic, if-then rules, and decision trees. AI recognizes patterns from vast amounts of quality data, providing insights, predicting outcomes, and making complex decisions. Machine learning (ML) is a subset of AI that utilises advanced statistical techniques to enable computing systems to improve at tasks with experience over time. Voice assistants like Amazon's Alexa and Apple's Siri improve every year thanks to constant use by consumers coupled with the machine learning that takes place in the background.
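
As a toy illustration of the "if-then rules and decision trees" framing (my own example, not from the article), a decision tree learns explicit rules from labelled data, and retraining on more data is how such a system improves with experience. The features and labels below are hypothetical.

```python
# A decision tree turns labelled examples into explicit if-then rules.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical records: [income_in_thousands, has_prior_default]
X = [[20, 1], [35, 0], [60, 0], [80, 1], [95, 0]]
y = [0, 0, 1, 1, 1]   # 1 = approve

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["income_k", "has_prior_default"]))
print(tree.predict([[50, 0]]))   # prediction for a new, unseen case
```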


5 Data Science Trends in the Next 5 Years

#artificialintelligence

This field is large enough that it is impossible to deeply cover everything that could happen in it over the coming 5 years. Important trends that I foresee but won't cover here include specific applications of Data Science in particular domains, the integration of low-code/no-code tools into the tech stack, and other narrowly-focused insights. This post focuses on the general, broad themes of change that I see coming to stay in the next half-decade. It isn't an exhaustive list, but it does cover a lot of the issues faced in practice today. The title of Data Scientist has been a big issue for many in the industry, mainly because of the ambiguity around what the role entails and what the company actually needs. Although I believe job descriptions have largely become clearer and more concise, job profiles are only just starting to become standardized.


How ConvNets found a way to survive the Transformers invasion in computer vision

#artificialintelligence

Traditionally, Convolutional Neural Networks (CNNs) have been the preferred choice for computer vision tasks. CNNs, composed of layers of artificial neurons, compute weighted sums of their inputs to produce outputs in the form of activation values. In computer vision applications, CNNs take pixel values as input and output various visual features. Indubitably, the arrival of AlexNet was the apogee of the CNN movement: it became the reference CNN-based architecture for image recognition tasks in the computer vision field.
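
A minimal sketch (my own, not from the article) of the mechanism described above: convolutional layers take raw pixel values and compute weighted sums that become activation maps of visual features.

```python
# A tiny CNN: pixel values in, class scores out.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # weighted sums over 3x3 pixel patches
    nn.ReLU(),                                    # activation values
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # higher-level visual features
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),                            # e.g. 10 object classes
)

image_batch = torch.randn(4, 3, 224, 224)         # four RGB images of pixel values
print(cnn(image_batch).shape)                     # torch.Size([4, 10])
```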