osforscience/deep-learning-ocean

#artificialintelligence

The purpose of this project is to give developers and researchers a shortcut to useful resources about Deep Learning. There are several motivations behind this open source project. Other repositories cover similar ground and are comprehensive and useful; to be honest, they made me wonder whether this repository is necessary at all! The point of this repository is that its resources are targeted: they are organized so that users can easily find exactly what they are looking for.


School Yourself on NLP, Machine Learning & Deep Learning

#artificialintelligence

Advanced concepts in NLP with lectures from the Fall 2020 offering of CS 685 (advanced natural language processing) at UMass Amherst. All slides / notes / notebooks for each lecture are linked in the course description.


Top 7 Artificial Intelligence Breakthroughs We Saw In 2019

#artificialintelligence

Over the years, artificial intelligence has amazed everyone with numerous breakthroughs, and this year was no different. Throughout the year, we witnessed awe-inspiring innovations in reinforcement learning, neural networks, and other areas. Tech companies from across the world benchmarked various leaps in artificial intelligence, further dispelling the doubts people had about achieving true AI. As a chronicler of technological progress in analytics, artificial intelligence, data science, big data, and beyond, Analytics India Magazine was on top of every jaw-dropping development. We bring you the top 7 amazing AI advancements that changed the world forever.


How deep learning can help scientific research

#artificialintelligence

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence. Whether we take it for granted or not, deep learning algorithms have become an inseparable part of our daily lives. Personalized feeds, face and voice recognition, web search, smart speakers, digital assistants, email, and many other applications that we can't part ways with use deep learning algorithms under the hood. But how effective is deep learning in scientific research, where problems are often much more complex than classifying an image and requirements are much more sensitive than recommending what to buy next? To answer this question, former Google CEO Eric Schmidt and Google AI researcher Maithra Raghu have put together a comprehensive guide on the different deep learning techniques and their application to scientific research.


The Rise of Meta Learning

#artificialintelligence

Meta-Learning is the abstraction of designing higher-level components associated with training Deep Neural Networks. The term "Meta-Learning" appears frequently in the Deep Learning literature in connection with "AutoML", "Few-Shot Learning", and "Neural Architecture Search", the automated design of neural network architectures. Having emerged from comically titled papers such as "Learning to learn by gradient descent by gradient descent", the idea has matured, as the success of OpenAI's Rubik's Cube robotic hand demonstrates. Meta-Learning is the most promising paradigm for advancing the state of the art in Deep Learning and Artificial Intelligence. OpenAI set the AI world on fire by demonstrating the ground-breaking capabilities of a robotic hand trained with Reinforcement Learning.
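
To make the "learning to learn" idea concrete, here is a minimal, self-contained sketch of a MAML-style bi-level training loop on toy 1-D linear-regression tasks. It is purely illustrative and not the method behind any of the systems mentioned above; the task distribution, learning rates, and analytic gradients are all assumptions chosen only to keep the example short and runnable.

# Illustrative MAML-style meta-learning sketch (assumed toy setup, not from the article):
# meta-learn an initialization theta for 1-D linear-regression tasks y = w * x,
# where each task draws its own slope w. Gradients are computed analytically.
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.01   # inner-loop (per-task adaptation) learning rate -- assumed value
beta = 0.01    # outer-loop (meta) learning rate -- assumed value
theta = 1.5    # meta-learned initialization of the single model parameter

def task_batch(w, n=20):
    """Sample inputs and targets for one task with ground-truth slope w."""
    x = rng.uniform(-1.0, 1.0, n)
    return x, w * x

for step in range(2000):
    w = rng.uniform(-2.0, 2.0)          # sample a task (its true slope)
    x, y = task_batch(w)
    m2 = np.mean(x ** 2)

    # Inner loop: one gradient step on this task's loss, starting from theta.
    # Loss L(p) = mean((p*x - y)^2), so dL/dp = 2*m2*(p - w).
    grad_inner = 2.0 * m2 * (theta - w)
    theta_adapted = theta - alpha * grad_inner

    # Outer loop: differentiate the post-adaptation loss through the inner step.
    # d/dtheta L(theta_adapted) = 2*m2*(theta_adapted - w) * (1 - 2*alpha*m2)
    grad_outer = 2.0 * m2 * (theta_adapted - w) * (1.0 - 2.0 * alpha * m2)
    theta -= beta * grad_outer

# theta converges near 0, the mean of the task slopes, which is the
# initialization from which one adaptation step works best on average.
print("meta-learned initialization:", theta)

The point of the sketch is the nested structure: an inner update adapts to a single task, and the outer update improves the shared starting point by differentiating through that adaptation, which is the "learning to learn by gradient descent by gradient descent" idea in miniature.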