Model Compression via Pruning

#artificialintelligence

To obtain fast and accurate inference on edge devices, a model has to be optimized for real-time inference. Fine-tuned state-of-the-art models like VGG16/19 and ResNet50 have 138 million and 23 million parameters respectively, and inference is often expensive on resource-constrained devices. Previously I've talked about one model compression technique called "knowledge distillation", which uses a smaller student network to mimic the performance of a larger teacher network (the student and teacher networks have different architectures). Today, the focus will be on "pruning", a model compression technique that allows us to compress the model to a smaller size with zero or marginal loss of accuracy. In short, pruning eliminates the weights with low magnitude (those that do not contribute much to the final model performance).
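
As a minimal sketch of the idea (not the article's own code), the snippet below uses PyTorch's built-in magnitude-pruning utilities to zero out the 40% smallest-magnitude weights in each linear layer; the layer sizes and the sparsity target are illustrative assumptions.

```python
# Minimal sketch of magnitude-based (unstructured) pruning in PyTorch.
# Layer sizes and the 40% sparsity target are illustrative, not from the article.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Zero out the 40% of weights with the smallest absolute value in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.4)
        prune.remove(module, "weight")  # bake the mask into the weight tensor

# Report the resulting overall sparsity.
total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"Sparsity: {zeros / total:.1%}")
```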


Artificial Intelligence Course

#artificialintelligence

The short answer to "What is Artificial Intelligence?" is that it depends on who you ask. A layman with a fleeting understanding of technology would link it to robots; they'd say Artificial Intelligence is a Terminator-like figure that can act and think on its own. An AI researcher would say that it's a set of algorithms that can produce results without having to be explicitly instructed to do so. And they would all be right. AI courses at Great Learning provide you with an overview of the current implementation scenario in various industries. With an in-depth introduction to artificial intelligence, the course lets you easily master the basics for a better future.


Self-improving Chatbots based on Deep Reinforcement Learning

#artificialintelligence

We present a Reinforcement Learning (RL) model for self-improving chatbots, specifically targeting FAQ-type chatbots. The model is not aimed at building a dialog system from scratch, but at leveraging data from user conversations to improve chatbot performance. At the core of our approach is a score model, which is trained to score chatbot utterance-response tuples based on user feedback. The scores predicted by this model are used as rewards for the RL agent. Policy learning takes place offline, thanks to a user simulator which is fed with utterances from the FAQ database.
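
The sketch below illustrates that loop under simplifying assumptions: a simulator replays FAQ utterances, a softmax policy picks a response, and a stubbed score model supplies the reward for a REINFORCE-style update. All names, dimensions, and the stubbed scorer are illustrative, not taken from the paper.

```python
# Hedged sketch: score-model predictions as rewards for an offline RL policy update.
import numpy as np

rng = np.random.default_rng(0)
n_utterances, n_responses, dim = 50, 20, 16

utterance_vecs = rng.normal(size=(n_utterances, dim))  # stand-in utterance embeddings
policy_weights = np.zeros((dim, n_responses))          # linear softmax policy

def score_model(utterance_id: int, response_id: int) -> float:
    """Stub for the score model trained on user feedback; returns a reward in [0, 1]."""
    return float(rng.random())  # replace with the learned scorer

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

learning_rate = 0.1
for step in range(1000):
    u = rng.integers(n_utterances)            # user simulator: sample an FAQ utterance
    x = utterance_vecs[u]
    probs = softmax(x @ policy_weights)       # policy over candidate responses
    a = rng.choice(n_responses, p=probs)
    reward = score_model(u, a)                # score-model prediction acts as the reward
    grad = -probs
    grad[a] += 1.0                            # gradient of log pi(a|x) w.r.t. the logits
    policy_weights += learning_rate * reward * np.outer(x, grad)  # REINFORCE update
```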


How to Become a Full Stack Industry-Ready Data Science Professional?

#artificialintelligence

Artificial Intelligence (AI) and its sub-field Machine Learning (ML) have taken the world by storm. We are moving towards a world enhanced by these emerging technologies. It's the most exciting time to be in this career field! The global Artificial Intelligence market is expected to grow to $400 billion by the year 2025. From startups to big organizations, everyone wants to join the AI and ML bandwagon to acquire cutting-edge technology.


Deep Learning: Advanced NLP and RNNs

#artificialintelligence

It's hard to believe it's been over a year since I released my first course on Deep Learning with NLP (natural language processing). A lot of cool stuff has happened since then, and I've been deep in the trenches learning, researching, and accumulating the best and most useful ideas to bring them back to you. So what is this course all about, and how have things changed since then? In previous courses, you learned about some of the fundamental building blocks of Deep NLP. We looked at RNNs (recurrent neural networks), CNNs (convolutional neural networks), and word embedding algorithms such as word2vec and GloVe.


Machine Learning Studies the Impact of Covid-19 on Mental Health

#artificialintelligence

The COVID-19 pandemic has profoundly affected the health, financial, and social fabric of countries. Identifying individual-level susceptibility factors may help people recognize and manage their emotional, psychological, and social well-being. In March 2020, the outbreak of coronavirus disease 2019 (COVID-19) reached all countries of the Western world. To slow its spread, many countries shut down their economies and imposed strict restrictions on public life. After disasters, most people are resilient and do not succumb to psychopathology.


NLP 101: Towards Natural Language Processing

#artificialintelligence

Under the umbrella of data science fields, natural language processing (NLP) is one of the most famous and important subfields. Natural language processing is a computer science field that gives computers the ability to understand human -- natural -- languages. Although the field has gained a lot of traction recently, it is -- in fact -- a field as old as computers themselves. However, the advancement of technology and computing power has led to incredible advances in NLP. Now, speech technologies are becoming as prominent as written-text technologies.


Deep Dive in Datasets for Machine translation in NLP Using TensorFlow and PyTorch

#artificialintelligence

With the advancement of machine translation, there has been a recent movement towards large-scale empirical techniques that have produced very large improvements in translation quality. Machine translation is the task of automatically converting text in one natural language into another while preserving the meaning of the input text. Ongoing research on image description presents a considerable challenge at the intersection of natural language processing and computer vision. To address this, multimodal machine translation incorporates data from other modalities, mostly static images, to improve translation quality. Here, we will cover some of the most well-known datasets used in machine translation.
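
As a small, hedged example of what loading such a corpus can look like, the snippet below pulls the Portuguese-English TED talk translation pairs that ship with TensorFlow Datasets; this is one illustrative dataset, not necessarily one the article covers.

```python
# Hedged sketch: loading a public translation dataset with TensorFlow Datasets.
import tensorflow_datasets as tfds

train_ds = tfds.load("ted_hrlr_translate/pt_to_en",
                     split="train",
                     as_supervised=True)  # yields (source, target) sentence pairs

for pt, en in train_ds.take(2):
    print("PT:", pt.numpy().decode("utf-8"))
    print("EN:", en.numpy().decode("utf-8"))
```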


Google proposes applying AI to patent application generation and categorization

#artificialintelligence

Google asserts that the patent industry stands to benefit from AI and machine learning models like BERT, a natural language processing algorithm that attained state-of-the-art results when it was released in 2018. In a whitepaper published today, the tech giant outlines a methodology to train a BERT model on over 100 million patent publications from the U.S. and other countries using open-source tooling, which can then be used to determine the novelty of patents and generate classifications to assist with categorization. The global patent corpus is large, with millions of new patents issued every year. Patent applications average around 10,000 words and are meticulously wordsmithed by inventors, lawyers, and patent examiners. Patent filings are also written with language that can be unintelligible to lay readers and highly context-dependent; many terms are used to mean completely different things in different patents.
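
As a rough illustration of the classification side (not Google's patent-trained model), the sketch below runs a generic BERT checkpoint from the Hugging Face transformers library over a patent-style abstract; the "bert-base-uncased" checkpoint, the CPC-section labels, and the untrained classification head are assumptions, and the head would need fine-tuning on labelled patent data before the predictions mean anything.

```python
# Hedged sketch: patent-text classification with a generic BERT checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

labels = ["A", "B", "C", "D", "E", "F", "G", "H"]  # hypothetical CPC section labels
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels)     # untrained head; fine-tune before use
)

text = "A method for wireless power transfer between a charging pad and a device..."
inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(labels[int(logits.argmax(dim=-1))])           # predicted section (meaningless until fine-tuned)
```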


How DAOs and AGI can remake our world

#artificialintelligence

Decentralized Autonomous Organizations (DAOs), unlike traditional hierarchical organizations or, for that matter, even agile organizations, are interconnected networks of individuals who work in a self-enforcing manner under self-defined protocols, bound not necessarily by legal contracts but by an ecosystem of trust. The unique idea behind these organizations is that they are governed by incentive networks: groups of people from disparate disciplines work together on a project they consider highly instrumental to the progress of humanity and science, and at the culmination of the project rewards are distributed in the proportions stipulated in the smart contracts that the peers consented to at the project's genesis. Artificial General Intelligence (AGI), a discipline created by the avid AI researcher Ben Goertzel, can be understood and envisaged as an intelligent system able to harbor a plethora of cognitive states and action capabilities equal to, and beyond, those of humans; it could even be a digital twin that leverages high-speed computational advantages. In contrast with narrow AI, which comprises current machine learning systems built on specialized algorithms for specific use cases, AGI can be understood as a general-purpose learner whose cognitive organization consists of multiple intertwined yet independent systems: reinforcement learning for grasping new concepts without training data by maximizing rewards through trial and error, natural language processing for deriving important inputs from human interaction, and more, depending on how intensively the field is researched, with the long-term goal of using AI for the greater good of humankind. The plausible form it could take is that of collective general intelligence.