Collaborating Authors


Natural Language Processing: What, Why, and How?


Ever wondered how Google search shows exactly what you want to see? "Puma" can refer to either an animal or a shoe company, but for you it is mostly the shoe company, and Google knows it! How does that happen? How do search...

Top 20 Artificial Intelligence Research Labs In The World In 2021


Artificial intelligence is continuously evolving and propagating across every industry. With groundbreaking innovations moving the field forward, the technology makes headlines every day. AI refers to software or systems that perform intelligent tasks like those of the human brain, such as learning, reasoning, and judgment. Its applications range from automation and translation systems for the natural languages people use daily to image recognition systems that help identify faces and letters in images. Today, AI is used in different forms, including digital assistants, chatbots, and machine learning, among others.

Undergraduates explore practical applications of artificial intelligence


Deep neural networks excel at finding patterns in datasets too vast for the human brain to pick apart. That ability has made deep learning indispensable to just about anyone who deals with data. This year, the MIT Quest for Intelligence and the MIT-IBM Watson AI Lab sponsored 17 undergraduates to work with faculty on yearlong research projects through MIT's Advanced Undergraduate Research Opportunities Program (SuperUROP). Students got to explore AI applications in climate science, finance, cybersecurity, and natural language processing, among other fields. And faculty got to work with students from outside their departments, an experience they describe in glowing terms.

Unstructured Privacy Data Risks: AI Can Help


According to Gartner, the personal data of 65% of the world's population will be covered by privacy regulations by 2023. In fact, it might happen sooner, as most countries wish to pursue economic nationalism by restricting cross-border data transfers and rationing the data available to global technology businesses. Another independent trend, coupled with the rise of tighter privacy regulations, is the growing volume of unstructured data being collected. Combined, structured and unstructured data are projected to grow at a rate of 7-12% annually. Technological advances, along with ever-falling storage prices, have made it quite easy to collect unstructured data from customers.

Build a Deep Learning Text Generator Project with Markov Chains


Natural language processing (NLP) and deep learning are growing in popularity for their use in ML technologies like self-driving cars and speech recognition software. As more companies implement deep learning components and other machine learning practices, the demand for software developers and data scientists with proficiency in deep learning is skyrocketing. Today, we will introduce you to a popular deep learning project, the Text Generator, to familiarize you with important, industry-standard NLP concepts, including Markov chains. By the end of this article, you'll understand how to build a Text Generator component for search engine systems and how to implement Markov chains for faster predictive models. Text generation is popular across the board and in every industry, especially in mobile apps and data science.
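The teaser above does not include the project's code, but the core Markov-chain idea can be sketched in a few lines: record which words follow each state in a corpus, then random-walk those transitions to generate new text. This is a minimal illustration with made-up function names and a toy corpus, not the article's actual implementation.

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each state (a tuple of `order` words) to the words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=10, seed=None):
    """Random-walk the chain to produce `length` words of new text."""
    rng = random.Random(seed)
    state = rng.choice(list(chain.keys()))
    out = list(state)
    while len(out) < length:
        followers = chain.get(state)
        if not followers:  # dead end: restart from a random known state
            state = rng.choice(list(chain.keys()))
            followers = chain[state]
        out.append(rng.choice(followers))
        state = tuple(out[-len(state):])
    return " ".join(out)

corpus = "the cat sat on the mat the cat ate the rat"
chain = build_chain(corpus, order=1)
print(generate(chain, length=8, seed=42))
```

Raising `order` makes the generated text more faithful to the corpus but less novel, which is the main trade-off the article's project explores.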

The Inescapable Duality of Data and Knowledge Artificial Intelligence

We will discuss how, over the last 30 to 50 years, systems that focused only on data achieved success on narrow tasks, while knowledge has been critical in developing smarter, more effective systems. We will draw a parallel with the role of knowledge and experience in human intelligence based on cognitive science. And we will end with the recent interest in neuro-symbolic or hybrid AI systems, in which knowledge is the critical enabler for combining data-intensive statistical AI systems with symbolic AI systems, resulting in more capable AI systems that support more human-like intelligence.

A Panoramic Survey of Natural Language Processing in the Arab World

Communications of the ACM

Though Arabic NLP has many challenges, it has seen many successes and developments.

How artificial intelligence is transforming the world


Artificial intelligence (AI) is the basis for mimicking human intelligence processes through the creation and application of algorithms built into a dynamic computing environment. Stated simply, AI is trying to make computers think and act like humans. The more humanlike the desired outcome, the more data and processing power required. At least since the first century BCE, humans have been intrigued by the possibility of creating machines that mimic the human brain. In modern times, the term artificial intelligence was coined in 1955 by John McCarthy. In 1956, McCarthy and others organized a conference titled the "Dartmouth Summer Research Project on Artificial Intelligence."

Computational Emotion Analysis From Images: Recent Advances and Future Directions Artificial Intelligence

Understanding the information contained in the increasing repository of data is of vital importance to behavior sciences [34], which aim to predict human decision making and enable wide applications, such as mental health evaluation [14], business recommendation [33], opinion mining [54], and entertainment assistance [78]. Analyzing media data on an affective (emotional) level belongs to affective computing, which is defined as "the computing that relates to, arises from, or influences emotions" [38]. The importance of emotions has been emphasized for decades since Minsky introduced the relationship between intelligence and emotion [31]. One famous claim is "The question is not whether intelligent machines can have any emotions, but whether machines can be intelligent without emotions." Based on the types of media data, the research on affective computing can be classified into different categories, such as text [13, 72], image [75], speech [45], music [64], facial expression [24], video [56, 79], physiological signals [2], and multi-modal data [52, 41, 80]. The adage "a picture is worth a thousand words" indicates that images can convey rich semantics.

Local Interpretations for Explainable Natural Language Processing: A Survey Artificial Intelligence

As the use of deep learning techniques has grown across various fields over the past decade, complaints about the opaqueness of the black-box models have increased, resulting in an increased focus on transparency in deep learning models. This work investigates various methods to improve the interpretability of deep neural networks for natural language processing (NLP) tasks, including machine translation and sentiment analysis. We provide a comprehensive discussion on the definition of the term "interpretability" and its various aspects at the beginning of this work. The methods collected and summarised in this survey are only associated with local interpretation and are divided into three categories: 1) explaining the model's predictions through related input features; 2) explaining through natural language explanation; 3) probing the hidden states of models and word representations.
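The first category the survey names, explaining predictions through related input features, can be illustrated with a simple leave-one-out (occlusion) attribution: delete each word in turn and measure how much the model's score drops. The scorer below is a toy lexicon standing in for a real model, and all names here are hypothetical, not from the survey.

```python
def occlusion_importance(words, score_fn):
    """Leave-one-out attribution: a word's importance is the score drop
    observed when that word is removed from the input."""
    base = score_fn(words)
    importances = []
    for i in range(len(words)):
        reduced = words[:i] + words[i + 1:]
        importances.append(base - score_fn(reduced))
    return importances

# Toy sentiment scorer standing in for a real model (hypothetical lexicon).
LEXICON = {"great": 1.0, "terrible": -1.0}
def toy_score(words):
    return sum(LEXICON.get(w, 0.0) for w in words)

sentence = "the movie was great".split()
print(occlusion_importance(sentence, toy_score))  # "great" gets the largest weight
```

Occlusion is model-agnostic but costs one forward pass per token; gradient-based saliency methods, also covered under this category in such surveys, trade that cost for a single backward pass.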