Insights into black box of artificial intelligence


At many banks, insurance companies and online retailers, self-learning computer algorithms are used to make decisions that have major consequences for customers. However, just how algorithms in artificial intelligence (AI) represent and process their input data internally is largely unknown. Researchers at FAU have now investigated this question and published their results in the journal Neural Networks. 'What we call artificial intelligence today is based on deep artificial neural networks that roughly mimic human brain functions,' explains Dr. Patrick Krauss from the Cognitive Computational Neuroscience Group at FAU. Just as children learn their native language without being aware of the rules of grammar, AI algorithms can learn to make the right choices by independently comparing large amounts of input data.

Artificial general intelligence: Are we close, and does it even make sense to try?


The idea of artificial general intelligence as we know it today starts with a dot-com blowout on Broadway. Twenty years ago--before Shane Legg clicked with neuroscience postgrad Demis Hassabis over a shared fascination with intelligence; before the pair hooked up with Hassabis's childhood friend Mustafa Suleyman, a progressive activist, to spin that fascination into a company called DeepMind; before Google bought that company for more than half a billion dollars four years later--Legg worked at a startup in New York called Webmind, set up by AI researcher Ben Goertzel. Today the two men represent two very different branches of the future of artificial intelligence, but their roots reach back to common ground. Even for the heady days of the dot-com bubble, Webmind's goals were ambitious. Goertzel wanted to create a digital baby brain and release it onto the internet, where he believed it would grow up to become fully self-aware and far smarter than humans.

The Unbearable Shallowness of "Deep AI"


Since people invented writing, communications technology has become steadily more high-bandwidth, pervasive and persuasive, taking a commensurate toll on human attention and cognition. In that bandwidth war between machines and humans, the machines' latest weapon is a class of statistical algorithm dubbed "deep AI." This computational engine already, at a stroke, conquered both humankind's most cherished mind-game (Go) and our unconscious spending decisions (online). This month, finally, we can read how it happened, and clearly enough to do something. But I'm not just writing a book review, because the interaction of math with brains has been my career and my passion. Plus, I know the author. So, after praising the book, I append an intellectual digest, debunking the hype in favor of undisputed mathematical principles governing both machine and biological information-processing systems. That makes this article unique but long. "Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World" is the first book to chronicle the rise of savant-like artificial intelligence (AI), and the last we'll ever need. Investigative journalist Cade Metz lays out the history and the math through the machines' human inventors. The title, "Genius Makers," refers both to the genius-like brilliance of the human makers of AI and to the genius-like brilliance of the AI programs they create. Of all possible AIs, the particular flavor in the book is a class of data-digestion algorithms called deep learning. Metz's book is a ripping good read, paced like a page-turner prodding the reader to discover which of the many genius AI creators will outflank or outthink the others, and how. Together, in collaboration and competition, the computer scientists Metz portrays are inventing and deploying the fastest and most human-impacting revolution in technology to date, the apparently inexorable replacement of human sensation and choice by machine sensation and choice.
This is the story of the people designing the bots that do so many things better than us.

What is Artificial Intelligence? How Does AI Work, Applications and Future?


What is Artificial Narrow Intelligence (ANI)? This is the most common form of AI found in the market today. These artificial intelligence systems are designed to solve one single problem and execute a single task really well. By definition, they have narrow capabilities, like recommending a product to an e-commerce user or predicting the weather. This is the only kind of artificial intelligence that exists today. These systems can come close to human performance in very specific contexts, and even surpass it in many instances, but they excel only in very controlled environments with a limited set of parameters. What is Artificial General Intelligence (AGI)? AGI is still a theoretical concept. It's defined as AI with a human level of cognitive function across a wide variety of domains, such as language processing, image processing, computational functioning, reasoning, and so on.
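To make the narrowness concrete, here is a toy sketch (a hypothetical example, not from the article) of a single-task system of the kind described above: a product recommender that does exactly one thing, ranking catalog items by cosine similarity to a user's last purchase. The catalog and its hand-made feature vectors are invented for illustration.

```python
import numpy as np

# Hypothetical catalog: each item gets a hand-made feature vector
# (e.g., "electronics-ness", "office-ness", "leisure-ness").
catalog = {
    "laptop":   np.array([1.0, 0.9, 0.1]),
    "keyboard": np.array([0.8, 0.7, 0.2]),
    "novel":    np.array([0.0, 0.1, 1.0]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means identical direction in feature space.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def recommend(last_purchase):
    """The one narrow task: return the catalog item most similar
    to the user's last purchase."""
    target = catalog[last_purchase]
    others = {k: v for k, v in catalog.items() if k != last_purchase}
    return max(others, key=lambda k: cosine(others[k], target))

print(recommend("laptop"))  # → keyboard
```

The system is competent at this single ranking task and nothing else; it cannot predict the weather, parse language, or reason outside its three-dimensional feature space, which is exactly the limitation that separates ANI from AGI.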

Simulating Empathy: Using Emotion AI To Improve The Customer Experience - CB Insights Research


Businesses will prioritize building AI technologies that can interpret and respond to human emotions as they look to connect with consumers. Over the last decade, artificial intelligence has gone from buzzword to a must-have business competence. From retail to healthcare to financial services, AI is penetrating nearly every industry, with advances in deep learning, computer vision, and more paving the way. Download our full report to find out the top trends poised to reshape industries in 2021. AI, though, has largely struggled to recognize and react to human emotion. In fact, the AI Now Institute at New York University called for a ban on the use of emotion recognition tech "in important decisions that impact people's lives and access to opportunities" in its 2019 report.

Data Generation in Low Sample Size Setting Using Manifold Sampling and a Geometry-Aware VAE Machine Learning

While much effort has been focused on improving Variational Autoencoders through richer posterior and prior distributions, little attention has been paid to the way we generate the data. In this paper, we develop two non prior-dependent generation procedures based on the geometry of the latent space seen as a Riemannian manifold. The first consists of sampling along geodesic paths, a natural way to explore the latent space, while the second consists of sampling from the inverse of the metric volume element, which is easier to use in practice. Both methods are then compared to prior-based methods on various data sets and appear well suited to a limited-data regime. Finally, the latter method is used to perform data augmentation in a small-sample-size setting and is validated across various standard and real-life data sets. In particular, this scheme greatly improves classification results on the OASIS database, where balanced accuracy jumps from 80.7% for a classifier trained with the raw data to 89.1% when trained only with the synthetic data generated by our method. Such results were also observed on 4 standard data sets.
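The second procedure can be sketched in a few lines. The snippet below is a toy illustration under my own simplifying assumptions, not the paper's implementation: it draws latent points with probability inversely proportional to the Riemannian volume element sqrt(det G(z)), approximated on a grid, using a hypothetical hand-made metric `toy_metric` in place of a metric learned by a geometry-aware VAE.

```python
import numpy as np

def toy_metric(z):
    # Hypothetical stand-in for a learned Riemannian metric G(z):
    # this toy metric "expands" away from the origin.
    scale = 1.0 + np.sum(z ** 2)
    return scale * np.eye(2)

def sample_inverse_volume(n_samples, grid_lim=3.0, grid_n=50, seed=0):
    """Draw 2-D latent codes with density proportional to the inverse
    metric volume element 1 / sqrt(det G(z)), approximated on a grid."""
    rng = np.random.default_rng(seed)
    axis = np.linspace(-grid_lim, grid_lim, grid_n)
    grid = np.array([[x, y] for x in axis for y in axis])
    # Unnormalized density: inverse volume element at each grid point.
    weights = np.array(
        [1.0 / np.sqrt(np.linalg.det(toy_metric(z))) for z in grid]
    )
    weights /= weights.sum()
    idx = rng.choice(len(grid), size=n_samples, p=weights)
    return grid[idx]

samples = sample_inverse_volume(1000)
# With this toy metric, low-volume regions near the origin are favored.
print(np.mean(np.linalg.norm(samples, axis=1)))
```

The sampled codes would then be passed through the VAE decoder to produce new data points; the grid approximation is only practical in low-dimensional latent spaces, which matches the small-sample-size regime the paper targets.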

The Inescapable Duality of Data and Knowledge Artificial Intelligence

We will discuss how, over the last 30 to 50 years, systems that focused only on data have been handicapped, with success limited to narrowly focused tasks, and how knowledge has been critical in developing smarter, more effective intelligent systems. We will draw a parallel with the role of knowledge and experience in human intelligence based on cognitive science. And we will end with the recent interest in neuro-symbolic or hybrid AI systems, in which knowledge is the critical enabler for combining data-intensive statistical AI systems with symbolic AI systems, resulting in more capable AI systems that support more human-like intelligence.

Contrastive Reasoning in Neural Networks Artificial Intelligence

Neural networks represent data as projections on trained weights in a high-dimensional manifold. The trained weights act as a knowledge base consisting of causal class dependencies. Inference built on features that identify these dependencies is termed feed-forward inference. Such inference mechanisms are justified based on classical cause-to-effect inductive reasoning models. Inductive-reasoning-based feed-forward inference is widely used due to its mathematical simplicity and operational ease. Nevertheless, feed-forward models do not generalize well to untrained situations. To alleviate this generalization challenge, we propose using an effect-to-cause inference model that reasons abductively. Here, the features represent the change from existing weight dependencies given a certain effect. We term this change contrast and the ensuing reasoning mechanism contrastive reasoning. In this paper, we formalize the structure of contrastive reasoning and propose a methodology to extract a neural network's notion of contrast. We demonstrate the value of contrastive reasoning in two stages of a neural network's reasoning pipeline: in inferring and visually explaining decisions for the application of object recognition. We illustrate the value of contrastively recognizing images under distortions by reporting improvements of 3.47%, 2.56%, and 5.48% in average accuracy under the proposed contrastive framework on the CIFAR-10C, noisy STL-10, and VisDA datasets, respectively.
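The abductive, effect-to-cause idea can be illustrated on a toy linear softmax classifier. This is a sketch under my own simplifying assumptions, not the authors' formulation: for each candidate class (the hypothesized "effect"), measure how large a weight change (one cross-entropy gradient with that class assumed as the label) would be needed to explain it, and predict the class requiring the smallest change.

```python
import numpy as np

def softmax(scores):
    e = np.exp(scores - scores.max())
    return e / e.sum()

def contrastive_predict(W, x):
    """For each candidate class c, the 'contrast' is the norm of the
    cross-entropy gradient dL/dW computed as if c were the true label,
    i.e., how far the weights would have to move to explain effect c.
    The class needing the smallest change is the abductive prediction."""
    p = softmax(W @ x)
    n_classes = W.shape[0]
    contrasts = []
    for c in range(n_classes):
        target = np.zeros(n_classes)
        target[c] = 1.0
        grad = np.outer(p - target, x)  # dL/dW under assumed label c
        contrasts.append(np.linalg.norm(grad))
    return int(np.argmin(contrasts)), contrasts

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))  # toy trained weights, 3 classes, 5 features
x = rng.normal(size=5)       # toy input
pred, contrasts = contrastive_predict(W, x)
```

For a plain linear model this agrees with the feed-forward argmax; the paper's point is that the contrast signal itself carries extra information, which becomes useful for explanation and for robustness under distortion where feed-forward features break down.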

How artificial intelligence is transforming the world


Artificial intelligence (AI) is the basis for mimicking human intelligence processes through the creation and application of algorithms built into a dynamic computing environment. Stated simply, AI is trying to make computers think and act like humans. The more humanlike the desired outcome, the more data and processing power required. At least since the first century BCE, humans have been intrigued by the possibility of creating machines that mimic the human brain. In modern times, the term artificial intelligence was coined in 1955 by John McCarthy. In 1956, McCarthy and others organized a conference titled the "Dartmouth Summer Research Project on Artificial Intelligence."

Top 60 Artificial Intelligence Interview Questions & Answers


A month ago, India's first driverless metro train was launched in the national capital, Delhi. Yes! Like it or not, automation is happening and will continue to happen in places you couldn't have imagined before. Artificial intelligence has swept the world around us, driving a natural rise in demand for skilled professionals in the job market. It is one field that will never become outdated and will continue to grow. Wondering how to leverage this opportunity? How can you prepare yourself for such a league of jobs that make the world go around? We have a repository of questions to help you get ready for your next interview! This article covers artificial intelligence interview questions along with much-needed tips and tricks to crack the interview. The article is divided into three parts: basic artificial intelligence questions, intermediate-level questions, and advanced AI questions. AnalytixLabs is India's top-ranked AI & Data Science Institute and is in its tenth year.