Neural Networks


On the Edge - How Edge AI is reshaping the future

#artificialintelligence

Now we are moving into the world of 'edge computing', in which data is processed close to its source, cutting out the need for it to be sent to the cloud. But computing isn't the only thing taking place on 'the edge' – now, AI is being brought to the source of the data as well, allowing 'Edge AI' to bring about new standards of speed and intelligence. So, what is Edge AI, what kinds of benefits will it offer, and how will it empower solutions going forward? Currently, the heavy computing capacity required to run deep learning models necessitates that the majority of AI processes be carried out in the cloud. However, running AI in the cloud has its disadvantages, including the fact that it requires an internet connection, and that performance can be impacted by bandwidth and latency limitations.
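As a rough illustration of what this looks like in practice, here is a minimal sketch of on-device inference using TensorFlow Lite, a common edge runtime; the model file name is a placeholder and the input is a dummy frame, so treat this as the shape of the idea rather than a definitive setup:

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter  # lightweight runtime for edge devices

# Load a model that ships with the device (placeholder path),
# so no cloud round-trip is needed for each prediction.
interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Fake a camera frame matching the model's expected input shape.
frame = np.random.random_sample(input_details[0]["shape"]).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()  # runs entirely on the local CPU or accelerator
prediction = interpreter.get_tensor(output_details[0]["index"])
print(prediction)
```

Because the whole loop runs locally, the connection, bandwidth, and latency concerns above simply drop out of the per-prediction path.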


The fulfilling Journey of Auria Kathi -- The AI Poet Artist living in the clouds

#artificialintelligence

On 1st January 2019, we (Fabin Rasheed and I) introduced to the world a side project we had been working on for months: an artificial poet-artist who doesn't physically exist in this world but writes a poem, draws an abstract artwork based on the poem, and finally colors the art based on its emotion. We called "her" Auria Kathi -- an anagram of "AI Haiku Art". Auria has an artificial face along with her artificial poetry and art. Everything about Auria was built using artificial neural networks.


Exploiting AI: How Cybercriminals Misuse and Abuse AI and ML

#artificialintelligence

Artificial intelligence (AI) is swiftly fueling the development of a more dynamic world. AI, a subfield of computer science that is interconnected with other disciplines, promises greater efficiency and higher levels of automation and autonomy. Simply put, it is a dual-use technology at the heart of the fourth industrial revolution. Together with machine learning (ML) -- a subfield of AI that analyzes large volumes of data to find patterns via algorithms -- enterprises, organizations, and governments are able to perform impressive feats that ultimately drive innovation and better business. The use of both AI and ML in business is rampant.


Interpretability in Machine Learning: An Overview

#artificialintelligence

This essay provides a broad overview of the sub-field of machine learning interpretability. While not exhaustive, my goal is to review conceptual frameworks, existing research, and future directions. I follow the categorizations used in Lipton's Mythos of Model Interpretability, which I think is the best paper for understanding the different definitions of interpretability. We'll go over many ways to formalize what "interpretability" means. Broadly, interpretability focuses on the how: on getting some notion of an explanation for the decisions made by our models. Below, each section is operationalized by a concrete question we can ask of our machine learning model using a specific definition of interpretability. If you're new to all this, we'll first briefly explain why we might care about interpretability at all.
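As one concrete instance of the kind of question the essay asks, here is a hedged sketch of permutation importance with scikit-learn, a simple post-hoc way to probe which inputs a black-box model's decisions rely on; the dataset and model below are stand-ins, not anything the essay prescribes:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Any fitted model works; the explanation method treats it as a black box.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops:
# a large drop means the model's decisions rely on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: {result.importances_mean[i]:.4f}")
```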


An analysis of models on facial emotion detection

#artificialintelligence

Facial emotion detection is a common problem studied in cognitive science. Understanding what exactly we as humans see in each other that gives us insight into one another's emotions is a challenge we can also approach from the artificial intelligence side. While I don't have enough experience in psychology or even artificial intelligence to determine these factors, we can always start by building a model to address at least the beginning of this question. FER2013 is a dataset of pictures of individuals labeled with the emotions anger, happiness, surprise, disgust, and sadness. When humans are tested on the dataset and asked to correctly identify the facial expressions in a set of its pictures, their accuracy is about 65%.
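As a starting point of the kind the author describes, here is a minimal Keras sketch of a small CNN sized for FER2013's 48x48 grayscale faces; the class count follows the five emotions listed above and would need adjusting for the dataset's full label set:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# num_classes matches the five emotions named above; adjust as needed.
num_classes = 5

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),  # FER2013 images: 48x48, grayscale
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),  # regularization; facial-expression data is noisy
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

A model like this is only a baseline, but it gives something concrete to compare against the ~65% human benchmark.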


AI news: Neural network learns when it should not be trusted - '99% won't cut it'

#artificialintelligence

Mr Amini said: "It was very calibrated to the errors that the network makes, which we believe was one of the most important things in judging the quality of a new uncertainty estimator." The test revealed the network's ability to flag when users should not place full trust in its decisions. In such cases, "if this is a health care application, maybe we don't trust the diagnosis that the model is giving, and instead seek a second opinion," Amini added. Dr Raia Hadsell, a DeepMind artificial intelligence researcher not involved with the work, describes deep evidential regression as "a simple and elegant approach that advances the field of uncertainty estimation, which is important for robotics and other real-world control systems." She added: "This is done in a novel way that avoids some of the messy aspects of other approaches -- [for example] sampling or ensembles -- which makes it not only elegant but also computationally more efficient -- a winning combination."
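For readers curious what deep evidential regression actually computes, here is a hedged sketch of the Normal-Inverse-Gamma training loss from Amini et al.'s paper; the network head and data below are dummies, and the closed-form uncertainty estimates at the end show why no sampling or ensembling is needed:

```python
import math
import torch
import torch.nn.functional as F

def evidential_loss(y, gamma, nu, alpha, beta, lam=0.01):
    """NLL of the Normal-Inverse-Gamma evidence distribution
    (Amini et al. 2020), plus a regularizer that penalizes
    confident evidence on high-error points."""
    omega = 2.0 * beta * (1.0 + nu)
    nll = (0.5 * torch.log(math.pi / nu)
           - alpha * torch.log(omega)
           + (alpha + 0.5) * torch.log(nu * (y - gamma) ** 2 + omega)
           + torch.lgamma(alpha) - torch.lgamma(alpha + 0.5))
    reg = torch.abs(y - gamma) * (2.0 * nu + alpha)
    return (nll + lam * reg).mean()

# Dummy head outputs: 4 raw values per example, constrained to
# valid ranges (nu, beta > 0; alpha > 1).
raw = torch.randn(8, 4, requires_grad=True)
gamma = raw[:, 0]
nu = F.softplus(raw[:, 1])
alpha = F.softplus(raw[:, 2]) + 1.0
beta = F.softplus(raw[:, 3])
y = torch.randn(8)  # dummy regression targets

loss = evidential_loss(y, gamma, nu, alpha, beta)
loss.backward()

# Uncertainties fall out in closed form: no sampling, no ensembles.
aleatoric = beta / (alpha - 1.0)          # expected data noise
epistemic = beta / (nu * (alpha - 1.0))   # model (knowledge) uncertainty
```

A single forward pass yields both the prediction (gamma) and the uncertainty estimates, which is the computational advantage Hadsell highlights over sampling and ensembles.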


Do Neural Networks Dream Visual Illusions? - Neuroscience News

#artificialintelligence

Summary: When convolutional neural networks are trained under experimental conditions, they are deceived by the brightness and color of a visual image in similar ways to the human visual system. A convolutional neural network is a type of artificial neural network in which the neurons are organized into receptive fields in a very similar way to neurons in the visual cortex of a biological brain. Today, convolutional neural networks (CNNs) are found in a variety of autonomous systems (for example, face detection and recognition, autonomous vehicles, etc.). This type of network is highly effective in many artificial vision tasks, such as image segmentation and classification, along with many other applications. Convolutional networks were inspired by the behaviour of the human visual system, particularly its basic structure formed by the concatenation of compound modules comprising a linear operation followed by a non-linear operation.
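That "linear operation followed by a non-linear operation" structure is easy to see in code; a minimal PyTorch sketch of such concatenated compound modules (the sizes here are arbitrary, chosen only to illustrate the shape of the architecture):

```python
import torch
import torch.nn as nn

def compound_module(c_in, c_out):
    """One compound module: a linear operation (convolution)
    followed by a non-linear operation (ReLU), then pooling."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),  # linear
        nn.ReLU(),                                         # non-linear
        nn.MaxPool2d(2),
    )

# Concatenating the modules yields the basic CNN structure.
cnn = nn.Sequential(compound_module(3, 16),
                    compound_module(16, 32),
                    compound_module(32, 64))

features = cnn(torch.randn(1, 3, 64, 64))  # a dummy RGB image
print(features.shape)  # torch.Size([1, 64, 8, 8])
```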


Software 2.0: The Software That Writes Itself & How Kotlin Is Ushering This New Wave

#artificialintelligence

"Neural networks represent the beginning of a fundamental shift in how we write software. The current coding paradigms nudge developers to write code using restrictive machine learning libraries that can learn, or explicitly programmed to do a specific job. But, we are witnessing a tectonic shift towards automation even in the coding department. So far, code was used to automate jobs now there is a requirement for code that can write itself adapting to various jobs. This is software 2.0 where software writes on its own and thanks to machine learning; this is now a reality. Differentiable programming especially, believes the AI team at Facebook, is key to building tools that can help build ML tools. To enable this, the team has picked Kotlin language. Kotlin was developed by JetBrains and is popular with the Android developers. Its rise in popularity is a close second to Swift. Kotlin has many similarities with Python syntax, and it was designed as a substitute for Java.


Vision-based fire detection facilities work better under new deep learning model

#artificialintelligence

Fast and accurate fire detection is significant to the sustainable development of human society and Earth's ecology. The existence of objects with characteristics similar to fire increases the difficulty of vision-based fire detection. Improving the accuracy of fire detection by extracting deeper visual features of fire remains challenging. Recently, researchers from the Institute of Acoustics of the Chinese Academy of Sciences (IACAS) proposed an efficient deep learning model for fast and accurate vision-based fire detection. The model is based on multiscale feature extraction, implicit deep supervision, and a channel attention mechanism. The researchers used images acquired in real time as the model's input and normalized them.
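The article doesn't give the model's exact layers, but a standard realization of a channel attention mechanism is a squeeze-and-excitation style block; a hedged PyTorch sketch of that general idea, not the IACAS researchers' specific design:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: reweight feature
    channels by a learned, input-dependent importance score."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Squeeze: global average pool each channel to one number.
        w = x.mean(dim=(2, 3))
        # Excite: map to per-channel weights in (0, 1) and rescale.
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w

features = torch.randn(1, 64, 32, 32)  # dummy feature map
attended = ChannelAttention(64)(features)
```

Blocks like this let a detector emphasize channels that respond to fire-like features while suppressing those triggered by visually similar objects.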


Deep Learning: Advanced NLP and RNNs

#artificialintelligence

It's hard to believe it's been over a year since I released my first course on Deep Learning with NLP (natural language processing). A lot of cool stuff has happened since then, and I've been deep in the trenches learning, researching, and accumulating the best and most useful ideas to bring them back to you. So what is this course all about, and how have things changed since then? In previous courses, you learned about some of the fundamental building blocks of Deep NLP. We looked at RNNs (recurrent neural networks), CNNs (convolutional neural networks), and word embedding algorithms such as word2vec and GloVe.
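As a reminder of how those building blocks fit together, here is a minimal Keras sketch of an embedding layer (which could be initialized with word2vec or GloVe vectors) feeding a recurrent network; all sizes are illustrative rather than taken from the course:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

vocab_size, embed_dim, seq_len = 10_000, 100, 50  # illustrative sizes

model = models.Sequential([
    layers.Input(shape=(seq_len,)),
    # Embedding lookup; weights could be initialized from word2vec/GloVe.
    layers.Embedding(vocab_size, embed_dim),
    # Bidirectional LSTM reads the token sequence in both directions.
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(1, activation="sigmoid"),  # e.g. binary sentiment
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```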