
The fulfilling Journey of Auria Kathi -- The AI Poet Artist living in the clouds

#artificialintelligence

On 1st January 2019, we (Fabin Rasheed and I) introduced to the world a side project we had been working on for months: an artificial poet-artist who doesn't physically exist in this world but writes a poem, draws an abstract artwork based on the poem, and finally colors the artwork based on its emotion. We called "her" Auria Kathi -- an anagram of "AI Haiku Art". Auria has an artificial face to go with her artificial poetry and art. Everything about Auria was built using artificial neural networks.


Exploiting AI: How Cybercriminals Misuse and Abuse AI and ML

#artificialintelligence

Artificial intelligence (AI) is swiftly fueling the development of a more dynamic world. AI, a subfield of computer science that is interconnected with other disciplines, promises greater efficiency and higher levels of automation and autonomy. Simply put, it is a dual-use technology at the heart of the fourth industrial revolution. Together with machine learning (ML) -- a subfield of AI that analyzes large volumes of data to find patterns via algorithms -- enterprises, organizations, and governments are able to perform impressive feats that ultimately drive innovation and better business. The use of both AI and ML in business is rampant.


Interpretability in Machine Learning: An Overview

#artificialintelligence

This essay provides a broad overview of the sub-field of machine learning interpretability. While not exhaustive, my goal is to review conceptual frameworks, existing research, and future directions. I follow the categorizations used in Lipton's Mythos of Model Interpretability, which I think is the best paper for understanding the different definitions of interpretability. We'll go over many ways to formalize what "interpretability" means. Broadly, interpretability focuses on the how: on getting some notion of an explanation for the decisions made by our models. Below, each section is operationalized by a concrete question we can ask of our machine learning model using a specific definition of interpretability. If you're new to all this, we'll first briefly explain why we might care about interpretability at all.
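One concrete way to operationalize the question "which inputs does the model actually rely on?" is permutation importance: shuffle one feature at a time and see how much the error grows. The sketch below is purely illustrative (a toy linear model on synthetic data, not anything from the essay):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends strongly on feature 0, weakly on feature 1,
# and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Fit a linear model with ordinary least squares.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def mse(X, y):
    return float(np.mean((X @ w - y) ** 2))

baseline = mse(X, y)

# Permutation importance: shuffle one column at a time and measure
# how much the error grows. Bigger growth = model relies on it more.
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(mse(Xp, y) - baseline)

print(importance)  # feature 0 should dominate, feature 2 should be near zero
```

The appeal of this definition is that it treats the model as a black box: it answers a question about reliance on inputs without inspecting weights at all.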


An analysis of models on facial emotion detection

#artificialintelligence

Facial emotion detection is a common problem in the field of cognitive science. Understanding what exactly we as humans see in each other that gives us insight into one another's emotions is a challenge we can approach from the artificial intelligence side. While I don't have enough experience in psychology or even artificial intelligence to determine these factors, we can always start by building a model to address at least the beginning of this question. FER2013 is a dataset of pictures of individuals labeled with the emotions of anger, happiness, surprise, disgust, and sadness. When humans are tested on correctly identifying the facial expressions in a set of pictures from the dataset, their accuracy is 65%.
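As a rough sketch of the kind of baseline one might start with, here is a softmax-regression classifier trained with plain gradient descent. Everything here is a placeholder: the synthetic "images", the 16-pixel feature size, and the 3 classes stand in for real FER2013 data (48x48 images, more classes), and this is not the author's model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for FER2013: tiny synthetic "images" (flattened to 16 pixels)
# drawn from 3 separable emotion classes.
n_classes, n_features, n_per_class = 3, 16, 100
means = rng.normal(scale=2.0, size=(n_classes, n_features))
X = np.vstack([means[c] + rng.normal(size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# Softmax regression: the simplest multi-class baseline.
W = np.zeros((n_features, n_classes))
onehot = np.eye(n_classes)[y]
for _ in range(200):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.01 * X.T @ (p - onehot) / len(X)

accuracy = float((np.argmax(X @ W, axis=1) == y).mean())
print(f"train accuracy: {accuracy:.2f}")
```

On real FER2013 pixels a linear model like this would fare far worse than the 65% human baseline, which is exactly why convolutional models are the usual next step.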


Do Neural Networks Dream Visual Illusions? - Neuroscience News

#artificialintelligence

Summary: When convolutional neural networks are trained under experimental conditions, they are deceived by the brightness and color of a visual image in similar ways to the human visual system. A convolutional neural network is a type of artificial neural network in which the neurons are organized into receptive fields in a very similar way to neurons in the visual cortex of a biological brain. Today, convolutional neural networks (CNNs) are found in a variety of autonomous systems (for example, face detection and recognition, autonomous vehicles, etc.). This type of network is highly effective in many artificial vision tasks, such as image segmentation and classification, along with many other applications. Convolutional networks were inspired by the behaviour of the human visual system, particularly its basic structure formed by the concatenation of compound modules comprising a linear operation followed by a non-linear operation.
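The "compound module" structure described above (a linear operation followed by a non-linear operation) can be sketched in a few lines. This is a generic illustration of one such module, not code from the study; the toy image and kernel are made up:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: the linear operation in a CNN module."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """The pointwise non-linear operation that follows."""
    return np.maximum(x, 0.0)

# One compound module: linear filtering, then a non-linearity.
image = np.arange(25, dtype=float).reshape(5, 5)  # brightness ramps left to right
edge_kernel = np.array([[-1.0, 1.0]])             # responds to horizontal contrast
feature_map = relu(conv2d(image, edge_kernel))
print(feature_map.shape)  # (5, 4); every entry is 1.0 for this constant ramp
```

A real CNN stacks many such modules, with learned kernels and many channels; the illusion studies probe what those stacks end up computing about brightness and color.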


Software 2.0: The Software That Writes Itself & How Kotlin Is Ushering This New Wave

#artificialintelligence

"Neural networks represent the beginning of a fundamental shift in how we write software." The current coding paradigms nudge developers to write code using restrictive machine learning libraries that can learn, or code that is explicitly programmed to do a specific job. But we are witnessing a tectonic shift towards automation even in the coding department. So far, code was used to automate jobs; now there is a requirement for code that can write itself, adapting to various jobs. This is Software 2.0, where software writes itself, and thanks to machine learning, this is now a reality. Differentiable programming in particular, believes the AI team at Facebook, is key to building tools that can help build ML tools. To enable this, the team has picked the Kotlin language. Kotlin was developed by JetBrains and is popular with Android developers. Its rise in popularity is a close second to Swift's. Kotlin has many similarities with Python syntax, and it was designed as a substitute for Java.
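The Software 2.0 idea can be made concrete with a toy example (sketched here in Python for brevity, though the article's context is Kotlin). In Software 1.0 a human writes the rule; in Software 2.0 a human writes only an objective, and optimization "writes" the program by searching over parameters:

```python
import numpy as np

# Software 1.0: a human writes the rule explicitly.
def fahrenheit_v1(celsius):
    return celsius * 9.0 / 5.0 + 32.0

# Software 2.0: state an objective, let gradient descent find the program.
rng = np.random.default_rng(2)
celsius = rng.uniform(-10, 10, size=200)
fahrenheit = fahrenheit_v1(celsius)  # training data stands in for a spec

w, b = 0.0, 0.0
for _ in range(2000):
    pred = w * celsius + b
    err = pred - fahrenheit
    # Gradient steps on mean squared error with respect to w and b.
    w -= 0.03 * np.mean(err * celsius)
    b -= 0.03 * np.mean(err)

print(round(w, 2), round(b, 2))  # converges toward 1.8 and 32.0
```

Differentiable programming generalizes this: entire programs, not just weight matrices, are written so that gradients can flow through them and tune their behaviour.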


Vision-based fire detection facilities work better under new deep learning model

#artificialintelligence

Fast and accurate fire detection is significant to the sustainable development of human society and Earth's ecology. The existence of objects with characteristics similar to fire increases the difficulty of vision-based fire detection. Improving the accuracy of fire detection by mining deeper visual features of fire remains challenging. Recently, researchers from the Institute of Acoustics of the Chinese Academy of Sciences (IACAS) have proposed an efficient deep learning model for fast and accurate vision-based fire detection. The model is based on multiscale feature extraction, implicit deep supervision, and a channel attention mechanism. The researchers used the real-time acquired image as the input of the model and normalized the image.
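Of the three ingredients named, channel attention is the easiest to sketch. The following is a generic squeeze-and-excitation-style illustration, not the IACAS model; all shapes and weights are invented for the example:

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation-style channel attention.

    feature_map: (channels, height, width)
    w1: (channels, channels // r) squeeze weights
    w2: (channels // r, channels) excitation weights
    """
    # Squeeze: global average pooling reduces each channel to one number.
    squeezed = feature_map.mean(axis=(1, 2))          # (channels,)
    # Excitation: a small bottleneck MLP scores each channel.
    hidden = np.maximum(squeezed @ w1, 0.0)           # ReLU
    scores = 1.0 / (1.0 + np.exp(-(hidden @ w2)))     # sigmoid in (0, 1)
    # Reweight: scale each channel by its learned importance.
    return feature_map * scores[:, None, None]

rng = np.random.default_rng(3)
fmap = rng.normal(size=(8, 16, 16))   # e.g. 8 channels of a fire/smoke feature map
w1 = rng.normal(scale=0.1, size=(8, 2))
w2 = rng.normal(scale=0.1, size=(2, 8))
out = channel_attention(fmap, w1, w2)
print(out.shape)  # (8, 16, 16)
```

The intuition for fire detection is that attention lets the network amplify channels that respond to flame-like texture and suppress channels dominated by fire-like distractors.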


Deep Learning: Advanced NLP and RNNs

#artificialintelligence

It's hard to believe it's been over a year since I released my first course on Deep Learning with NLP (natural language processing). A lot of cool stuff has happened since then, and I've been deep in the trenches learning, researching, and accumulating the best and most useful ideas to bring them back to you. So what is this course all about, and how have things changed since then? In previous courses, you learned about some of the fundamental building blocks of Deep NLP. We looked at RNNs (recurrent neural networks), CNNs (convolutional neural networks), and word embedding algorithms such as word2vec and GloVe.


What's happening in my LSTM layer?

#artificialintelligence

In building a deep neural network, especially with one of the higher-level frameworks such as Keras, we often don't fully understand what's happening in each layer. The sequential model will get you far indeed, but when it's time to do something more complex or intriguing, you will need to dive into the details. In this article, I'm going to explain exactly what's happening as you pass a batch of data through an LSTM layer, with an example from PyTorch. I want to note that I won't be covering the exact mechanics of the LSTM cells or why they are useful. If you're reading this, you're probably aware of the vanishing gradient problem and understand the basics of the gating mechanisms.
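As a rough preview of the shapes involved, here is a minimal numpy sketch of what one LSTM time step computes for a batch, using PyTorch's (input, forget, cell, output) gate ordering; the weight names and sizes are illustrative, not taken from the article:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step for a batch, with PyTorch's gate layout
    (input, forget, cell/candidate, output) stacked along axis 0 of W/U."""
    hidden = h_prev.shape[1]
    z = x @ W.T + h_prev @ U.T + b          # (batch, 4 * hidden)
    i = sigmoid(z[:, 0*hidden:1*hidden])    # input gate
    f = sigmoid(z[:, 1*hidden:2*hidden])    # forget gate
    g = np.tanh(z[:, 2*hidden:3*hidden])    # candidate cell state
    o = sigmoid(z[:, 3*hidden:4*hidden])    # output gate
    c = f * c_prev + i * g                  # new cell state
    h = o * np.tanh(c)                      # new hidden state
    return h, c

rng = np.random.default_rng(4)
batch, seq_len, n_in, n_hid = 2, 5, 3, 4
x_seq = rng.normal(size=(batch, seq_len, n_in))
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in))
U = rng.normal(scale=0.1, size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h = np.zeros((batch, n_hid))
c = np.zeros((batch, n_hid))
for t in range(seq_len):                    # the layer loops over time for you
    h, c = lstm_step(x_seq[:, t], h, c, W, U, b)

print(h.shape, c.shape)  # (2, 4) (2, 4)
```

This is exactly the per-step recurrence a framework LSTM layer runs under the hood; the framework just batches the four gates into one matrix multiply and iterates over the sequence dimension.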


Why your brain is not a computer

#artificialintelligence

We are living through one of the greatest of scientific endeavours – the attempt to understand the most complex object in the universe, the brain. Scientists are accumulating vast amounts of data about structure and function in a huge array of brains, from the tiniest to our own. Tens of thousands of researchers are devoting massive amounts of time and energy to thinking about what brains do, and astonishing new technology is enabling us to both describe and manipulate that activity. We can now make a mouse remember something about a smell it has never encountered, turn a bad mouse memory into a good one, and even use a surge of electricity to change how people perceive faces. We are drawing up increasingly detailed and complex functional maps of the brain, human and otherwise. In some species, we can change the brain's very structure at will, altering the animal's behaviour as a result. Some of the most profound consequences of our growing mastery can be seen in our ability to enable a paralysed person to control a robotic arm with the power of their mind.