Neural Networks


A Complete Neural Network Algorithm from Scratch in Python

#artificialintelligence

Neural networks were developed to mimic the human brain. Though we are not there yet, they are very effective at machine learning tasks. They were popular in the 1980s and 1990s and have recently seen a resurgence, largely because computers are now fast enough to train a large neural network in a reasonable time.
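
The article walks through building such a network by hand. As a minimal sketch of the same idea (a single hidden layer, sigmoid activations, and plain gradient descent; the toy XOR data, layer sizes, and learning rate are illustrative choices, not the article's exact code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy dataset: XOR, the classic test for a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output

lr = 0.5
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: chain rule on mean squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # outputs should move toward [0, 1, 1, 0]
```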


How do you measure trust in deep learning?

#artificialintelligence

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence. Whether it's diagnosing patients or driving cars, we want to know whether we can trust a person before assigning them a sensitive task. In the human world, we have different ways to establish and measure trustworthiness. In artificial intelligence, the means of establishing trust are still developing. In recent years, deep learning has proven remarkably good at difficult tasks in computer vision, natural language processing, and other fields that were previously off-limits to computers.


AI news: Neural network learns when it should not be trusted – '99% won't cut it'

#artificialintelligence

AI experts developed a method for modelling a machine's confidence level based on the quality of the available data. MIT engineers expect this advance may eventually save lives, as deep learning is now widely deployed in everyday settings. For example, a network's level of certainty can be the difference between an autonomous vehicle determining "this crossroad is clear" and "it's probably clear, so stop just in case." The technique can also be used to assess products that rely on learned models. Previous approaches to uncertainty analysis have typically been based on Bayesian deep learning.
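
The MIT method itself is not spelled out in this excerpt. As a rough, hypothetical illustration of the Bayesian-style baselines it is compared against, one common trick is Monte Carlo dropout: keep dropout active at test time and read the spread of repeated predictions as a confidence signal. The architecture and sizes below are made up for the sketch:

```python
import torch
import torch.nn as nn

# A small regression network with dropout layers (illustrative sizes).
model = nn.Sequential(
    nn.Linear(8, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

def mc_dropout_predict(model, x, n_samples=50):
    """Run the model repeatedly with dropout enabled and return the
    mean prediction and its standard deviation (the uncertainty)."""
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)

x = torch.randn(4, 8)                 # a batch of 4 hypothetical inputs
mean, std = mc_dropout_predict(model, x)
print(mean.squeeze(), std.squeeze())  # high std = low confidence
```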


JaidedAI/EasyOCR

#artificialintelligence

Ready-to-use OCR with 70 languages supported, including Chinese, Japanese, Korean and Thai. See the list of supported languages. Note 1: for Windows, please install torch and torchvision first by following the official instructions at https://pytorch.org. On the PyTorch website, be sure to select the CUDA version you have.
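
After installation, basic usage takes a few lines; this follows the style of the project's README example (the language codes and image filename are illustrative, swap in your own):

```python
import easyocr

# Build a reader once; it downloads and loads the models into memory.
# Pass gpu=False to force CPU inference.
reader = easyocr.Reader(['ch_sim', 'en'])

# Returns a list of (bounding box, text, confidence) tuples.
result = reader.readtext('chinese.jpg')
for bbox, text, confidence in result:
    print(text, confidence)
```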


Language & Cognition: re-reading Jerry Fodor

#artificialintelligence

In my opinion the late Jerry Fodor was one of the most brilliant cognitive scientists (that I knew of), if you wanted a deep understanding of the major issues in cognition and the plausibility or implausibility of various cognitive architectures. Very few had the technical breadth and depth to tackle some of the biggest questions concerning the mind, language, computation, the nature of concepts, innateness, ontology, etc. The other day I felt like re-reading his Concepts: Where Cognitive Science Went Wrong (I have read this small monograph at least 10 times before, and I must say that I still do not fully comprehend everything in it). But what did happen on the 11th reading of Concepts is this: I now have a new and deeper understanding of his productivity, systematicity and compositionality arguments, which should clearly put an end to any talk of connectionist architectures being a serious architecture for cognition. By 'connectionist architectures' I roughly mean also modern-day 'deep neural networks' (DNNs), which are essentially, if we strip out the advances in compute power, the same models that were the target of Fodor's onslaught. I have always understood the 'gist' of his argument, but I believe I now have a deeper understanding, and in the process I am more convinced than I have ever been that DNNs cannot be considered serious models for high-level cognitive tasks (planning, reasoning, language understanding, problem solving, etc.) beyond being statistical pattern recognizers (although very good ones at that).


Viewpoint: Moore's law isn't broken - it's overheated

#artificialintelligence

Nick Harris, CEO and co-founder of US photonics computing specialist Lightmatter, explains how advances in photonic computing technology could give Moore's Law a shot in the arm. Recent advancements in machine learning, computer vision, natural language processing, deep learning and more are already affecting life and humanity in ways both seen and unseen. This is especially true of artificial intelligence (AI). The demands of AI are growing at a blistering rate. Training AI models today requires ultra-high-performance computer chips, leading to what one might call a 'space race' among top technology companies to build, acquire, or get exclusive access to the highest-performance chips as soon as they come to market.


Advancing Artificial Intelligence Research - Liwaiwai

#artificialintelligence

As part of a new collaboration to advance and support AI research, the MIT Stephen A. Schwarzman College of Computing and the Defense Science and Technology Agency in Singapore are awarding funding to 13 projects led by researchers within the college that target one or more of the following themes: trustworthy AI, enhancing human cognition in complex environments, and AI for everyone. The 13 selected research projects are highlighted below. Emerging machine learning technology has the potential to significantly help with, and even fully automate, many tasks that until now have been entrusted only to humans. Leveraging recent advances in realistic graphics rendering, data modeling, and inference, Madry's team is building a radically new toolbox to fuel the streamlined development and deployment of trustworthy machine learning solutions. In natural language technologies, most of the world's languages are not richly annotated.


A Comprehensive Guide to Convolution Neural Network

#artificialintelligence

As we saw in the structure of a CNN, convolution layers are used to extract features, and to extract those features they use filters. So let us discuss how features are extracted using filters. In the image above, we applied various filters such as Prewitt or Sobel and obtained the edges. For a detailed understanding of working with images and extracting edges, you can check out my blog below for the theory and a practical implementation. Let us understand how the filter operation works using an animated image.
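
As a small sketch of that filter operation (the tiny image below is synthetic; note that deep learning "convolution" is implemented as cross-correlation, which is why SciPy's correlate2d is used here):

```python
import numpy as np
from scipy.signal import correlate2d

# A toy grayscale image: dark left half, bright right half.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# Sobel filter for vertical edges: responds where intensity
# changes from left to right.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Slide the filter over the image: each output value is the sum of
# elementwise products between the filter and the patch under it.
edges = correlate2d(image, sobel_x, mode='valid')
print(edges)  # large-magnitude values along the dark-to-bright boundary
```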


Types of activation functions in Deep Learning

#artificialintelligence

There are various aspects we usually have to consider while building a deep learning model: choosing the right number of layers, the activation function, the number of epochs, the loss function, and the optimizer, to name a few. I am revisiting these concepts for one of my projects, so I decided to write about the different activation functions we use. So why do we even use an activation function, and not just feed the summation directly to the next layer? The problem is that without one, the layers of the neural network won't be able to learn complex functions; the activation function adds non-linearity to the model.
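
As a quick illustrative sketch of the most common choices (plain NumPy; the functions and comments are the textbook definitions, not the article's own code):

```python
import numpy as np

def sigmoid(x):
    # Squashes input to (0, 1); historically common, but saturates for large |x|.
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Zero-centred version of the same idea, with range (-1, 1).
    return np.tanh(x)

def relu(x):
    # Rectified Linear Unit: cheap to compute, does not saturate for x > 0.
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Like ReLU, but with a small slope for x < 0 so gradients never die completely.
    return np.where(x > 0, x, alpha * x)

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])  # the summation coming out of a layer
for f in (sigmoid, tanh, relu, leaky_relu):
    print(f.__name__, f(z))
```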


Interview with Ionut Schiopu – ICIP 2020 award winner

AIHub

Ionut Schiopu and Adrian Munteanu received a Top Viewed Special Session Paper Award at the IEEE International Conference on Image Processing (ICIP 2020) for their paper "A study of prediction methods based on machine learning techniques for lossless image coding". Here, Ionut Schiopu tells us more about their work. The topic of our paper is a more efficient algorithm for lossless image compression based on machine learning (ML) techniques, where the main objective is to minimize the amount of data required to represent the input image without any loss of information. In recent years, a new research strategy for coding has emerged that exploits the advances brought by modern ML techniques, proposing hybrid coding solutions in which specific modules of a conventional coding framework are replaced with more efficient ML-based modules. Our paper follows this strategy and uses a deep neural network to replace the prediction module in the conventional coding framework.
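
The paper's network is not reproduced in this excerpt. As a hedged sketch of the general hybrid-coding idea it describes, the loop below uses a trivial hand-written predictor (average of the left and upper neighbours) as a stand-in for the deep network; replacing `predict` with a learned model is exactly the ML step, while the lossless residual-coding loop stays the same:

```python
import numpy as np

def predict(img, r, c):
    # Stand-in predictor using causal (already decoded) neighbours;
    # the paper replaces this module with a deep neural network.
    left = int(img[r, c - 1]) if c > 0 else 0
    up = int(img[r - 1, c]) if r > 0 else 0
    return (left + up) // 2

def residuals(img):
    """Scan in raster order; store prediction errors instead of raw pixels.
    The residuals are small and peaky, so an entropy coder compresses them well."""
    res = np.zeros(img.shape, dtype=np.int16)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            res[r, c] = int(img[r, c]) - predict(img, r, c)
    return res

def reconstruct(res):
    # The decoder repeats the same predictions, so the image is
    # recovered exactly: the scheme is lossless.
    img = np.zeros(res.shape, dtype=np.uint8)
    for r in range(res.shape[0]):
        for c in range(res.shape[1]):
            img[r, c] = res[r, c] + predict(img, r, c)
    return img

img = np.random.default_rng(1).integers(0, 256, (4, 4), dtype=np.uint8)
assert np.array_equal(reconstruct(residuals(img)), img)  # lossless round trip
```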