Neural Networks


Why we must rethink AI benchmarks

#artificialintelligence

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence. For decades, researchers have used benchmarks to measure progress in different areas of artificial intelligence, such as vision and language. Especially in the past few years, as deep learning has surged in popularity, benchmarks have become a narrow focus for many research labs and scientists. But while benchmarks can help compare the performance of AI systems on specific problems, they are often taken out of context, sometimes with harmful results. In a paper accepted at the NeurIPS 2021 conference, scientists at the University of California, Berkeley, the University of Washington, and Google outline the limits of popular AI benchmarks.


How deep learning took so much time to take off

#artificialintelligence

Maybe the worst thing that can happen to an idea is being born at the wrong moment, or in the wrong place. Take the case of YouTube: it was not the first video streaming platform, but it was born at the right moment! "In 1999–2000 it was too hard to watch online content; you had to put codecs in your browser and do all this stuff" (Bill Gross, about a company that failed two years before YouTube). It was somewhat similar with deep learning: the act of adding more hidden layers is not new, and it is even straightforward. Anyone thinking outside the box could have tried it, and would have succeeded, had the proper tools existed. So what made deep learning take off only now? Indeed, it is amazing how fast hardware has evolved, especially for personal use.


iiot machinelearning_2022-05-20_04-17-50.xlsx

#artificialintelligence

The graph represents a network of 1,175 Twitter users whose tweets in the requested range contained "iiot machinelearning", or who were replied to or mentioned in those tweets. The network was obtained from the NodeXL Graph Server on Friday, 20 May 2022 at 11:21 UTC. The requested start date was Friday, 20 May 2022 at 00:01 UTC and the maximum number of tweets (going backward in time) was 7,500. The tweets in the network were tweeted over the 2-day, 16-hour, 1-minute period from Tuesday, 17 May 2022 at 07:58 UTC to Friday, 20 May 2022 at 00:00 UTC. Additional tweets that were mentioned in this data set were also collected from prior time periods.


Emergent bartering behaviour in multi-agent reinforcement learning

#artificialintelligence

Advances in artificial intelligence often stem from the development of new environments that abstract real-world situations into a form where research can be done conveniently. This paper contributes such an environment based on ideas inspired by elementary Microeconomics. Agents learn to produce resources in a spatially complex world, trade them with one another, and consume those that they prefer. We show that the emergent production, consumption, and pricing behaviours respond to environmental conditions in the directions predicted by supply and demand shifts in Microeconomics. We also demonstrate settings where the agents' emergent prices for goods vary over space, reflecting the local abundance of goods.
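To make the setup concrete, below is a minimal toy sketch in Python of the kind of produce-and-barter loop the abstract describes. The two goods, fixed one-for-one trades, and random pairing are our own illustrative assumptions; the paper's environment instead uses reinforcement learning agents acting in a spatially complex world where prices emerge from learned behaviour.

```python
import random

# Toy illustration only, not the paper's environment: two goods ("A", "B"),
# each agent produces one good and prefers the other; trades are one-for-one.
class Agent:
    def __init__(self, produces):
        self.produces = produces
        self.wants = "B" if produces == "A" else "A"
        self.inventory = {"A": 0, "B": 0}

def barter(a, b):
    # A one-for-one swap, possible only if preferences are complementary
    # and each side holds a unit of the good it produces.
    if a.produces != b.produces \
            and a.inventory[a.produces] > 0 and b.inventory[b.produces] > 0:
        a.inventory[a.produces] -= 1
        b.inventory[b.produces] -= 1
        a.inventory[a.wants] += 1
        b.inventory[b.wants] += 1

agents = [Agent("A") for _ in range(5)] + [Agent("B") for _ in range(5)]
for _ in range(100):
    for agent in agents:
        agent.inventory[agent.produces] += 1   # produce one unit per step
    barter(*random.sample(agents, 2))          # randomly paired exchange
```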


Neuromorphic memory device simulates neurons and synapses: Simultaneous emulation of neuronal and synaptic properties promotes the development of brain-like artificial intelligence

#artificialintelligence

Neuromorphic computing aims to realize artificial intelligence (AI) by mimicking the mechanisms of the neurons and synapses that make up the human brain. Inspired by the cognitive functions of the human brain that current computers cannot provide, neuromorphic devices have been widely investigated. However, current Complementary Metal-Oxide-Semiconductor (CMOS)-based neuromorphic circuits simply connect artificial neurons and synapses without synergistic interactions, and the concomitant implementation of neurons and synapses remains a challenge. To address these issues, a research team led by Professor Keon Jae Lee from the Department of Materials Science and Engineering implemented the brain's biological working mechanisms by introducing neuron-synapse interactions in a single memory cell, rather than following the conventional approach of electrically connecting separate artificial neuronal and synaptic devices. The artificial synaptic devices studied previously were often used, like commercial graphics cards, to accelerate parallel computations, which differs markedly from the operational mechanisms of the human brain.
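As a conceptual aid only, the sketch below simulates a single leaky integrate-and-fire neuron driven through one plastic synapse. It illustrates the distinct neuron and synapse roles discussed above under simple textbook assumptions; it does not model the reported memory device or its mechanism.

```python
# Illustrative textbook sketch, not the device in the article: a leaky
# integrate-and-fire neuron receiving input through one plastic synapse.
def simulate(spikes_in, w=0.5, tau=20.0, v_thresh=1.0, lr=0.01, dt=1.0):
    v, out = 0.0, []
    for s in spikes_in:                    # s is 0 or 1 (presynaptic spike)
        v += dt * (-v / tau) + w * s       # membrane leak + weighted input
        fired = v >= v_thresh
        if fired:
            v = 0.0                        # reset the membrane after a spike
            w += lr                        # crude Hebbian-style strengthening
        out.append(int(fired))
    return out, w

spikes, weight = simulate([1, 0, 1, 1, 0, 1] * 10)
```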


Building a 3D-CNN in TensorFlow - Analytics Vidhya

#artificialintelligence

This article was published as a part of the Data Science Blogathon. Classifying the MNIST dataset is considered the "hello world" program of computer vision, and it helps beginners understand both the concept and the implementation of Convolutional Neural Networks. Many think of images as just an ordinary matrix of numbers, but in reality this is not the whole picture: images also possess what is known as spatial information, and convolutional layers are built to exploit it.
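As a starting point, here is a minimal 3D-CNN sketch in TensorFlow/Keras. The input shape (16 frames of 64×64 single-channel slices), layer sizes, and 10-way output are illustrative assumptions, not the article's exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# A minimal 3D-CNN: Conv3D kernels slide over depth, height, and width,
# so the network can learn spatio-temporal (volumetric) features.
model = models.Sequential([
    layers.Input(shape=(16, 64, 64, 1)),            # 16 frames of 64x64x1
    layers.Conv3D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling3D(pool_size=2),
    layers.Conv3D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling3D(pool_size=2),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),         # e.g. 10 classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```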


Artificial Intelligence - A door opener to a new era in image processing - IDS Imaging Development Systems GmbH

#artificialintelligence

Artificial intelligence in general, and Deep Learning and neural networks more specifically, open the door to a new era in image processing. Why should companies look into this technology, what is important to know, and how easy is it actually to set up a new project? After participating, you will have a better grasp of this new technology and be familiar with the essential know-how in this field. We also show you that it is actually quite easy to set up your own deep-learning-based vision solutions, even if you have no prior knowledge.


Deepfake attacks can easily trick facial recognition

#artificialintelligence

In brief: miscreants can easily steal someone else's identity by tricking live facial recognition software using deepfakes, according to a new report. Sensity AI, a startup focused on tackling identity fraud, carried out a series of pretend attacks. Engineers scanned the image of someone from an ID card and mapped their likeness onto another person's face. Sensity then tested whether it could breach live facial recognition systems by tricking them into believing the pretend attacker was a real user. So-called "liveness tests" try to authenticate identities in real time, relying on images or video streams from a camera, such as the face recognition used to unlock mobile phones.


Batch Normalization… or not?

#artificialintelligence

Batch Normalization (BN or BatchNorm) is a technique that normalizes layer inputs by re-centering and re-scaling them. This is done by computing the mean and the standard deviation of each input channel across the whole batch, normalizing the inputs with those statistics, and finally applying a scale and a shift through two learnable parameters, β and γ. Batch Normalization is quite effective, but the real reasons behind its effectiveness remain unclear. Initially, as proposed by Sergey Ioffe and Christian Szegedy in their 2015 article, the purpose of BN was to mitigate internal covariate shift (ICS), defined as "the change in the distribution of network activations due to the change in network parameters during training". Indeed, one reason to normalize inputs is to stabilize training; unfortunately, while this may hold at the beginning, there is no guarantee of stability once the network trains and the weights move away from their initial values.
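For reference, here is a minimal NumPy sketch of the training-time forward pass described above; a production implementation (e.g. tf.keras.layers.BatchNormalization) additionally tracks running averages of the statistics for use at inference.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize a (batch, channels) array per channel, then scale and shift."""
    mu = x.mean(axis=0)                     # per-channel mean over the batch
    var = x.var(axis=0)                     # per-channel variance over the batch
    x_hat = (x - mu) / np.sqrt(var + eps)   # re-center and re-scale
    return gamma * x_hat + beta             # learnable scale γ and shift β

x = np.random.randn(32, 4) * 3.0 + 5.0      # off-center, spread-out inputs
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
```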


GrAI Matter Labs Launches Life-Ready AI 'GrAI VIP', A Full-Stack AI System-On-Chip Platform

#artificialintelligence

GrAI Matter Labs unveiled its Life-Ready AI platform, GrAI VIP, at GLOBAL INDUSTRIE. GrAI Matter Labs is a company specializing in brain-inspired, ultra-low-latency computing for Life-Ready AI: artificial intelligence that is, in the company's words, the closest thing to natural intelligence, AI that feels alive. It makes brain-inspired chips that, it claims, act like people.