Deep Learning


Exploring deep neural networks via layer-peeled model: Minority collapse in imbalanced training

#artificialintelligence

The remarkable development of deep learning over the past decade has relied heavily on sophisticated heuristics and tricks. To better exploit its potential in the coming decade, a rigorous framework for reasoning about deep learning is perhaps needed, but such a framework is not easy to build because of the intricate details of neural networks. For near-term purposes, a practical alternative is to develop a mathematically tractable surrogate model that nonetheless retains many characteristics of neural networks. This paper proposes a model of this kind, which we term the Layer-Peeled Model. The effectiveness of this model is evidenced by, among other things, its ability to reproduce a known empirical pattern and to predict a hitherto-unknown phenomenon when training deep-learning models on imbalanced datasets. All study data are included in the article and/or supporting information. Our code is publicly available on GitHub.


Deep Learning Can Now Predict Traffic Crashes Beforehand

#artificialintelligence

Amin Sadeghi, lead scientist at the Qatar Computing Research Institute (QCRI) and an author of the paper, says the deep learning model can generalize from one city to another by combining clues from multiple, seemingly unrelated data sources, allowing it to identify and predict crash-risk maps in uncharted territories. The dataset covers 7,500 kilometers of roads from Los Angeles, Chicago, New York City, and Boston. Of the four cities, Los Angeles had the highest crash density, followed by New York City, Chicago, and Boston.


Top Deep Learning Algorithms

#artificialintelligence

There are a lot of deep learning algorithms in use, but what are some of the best? I have already discussed the most basic neural network, the perceptron, here; if you want to know how a neural network functions in its barest form, please check that out. Deep learning algorithms use ANNs (short for artificial neural networks) to mimic the functioning of our brain. Not to get philosophical or historical, but our brain is the most complicated piece of machinery we know.
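As a rough illustration of the perceptron mentioned above (my own sketch, not code from the original post; the AND task and learning rate are invented for the example), here is a single artificial neuron trained with the classic perceptron update rule:

    # Minimal perceptron sketch (illustrative only): one artificial neuron
    # learning the logical AND function.
    inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
    targets = [0, 0, 0, 1]

    weights = [0.0, 0.0]
    bias = 0.0
    learning_rate = 0.1

    for _ in range(20):  # a few passes over the data suffice for AND
        for (x1, x2), target in zip(inputs, targets):
            # Weighted sum of inputs plus bias, passed through a step activation.
            activation = weights[0] * x1 + weights[1] * x2 + bias
            prediction = 1 if activation > 0 else 0
            # Perceptron learning rule: nudge parameters toward the correct answer.
            error = target - prediction
            weights[0] += learning_rate * error * x1
            weights[1] += learning_rate * error * x2
            bias += learning_rate * error

    print("learned weights:", weights, "bias:", bias)

After only a handful of passes over the four input pairs, the weights and bias settle on a line that separates the single positive example from the three negative ones.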


How AI is reinventing what computers are

#artificialintelligence

Every year, right on cue, Apple, Samsung, Google, and others drop their latest releases. These fixtures in the consumer tech calendar no longer inspire the surprise and wonder of those heady early days. Google's latest offering, the Pixel 6, is the first phone to have a separate chip dedicated to AI that sits alongside its standard processor. And the chip that runs the iPhone has for the last couple of years contained what Apple calls a "neural engine," also dedicated to AI. Both chips are better suited to the types of computations involved in training and running machine-learning models on our devices, such as the AI that powers your camera.


MIT CSAIL, TU Wien, and IST Researchers Introduce Deep Learning Models That Require Fewer Neurons

#artificialintelligence

Today's artificial intelligence technology is intended to mimic nature, replicating in a computer the decision-making abilities that people develop naturally. Artificial neural networks, like living brains, are made up of many individual cells. When a cell becomes active, it transmits a signal to the other cells in its vicinity. The signals arriving at the next cell are added together to determine whether it, too, will become active. The system's behavior is determined by the way one cell influences the activity of the next.
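As a rough sketch of the summation-and-activation behavior described above (my own illustration, not the researchers' code; the signal values and weights are invented), a single artificial cell can be modeled as a weighted sum of incoming signals passed through an activation function:

    import math

    # Illustrative model of one artificial "cell": incoming signals are weighted,
    # summed, and passed through an activation to decide how active the cell becomes.
    incoming_signals = [0.8, 0.1, 0.5]      # activity of neighboring cells (made up)
    connection_weights = [0.9, -0.4, 0.3]   # how strongly each neighbor influences this cell
    bias = -0.2

    weighted_sum = sum(s * w for s, w in zip(incoming_signals, connection_weights)) + bias
    activity = 1.0 / (1.0 + math.exp(-weighted_sum))  # sigmoid squashes the sum into (0, 1)

    print(f"cell activity: {activity:.3f}")

The connection weights are what training adjusts: they determine how strongly each cell influences the activity of the next, which is exactly where the researchers aim to get by with fewer neurons.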


Deeper Is Not Necessarily Better: Princeton & Intel's 12-Layer Parallel Networks Achieve…

#artificialintelligence

In the new paper Non-deep Networks, a research team from Princeton University and Intel Labs argues it is possible to achieve high performance with “non-deep” neural networks, presenting ParNet (Parallel Networks), a novel 12-layer architecture that achieves performance competitive with its state-of-the-art deep counterparts.
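As a loose sketch of the parallel-branch idea (this is not the authors' ParNet implementation; the branch structure, channel counts, and fusion step are invented for illustration), a "non-deep" network keeps its depth small by running several shallow branches side by side and fusing their outputs:

    import torch
    import torch.nn as nn

    # Sketch of "parallel, not deep": several shallow branches process the same
    # input side by side, and their outputs are fused before a small classifier head.
    class ParallelBranches(nn.Module):
        def __init__(self, in_channels=3, branch_channels=32, num_branches=3, num_classes=10):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Sequential(
                    nn.Conv2d(in_channels, branch_channels, kernel_size=3, padding=1),
                    nn.BatchNorm2d(branch_channels),
                    nn.ReLU(inplace=True),
                )
                for _ in range(num_branches)
            ])
            self.head = nn.Linear(branch_channels * num_branches, num_classes)

        def forward(self, x):
            # Each shallow branch sees the same input; the stack stays shallow.
            feats = [branch(x) for branch in self.branches]
            fused = torch.cat(feats, dim=1)   # fuse branch outputs along channels
            pooled = fused.mean(dim=(2, 3))   # global average pooling
            return self.head(pooled)

    # Example: a batch of four 32x32 RGB images.
    logits = ParallelBranches()(torch.randn(4, 3, 32, 32))
    print(logits.shape)  # torch.Size([4, 10])

The point of the sketch is only that capacity comes from width, that is, branches running in parallel, rather than from stacking many layers in sequence.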


The Uselessness of Useful Knowledge

#artificialintelligence

Is artificial intelligence the new alchemy? That is, are the powerful algorithms that control so much of our lives -- from internet searches to social media feeds -- the modern equivalent of turning lead into gold? Moreover: would that be such a bad thing? According to the prominent AI researcher Ali Rahimi and others, today's fashionable neural networks and deep learning techniques are based on a collection of tricks, topped with a good dash of optimism, rather than on systematic analysis. Modern engineers, the thinking goes, assemble their code with the same wishful thinking and misunderstanding that the ancient alchemists had when mixing their magic potions.


Scientists Are Using Artificial Intelligence To Address Colon Cancer

#artificialintelligence

Recently, a team led by clinicians at Beth Israel Deaconess Medical Center and Harvard Medical School demonstrated that an artificial intelligence (AI)-based computer vision system can enhance the accuracy of colon cancer screening. Tyler M. Berzin, a gastroenterologist at Beth Israel Deaconess Medical Center, discusses how AI-based computer vision algorithms can assist physicians. Let us examine how this is accomplished. According to Berzin, this is a real-time application of artificial intelligence, which is rather unique: in clinical medicine, most examples of AI applications occur after the initial patient engagement, for example during the subsequent evaluation of an X-ray.


Deep Learning -- Podcast Posts -- VR/AR Association - The VRARA

#artificialintelligence

Brent Davis, CEO of NomadXR, shares his thoughts on how we can create deep connections using artificial intelligence, deep learning, and virtual twins within the coming Metaverse.


The Accident That Led to Machines That Can See - Issue 107: The Edge

Nautilus

For something so effortless and automatic, vision is a tough job for the brain. It's remarkable that we can transform electromagnetic radiation--light--into a meaningful world of objects and scenes. After all, light focused into an eye is merely a stream of photons with different wave properties, projecting continuously onto our retinas, a layer of cells at the back of our eyes. Before it's transduced by our eyes, light has no brightness or color, which are properties of animal perception. Our retinas transform this energy into electrical impulses that propagate within our nervous system. Somehow this comes out as a world: skies, children, art, auroras, and occasionally ghosts and UFOs.