Council Post: How The Future Of Deep Learning Could Resemble The Human Brain

#artificialintelligence

Dr. Eli David is a leading AI expert specializing in deep learning and evolutionary computation. He is the co-founder of DeepCube. Over the last several years, deep learning -- a subset of machine learning in which artificial neural networks imitate the inner workings of the human brain to process data, create patterns and inform decision-making -- has been responsible for significant advances in the field of artificial intelligence. Building on what is possible with the human brain, deep learning is now capable of unsupervised learning from data that is unstructured or unlabeled. This data, often referred to as big data, can be drawn from sources such as social media, internet history and e-commerce platforms.


SQuantizer: Simultaneous Learning for Both Sparse and Low-precision Neural Networks

arXiv.org Artificial Intelligence

Deep neural networks have achieved state-of-the-art accuracies in a wide range of computer vision, speech recognition, and machine translation tasks. However, the limits of memory bandwidth and computational power constrain the range of devices capable of deploying these modern networks. To address this problem, we propose SQuantizer, a new training method that jointly optimizes for both sparse and low-precision neural networks while maintaining high accuracy and providing a high compression rate. This approach brings sparsification and low-bit quantization into a single training pass, employing these techniques in an order demonstrated to be optimal. Our method achieves state-of-the-art accuracies using 4-bit and 2-bit precision for ResNet18, MobileNet-v2 and ResNet50, even with a high degree of sparsity. Compression rates of 18x for ResNet18, 17x for ResNet50 and 9x for MobileNet-v2 are obtained when SQuantizing both weights and activations, within 1% and 2% loss in accuracy for the ResNets and MobileNet-v2, respectively. An extension of these techniques to object detection also demonstrates high accuracy on YOLO-v2. Additionally, our method allows for fast single-pass training, which is important for rapid prototyping and neural architecture search. Finally, extensive results from this simultaneous training approach allow us to draw useful insights into the relative merits of sparsity and quantization.
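The abstract describes the pipeline but includes no code. The sketch below is only an illustration of the general idea of pruning weights and then quantizing the survivors in a single step, assuming magnitude pruning and symmetric uniform quantization; the function name and parameters are hypothetical, and this is not the authors' implementation, which also quantizes activations and operates inside the training loop.

```python
import numpy as np

def sparsify_then_quantize(weights, sparsity=0.5, bits=4):
    """Illustrative sketch: magnitude pruning followed by symmetric
    uniform quantization of the surviving weights."""
    w = weights.astype(np.float32).copy()

    # 1. Sparsify: zero out the smallest-magnitude fraction of weights.
    threshold = np.quantile(np.abs(w), sparsity)
    mask = np.abs(w) >= threshold
    w *= mask

    # 2. Quantize the survivors to a small number of signed levels,
    #    e.g. 4 bits -> 2**(4-1) - 1 = 7 levels per sign.
    levels = 2 ** (bits - 1) - 1
    max_abs = np.max(np.abs(w))
    scale = max_abs / levels if max_abs > 0 else 1.0
    w_q = np.round(w / scale) * scale

    return w_q * mask  # pruned positions stay exactly zero

# Example: compress a random weight matrix to ~50% sparsity at 4 bits.
w = np.random.randn(256, 256)
w_compressed = sparsify_then_quantize(w, sparsity=0.5, bits=4)
```

In a real training loop, a step like this would typically be applied per layer on each forward pass, with gradients flowing back to dense full-precision weights (for example via a straight-through estimator), so that sparsity and low precision are learned jointly rather than imposed after training.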


Council Post: How Can Businesses Take Deep Learning Out Of The Lab And Onto Intelligent Edge Devices?

#artificialintelligence

Dr. Eli David is a leading AI expert specializing in deep learning and evolutionary computation. He is the co-founder of DeepCube. Over the last several years, deep learning has proved to be the key driver of AI advancement. Drawing on how the human brain operates, deep learning has advanced AI applications from computer vision to speech recognition to text and data analysis. Deep learning models are trained in research labs on large amounts of training data to demonstrate how the technology could perform in real-world deployments.


Global Big Data Conference

#artificialintelligence

Deep learning startup Deci today announced that it raised $9.1 million in a seed funding round led by Israel-based Emerge. According to a spokesperson, the company plans to devote the proceeds to customer acquisition efforts as it expands its Tel Aviv workforce. Machine learning deployments have historically been constrained by the size and speed of algorithms and the need for costly hardware. In fact, a report from MIT found that machine learning might be approaching computational limits. A separate Synced study estimated that the University of Washington's Grover fake news detection model cost $25,000 to train in about two weeks.


The curious case of developmental BERTology: On sparsity, transfer learning, generalization and the brain

arXiv.org Machine Learning

In this essay, we explore a point of intersection between deep learning and neuroscience through the lens of large language models, transfer learning and network compression. Just as perceptual and cognitive neurophysiology has inspired effective deep neural network architectures, which in turn serve as useful models for understanding the brain, here we explore how biological neural development might inspire efficient and robust optimization procedures, which in turn serve as useful models for the maturation and aging of the brain. We hope it inspires the reader in one way or another, or at the very least kills some boredom during a global pandemic. Through the lens of large language models, we touch on the following questions: How do overparameterized deep neural nets generalize? How does transfer learning help generalization? Before we start, it is prudent to say a few words about the brain metaphor, to clarify this author's position on an issue that is often central to such debates. The confluence of deep learning and neuroscience arguably dates back to the conception of artificial neural nets, because artificial neurons abstract characteristic behaviors of biological ones (McCulloch and Pitts, 1943).
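To make the transfer-learning question above concrete, here is a minimal, hypothetical PyTorch sketch of the standard fine-tuning recipe the essay alludes to: a pretrained encoder is frozen and only a small task head is trained. The class and variable names are illustrative, and the stand-in encoder is not a real language model.

```python
import torch
import torch.nn as nn

class FineTuneClassifier(nn.Module):
    """Illustrative transfer learning: reuse a frozen pretrained encoder
    and train only a small task-specific head on top of it."""
    def __init__(self, pretrained_encoder, hidden_dim=768, num_classes=2):
        super().__init__()
        self.encoder = pretrained_encoder
        for p in self.encoder.parameters():
            p.requires_grad = False                     # freeze pretrained weights
        self.head = nn.Linear(hidden_dim, num_classes)  # trainable task head

    def forward(self, x):
        with torch.no_grad():
            features = self.encoder(x)                  # pretrained representations
        return self.head(features)

# Stand-in encoder; in practice this would be a pretrained language model.
encoder = nn.Sequential(nn.Linear(128, 768), nn.ReLU())
model = FineTuneClassifier(encoder)
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
```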