"Many researchers … speculate that the information-processing abilities of biological neural systems must follow from highly parallel processes operating on representations that are distributed over many neurons. [Artificial neural networks] capture this kind of highly parallel computation based on distributed representations"
– from Machine Learning (Section 4.1.1; page 82) by Tom M. Mitchell, McGraw Hill Companies, Inc. (1997).
The challenge of how best to represent signals is at the core of a host of science and engineering problems. Traditionally, discrete representations are used when modelling signals such as images and video, audio waveforms, or 3D shapes represented as point clouds. In a new paper, Stanford University researchers propose that implicit neural representations offer a number of benefits over conventional continuous and discrete representations and could be used to address many of these problems. The researchers introduce sinusoidal representation networks (SIRENs), which leverage periodic activation functions for implicit neural representations, and demonstrate their suitability for representing complex natural signals and their derivatives. The approach can also be used to solve more general boundary value problems such as the Poisson, Helmholtz, or wave equations.
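At its core, a SIREN layer is simply an affine map followed by a sine nonlinearity. The sketch below is a minimal NumPy illustration, not the researchers' implementation: the toy dimensions and batch size are invented, and it assumes the paper's reported choices of a frequency factor omega_0 = 30 and uniform first-layer initialization in [-1/n, 1/n].

```python
import numpy as np

def siren_layer(x, W, b, omega0=30.0):
    """One SIREN-style layer: sin(omega0 * (x @ W + b))."""
    return np.sin(omega0 * (x @ W + b))

rng = np.random.default_rng(0)
in_dim, hidden = 2, 16  # toy sizes for illustration

# First-layer init uniform in [-1/in_dim, 1/in_dim], as described in the paper.
W1 = rng.uniform(-1.0 / in_dim, 1.0 / in_dim, size=(in_dim, hidden))
b1 = np.zeros(hidden)

# An implicit representation maps coordinates (e.g. pixel locations in [-1, 1])
# to signal values; here we only evaluate the first layer's features.
coords = rng.uniform(-1, 1, size=(8, in_dim))
features = siren_layer(coords, W1, b1)
print(features.shape)  # (8, 16)
```

Because the activation is a sine, every derivative of the layer is itself a shifted sine, which is what makes these networks well suited to representing a signal's derivatives too.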
Yesterday, AIM published an article on how difficult it is for small labs and individual researchers to persevere in the high-compute, high-cost industry of deep learning. Today, US policymakers introduced a new bill aimed at making deep learning affordable for all. The National AI Research Resource Task Force Act was introduced in the House by Representative Anna G. Eshoo (D-CA) and her colleagues. The bill was met with unanimous support from top universities and companies engaged in artificial intelligence (AI) research. Some of the well-known supporters include Stanford University, Princeton University, UCLA, Carnegie Mellon University, Johns Hopkins University, OpenAI, Mozilla, Google, Amazon Web Services, Microsoft, IBM and NVIDIA, amongst others.
We ask whether recent progress on the ImageNet classification benchmark continues to represent meaningful generalization, or whether the community has started to overfit to the idiosyncrasies of its labeling procedure. We therefore develop a significantly more robust procedure for collecting human annotations of the ImageNet validation set. Using these new labels, we reassess the accuracy of recently proposed ImageNet classifiers, and find their gains to be substantially smaller than those reported on the original labels. Furthermore, we find the original ImageNet labels to no longer be the best predictors of this independently collected set, indicating that their usefulness in evaluating vision models may be nearing an end. Nevertheless, we find our annotation procedure to have largely remedied the errors in the original labels, reinforcing ImageNet as a powerful benchmark for future research in visual recognition.
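The mechanics of such a reassessment can be sketched in miniature. In the toy example below, every label and prediction is invented for illustration: a "newer" model appears to gain over an "older" one under the original single labels, but the gain vanishes once each image is scored against a revised set of acceptable labels, mimicking how reassessed annotations can accept more than one class per image.

```python
# Invented toy data: 5 "images", two models' top-1 predictions.
original = ["cat", "dog", "dog", "bird", "dog"]          # one original label each
preds_a = ["cat", "dog", "wolf", "bird", "dog"]          # older model
preds_b = ["cat", "dog", "dog", "bird", "dog"]           # newer model
# Revised annotation: a SET of acceptable labels per image.
revised = [{"cat"}, {"dog"}, {"dog", "wolf"}, {"bird"}, {"dog"}]

def acc_original(preds):
    """Top-1 accuracy against the single original labels."""
    return sum(p == y for p, y in zip(preds, original)) / len(preds)

def acc_revised(preds):
    """Accuracy where a prediction counts if it is in the revised label set."""
    return sum(p in ys for p, ys in zip(preds, revised)) / len(preds)

print(acc_original(preds_a), acc_original(preds_b))  # 0.8 1.0 -> apparent gain
print(acc_revised(preds_a), acc_revised(preds_b))    # 1.0 1.0 -> gain vanishes
```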
Whether or not your organisation suffers a cyber attack has long been considered a case of 'when, not if', with cyber attacks having a huge impact on organisations. In 2018, 2.8 billion consumer data records were exposed in 342 breaches, ranging from credential stuffing to ransomware, at an estimated cost of more than $654bn. In 2019, this had increased to an exposure of 4.1 billion records. While the use of artificial intelligence (AI) and machine learning as a primary offensive tool in cyber attacks is not yet mainstream, its use and capabilities are growing and becoming more sophisticated. In time, cyber criminals will, inevitably, take advantage of AI, and such a move will increase threats to digital security and increase the volume and sophistication of cyber attacks.
In After Effects, we can get rid of unwanted objects in our video footage using Adobe Sensei AI. The Content-Aware Fill tool in After Effects simply asks us for the region and the duration for the software to "fill" the video frames to mask things we don't want to see. The tool then samples surrounding contextual pixels to generate pixel patterns in the video frames that "blend in" with the scene -- as if the object never existed. This AI is probably built using Generative Adversarial Networks (GANs) -- the same deep learning algorithms that can create incredibly convincing deepfakes. As a (very) concise overview -- a GAN is composed of two competing neural networks: a generator and a discriminator.
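To make that two-network tug-of-war concrete, here is a toy NumPy sketch of the GAN minimax objective. The linear "generator" and sigmoid "discriminator" are deliberately simplistic stand-ins (nothing here is trained, and the parameters and data are invented), but the two loss terms they compute are the ones the competing networks push in opposite directions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy stand-ins for the two networks (fixed parameters, not trained here):
G_w, D_w = 2.0, 1.5
generator = lambda z: G_w * z               # G(z): noise -> fake sample
discriminator = lambda x: sigmoid(D_w * x)  # D(x) in (0, 1): "is x real?"

real = rng.normal(loc=4.0, scale=1.0, size=256)  # invented "real" 1-D data
z = rng.normal(size=256)                         # noise fed to the generator
fake = generator(z)

# The discriminator maximizes E[log D(x_real)] + E[log(1 - D(G(z)))],
# i.e. it minimizes the negative of that sum:
d_loss = -(np.mean(np.log(discriminator(real))) +
           np.mean(np.log(1.0 - discriminator(fake))))
# The generator minimizes E[log(1 - D(G(z)))]: it wants D fooled by fakes.
g_loss = np.mean(np.log(1.0 - discriminator(fake)))
print(d_loss, g_loss)
```

Training alternates gradient steps on these two losses until the generator's samples are statistically indistinguishable from the real data, which is what lets tools like Content-Aware Fill synthesize plausible "fill" pixels.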
We live in a time of unparalleled use of machine learning (ML), but it relies on one dominant approach to training the models implemented in artificial neural networks (ANNs) -- "artificial" because they are not neuromorphic. Other training approaches, some of which are more biomimetic than others, are being developed, and the big question remains whether any of them will become commercially viable. ML training is frequently divided into two camps -- supervised and unsupervised. As it turns out, the divisions are not so clear-cut: the variety of approaches that exists defies neat pigeonholing.
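The textbook version of the supervised/unsupervised split can be shown on one toy dataset. In the sketch below (all data and dimensions invented for illustration), the supervised path computes class centroids from provided labels, while the unsupervised path runs a few hand-rolled k-means iterations and recovers essentially the same centroids without ever seeing a label.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two invented 2-D clusters of 20 points each.
a = rng.normal(loc=(0, 0), scale=0.3, size=(20, 2))
b = rng.normal(loc=(3, 3), scale=0.3, size=(20, 2))
X = np.vstack([a, b])
y = np.array([0] * 20 + [1] * 20)  # labels: used ONLY in the supervised case

# Supervised: centroids computed directly from the labels.
sup_centroids = np.array([X[y == k].mean(axis=0) for k in (0, 1)])

# Unsupervised: k-means never sees y. Deterministic init: one point from
# each end of the dataset, so each starting centroid lands in one cluster.
centroids = X[[0, len(X) - 1]].copy()
for _ in range(10):
    assign = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)
    centroids = np.array([X[assign == k].mean(axis=0) for k in (0, 1)])

print(sup_centroids.round(1))
print(centroids.round(1))  # same cluster centers, found without labels
```

Of course, as the article notes, real training regimes (semi-supervised, self-supervised, reinforcement-based) blur this clean two-camp picture considerably.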
The critical success factors behind a modern analytics landscape are not restricted to technical excellence; they come from answering the trickier "why" questions. This includes understanding the deep learning models behind business problems, trusting data model predictions, and explaining outcomes in simple yet comprehensive language. Of late, many data scientists are more interested in sharpening their skills and unearthing interesting nuggets buried in data than in engaging with this softer cause. Though this may sound natural given a narrow focus on data and the tools required to explore it, understanding the critical 'why' is essential to reaching more users across the value chain. To understand the nuances of a Data Strategy, let us consider the point of view of a consulting team assisting a large MNC in developing its data strategy.
With the rise of autonomous vehicles, smart video surveillance, facial detection and various people counting applications, fast and accurate object detection systems are rising in demand. These systems involve not only recognizing and classifying every object in an image, but localizing each one by drawing the appropriate bounding box around it. This makes object detection a significantly harder task than its traditional computer vision predecessor, image classification.
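Localization quality is usually scored with intersection-over-union (IoU): the overlap area of a predicted and a ground-truth box divided by the area of their union. A minimal sketch, with boxes given as invented (x1, y1, x2, y2) corner coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Corners of the intersection rectangle (empty if boxes don't overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175, about 0.143
```

Detectors typically count a prediction as correct only when its IoU with a ground-truth box clears a threshold (0.5 is a common choice), which is one concrete way localization makes detection harder than plain classification.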
Some time ago, the world's most affable and recognizable AI leader, Andrew Ng, launched a specialization called AI for Medicine through his MOOC institution, deeplearning.ai. I have always been a big fan of Andrew Ng; it was he who introduced me to the world of machine learning through his grainy YouTube videos of Stanford lectures back in 2012. I was very excited that Andrew Ng has finally turned his attention to the critical shortage of AI experts in the medical field. Truth be told, AI in the medical world has not seen as much progress as other domains like personalized advertisements, recommendations, autonomous driving, etc. There are a lot of complex issues, like data privacy and small sample sizes, which I would prefer to discuss in depth in another post.