compression


One Man Cannot Summon The Future…

#artificialintelligence

JA: In terms of the VMS market itself – it seems the leading players are more clearly defined, and some players are fading away. Would you agree with that? PR: Up to a certain point the basic video recording functionality is commoditised; what's not commoditised is the reliability with which that functionality can be carried out. Regardless, there will always be at least three competitors in any market. So, yes, the market is fragmented, but it is becoming less fragmented. JA: What, in your opinion, are the major VMS trends of the moment?


Intel 10th Gen Review: The Core i9-10900K is indeed the world's fastest gaming CPU

PCWorld

Intel's 10th-gen Core i9-10900K is, without a doubt, exactly as Intel has described it: "the world's fastest gaming CPU." Intel's problem has been its weaknesses outside of gaming and its overall performance value compared with AMD's Ryzen 3000 chips. With the Core i9-10900K, Intel doesn't appear to be eliminating that gap, but it could get close enough that you might not care. Despite its 10th-gen naming, Intel's newest desktop chips continue to be built on the company's aging 14nm process, first used in the 5th-gen Core i7-5775C desktop chip from 2015. Many tricks, optimizations, and much binning later, we have the flagship consumer Core i9-10900K, announced April 30.


CoCoPIE: A software solution for putting real artificial intelligence in smaller spaces

#artificialintelligence

Bit by bit, byte by byte, artificial intelligence has been working its way into public consciousness and everyday computer use. Artificial intelligence and deep learning are being woven into more and more aspects of end-user computing, and smartphones and other mobile devices use AI as well. Until now, that artificial-intelligence work has been done in the cloud, but a new approach to software design aims to arm mobile devices with real artificial-intelligence capability. "A mobile device is very resource-constrained," explained William & Mary computer scientist Bin Ren.


A Foolproof Way to Shrink Deep Learning Models

#artificialintelligence

Massachusetts Institute of Technology (MIT) researchers have proposed a technique for compressing deep learning models that they say is simpler and produces more accurate results than state-of-the-art methods: retrain a smaller model, whose weakest connections have been "pruned," at its faster, initial rate of learning. The technique's groundwork was partly laid by the AutoML for Model Compression (AMC) algorithm from MIT's Song Han, which automatically removes redundant neurons and connections and retrains the model to reinstate its initial accuracy. MIT's Jonathan Frankle and Michael Carbin determined that the model could simply be rewound to its early training rate without tinkering with any parameters. Although greater shrinkage is accompanied by reduced model accuracy, when Frankle and Carbin compared their method with AMC and with Frankle's earlier work on weight-rewinding techniques, they found that it performed better regardless of the amount of compression.
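For readers who want to see the mechanics, here is a minimal PyTorch sketch of the two ingredients described above: magnitude pruning of the weakest connections, followed by retraining with the learning-rate schedule rewound to its early, fast phase. The model, milestones, and pruning fraction are arbitrary placeholders, not the authors' code.

```python
# Minimal sketch (not the MIT authors' code): magnitude pruning plus
# learning-rate rewinding. Model, milestones, and pruning fraction are
# illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

def prune_by_magnitude(net, amount=0.2):
    """Remove the `amount` fraction of smallest-magnitude weights in each Linear layer."""
    for module in net.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=amount)

def make_optimizer_and_schedule(net, initial_lr=0.1):
    """Rebuild the optimizer and LR schedule from scratch, i.e. rewind to the early, fast rate."""
    optimizer = torch.optim.SGD(net.parameters(), lr=initial_lr, momentum=0.9)
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 60], gamma=0.1)
    return optimizer, scheduler

# After the dense model has been trained, prune it...
prune_by_magnitude(model, amount=0.2)
# ...and retrain with a freshly rewound schedule instead of a small fine-tuning rate.
optimizer, scheduler = make_optimizer_and_schedule(model)
```

The only difference from standard fine-tuning here is the schedule: retraining repeats the original early learning-rate phase on the pruned network rather than continuing at the small final rate.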


Researchers unveil a pruning algorithm to make artificial intelligence applications run faster

#artificialintelligence

As more artificial intelligence applications move to smartphones, deep learning models are getting smaller to allow apps to run faster and save battery power. Now, MIT researchers have a new and better way to compress models. It's so simple that they unveiled it in a tweet last month: Train the model, prune its weakest connections, retrain the model at its fast, early training rate, and repeat, until the model is as tiny as you want. "That's it," says Alex Renda, a Ph.D. student at MIT. "The standard things people do to prune their models are crazy complicated." Renda discussed the technique when the International Conference on Learning Representations (ICLR) convened remotely this month.
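As a schematic illustration of that recipe, the loop below strings the steps together; `train_full_schedule` and `prune_smallest_weights` are hypothetical callables supplied by the caller, standing in for a full training run and magnitude pruning respectively.

```python
# Schematic outline of the train / prune / rewind-and-retrain loop described
# above. The callables are hypothetical placeholders, not the authors' code.
from typing import Callable

def iterative_prune(model,
                    train_full_schedule: Callable[[object], None],
                    prune_smallest_weights: Callable[[object, float], None],
                    target_sparsity: float,
                    prune_fraction: float = 0.2):
    """Repeat prune-then-retrain until the requested fraction of weights is removed."""
    train_full_schedule(model)                         # 1. train the dense model to completion
    sparsity = 0.0
    while sparsity < target_sparsity:
        prune_smallest_weights(model, prune_fraction)  # 2. drop the weakest connections
        train_full_schedule(model)                     # 3. retrain, rewinding the LR schedule
                                                       #    to its early, fast phase
        sparsity = 1.0 - (1.0 - sparsity) * (1.0 - prune_fraction)
    return model
```

With, say, target_sparsity=0.8 and prune_fraction=0.2, the loop runs about eight prune-retrain rounds before stopping.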


Compress Data And Win Hutter Prize Worth Half A Million Euros

#artificialintelligence

"Entities should not be multiplied unnecessarily." To incentivize the scientific community to focus on AGI, Marcus Hutter, one of the most prominent researchers of our generation, has increased his decade-old prize tenfold, to half a million euros (500,000 €). The Hutter Prize, named after Marcus Hutter, is awarded to those who can set new benchmarks for lossless data compression; the data in question is a dataset based on Wikipedia. Marcus Hutter, who now works at DeepMind as a senior research scientist, is famous for his work on reinforcement learning alongside Juergen Schmidhuber. In 2000, Dr Hutter proposed AIXI, a reinforcement learning agent that works in line with Occam's razor and sequential decision theory.


Feedback Recurrent Autoencoder for Video Compression

arXiv.org Machine Learning

Recent advances in deep generative modeling have enabled efficient modeling of high-dimensional data distributions and opened up a new horizon for solving data compression problems. Specifically, autoencoder-based learned image and video compression solutions are emerging as strong competitors to traditional approaches. In this work, we propose a new network architecture, based on common and well-studied components, for learned video compression operating in low-latency mode. Our method yields state-of-the-art MS-SSIM/rate performance on the high-resolution UVG dataset, among both learned video compression approaches and classical video compression methods (H.265 and H.264), in the rate range of interest for streaming applications. Additionally, we provide an analysis of existing approaches through the lens of their underlying probabilistic graphical models. Finally, we point out issues with temporal consistency and color shift observed in empirical evaluation, and suggest directions to alleviate them.
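The abstract includes no code, but the general feedback-recurrent idea can be sketched as follows: the decoder keeps a recurrent state that is fed back to the encoder at the next time step, so the encoder only needs to code what the decoder does not already know. The toy PyTorch module below is an illustration of that feedback path under arbitrary layer sizes, with no quantization or entropy coding; it is not the paper's architecture.

```python
# Illustrative toy, not the paper's model: a recurrent autoencoder whose
# decoder state is fed back to the encoder at the next frame.
import torch
import torch.nn as nn

class FeedbackRecurrentAE(nn.Module):
    def __init__(self, channels=3, hidden=64, latent=8):
        super().__init__()
        # Encoder sees the current frame concatenated with the fed-back decoder state.
        self.encoder = nn.Conv2d(channels + hidden, latent, kernel_size=3, padding=1)
        self.decoder = nn.Conv2d(latent + hidden, hidden, kernel_size=3, padding=1)
        self.to_frame = nn.Conv2d(hidden, channels, kernel_size=3, padding=1)

    def forward(self, frames):
        # frames: (time, batch, channels, H, W)
        t, b, c, h, w = frames.shape
        state = torch.zeros(b, self.decoder.out_channels, h, w, device=frames.device)
        recons = []
        for x in frames:
            z = self.encoder(torch.cat([x, state], dim=1))                   # latent that would be coded
            state = torch.tanh(self.decoder(torch.cat([z, state], dim=1)))   # recurrent decoder state
            recons.append(self.to_frame(state))                              # reconstructed frame
        return torch.stack(recons)

# Example usage with a random 5-frame clip.
model = FeedbackRecurrentAE()
clip = torch.rand(5, 1, 3, 32, 32)
out = model(clip)   # (5, 1, 3, 32, 32)
```

A real learned codec would additionally quantize and entropy-code the latent z; the sketch shows only the feedback connection.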


GeneCAI: Genetic Evolution for Acquiring Compact AI

arXiv.org Machine Learning

In the contemporary big data realm, Deep Neural Networks (DNNs) are evolving towards more complex architectures to achieve higher inference accuracy. Model compression techniques can be leveraged to efficiently deploy such compute-intensive architectures on resource-limited mobile devices. Such methods comprise various hyper-parameters that require per-layer customization to ensure high accuracy. Choosing such hyper-parameters is cumbersome, as the pertinent search space grows exponentially with the number of model layers. This paper introduces GeneCAI, a novel optimization method that automatically learns how to tune per-layer compression hyper-parameters. We devise a bijective translation scheme that encodes compressed DNNs into the genotype space. The optimality of each genotype is measured using a multi-objective score based on accuracy and the number of floating-point operations. We develop customized genetic operations to iteratively evolve the non-dominated solutions towards the optimal Pareto front, thus capturing the optimal trade-off between model accuracy and complexity. The GeneCAI optimization method is highly scalable and can achieve a near-linear performance boost on distributed multi-GPU platforms. Our extensive evaluations demonstrate that GeneCAI outperforms existing rule-based and reinforcement learning methods in DNN compression by finding models that lie on a better accuracy-complexity Pareto curve.
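A schematic sketch of this kind of genotype search is given below, under simplifying assumptions: the genotype is a plain list of per-layer compression ratios, and a single scalarized accuracy-minus-FLOPs score stands in for the paper's Pareto-front selection. `evaluate_accuracy` and `count_flops` are user-supplied placeholders, not GeneCAI's implementation.

```python
# Schematic genetic search over per-layer compression ratios. Scoring and
# evaluation functions are hypothetical placeholders, not the authors' code.
import random
from typing import Callable, List

def evolve(num_layers: int,
           evaluate_accuracy: Callable[[List[float]], float],
           count_flops: Callable[[List[float]], float],
           population_size: int = 20,
           generations: int = 30,
           flops_weight: float = 1e-9,
           mutation_std: float = 0.05) -> List[float]:
    """Return the per-layer compression ratios with the best accuracy/FLOPs score."""
    def score(genotype):
        return evaluate_accuracy(genotype) - flops_weight * count_flops(genotype)

    # Random initial population of per-layer ratios.
    population = [[random.uniform(0.1, 0.9) for _ in range(num_layers)]
                  for _ in range(population_size)]

    for _ in range(generations):
        population.sort(key=score, reverse=True)
        parents = population[: population_size // 2]           # keep the fitter half
        children = []
        while len(parents) + len(children) < population_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, num_layers)               # one-point crossover
            child = a[:cut] + b[cut:]
            child = [min(0.95, max(0.05, g + random.gauss(0, mutation_std)))
                     for g in child]                            # Gaussian mutation, clipped
            children.append(child)
        population = parents + children

    return max(population, key=score)
```

The scalarized score is a deliberate simplification; the paper instead maintains a set of non-dominated solutions and evolves them towards the Pareto front rather than collapsing accuracy and FLOPs into one number.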