The Decline of Computers as a General Purpose Technology

Communications of the ACM

Perhaps in no other technology have there been so many decades of large year-over-year improvements as in computing. It is estimated that a third of all productivity increases in the U.S. since 1974 have come from information technology,a,4 making it one of the largest contributors to national prosperity. The rise of computers is due to technical successes, but also to the economic forces that financed them. Bresnahan and Trajtenberg3 coined the term general purpose technology (GPT) for products, like computers, that have broad technical applicability and where product improvement and market growth can fuel each other for many decades. But they also predicted that GPTs could run into challenges at the end of their life cycle: as progress slows, other technologies can displace the GPT in particular niches and undermine this economically reinforcing cycle. We are observing such a transition today: as improvements in central processing units (CPUs) slow, applications move to specialized processors, for example, graphics processing units (GPUs), which can do fewer things than traditional universal processors but perform those functions better. Many high-profile applications are already following this trend, including deep learning (a form of machine learning) and Bitcoin mining. With this background, we can now be more precise about our thesis: "The Decline of Computers as a General Purpose Technology." We do not mean that computers, taken together, will lose technical abilities and thus 'forget' how to do some calculations.


Nvidia Opens the Door to Deep Learning Workshops

#artificialintelligence

Good news for folks looking to learn about the latest AI development techniques: Nvidia is now allowing the general public to access the online workshops it provides through its Deep Learning Institute (DLI). The GPU giant announced today that selected workshops in the DLI catalog will be open to everybody. These workshops previously were available only to companies that wanted specialized training for their in-house developers, or to folks who had attended the company's GPU Technology Conferences. Two of the open courses will take place next month, including "Fundamentals of Accelerated Computing with CUDA Python," which explores developing parallel workloads with CUDA and NumPy and costs $500. There is also "Applications of AI for Predictive Maintenance," which explores technologies like XGBoost, LSTM, Keras, and TensorFlow, and costs $700.
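The CUDA Python workshop centers on data-parallel kernels, where every output element can be computed independently. As a rough illustration (not course material), the sketch below shows the classic introductory kernel, SAXPY, written with NumPy; the workshop itself teaches writing such kernels for the GPU with Numba's `@cuda.jit` decorator.

```python
import numpy as np

# Illustrative sketch only: the DLI course uses Numba to compile kernels
# for the GPU. The same element-wise operation is shown here with NumPy,
# which applies it across the whole array at once instead of in a Python loop.
def saxpy(a, x, y):
    # Single-precision a*x + y: the canonical first parallel workload,
    # because every output element is independent of the others.
    return a * x + y

x = np.arange(4, dtype=np.float32)
y = np.ones(4, dtype=np.float32)
print(saxpy(2.0, x, y))  # [1. 3. 5. 7.]
```

On a GPU, each element of the result would be handled by a separate thread; NumPy's vectorization mimics that independence on the CPU.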


This 'Quantum Brain' Would Mimic Our Own to Speed Up AI

#artificialintelligence

Yet according to a new paper, it may be the secret sauce for an entirely new kind of computer--one that combines quantum mechanics with the brain's inner workings. The result isn't just a computer with the ability to learn. The mechanisms that allow it to learn are directly embedded in its hardware structure--no extra AI software required. The computer model also simulates how our brains process information, using the language of neuron activity and synapses, rather than the churning silicon-based CPUs in our current laptops. The main trick relies on the quantum spin properties of cobalt atoms.


In a sign of AI's fundamental impact on computing, legendary chip designer Keller joins startup Tenstorrent

ZDNet

In a sign of the profound changes being wrought in computing by artificial intelligence, Toronto-based AI chip startup Tenstorrent on Wednesday announced it has hired legendary chip designer Jim Keller to be its chief technology officer. Keller most recently served at Intel and before that re-invented the microprocessor architecture at Advanced Micro Devices. Keller said in prepared remarks, "Software 2.0 is the largest opportunity for computing innovation in a long time. Victory requires a comprehensive re-thinking of compute and low level software." Added Keller, "Tenstorrent has made impressive progress, and with the most promising architecture out there, we are poised to become a next gen computing giant."


NVIDIA eBook: Guide to Deploying AI

#artificialintelligence

Looking to get started in AI? To realize its full potential, it's essential to know where and how to implement deep learning in workflows, as well as have access to the latest techniques, software, and hardware that can speed up training and deployment. Whether you're building code, experimenting with projects, or rolling out deployments across your organization, we have the resources you need to get started in AI.


A New Trend Of Training GANs With Less Data: NVIDIA Joins The Gang

#artificialintelligence

Following MIT, researchers at NVIDIA have recently developed a new augmentation method for training Generative Adversarial Networks (GANs) with a limited amount of data. The approach is an adaptive discriminator augmentation mechanism that significantly stabilizes training in limited-data regimes. Machine learning models are data-hungry: in the past few years, models fed with vast silos of data have produced outstanding predictive results. At the same time, Generative Adversarial Networks have been used successfully for applications including high-fidelity natural image synthesis, data augmentation, and improved image compression. From generating realistic expressions to traversing deep space, and from bridging the gap between humans and machines to introducing new and unique art forms, GANs have it covered. Although deep neural network models, including GANs, have shown impressive results, collecting the large, task-specific datasets they require remains a challenge.
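The core idea of adaptive discriminator augmentation is to apply random augmentations to every image the discriminator sees (both real and generated), each with a probability p that is tuned during training based on how much the discriminator is overfitting. A minimal sketch of that control loop, using a single horizontal-flip augmentation and an assumed scalar overfitting signal (NVIDIA's method uses a full augmentation pipeline and derives the signal from discriminator outputs):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(batch, p):
    # Flip each image (shape: N x H x W) horizontally with probability p.
    # In the real method, a whole pipeline of augmentations is applied,
    # each gated by the same probability p.
    flip = rng.random(len(batch)) < p
    batch = batch.copy()
    batch[flip] = batch[flip, :, ::-1]
    return batch

def adapt_p(p, overfit_signal, target=0.6, step=0.01):
    # Raise p when the overfitting signal exceeds the target, lower it
    # otherwise, and keep p within [0, 1]. Target and step are assumed
    # illustrative values, not the paper's hyperparameters.
    p += step if overfit_signal > target else -step
    return float(np.clip(p, 0.0, 1.0))
```

Because p rises only when overfitting is detected, the augmentations never dominate training when data is plentiful, which is what makes the mechanism adaptive rather than fixed.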


MLCommons Launches

#artificialintelligence

SAN FRANCISCO - December 3, 2020 -- Today, MLCommons, an open engineering consortium, launches its industry-academic partnership to accelerate machine learning innovation and broaden access to this critical technology for the public good. The non-profit organization, initially formed as MLPerf, now boasts a founding board that includes representatives from Alibaba, Facebook AI, Google, Intel, and NVIDIA, as well as Professor Vijay Janapa Reddi of Harvard University, and a broad range of more than 50 founding members. The founding membership includes over 15 startups and small companies that focus on semiconductors, systems, and software from across the globe, as well as researchers from universities such as U.C. Berkeley, Stanford, and the University of Toronto. MLCommons will advance development of, and access to, the latest AI and machine learning datasets and models, best practices, benchmarks, and metrics. The intent is to enable access to machine learning solutions such as computer vision, natural language processing, and speech recognition for as many people as possible, as quickly as possible.


Nvidia introduces AI for generating video conference talking heads from 2D images

#artificialintelligence

Nvidia AI researchers have introduced AI to generate talking heads for video conferences from a single 2D image. The team says the model is capable of a wide range of manipulation, from rotating and moving a person's head to motion transfer and video reconstruction. The AI treats the first frame of a video as a 2D photo and then uses an unsupervised learning method to extract 3D keypoints from the video. In addition to outperforming other approaches in tests using benchmark datasets, the AI achieves H.264 quality video using one-tenth of the bandwidth that was previously required. Nvidia research scientists Ting-Chun Wang, Arun Mallya, and Ming-Yu Liu published a paper about the model Monday.
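The bandwidth saving comes from what is transmitted: after sending one reference image, the sender streams only a small set of keypoint coordinates per frame, and the receiver re-synthesizes the face from them. A back-of-the-envelope sketch with assumed figures (keypoint count, coordinate size, and call bitrate are illustrative, not from the paper):

```python
# All constants below are assumptions for illustration only.
KEYPOINTS = 20           # assumed number of 3D keypoints per frame
BYTES_PER_KEYPOINT = 12  # 3 float32 coordinates (x, y, z)
FPS = 30                 # frames per second

# Bits per second needed to stream just the keypoints.
keypoint_bps = KEYPOINTS * BYTES_PER_KEYPOINT * 8 * FPS
print(f"keypoint stream: {keypoint_bps / 1000:.1f} kbit/s")

# Compare against an assumed typical video-call bitrate.
h264_bps = 1_000_000
print(f"roughly {h264_bps / keypoint_bps:.0f}x less data than the video stream")
```

Even with generous assumptions, a keypoint stream is orders of magnitude smaller than encoded video, which is why reconstruction-based approaches can match H.264 quality at a fraction of the bandwidth.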


EETimes - Nvidia AI Synthesizes Images with Very Little Training Data

#artificialintelligence

At NeurIPS, Nvidia AI presented a new method for creating high-quality synthetic images using a generative adversarial network (GAN) trained on 1,500 source images. The neural network, StyleGAN2, usually requires a training dataset of tens or hundreds of thousands of images to produce high-quality synthetic pictures. Nvidia's AI researchers used a dataset of 1,500 images of faces from the Metropolitan Museum of Art to create new images that emulate artworks in the Museum's collection. While this breakthrough could be used to recreate the style of rare works and create new art inspired by historical portraits, there are wider implications for medical imaging AI. A key problem facing medical AI models is the lack of available training data due to privacy concerns, particularly for rare diseases, where 100,000 images of a given condition might not even exist.


SambaNova claims AI performance rivaling Nvidia, unveils as-a-service offering

ZDNet

SambaNova says just one quarter of a rack's worth of its DataScale computer can replace 64 separate Nvidia DGX-2 machines taking up multiple racks of equipment, when crunching deep learning tasks such as natural language processing on neural networks with billions of parameters, such as Google's BERT-Large. The still very young market for artificial intelligence computers is spawning interesting business models. On Wednesday, SambaNova Systems, the Palo Alto-based startup that has received almost half a billion dollars in venture capital money, announced general availability of its dedicated AI computer, the DataScale, along with an as-a-service offering in which the machine is placed in your data center and you rent its capacity for $10,000 a month. "What this is, is a way for people to gain quick and easy access at an entry price of $10,000 per month, and consume DataScale product as a service," said Marshall Choy, Vice President of product at SambaNova, in an interview with ZDNet via video. "I'll roll a rack, or many racks, into their data center, I'll own and manage and support the hardware for them, so they truly can just consume this product as a service offering."