

Modern Computing: A Short History, 1945-2022

#artificialintelligence

Inspired by A New History of Modern Computing by Thomas Haigh and Paul E. Ceruzzi, though the selection of key events in the journey from ENIAC to Tesla, from Data Processing to Big Data, is mine. Most home computer users in the 1970s were hobbyists who designed and assembled their own machines. The Apple I, devised in a bedroom by Steve Wozniak, Steve Jobs and Ron Wayne, was a basic circuit board to which enthusiasts would add display units and keyboards. It was the first computer made by Apple Computer Inc., which became one of the fastest-growing companies in history, launching a number of innovative and influential computer hardware and software products. April 1945: John von Neumann's "First Draft of a Report on the EDVAC," often called the founding document of modern computing, defines the stored-program concept. July 1945: Vannevar Bush publishes "As We May Think," in which he envisions the "Memex," a memory-extension device serving as a large personal repository of information that could be instantly retrieved through associative links.


Benchmark test of AI's performance, MLPerf, continues to gain adherents

ZDNet

On Wednesday, MLCommons, the industry consortium that oversees the popular MLPerf machine learning benchmark, released its latest benchmark report, showing new adherents including computer makers ASUS and H3C, and ZhejiangLab, a research institute formed by the Zhejiang provincial government in China, Zhejiang University, and the Chinese retail and AI giant Alibaba. Those parties join frequent submitters Nvidia, Qualcomm, Dell, and Microsoft. MLCommons executive director David Kanter lauded the record number of submissions, over 3,900. The results span a wide range of computing, from data centers down to what is known as "TinyML," running on devices such as embedded microchips that sip fractions of a watt of power. "This is a huge dynamic range," said Kanter.
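Official MLPerf results are produced with the consortium's LoadGen harness, but the core idea of an inference benchmark can be illustrated with a far simpler timing loop. Below is a minimal sketch in Python; the stand-in "model" (a NumPy matrix multiply), batch size, and iteration counts are illustrative assumptions, not MLPerf rules.

```python
import time
import numpy as np

# Stand-in "model": a single dense layer as a matrix multiply.
# (Illustrative only -- real MLPerf workloads are full networks
# such as ResNet-50 or BERT, driven by the LoadGen harness.)
rng = np.random.default_rng(0)
weights = rng.standard_normal((1024, 1024)).astype(np.float32)

def model(batch: np.ndarray) -> np.ndarray:
    return batch @ weights

batch = rng.standard_normal((32, 1024)).astype(np.float32)

# Warm up, then time repeated inference passes.
for _ in range(10):
    model(batch)

latencies = []
for _ in range(100):
    start = time.perf_counter()
    model(batch)
    latencies.append(time.perf_counter() - start)

lat_ms = np.array(latencies) * 1e3
print(f"median latency: {np.median(lat_ms):.3f} ms")
print(f"p99 latency:    {np.percentile(lat_ms, 99):.3f} ms")
print(f"throughput:     {32 / np.median(latencies):.0f} samples/sec")
```

Reporting percentiles rather than a single average is the key design point: latency-bound categories in benchmarks like MLPerf care about tail behavior, not just the mean.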


Nvidia makes a clean sweep of MLPerf predictions benchmark for artificial intelligence

#artificialintelligence

Graphics chip giant Nvidia mopped up the floor with its competition in a set of benchmark tests released Wednesday afternoon, demonstrating better performance on a host of artificial intelligence tasks. The benchmark, called MLPerf and administered by an industry consortium of the same name, showed Nvidia achieving better speed on a variety of tasks that use neural networks, from categorizing images to recommending which products a person might like. Predictions, also called inference, are the part of AI in which a trained neural network produces output on real data, as opposed to the training phase, when the neural network is first being refined. Benchmark results on training tasks were announced by MLPerf back in July. Many of the scores in the test results pertain to Nvidia's T4 chip, which has been on the market for some time, but even more impressive results were reported for its A100 chips unveiled in May.
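The training/inference distinction the article draws maps directly onto how deep learning frameworks are used. Here is a minimal PyTorch sketch of one training step followed by an inference pass; the tiny model and random data are made-up placeholders, not an MLPerf workload.

```python
import torch
import torch.nn as nn

# A tiny placeholder model; real MLPerf workloads use networks
# such as ResNet-50 or BERT.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 16)          # fake batch of inputs
y = torch.randint(0, 4, (8,))   # fake labels

# --- Training phase: the network is refined via gradient descent ---
model.train()
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()                 # compute gradients
optimizer.step()                # update weights

# --- Inference phase: the trained network just produces predictions ---
model.eval()
with torch.no_grad():           # no gradients needed, so this is cheaper
    predictions = model(x).argmax(dim=1)
print(predictions)
```

Because inference skips gradient computation and weight updates entirely, it stresses hardware differently from training, which is why MLPerf reports the two as separate benchmark rounds.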


Nvidia Dominates (Again) Latest MLPerf Inference Results

#artificialintelligence

One wonders where the rest of the AI accelerator crowd is: Cerebras (CS-1), AMD (Radeon), Groq (Tensor Streaming Processor), SambaNova (Reconfigurable Dataflow Unit), Google (TPU), et al. For the moment, Nvidia rules the MLPerf roost. It posted the top performances in the categories in which it participated, dominating the 'closed' datacenter and 'closed' edge categories. MLPerf's closed categories impose system and network restrictions intended to ensure apples-to-apples comparisons among participating systems; the 'open' versions of the categories permit customization.


Serkan Piantino's Company Makes AI for Everyone - NVIDIA Blog

#artificialintelligence

Spell, founded by Serkan Piantino, is making machine learning as easy as ABC. Piantino, CEO of the New York-based startup and a former director of engineering for Facebook AI Research, explained to AI Podcast host Noah Kravitz how he's bringing compute power to those who don't have easy access to GPU clusters. Spell provides access to hardware as well as a software interface that accelerates execution. Piantino reported that a wide variety of industries have shown interest in Spell, from healthcare to retail, as well as researchers and academia. "You know there's some upfront cost to running an experiment, but if you get that cost down low enough, it disappears mentally" -- Serkan Piantino [11:52] "Providing access to hardware and making things easier -- giving everybody the same sort of beautiful compute cluster that giant research organizations work on -- was a really powerful idea" -- Serkan Piantino [18:36]


Electrical Considerations for Artificial Intelligence Solutions - Emerj

#artificialintelligence

It's clear that there's a revolution in how artificial intelligence is done, with neural networks replacing the old-school systems of the '80s and '90s. Hardware is evolving along with it, and the way we power that hardware is going to have to change as well. GPUs and other AI hardware are tremendously power-intensive, and this week we speak with Robert Gendron of Vicor Corporation, a company focused on powering AI systems. Vicor is partnering with Kisaco Research, which is putting on the 2019 AI Hardware Summit on September 17 and 18 in Mountain View, California. Robert speaks about why powering AI hardware needs to be different from powering traditional manufacturing equipment.
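To make the power-intensity point concrete, a back-of-the-envelope sketch: energy per inference is simply power times latency. The wattages and latencies below are illustrative assumptions, not figures from the episode.

```python
# Rough energy-per-inference arithmetic: energy (J) = power (W) * time (s).
# All figures below are illustrative assumptions, not measured values.

accelerators = {
    # name: (board power in watts, latency per inference in seconds)
    "datacenter GPU": (300.0, 0.005),
    "edge module":    (15.0,  0.040),
    "TinyML MCU":     (0.001, 2.0),    # milliwatt-class microcontroller
}

for name, (watts, seconds) in accelerators.items():
    joules = watts * seconds
    print(f"{name:>15}: {watts:>8.3f} W x {seconds:.3f} s "
          f"= {joules * 1000:.2f} mJ per inference")
```

Delivering hundreds of watts to a single accelerator at low voltage and high current is exactly the power-conversion problem companies like Vicor work on.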


To Power AI, This Startup Built a Really, Really Big Chip

#artificialintelligence

Computer chips are usually small. The processor that powers the latest iPhones and iPads is smaller than a fingernail; even the beefy devices used in cloud servers aren't much bigger than a postage stamp. Then there's this new chip from a startup called Cerebras: It's bigger than an iPad all by itself. The silicon monster is almost 22 centimeters--roughly 9 inches--on each side, making it likely the largest computer chip ever, and a monument to the tech industry's hopes for artificial intelligence. Cerebras plans to offer it to tech companies trying to build smarter AI more quickly.
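The "almost 22 centimeters on each side" figure invites a quick size comparison. Here is the arithmetic as a small sketch, using roughly 815 mm² as a stand-in for a large conventional GPU die (an approximate public figure for Nvidia's V100; treat it as an assumption):

```python
# Compare the wafer-scale chip's area to a large conventional GPU die.
cerebras_side_mm = 215.0            # "almost 22 centimeters" per side
cerebras_area_mm2 = cerebras_side_mm ** 2

gpu_die_area_mm2 = 815.0            # approx. Nvidia V100 die (assumption)

print(f"wafer-scale chip: {cerebras_area_mm2:,.0f} mm^2")
print(f"large GPU die:    {gpu_die_area_mm2:,.0f} mm^2")
print(f"ratio:            {cerebras_area_mm2 / gpu_die_area_mm2:.0f}x")
```

On those assumed numbers, the wafer-scale part is on the order of fifty times the area of one of the largest conventional dies, which is what makes it "likely the largest computer chip ever."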


AI Benchmark: Running Deep Neural Networks on Android Smartphones

arXiv.org Artificial Intelligence

Over the last few years, the computational power of mobile devices such as smartphones and tablets has grown dramatically, reaching the level of desktop computers available not long ago. While standard smartphone apps are no longer a problem for them, there is still a group of tasks that can easily challenge even high-end devices, namely running artificial intelligence algorithms. In this paper, we present a study of the current state of deep learning in the Android ecosystem and describe available frameworks, programming models, and the limitations of running AI on smartphones. We give an overview of the hardware acceleration resources available on four main mobile chipset platforms: Qualcomm, HiSilicon, MediaTek, and Samsung. Additionally, we present real-world performance results for different mobile SoCs collected with AI Benchmark, covering all the main existing hardware configurations.
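On Android, such networks are typically executed through an on-device runtime like TensorFlow Lite. A minimal Python sketch of that interpreter workflow follows; the "model.tflite" path is a placeholder, and the input is random data shaped to whatever the model expects.

```python
import numpy as np
import tensorflow as tf

# Load a converted model; "model.tflite" is a placeholder path.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a random input matching the model's expected shape and dtype.
shape = input_details[0]["shape"]
dtype = input_details[0]["dtype"]
interpreter.set_tensor(input_details[0]["index"],
                       np.random.random_sample(shape).astype(dtype))

interpreter.invoke()  # run one inference pass

output = interpreter.get_tensor(output_details[0]["index"])
print(output.shape)
```

On a phone, the same workflow runs through the Java or NDK bindings, optionally with a hardware delegate (GPU, DSP, NPU); which delegates are available is exactly what varies across the Qualcomm, HiSilicon, MediaTek, and Samsung platforms the paper surveys.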


Nvidia uses AI to create convincing slow-mo video by filling in extra frames

#artificialintelligence

Creating slow motion footage is all about capturing a large number of frames per second. If you don't record enough, then as soon as you slow down your video it becomes choppy and unwatchable. Unless, that is, you use artificial intelligence to imagine the extra frames. New research from chip designer Nvidia does exactly that, using deep learning to turn 30 frames-per-second video into gorgeous, 240 frames-per-second slow-motion. Essentially, the AI system looks at two different frames and then creates intermediary footage by tracking the movement of objects from one frame to the next.
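Nvidia's system is a learned neural network, but the underlying idea of synthesizing an in-between frame by tracking motion can be sketched with classical optical flow. Below is a crude approximation using OpenCV; this is not Nvidia's method, and the frame file names and the single midpoint t=0.5 are assumptions.

```python
import cv2
import numpy as np

# Load two consecutive frames; the file names are placeholders.
frame0 = cv2.imread("frame0.png")
frame1 = cv2.imread("frame1.png")

gray0 = cv2.cvtColor(frame0, cv2.COLOR_BGR2GRAY)
gray1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)

# Dense optical flow from frame0 to frame1 (Farneback's algorithm).
flow = cv2.calcOpticalFlowFarneback(gray0, gray1, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Synthesize a frame at t=0.5 by warping frame0 halfway along the flow.
# Backward mapping: each output pixel samples frame0 at (x, y) - t * flow.
# This is a crude approximation; Nvidia's network learns the interpolation
# and handles occlusions that simple warping gets wrong.
t = 0.5
h, w = gray0.shape
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
map_x = (grid_x - t * flow[..., 0]).astype(np.float32)
map_y = (grid_y - t * flow[..., 1]).astype(np.float32)
midframe = cv2.remap(frame0, map_x, map_y, cv2.INTER_LINEAR)

cv2.imwrite("frame0_5.png", midframe)
```

Going from 30 fps to 240 fps means generating seven such intermediate frames between every pair of originals, at evenly spaced values of t.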