Modern Computing: A Short History, 1945-2022

#artificialintelligence

Inspired by A New History of Modern Computing by Thomas Haigh and Paul E. Ceruzzi. But the selection of key events in the journey from ENIAC to Tesla, from Data Processing to Big Data, is mine. Most home computer users in the 1970s were hobbyists who designed and assembled their own machines. The Apple I, devised in a bedroom by Steve Wozniak, Steven Jobs and Ron Wayne, was a basic circuit board to which enthusiasts would add display units and keyboards. It was the first computer made by Apple Computer Inc., which became one of the fastest-growing ... companies in history, launching a number of innovative and influential computer hardware and software products. April 1945: John von Neumann's "First Draft of a Report on the EDVAC," often called the founding document of modern computing, defines the stored-program concept. July 1945: Vannevar Bush publishes "As We May Think," in which he envisions the "Memex," a memory extension device serving as a large personal repository of information that could be instantly retrieved through associative links.


What Is RIVA Speech Skills Container In NVIDIA GPU Cloud? - World-class cloud from India

#artificialintelligence

NVIDIA Graphics Processing Units (GPUs) are computing platforms for transforming big data into intelligent information. They are available in the cloud on demand and come with different containers, such as TensorRT and the RIVA Speech Skills container. This article focuses on the RIVA Speech Skills container in NVIDIA GPU Cloud. A RIVA pipeline is composed of one or more NVIDIA TAO Toolkit models plus pre- and post-processing components, so that full pipelines can be deployed end to end. To run the RIVA server, TAO Toolkit models must first be exported to an efficient inference engine.
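Once a RIVA server is running, clients talk to it over gRPC. As a minimal sketch, assuming the nvidia-riva-client Python package and a server listening on localhost:50051 (both assumptions, not details from the article), offline speech recognition looks roughly like this:

```python
# Sketch: offline ASR request against a running RIVA server.
# Assumes `pip install nvidia-riva-client` and a server at localhost:50051.
import riva.client

auth = riva.client.Auth(uri="localhost:50051")        # plain-text gRPC channel
asr = riva.client.ASRService(auth)

config = riva.client.RecognitionConfig(
    encoding=riva.client.AudioEncoding.LINEAR_PCM,    # 16-bit PCM WAV input
    language_code="en-US",
    max_alternatives=1,
    enable_automatic_punctuation=True,
)

with open("sample.wav", "rb") as fh:                  # hypothetical audio file
    audio_bytes = fh.read()

response = asr.offline_recognize(audio_bytes, config)
for result in response.results:
    print(result.alternatives[0].transcript)
```

Streaming recognition and the TTS/NLP services exposed by the container follow a similar auth-plus-service pattern.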


Vizy Review: Raspberry Pi Computer Vision Made Simple

#artificialintelligence

When the Raspberry Pi 4 burst onto the scene with four 1.5 GHz CPU cores and up to 8GB of RAM, there was a gasp from the community. The extra horsepower finally made it possible for those interested in machine learning and AI to use the Raspberry Pi to power their projects. Over time, TensorFlow and TensorFlow Lite saw numerous upgrades and cemented the Raspberry Pi as the ideal low-cost introduction to the topic. The problem is, where do we start? Vizy from Charmed Labs, starting at $259 for a unit that comes with a 2GB Raspberry Pi 4, or $269 to $299 for the 4GB or 8GB models, is a smart camera for those starting out with machine learning.
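For a sense of what running TensorFlow Lite on a Pi involves under the hood (independent of Vizy's own software), here is a minimal image-classification sketch; the model and label file paths are hypothetical placeholders:

```python
# Sketch: run a quantized TensorFlow Lite image classifier on a Raspberry Pi.
# Assumes `pip install tflite-runtime pillow numpy` and a uint8-quantized model
# plus label file on disk (hypothetical paths below).
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="mobilenet_v2_quant.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Resize the image to the model's expected input shape (e.g. 224x224).
_, height, width, _ = inp["shape"]
image = Image.open("frame.jpg").convert("RGB").resize((width, height))
interpreter.set_tensor(inp["index"], np.expand_dims(np.array(image, dtype=np.uint8), 0))
interpreter.invoke()

scores = interpreter.get_tensor(out["index"])[0]
labels = [line.strip() for line in open("labels.txt")]
print(labels[int(np.argmax(scores))])
```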


Streamline Your Model Builds with PyCaret + RAPIDS on NVIDIA GPUs

#artificialintelligence

PyCaret is a low-code Python machine learning library inspired by the popular caret library for R. It automates the data science process from data preprocessing to insights, so that each step can be accomplished in a few lines of code with minimal manual effort. The ability to compare and tune many models with simple commands also boosts productivity, with less time spent in the weeds of building useful models. The PyCaret team added NVIDIA GPU support in version 2.2, including all the latest and greatest from RAPIDS. With GPU acceleration, PyCaret modeling times can be between 2 and 200 times faster, depending on the workload. This article goes over how to use PyCaret on GPUs to cut both development and computation costs by an order of magnitude.
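As a minimal sketch of that low-code workflow on a GPU (the built-in "diabetes" demo dataset and its "Class variable" target are used purely for illustration; use_gpu=True is the switch added in 2.2 that routes supported estimators to RAPIDS):

```python
# Sketch: PyCaret classification on GPU. Assumes PyCaret >= 2.2 installed
# alongside RAPIDS cuML on a machine with an NVIDIA GPU.
from pycaret.datasets import get_data
from pycaret.classification import setup, compare_models, tune_model, predict_model

data = get_data("diabetes")                      # small built-in demo dataset

# use_gpu=True routes supported estimators (e.g. XGBoost, cuML models) to the GPU.
# (PyCaret 2.x asks to confirm inferred column types unless silent=True is passed.)
exp = setup(data, target="Class variable", use_gpu=True, session_id=42)

best = compare_models()                          # train and rank many candidate models
tuned = tune_model(best)                         # hyperparameter search on the best one
preds = predict_model(tuned)                     # predictions on the holdout set
```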


Jensen Huang press Q&A: Nvidia's plans for the Omniverse, Earth-2, and CPUs

#artificialintelligence

Nvidia CEO Jensen Huang recently hosted yet another spring GTC event that drew more than 200,000 participants. And while he didn't succeed in acquiring Arm for $80 billion, he did have a lot of things to show off to those gathering at the big event. He gave an update on Nvidia's plans for Earth-2, a digital twin of our planet that -- with enough supercomputing simulation capability within the Omniverse -- could enable scientists to predict climate change for our planet. The Earth-2 simulation will require the best technology -- like Nvidia's newly announced Hopper graphics processing unit (GPU) and its upcoming Grace central processing unit (CPU). Huang fielded questions about the ongoing semiconductor shortage, the possibility of investing in manufacturing, competition with rivals, and Nvidia's plans in the wake of the collapse of the Arm deal. He conveyed a sense of calm that Nvidia's business is still strong (Nvidia reported revenues of $7.64 billion for its fourth fiscal quarter ended January 30, up 53% from a year earlier). Gaming, datacenter, and professional visualization market platforms each achieved record revenue for the quarter and year. He also talked about Nvidia's continuing commitment to the self-driving vehicle market, which has been slower to take off than expected. Huang held a Q&A with the press during GTC, and I asked him about Earth-2 and the Omniverse (I also moderated a panel on the industrial metaverse at GTC). I was part of a large group of reporters asking questions.

Question: With the war in Ukraine and continuing worries about chip supplies and inflation in many countries, how do you feel about the timeline for all the things you've announced? For example, in 2026 you want to do DRIVE Hyperion. With all the things going into that, is there even a slight amount of worry?

Jensen Huang: There's plenty to worry about. I have to observe, though, that the facts are that Nvidia has moved faster in the last couple of years than potentially in its last 10 years combined. It's quite possible that we work better, actually, when we allow our employees to choose when they're most productive and let them optimize -- let mature people optimize their work environment, their time frame, their work style around what best fits them and their families. It's very possible that all of that is happening. It's also true, absolutely true, that it has forced us to put a lot more energy into the virtual work that we do. For example, the work around Omniverse went into light speed in the last couple of years because we needed it. Instead of being able to come into our labs to work on our robots, or go to the streets and test our cars, we had to test in virtual worlds, in digital twins.


Nvidia launches a new GPU architecture and the Grace CPU Superchip – TechCrunch

#artificialintelligence

At its annual GTC conference for AI developers, Nvidia today announced its next-gen Hopper GPU architecture and the Hopper H100 GPU, as well as a new data center chip, the "Grace CPU Superchip," which pairs two of its high-performance Grace CPUs (not to be confused with the Grace Hopper Superchip, which combines a Grace CPU with a Hopper GPU). With Hopper, Nvidia is launching a number of new and updated technologies, but for AI developers the most important one may be the architecture's focus on transformer models, which have become the machine learning technique de rigueur for many use cases and which power models like GPT-3 and BERT. The new Transformer Engine in the H100 chip promises to speed up model training by up to six times, and because the new architecture also features Nvidia's NVLink Switch system for connecting multiple nodes, large server clusters powered by these chips will be able to scale up to support massive networks with less overhead. "The largest AI models can require months to train on today's computing platforms," Nvidia's Dave Salvator writes in today's announcement. AI, high-performance computing, and data analytics are growing in complexity, with some models, like large language models, reaching trillions of parameters.
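The hardware Transformer Engine is exposed to developers through NVIDIA's transformer_engine software library, which shipped after this announcement; the article doesn't cover that software side, but as a rough, hedged sketch of how a drop-in FP8 layer is used on Hopper (exact recipe arguments are an assumption and vary by library version):

```python
# Sketch: FP8 mixed precision with NVIDIA's Transformer Engine library on Hopper.
# Assumes `transformer_engine` and PyTorch are installed and an H100 GPU is present.
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Delayed-scaling FP8 recipe; HYBRID uses E4M3 in the forward pass, E5M2 in backward.
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

layer = te.Linear(1024, 1024, bias=True).cuda()   # drop-in FP8-capable linear layer
x = torch.randn(16, 1024, device="cuda")

with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)

y.sum().backward()                                # gradients flow as usual
```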


Nvidia speeds AI, climate modeling

#artificialintelligence

It's been years since developers found that Nvidia's main product, the GPU, was useful not just for rendering video games but also for high-performance computing of the kind used in 3D modeling, weather forecasting, or the training of AI models -- and it's on enterprise applications such as those that CEO Jensen Huang will focus his attention at the company's GTC 2022 conference this week. With a range of new hardware and software, Nvidia is hoping to make it easier for CIOs to build digital twins and machine learning models, to secure enterprise computing, and even to speed the adoption of quantum computing. Digital twins -- numerical models that reflect changes in real-world objects and are useful in design, manufacturing, and service creation -- vary in their level of detail. For some applications, a simple database may suffice to record a product's service history -- when it was made, who it shipped to, what modifications have been applied -- while others require a full-on 3D model incorporating real-time sensor data that can be used, for example, to provide advance warning of component failure or of rain. It's at the high end of that range that Nvidia plays.


GTC 2022: Nvidia flexes its GPU and platform muscles

#artificialintelligence

Nvidia packed about three years' worth of news into its GPU Technology Conference today. Flamboyant CEO Jensen Huang's one-hour, 39-minute keynote covered a lot of ground, but the unifying themes running through the majority of the two dozen announcements were the GPU itself and Nvidia's platform approach to everything it builds. Most people know Nvidia as the world's largest manufacturer of graphics processing units, or GPUs. The GPU is a chip that was first used to accelerate graphics in gaming systems.


NVIDIA's more powerful 'AI brain' for robots is available now for $1,999

Engadget

If you've been eager to use NVIDIA's more powerful robotics 'brain' for projects, you now have your chance -- provided you're willing to pay a premium. The company is now selling the Jetson AGX Orin developer kit for $1,999. The palm-sized computing device is billed as eight times more powerful than the Jetson AGX Xavier (275 trillion operations per second, or TOPS) thanks to its 12-core Arm Cortex-A78AE CPU, Ampere-based GPU, and upgrades to its AI accelerators, interfaces, memory bandwidth, and sensor support. You'll have to wait a while longer for production-ready units. They'll be available in the fourth quarter of the year, starting at $399 for a 'basic' Orin NX kit with six CPU cores, a 1,792-core GPU, 8GB of RAM, and 70 TOPS of performance.


Nvidia takes the wraps off Hopper, its latest GPU architecture

#artificialintelligence

After much speculation, Nvidia today announced the Hopper GPU architecture at its March 2022 GTC event, a line of graphics cards that the company says will accelerate the kinds of algorithms commonly used in data science. Named for Grace Hopper, the pioneering U.S. computer scientist, the new architecture succeeds Nvidia's Ampere architecture, which launched roughly two years ago. The first card in the Hopper lineup is the H100, containing 80 billion transistors and a component called the Transformer Engine that's designed to speed up specific categories of AI models. Another architectural highlight is Nvidia's MIG technology, which allows an H100 to be partitioned into seven smaller, isolated instances that can handle different types of jobs.