Modern Computing: A Short History, 1945-2022

#artificialintelligence

Inspired by A New History of Modern Computing by Thomas Haigh and Paul E. Ceruzzi, though the selection of key events in the journey from ENIAC to Tesla, from Data Processing to Big Data, is mine. Most home computer users in the 1970s were hobbyists who designed and assembled their own machines. The Apple I, devised in a bedroom by Steve Wozniak, Steven Jobs and Ron Wayne, was a basic circuit board to which enthusiasts would add display units and keyboards. It was the first computer made by Apple Computer Inc., which became one of the fastest-growing companies in history, launching a number of innovative and influential computer hardware and software products. April 1945: John von Neumann's "First Draft of a Report on the EDVAC," often called the founding document of modern computing, defines the stored-program concept. July 1945: Vannevar Bush publishes "As We May Think," in which he envisions the "Memex," a memory extension device serving as a large personal repository of information that could be instantly retrieved through associative links.


Vizy Review: Raspberry Pi Computer Vision Made Simple

#artificialintelligence

When the Raspberry Pi 4 burst onto the scene with four 1.5 GHz CPU cores and up to 8GB of RAM, there was a gasp from the community. The extra horsepower finally allowed those interested in machine learning and AI to use the Raspberry Pi to power their projects. Over time, TensorFlow and TensorFlow Lite saw numerous upgrades and finally cemented the Raspberry Pi as the ideal low-cost introduction to the topic. The problem is, where do we start? Vizy from Charmed Labs, starting at $259 for a unit that comes with a 2GB Raspberry Pi 4, or $269 to $299 for the 4GB or 8GB models, is a smart camera for those starting out with machine learning.


Streamline Your Model Builds with PyCaret + RAPIDS on NVIDIA GPUs

#artificialintelligence

PyCaret is a low-code Python machine learning library based on the popular Caret library for R. It automates the data science process from data preprocessing to insights, so that each step can be accomplished in a few lines of code with minimal manual effort. In addition, the ability to compare and tune many models with simple commands streamlines efficiency and productivity, with less time spent in the weeds of creating useful models. The PyCaret team added NVIDIA GPU support in version 2.2, including all the latest and greatest from RAPIDS. With GPU acceleration, PyCaret modeling times can be between 2 and 200 times faster, depending on the workload. This blog article will go over how to use PyCaret on GPUs to save both development and computation costs by an order of magnitude.
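
For readers who want to try this, a minimal sketch of the workflow described above might look like the following, assuming PyCaret 2.2 or later with RAPIDS installed and a pandas DataFrame loaded from a hypothetical train.csv with a "target" column; the use_gpu flag is what hands supported estimators off to the GPU.

import pandas as pd
from pycaret.classification import setup, compare_models, tune_model

df = pd.read_csv("train.csv")  # hypothetical dataset with a "target" column

# One setup() call handles preprocessing: imputation, encoding, train/test split.
# use_gpu=True (PyCaret >= 2.2) routes supported models to RAPIDS cuML on NVIDIA GPUs.
exp = setup(data=df, target="target", use_gpu=True, session_id=42)

# Train and rank a whole library of models with a single command.
best = compare_models()

# Hyperparameter tuning of the top model, also GPU-accelerated where supported.
tuned = tune_model(best)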


Nvidia launches a new GPU architecture and the Grace CPU Superchip – TechCrunch

#artificialintelligence

At its annual GTC conference for AI developers, Nvidia today announced its next-gen Hopper GPU architecture and the Hopper H100 GPU, as well as a new data center chip that combines the GPU with a high-performance CPU, which Nvidia calls the "Grace CPU Superchip" (not to be confused with the Grace Hopper Superchip). With Hopper, Nvidia is launching a number of new and updated technologies, but for AI developers the most important one may just be the architecture's focus on transformer models, which have become the machine learning technique de rigueur for many use cases and which power models like GPT-3 and BERT. The new Transformer Engine in the H100 chip promises to speed up model training by up to six times, and because this new architecture also features Nvidia's new NVLink Switch system for connecting multiple nodes, large server clusters powered by these chips will be able to scale up to support massive networks with less overhead. "The largest AI models can require months to train on today's computing platforms," Nvidia's Dave Salvator writes in today's announcement. AI, high performance computing and data analytics are growing in complexity, with some models, like large language ones, reaching trillions of parameters.


Data containers in NVIDIA GPU Cloud - World-class cloud from India

#artificialintelligence

What Is a Data Container? A data container is the solution for transporting a database, and everything required to run it, from one computer system to another. A data container is a data structure that "stores and organizes virtual objects (a virtual object is a self-contained entity that consists of both data and the methods for governing that data)." It is much like packaging a meal kit, where the customer purchases a box containing recipes, cooking tips, and the required ingredients to make the meal convenient to prepare. Likewise, data containers store and manage the data and deliver the configurations to different computer systems for easy database setup and use.
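
As a concrete illustration, here is a minimal sketch of pulling and running one of these containerized environments from NVIDIA GPU Cloud (NGC) using the docker-py SDK in Python; it assumes docker-py and the NVIDIA container runtime are installed, and the image tag shown is only an example, not a recommendation.

import docker

client = docker.from_env()

# Pull an example NGC deep learning container (tag shown is illustrative).
image = "nvcr.io/nvidia/pytorch:23.10-py3"
client.images.pull(image)

# Run it with access to all GPUs (the SDK equivalent of `docker run --gpus all`),
# here just to print the GPU inventory via nvidia-smi.
output = client.containers.run(
    image,
    command="nvidia-smi",
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
    remove=True,
)
print(output.decode())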


Celebrate Pi Day with these Raspberry Pi projects

USATODAY - Tech Top Stories

A lot of folks celebrate this momentous occasion with a slice of pie, sweet or savory, but Pi Day is a wonderful excuse to immerse yourself in a cool tech project with none other than the Raspberry Pi. If you haven't heard of the Raspberry Pi, it's a tiny computer you can program to do a variety of tasks, like playing retro console games or making music. People of all ages have loved toying with it for years now, so we've dug up some beginner-friendly projects to introduce you to the magic you can create with a Raspberry Pi. Before you venture off to tinker with your new gadget, make sure to set it up with an operating system--then read on. This Raspberry Pi case looks just like the Nintendo Entertainment System console from the late 20th century.


Nvidia's AI-powered scaling makes old games look better without a huge performance hit

#artificialintelligence

Nvidia's latest game-ready driver includes a tool that could let you improve the image quality of games that your graphics card can easily run, alongside optimizations for the new God of War PC port. The tech is called Deep Learning Dynamic Super Resolution, or DLDSR, and Nvidia says you can use it to make "most games" look sharper by running them at a higher resolution than your monitor natively supports. DLDSR builds on Nvidia's Dynamic Super Resolution tech, which has been around for years. Essentially, regular old DSR renders a game at a higher resolution than your monitor can handle and then downscales it to your monitor's native resolution. This leads to an image with better sharpness but usually comes with a dip in performance (you are asking your GPU to do more work, after all). So, for instance, if you had a graphics card capable of running a game at 4K but only had a 1440p monitor, you could use DSR to get a boost in clarity.
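
The arithmetic behind that example is easy to sketch. The snippet below is only an illustration, not Nvidia's implementation: DSR factors are expressed as multiples of the native pixel count, so it computes the internal render resolution and the extra shading work for a given factor.

def dsr_render_resolution(native_w, native_h, factor):
    # A DSR factor scales the pixel count, so each axis grows by sqrt(factor).
    scale = factor ** 0.5
    return int(native_w * scale), int(native_h * scale)

native = (2560, 1440)                                 # 1440p monitor
render = dsr_render_resolution(*native, factor=2.25)  # 2.25x DSR -> 3840x2160 (4K)
extra_work = (render[0] * render[1]) / (native[0] * native[1])
print(render, f"~{extra_work:.2f}x the pixels to shade")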


An Introduction to AI and IoT App Development with Raspberry Pi and TensorFlow

#artificialintelligence

In August 2018, the Google Brain team released TensorFlow 1.10, adding official support for the Raspberry Pi (Raspbian). Take on deep learning and IoT with the Raspberry Pi!
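
A minimal sketch of such an AI/IoT experiment on the Pi might look like the code below. It assumes TensorFlow (1.10 or later) has been installed via pip, and the "sensor readings" are random stand-ins for data that would come from GPIO-attached hardware.

import numpy as np
import tensorflow as tf

# A tiny Keras model standing in for an AI/IoT workload, e.g. classifying
# sensor readings collected on the Raspberry Pi into three categories.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Fake sensor data: 256 samples of 4 readings each, with made-up labels.
x = np.random.rand(256, 4).astype("float32")
y = np.random.randint(0, 3, size=(256,))
model.fit(x, y, epochs=3, batch_size=32, verbose=1)

# Inference for a single new reading.
print(model.predict(x[:1]))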


Nvidia DLSS Is Building a Walled Garden, and It's Working

#artificialintelligence

I just reviewed AMD's new Radeon RX 6600, which is a budget GPU that squarely targets 1080p gamers. It's a decent option, especially in a time when GPU prices are through the roof, but it exposed a trend that I've seen brewing over the past few graphics card launches. Nvidia's Deep Learning Super Sampling (DLSS) tech is too good to ignore, no matter how powerful the competition is from AMD. In a time when resolutions and refresh rates continue to climb, and demanding features like ray tracing are becoming the norm, upscaling is essential to run the latest games in their full glory. AMD offers an alternative to DLSS in the form of FidelityFX Super Resolution (FSR).


ProAI: An Efficient Embedded AI Hardware for Automotive Applications - a Benchmark Study

arXiv.org Artificial Intelligence

Development in the field of Single Board Computers (SBCs) has been increasing for several years. They provide a good balance between computing performance and power consumption, which is usually required for mobile platforms such as in-vehicle applications for Advanced Driver Assistance Systems (ADAS) and Autonomous Driving (AD). However, there is an ever-increasing need for more powerful and efficient SBCs that can run power-intensive Deep Neural Networks (DNNs) in real time and can also satisfy the necessary functional safety requirements, such as the Automotive Safety Integrity Level (ASIL). ProAI is being developed by ZF mainly to run powerful and efficient applications such as multitask DNNs, and on top of that it also has the required safety certification for AD. In this work, we compare and discuss state-of-the-art SBCs on the basis of a power-intensive multitask DNN architecture called Multitask-CenterNet, with respect to performance measures such as FPS and power efficiency. As an automotive supercomputer, ProAI delivers an excellent combination of performance and efficiency, managing nearly twice the number of FPS per watt as a modern workstation laptop and almost four times as many as the Jetson Nano. Furthermore, it was also shown that there is still power in reserve for further and more complex tasks on the ProAI, based on the CPU and GPU utilization during the benchmark.