information technology hardware

Quickly Embed AI Into Your Projects With Nvidia's Jetson Nano


When opportunity knocks, open the door: No one has taken heed of that adage like Nvidia, which has transformed itself from a company focused on catering to the needs of video gamers to one at the heart of the artificial-intelligence revolution. In 2001, no one predicted that the same processor architecture developed to draw realistic explosions in 3D would be just the thing to power a renaissance in deep learning. But when Nvidia realized that academics were gobbling up its graphics cards, it responded, supporting researchers with the launch of the CUDA parallel computing software framework in 2006. Since then, Nvidia has been a big player in the world of high-end embedded AI applications, where teams of highly trained (and paid) engineers have used its hardware for things like autonomous vehicles. Now the company claims to be making it easy for even hobbyists to use embedded machine learning, with its US $100 Jetson Nano dev kit, which was originally launched in early 2019 and re-released this March with several upgrades.

Nvidia and Its Co-Founder Donate $50 Million for University of Florida AI Data Center - AI Forum


American tech company Nvidia, which designs graphics processing units for the gaming and professional markets, and its co-founder Chris Malachowsky have donated $50 million in funds, services and technology to the University of Florida to create an artificial intelligence data center. The donation, supported by an additional $20 million from the university, will be used to create the "world's fastest AI supercomputer in higher education." Nvidia's $25 million gift will provide discounts on hardware, software, services and training, while Malachowsky's donation is a financial contribution. As part of the partnership, Nvidia's Deep Learning Institute will help develop new courses for the university, including programming for young adults and teens, and University of Florida graduate fellows will collaborate with Nvidia employees on AI projects. "They should be able to make marked advancements in science, but the use of AI, to infuse it into the workforce and to help businesses upskill and become more competitive, it's something that [the University of Florida] is quite interested in, and they have the resources and the reach to make that happen," explained Malachowsky, who attended the college and met his future wife there.

Royal Bank of Canada and Borealis AI announce new AI computing platform


RBC's AI private cloud platform is the first of its kind in Canada to deliver intelligent software applications and boost operational efficiency.

Royal Bank of Canada (RBC) and its AI research institute Borealis AI have partnered with Red Hat and NVIDIA to develop a new AI computing platform designed to transform the customer banking experience and help keep pace with rapid technology changes and evolving customer expectations. "Modern AI cannot exist without access to high performance computing. This collaboration means that we can conduct research at scale, and deploy machine learning applications in production with improved efficiency and speed to market." As AI models become more efficient and accurate, the computational complexity associated with them also grows. RBC and Borealis AI set out to build an in-house AI infrastructure that would allow transformative intelligent applications to be brought to market faster and deliver an enhanced experience for clients.

Apple Just Upgraded Its iMacs, a Little


Apple is refreshing its 27-inch iMac, though you'll need a keen eye to spot the differences. The new model doesn't look any different from its predecessors, sporting the same classic look Apple has used for several years now with thick bezels surrounding the 5K display. You won't find any radically new features here either. There's still no biometric authentication, meaning there's no Face ID or Touch ID, and the screen uses the exact same panel and pixel resolution as before. Most of the changes are on the inside, and they affect performance.

Autonomous Robot Performing Different Tasks #piday #raspberrypi @Raspberry_Pi


This cool-looking robot is made using a Raspberry Pi and 3D-printed parts. In this project, a dedicated algorithm was developed so that the robot can autonomously navigate the track and perform tasks such as line following, detecting an obstacle, and grabbing and delivering an object. Robustness was also a design goal: the robot can navigate the pathway multiple times without its performance degrading. Each Friday is PiDay here at Adafruit! Be sure to check out our posts, tutorials and new Raspberry Pi related products.
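The line-following behavior described above can be sketched as a simple decision step. This is an illustrative sketch only: the sensor layout (two digital IR line sensors) and the command names are assumptions for the example, not the project's published algorithm.

```python
# Hypothetical line-following decision step for a two-sensor robot.
# Assumes two digital IR sensors, one under each side of the chassis,
# where True means the dark line is detected under that sensor.
# Command strings are illustrative; a real build would map them to
# motor PWM values via a library such as gpiozero.

def line_follow_step(left_on_line: bool, right_on_line: bool) -> str:
    """Return a drive command from two line-sensor readings."""
    if left_on_line and right_on_line:
        return "forward"      # centred on the line, keep going
    if left_on_line:
        return "turn_left"    # line drifting left, steer back toward it
    if right_on_line:
        return "turn_right"   # line drifting right, steer back toward it
    return "search"           # line lost: rotate in place until reacquired
```

In a real control loop this function would be called on every sensor poll, with the returned command translated into left/right motor speeds; obstacle detection and the grab-and-deliver task would typically sit in a higher-level state machine around it.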

Nvidia in advanced talks to buy chipmaker Arm from SoftBank

The Japan Times

LONDON/NEW YORK – Nvidia Corp. is in advanced talks to acquire Arm Ltd., the chip designer that SoftBank Group Corp. bought for $32 billion four years ago, according to people familiar with the matter. The two parties aim to reach a deal in the next few weeks, the people said, asking not to be identified because the information is private. Nvidia is the only suitor in concrete discussions with SoftBank, according to the people. A deal for Arm could be the largest ever in the semiconductor industry, which has been consolidating in recent years as companies seek to diversify and add scale. But any deal with Nvidia, which is a customer of Arm, would likely trigger regulatory scrutiny as well as a wave of opposition from other firms.

University of Florida, NVIDIA to Build Fastest AI Supercomputer in Academia – The Official NVIDIA Blog


The University of Florida and NVIDIA Tuesday unveiled a plan to build the world's fastest AI supercomputer in academia, delivering 700 petaflops of AI performance. The effort is anchored by a $50 million gift: $25 million from alumnus and NVIDIA co-founder Chris Malachowsky and $25 million in hardware, software, training and services from NVIDIA. "We've created a replicable, powerful model of public-private cooperation for everyone's benefit," said Malachowsky, who serves as an NVIDIA Fellow, in an online event featuring leaders from both UF and NVIDIA. UF will invest an additional $20 million to create an AI-centric supercomputing and data center. The $70 million public-private partnership promises to make UF one of the leading AI universities in the country, advance academic research and help address some of the state's most complex challenges.

EETimes - Nvidia, Google Both Claim MLPerf Training Crown


The third round of MLPerf training benchmark scores for eight different AI models is out, with rivals Nvidia and Google both staking a claim to the crown. While both companies claimed victory, the results bear further scrutiny. Scores are based on systems, not individual accelerator chips. While Nvidia swept the board for commercially available systems with its Ampere A100-based supercomputer, Google's massive TPU v3 system and smaller TPU v4 systems, which it entered under the research category, make the search giant a strong contender. Nvidia took first place in normalized results for all benchmarks in the commercially available systems category with its A100-based systems.
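Because MLPerf scores whole systems rather than individual chips, comparing accelerators across differently sized submissions requires normalizing by accelerator count. The sketch below illustrates that idea with invented numbers; it is not MLPerf's official methodology and the figures are not real submissions.

```python
# Illustrative normalization of system-level benchmark results by
# accelerator count. All figures are made up for the example; they are
# not real MLPerf submission data.

def per_accelerator_minutes(time_to_train_min: float, num_accelerators: int) -> float:
    """Accelerator-minutes consumed to finish training: a rough
    cost-style metric that puts differently sized systems on one axis."""
    return time_to_train_min * num_accelerators

# Hypothetical systems: (time-to-train in minutes, accelerator count).
system_a = (1.2, 2048)   # large system: fastest wall-clock time
system_b = (18.0, 64)    # small system: slower, but uses far less silicon
```

Here system A wins on raw time-to-train, the headline MLPerf metric, while system B consumes fewer accelerator-minutes, which is why system-level rankings alone don't settle which chip is stronger.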

Nvidia Dominates Latest MLPerf Training Benchmark Results

MLPerf released its third round of training benchmark (v0.7) results today and Nvidia again dominated, claiming 16 new records. Meanwhile, Google provided early benchmarks for its next-generation TPU 4.0 accelerator and Intel previewed performance on third-gen processors (Cooper Lake). Notably, the MLPerf benchmarking organization continues to grow; it now has 70 members, a jump from 40 last July when training benchmarks were last released. Fresh from the launch of its new A100 GPU in May and a top-ten finish by Selene (DGX A100 SuperPOD) in June on the most recent Top500 List, Nvidia was able to run the MLPerf training benchmarks on its new offerings in time for the July MLPerf release. Impressively, Nvidia set records for scaled-out system performance and single-node performance (see slides below).

Nvidia and Google claim bragging rights in MLPerf benchmarks as AI computers get bigger and bigger


Nvidia and Google on Wednesday each announced that they had aced a series of tests called MLPerf, claiming to have the biggest and best hardware and software for crunching common artificial intelligence tasks. The devil's in the details, but both companies' achievements show the trend in AI continues to be that of bigger and bigger machine learning endeavors, backed by more-brawny computers. Benchmark tests are never without controversy, and some upstart competitors of Nvidia and Google, notably Cerebras Systems and Graphcore, continued to avoid the benchmark competition. In the results announced Wednesday by the MLPerf organization, an industry consortium that administers the tests, Nvidia took top marks across the board for a variety of machine learning "training" tasks, meaning the computing operations required to develop a machine learning neural network from scratch. The full roster of results can be seen in spreadsheet form.