You may have noticed that last week, Microsoft and Nvidia announced they had trained "the world's largest and most powerful generative language model," known as "Megatron-Turing NLG 530B," as ZDNet's Chris Duckett reported. The model, in this case, is a neural network program based on the "Transformer" approach that has become widely popular in deep learning. Megatron-Turing is able to produce realistic-seeming text and also performs well on various language tasks such as sentence completion. The news was somewhat perplexing in that Microsoft had already announced a program a year ago that seemed to be bigger and more powerful. While Megatron-Turing NLG 530B uses 530 billion neural "weights," or parameters, to compose its language model, the earlier program, known as "1T," has one trillion parameters.
The collaboration will improve how ROS works with NVIDIA's product line, including Isaac Sim and the Jetson family of embedded boards. NVIDIA's Isaac Sim lets developers build robust and scalable simulations. The Jetson line of embedded boards is core to many robotics architectures, leveraging hardware-optimized chips for machine learning, computer vision, video processing, and more. The improvements to ROS will allow robotics companies to better utilize the available computational power while still developing on the robotics-centric platform familiar to many. Amit Goel is Director of Product Management for Autonomous Machines at NVIDIA, where he leads the product development of NVIDIA Jetson, the most advanced platform for AI computing at the edge.
Artificial intelligence adoption is increasing in higher education for both academic and research purposes. Too often, though, universities lack the IT infrastructure needed to sustainably power these systems. "To do AI at scale, you need data, but you also need compute power, networking, storage and software," says Cheryl Martin, director of global business development for higher education and research at NVIDIA. "Universities need a platform to bring all those things together." Modern AI requires purpose-built infrastructure that can handle its massively parallel computational demands.
In the late 19th and early 20th century, a new genre of literature emerged from fears, eventually realized, that the world was vulnerable to massive global conflict sparked by foreign invasions. This "invasion literature" explored such conflicts through a terrestrial lens, imagining scenarios where France invaded England, or Prussia got randy with Germany; it wasn't until 1897 that one such story looked beyond Earth for the next generation of fictionalized threat. That was the year H.G. Wells serialized War of the Worlds in Pearson's Magazine and invented the alien invasion, arguably the single most influential concept in the history of science fiction. The press campaign for Apple TV's new series Invasion frequently invoked War of the Worlds as a source of inspiration and tonal match for the project. On its surface they are similar -- belligerent aliens from another planet attack the Earth with weapons so superior they bring nations to their knees within weeks -- but while War of the Worlds is primarily concerned with the invasion's effect on England, Invasion follows six individuals in different countries to show the devastation through a variety of political, social, and emotional lenses.
Milestone Systems, a provider of open-platform IP video management software (VMS), and Nvidia have signed an important partnership. The agreement should accelerate the adoption of artificial intelligence in IP video technology, notably through the development of a unique gateway between the two companies' platforms: Milestone XProtect and Nvidia Metropolis / EGX. The objective of this ambitious partnership between two major players in video is to turn data into action and to boost the performance of video analysis and prediction. Milestone and Nvidia intend to develop a platform that will provide better analysis of data from sensors connected to Milestone's open VMS by facilitating the integration of AI solutions. The technology, based on Nvidia's Metropolis platform, will enable tight integration between the two companies' products through the creation of a secure platform.
Raspberry Pi has launched a new product that makes it easier to build robots out of LEGO components. The Build HAT (Hardware Attached on Top), as it is called, is an add-on device that plugs into the Pi's 40-pin GPIO header. It was specifically designed to make it easy to use Pi hardware to control up to four LEGO Technic motors and sensors from the toy company's Education SPIKE kits. Those sets are meant as a STEAM (Science, Technology, Engineering, the Arts and Mathematics) learning tool for young students. The HAT also works with motors and sensors from the Mindstorms Robot Inventor kit. In addition to the Build HAT itself, the company has created a Python library that can help students build prototypes using a Raspberry Pi and LEGO components.
More than a quarter of a million developers, researchers, innovators, and creators are gearing up for the long-awaited #1 AI conference, NVIDIA GTC, which kicks off on November 8, 2021. The four-day virtual event will highlight some of the latest advancements in AI, deep learning, data science, high-performance computing (HPC), robotics, networking, graphics and more. The keynote by Jensen Huang, NVIDIA founder, president and CEO, named one of the world's most influential people of 2021, is expected to inspire and showcase the latest developments in AI, new solutions and the latest products that will help solve the world's toughest challenges. Don't miss the keynote, which will be live on November 9 at 1:30 PM IST. GTC will provide a great opportunity for developers to learn about advancements in the latest technologies from the world's top innovators, scientists, and researchers. In addition, startups, academia, and the largest enterprises will all come together at GTC, giving participants a unique opportunity to share ideas and collaborate on creating the future.
Microsoft and chip manufacturer Nvidia have created a vast artificial intelligence that can mimic human language more convincingly than ever before. But the cost and time involved in creating the neural network have called into question whether such AIs can continue to scale up. The new neural network, known as Megatron-Turing Natural Language Generation (MT-NLG), has 530 billion parameters, more than tripling the scale of OpenAI's groundbreaking GPT-3, the neural network that was considered the state of the art until now.
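A quick back-of-the-envelope calculation makes the scale concrete. This sketch assumes GPT-3's widely reported 175 billion parameters and 2 bytes per weight in half precision (fp16); neither figure comes from the article itself:

```python
# Rough scale comparison for MT-NLG vs. GPT-3.
# Assumptions (not from the article): GPT-3 has 175B parameters
# (OpenAI's published figure), and weights are stored in fp16 (2 bytes each).
MT_NLG_PARAMS = 530e9    # Megatron-Turing NLG parameter count
GPT3_PARAMS = 175e9      # GPT-3 parameter count
BYTES_PER_PARAM = 2      # fp16: 2 bytes per weight

scale_factor = MT_NLG_PARAMS / GPT3_PARAMS
weight_terabytes = MT_NLG_PARAMS * BYTES_PER_PARAM / 1e12

print(f"MT-NLG is roughly {scale_factor:.2f}x the size of GPT-3")
print(f"Its weights alone occupy about {weight_terabytes:.2f} TB in fp16")
```

The ratio works out to roughly 3x, matching the "more than tripling" claim, and the weights alone would occupy on the order of a terabyte, which hints at why training and even serving such a model demands a large GPU cluster rather than a single machine.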
NVIDIA GTC is more than a must-attend AI conference for developers. It's a global experience that brings together thousands of innovators, researchers, thought leaders, and decision-makers who are shaping our world with the power of AI, computer graphics, data science, and more. Don't miss what's coming next.
I just reviewed AMD's new Radeon RX 6600, which is a budget GPU that squarely targets 1080p gamers. It's a decent option, especially in a time when GPU prices are through the roof, but it exposed a trend that I've seen brewing over the past few graphics card launches. Nvidia's Deep Learning Super Sampling (DLSS) tech is too good to ignore, no matter how powerful the competition is from AMD. In a time when resolutions and refresh rates continue to climb, and demanding features like ray tracing are becoming the norm, upscaling is essential to run the latest games in their full glory. AMD offers an alternative to DLSS in the form of FidelityFX Super Resolution (FSR).