The Internet of Things (IoT) has sparked a proliferation of connected devices. These devices, which house sensors that collect data for day-to-day activities or monitoring purposes, are built around microcontroller and microprocessor chips. The chips are chosen to match the sensor data a device must handle for its assigned task, so there is no one-processor-fits-all architecture. For example, some devices perform only a limited amount of processing on readings such as temperature, humidity, pressure, or gravity; more complicated systems, however, need to handle one or more high-resolution sound or video streams.
The touchscreens we use at supermarkets and ATMs were accidentally invented by a group of atomic physicists back in 1970, though the conception of touchscreens can be traced back to the 1940s, even before science fiction writers warmed up to the idea. Today, the use of touchscreens is bounded only by the creativity of users: you can pinch, zoom, type, and literally move the world with your fingers. However, a typical user has likely experienced typos, unwanted clicks, and other mis-taps that couldn't be undone.
Nvidia, Intel and AMD have announced their support for Microsoft's new effort to bring graphics processor support to the Windows 10 Windows Subsystem for Linux to enhance machine-learning training. GPU support for WSL arrived on Wednesday in the Dev Channel preview of Windows 10 build 20150 under Microsoft's reorganized testing structure, which lets it test Windows 10 builds that aren't tied to a specific future feature release. Microsoft announced upcoming GPU support for WSL a few weeks ago at Build 2020, along with support for running Linux GUI apps. The move on GPU access for WSL is intended to bring the performance of applications running in WSL2 up to par with those running on Windows. GPU compute support is the feature most requested by WSL users, according to Microsoft. The 20150 update includes support for Nvidia's CUDA parallel computing platform and GPUs, as well as GPUs from AMD and Intel.
Microsoft released improvements to its Windows Subsystem for Linux 2 (WSL2) in a Windows 10 preview build on Wednesday, with features benefiting newcomers and developers alike. As part of the update, WSL2 can now perform GPU compute functions, including using Nvidia's CUDA technology. The new additions deliver on the promises Microsoft made at May's Build 2020 conference, where the company also teased graphical user interface support for the Windows Subsystem for Linux. WSL's improvements are part of Windows 10 Build 20150, part of the Dev Channel of Insider builds. Formerly known as the Fast Ring, the Dev Channel is devoted to testing new features that aren't necessarily tied to any upcoming Windows 10 feature release. As the name suggests, the Windows Subsystem for Linux 2 allows you to run a Linux kernel from within Windows.
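The GPU-compute capability described above can be verified from inside a WSL2 distribution with a few commands. This is a hedged sketch, assuming a Windows Insider Dev Channel build (20150 or later), NVIDIA's preview WSL driver installed on the Windows side, and Docker with NVIDIA container support inside the distro; the sample container image name follows the early WSL GPU documentation and may have changed since.

```shell
# Inside a WSL2 distribution, after installing the preview GPU driver on Windows:

# 1. Confirm the kernel is a WSL2 kernel (GPU support requires WSL2, not WSL1);
#    a WSL2 kernel typically reports a "microsoft-standard" suffix
uname -r

# 2. Check that the GPU is visible to the Linux environment
nvidia-smi

# 3. Run a CUDA sample in a container to confirm GPU compute works end to end
#    (image name taken from the WSL GPU docs of the time; tags may differ today)
docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
```

If the last command reports benchmark numbers rather than an error about missing GPU devices, CUDA compute is reaching the hardware through WSL2.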
At Altair, chief technology officer Sam Mahalingam is heads-down testing the company's newest software for designing cars, buildings, windmills, and other complex systems. The engineering and design software company, whose customers include BMW, Daimler, Airbus, and General Electric, is developing software that combines computer models of wind and fluid flows with machine design in a single process, so an engineer could design a turbine blade while simultaneously seeing its wake's effect on neighboring turbines in a wind farm. What Altair needs for a job this hard, though, is a particular kind of computing power, provided by graphics processing units (GPUs) made by Silicon Valley's Nvidia and others. "When solving complex design challenges like the interaction between wind structures in windmills, GPUs help expedite computing so faster business decisions can be made," Mahalingam says.

Image caption: An aerodynamics simulation performed with Altair ultraFluidX on the Altair CX-1 concept design, modeled in Altair Inspire Studio.
The GPU Technology Conference is the most exciting event for the AI and ML ecosystem. From researchers in academia to product managers at hyperscale cloud companies to IoT builders and makers, this conference has something relevant for all of them. As an AIoT enthusiast and a maker, I eagerly look forward to GTC. Due to the current COVID-19 situation, I was a bit disappointed to see the event turn into a virtual conference. But the keynote delivered by Jensen Huang, the CEO of NVIDIA, made me forget that it was a virtual event.
Nvidia launched the Jetson Xavier NX embedded System-on-Module (SoM) at the end of last year. It is pin-compatible with the Jetson Nano SoM and includes a CPU, a GPU, PMICs, DRAM, and flash storage. However, it was missing an important accessory: its own development kit. Since an SoM is an embedded board with just a row of connector pins, it is hard to use out of the box. A development board connects the pins on the module to ports such as HDMI, Ethernet, and USB.
Nearly a year and a half after the GeForce RTX 20-series launched with Nvidia's Turing architecture inside, and three years after the launch of the data center-focused Volta GPUs, CEO Jensen Huang unveiled graphics cards powered by the new Ampere architecture during a digital GTC 2020 keynote on Thursday morning. It looks like an absolute monster. Ampere debuts in the form of the A100, a humongous data center GPU powering Nvidia's new DGX-A100 systems. Make no mistake: This 6,912 CUDA core-packing beast targets data scientists, with internal hardware optimized around deep learning tasks. You won't be using it to play Cyberpunk 2077.
At its GPU Technology Conference (GTC) event today, consumer graphics and AI silicon powerhouse Nvidia is announcing its next-generation Graphics Processing Unit (GPU) architecture, dubbed Ampere, and its first Ampere-based GPU, the A100. For more details, see coverage of all of today's Nvidia Ampere-related news by ZDNet's Natalie Gagliordi. Specifically, Nvidia is announcing new GPU-acceleration capabilities coming to Apache Spark 3.0, the release of which is anticipated in late spring. The GPU acceleration functionality is based on the open source RAPIDS suite of software libraries, themselves built on CUDA-X AI. The acceleration technology, named (logically enough) the RAPIDS Accelerator for Apache Spark, was collaboratively developed by Nvidia and Databricks (the company founded by Spark's creators).
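As a rough illustration of how the RAPIDS Accelerator is wired in, the plugin is enabled through Spark configuration rather than code changes. The sketch below assumes Spark 3.0's plugin mechanism and GPU resource-scheduling settings; the jar file names, the script name, and the resource amounts are placeholders, and the exact configuration keys should be checked against the RAPIDS Accelerator documentation.

```shell
# Hypothetical spark-submit invocation enabling the RAPIDS Accelerator for Apache Spark.
# Jar names are placeholders; GPU resource amounts depend on the cluster.
spark-submit \
  --jars rapids-4-spark.jar,cudf.jar \
  --conf spark.plugins=com.nvidia.spark.SQLPlugin \
  --conf spark.rapids.sql.enabled=true \
  --conf spark.executor.resource.gpu.amount=1 \
  --conf spark.task.resource.gpu.amount=0.25 \
  my_etl_job.py
```

With the plugin active, supported SQL and DataFrame operations are executed on the GPU transparently, while unsupported operations fall back to the CPU, which is why existing Spark jobs can benefit without source changes.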
The "HGX" is the world's most complex motherboard, according to Nvidia CEO Jensen Huang, able to accommodate eight of the company's "A100" GPU chips, shown here as eight giant heat sinks. Nvidia chief executive officer Jensen Huang on Thursday held the virtual version of the company's annual "GTC" conference and unveiled the latest architectural innovations for the company's flagship data center graphics processing unit, or "GPU," chips. As in the past, the company has dipped into the names of famous scientists, in this case the French scientist André-Marie Ampère, following previous branding exercises that have included Volta, Pascal, and Maxwell in recent years. The first chip manufactured using the new architecture, the "A100," is already shipping to customers. Huang said all cloud providers, including Microsoft Azure, Google GCP, and Amazon AWS, will be using the new part in servers of various kinds built around it.