Hardware


How AI Is Fueling NVIDIA GTC

#artificialintelligence

NVIDIA GTC (GPU Technology Conference) is a global AI conference that brings together developers, engineers, researchers, inventors, and IT professionals. The conference focuses on…


Intel Introduces Videogame Graphics Chips to Challenge Nvidia

WSJ.com: WSJD - Technology

Intel dominates in semiconductors at the computational heart of personal computers, but it has long ceded the market for videogaming graphics chips to Nvidia and AMD. Intel Chief Executive Pat Gelsinger on Tuesday signaled the company would re-enter that field, releasing a graphics card for gamers that is slated to be available on Oct. 12.


NVIDIA's new AI model generates objects quickly; How stars would look if they were alive; How Will AI Unmake Coding?; Tesla AI Day Brings Optimus Robot!

#artificialintelligence

I hope that you enjoy the latest AI news, insights, and the Web3 section at the end! The model should be able to whip up shapes quickly too. It won't replace many coding jobs, but many coding jobs will become increasingly AI-dependent. Expect Elon Musk to update us on steps toward self-driving cars, too. What if a user doesn't ask for explicit or derogatory information, but it is generated regardless? Congrats to the first winner of the GeForce 3080 Ti! Thanks to NVIDIA, I am giving away a second GeForce 3080 Ti and 5 Deep Learning Institute credits; you can win just by participating in one of the GTC sessions and sharing a screenshot of your attendance under this post!


NVIDIA AI Research Helps Populate Virtual Worlds With 3D Objects

#artificialintelligence

The massive virtual worlds created by growing numbers of companies and creators could be more easily populated with a diverse array of 3D buildings, vehicles, characters and more -- thanks to a new AI model from NVIDIA Research. Trained using only 2D images, NVIDIA GET3D generates 3D shapes with high-fidelity textures and complex geometric details. These 3D objects are created in the same format used by popular graphics software applications, allowing users to immediately import their shapes into 3D renderers and game engines for further editing. The generated objects could be used in 3D representations of buildings, outdoor spaces or entire cities, designed for industries including gaming, robotics, architecture and social media. GET3D can generate a virtually unlimited number of 3D shapes based on the data it's trained on.


Nvidia unveils new gaming chip with AI features, taps TSMC for manufacturing

#artificialintelligence

Nvidia has gained attention in recent years with its booming data center business, which sells chips used in artificial intelligence work such as natural language processing. But the company's roots are in graphics chips, which still provided 59% of its $26.9 billion in revenue in its most recent fiscal year. In a streamed online keynote address, Nvidia Chief Executive Jensen Huang on Tuesday introduced the company's newest "Ada Lovelace" series of graphics chips, named for the 19th-century British mathematician regarded as an early pioneer in computer science. The flagship GeForce RTX 4090 model of the chip will sell for $1,599 and go on sale on Oct. 12. Two less costly RTX 4080 models will start at $899 and $1,199, respectively, and go on sale in November.


What is AI hardware? How GPUs and TPUs give artificial intelligence algorithms a boost

#artificialintelligence

Were you unable to attend Transform 2022? Check out all of the summit sessions in our on-demand library now! Most computers and algorithms -- including, at this point, many artificial intelligence (AI) applications -- run on general-purpose circuits called central processing units, or CPUs. However, when certain calculations are performed frequently, computer scientists and electrical engineers design specialized circuits that can do the same work faster or more accurately. Now that AI algorithms have become so common and essential, these specialized circuits, or chips, are becoming equally widespread.
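A minimal sketch of the idea above (illustrative, not from the article): the same dot product expressed as a scalar loop, the way a general-purpose CPU core steps through it, and as a single bulk operation, the kind of workload specialized hardware like GPUs and TPUs executes in parallel.

```python
import numpy as np

def dot_scalar(a, b):
    # One multiply-add at a time: general-purpose, sequential execution.
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

def dot_vectorized(a, b):
    # One bulk operation: specialized circuits can run these
    # multiply-adds across many lanes at once.
    return float(np.dot(a, b))

a = np.arange(1000, dtype=np.float64)
b = np.arange(1000, dtype=np.float64)
assert abs(dot_scalar(a, b) - dot_vectorized(a, b)) < 1e-6
```

Both forms compute the same result; the difference is that the second exposes the whole computation to the hardware at once, which is what makes acceleration possible.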


FPT Software Launches Chipmaking Subsidiary; Produces First Semiconductor Chips

#artificialintelligence

Vietnam's leading ICT company FPT Software has launched a new subsidiary, FPT Semiconductor, marking a key milestone for the company as it enters the booming semiconductor industry. Through the new subsidiary, FPT Software aims to gain a slice of the Asia Pacific semiconductor market, which accounts for 60 percent of global sales. IDC projects that worldwide semiconductor revenue will reach $661 billion in 2022, an increase of 13.7 percent over 2021. "The launch of FPT Semiconductor is a testament to Vietnamese intelligence as well as our commitment to continue seeking areas to grow, bringing technological advancement to the business community", said FPT Software Chief Operating Officer Tran Dang Hoa. FPT Semiconductor released its first integrated circuits (ICs), which were designed in Vietnam and manufactured in South Korea, in August 2022. These ICs will be used in Internet of Things (IoT) medical devices.


The GPUs for Deep Learning: NVIDIA vs AWS vs Azure and More

#artificialintelligence

As technology advances and more organizations implement machine learning operations (MLOps), people are looking for ways to speed up processes. This is especially true for organizations with deep learning (DL) processes, which can take an extremely long time to run. You can speed up this process by using graphical processing units (GPUs) on-premises or in the cloud. GPUs are microprocessors designed for highly parallel workloads. They enable parallel processing of tasks and can be optimized to increase performance in artificial intelligence and deep learning processes.
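As a stdlib analogy for the parallelism described above (illustrative, not GPU code): deep learning runs fast on GPUs because many independent operations execute at once. The sketch below overlaps independent per-sample computations with a thread pool; a GPU applies the same idea at far finer granularity, with thousands of hardware threads. The `score` function is a hypothetical stand-in for one unit of work.

```python
from concurrent.futures import ThreadPoolExecutor

def score(x):
    # Hypothetical stand-in for an independent per-sample computation
    # (e.g. scoring one input with a model).
    return (x * x) % 97

def score_batch(inputs, workers=4):
    # map() distributes independent tasks across workers and
    # returns results in input order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(score, inputs))
```

Because `Executor.map` preserves input order, the parallel result is identical to running the tasks serially; only the wall-clock behavior changes.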


Nvidia's Ada Lovelace GPU generation: $1,599 for RTX 4090, $899 and up for 4080

#artificialintelligence

After weeks of teases, Nvidia's newest computer graphics cards, the "Ada Lovelace" generation of RTX 4000 GPUs, are here. Nvidia CEO Jensen Huang debuted two new models on Tuesday: the RTX 4090, which will start at a whopping $1,599, and the RTX 4080, which will launch in two configurations. The pricier card, slated to launch on October 12, occupies the same highest-end category as Nvidia's 2020 megaton RTX 3090 (previously designated by the company as its "Titan" product). The 4090's increase in physical size will demand three slots on your PC build of choice. The specs are indicative of a highest-end GPU: 16,384 CUDA cores (up from the 3090's 10,496 CUDA cores) and a 2.52 GHz boost clock (up from 1.695 GHz on the 3090).
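A back-of-envelope check of the specs quoted above, assuming the standard formula for peak FP32 throughput: CUDA cores × 2 FLOPs per cycle (one fused multiply-add) × boost clock. This is a theoretical peak, not a benchmark result.

```python
def peak_fp32_tflops(cuda_cores, boost_ghz):
    # cores * 2 * GHz gives GFLOPS; divide by 1000 for TFLOPS.
    return cuda_cores * 2 * boost_ghz / 1000.0

print(round(peak_fp32_tflops(16384, 2.52), 1))   # RTX 4090 -> 82.6 TFLOPS
print(round(peak_fp32_tflops(10496, 1.695), 1))  # RTX 3090 -> 35.6 TFLOPS
```

By this measure the core-count and clock increases compound to roughly 2.3× the 3090's theoretical FP32 throughput.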


Optimizing TF, XLA and JAX for LLM Training on NVIDIA GPUs

#artificialintelligence

Posted by Douglas Yarrington (Google TPgM), James Rubin (Google PM), Neal Vaidya (NVIDIA TME), and Jay Rodge (NVIDIA PMM). Together, NVIDIA and Google are delighted to announce new milestones and plans to optimize TensorFlow and JAX for the Ampere and recently announced Hopper GPU architectures by leveraging the power of XLA: a performant, flexible, and extensible ML compiler built by Google.