News

#artificialintelligence

NVIDIA opened the door for enterprises worldwide to develop and deploy large language models (LLMs) by enabling them to build their own domain-specific chatbots, personal assistants and other AI applications that understand language with unprecedented levels of subtlety and nuance. The company unveiled the NVIDIA NeMo Megatron framework for training language models with trillions of parameters, the Megatron 530B customizable LLM that can be trained for new domains and languages, and NVIDIA Triton Inference Server with multi-GPU, multi-node distributed inference functionality. Combined with NVIDIA DGX systems, these tools provide a production-ready, enterprise-grade solution to simplify the development and deployment of large language models. "Large language models have proven to be flexible and capable, able to answer deep domain questions, translate languages, comprehend and summarize documents, write stories and compute programs, all without specialized training or supervision," said Bryan Catanzaro, vice president of Applied Deep Learning Research at NVIDIA. "Building large language models for new languages and domains is likely the largest supercomputing application yet, and now these capabilities are within reach for the world's enterprises."
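
For illustration only, here is a minimal sketch of what a client-side request to a Triton Inference Server could look like in Python using NVIDIA's tritonclient package. The model name ("megatron_gpt") and tensor names ("input_ids", "output_ids") are hypothetical placeholders, not details from the announcement; an actual NeMo Megatron deployment defines its own serving configuration.

import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton Inference Server running locally (default HTTP port 8000).
client = httpclient.InferenceServerClient(url="localhost:8000")

# Hypothetical input: a small batch of token IDs for a served language model.
token_ids = np.array([[101, 7592, 2088, 102]], dtype=np.int32)
infer_input = httpclient.InferInput("input_ids", list(token_ids.shape), "INT32")
infer_input.set_data_from_numpy(token_ids)

# "megatron_gpt" is a placeholder; the served model name and its tensor names
# depend entirely on how the model repository is configured.
response = client.infer(model_name="megatron_gpt", inputs=[infer_input])
print(response.as_numpy("output_ids"))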


NVIDIA and the battle for the future of AI chips

#artificialintelligence

There's an apocryphal story about how NVIDIA pivoted from games and graphics hardware to dominate AI chips – and it involves cats. Back in 2010, Bill Dally, now chief scientist at NVIDIA, was having breakfast with a former colleague from Stanford University, the computer scientist Andrew Ng, who was working on a project with Google. "He was trying to find cats on the internet – he didn't put it that way, but that's what he was doing," Dally says. Ng was working at the Google X lab on a project to build a neural network that could learn on its own. The neural network was shown ten million YouTube videos and learned how to pick out human faces, bodies and cats – but to do so accurately, the system required thousands of CPUs (central processing units), the workhorse processors that power computers. "I said, 'I bet we could do it with just a few GPUs,'" Dally says. GPUs (graphics processing units) are specialised for more intense workloads such as 3D rendering – and that makes them better than CPUs at powering AI. Dally turned to Bryan Catanzaro, who now leads deep learning research at NVIDIA, to make it happen.


Nvidia AI research points to an evolution of the chip business 7wData

#artificialintelligence

What happens as more of the world's computer tasks get handed over to neural networks? That's an intriguing prospect, of course, for Nvidia, a company selling a whole heck of a lot of chips to train neural networks. The prospect cheers Bryan Catanzaro, who is the head of applied deep learning research at Nvidia. "We would love for model-based to be more of the workload," Catanzaro told ZDNet this week during an interview at Nvidia's booth at the NeurIPS machine learning conference in Montreal. Catanzaro was the first person doing neural network work at Nvidia when he took a job there in 2011 after receiving his PhD from the University of California at Berkeley in electrical engineering and computer science.


Global Big Data Conference

#artificialintelligence

The GPU maker says its AI platform now has the fastest training record, the fastest inference, and the largest training model of its kind to date. Nvidia is touting advancements to its artificial intelligence (AI) technology for language understanding that it said set new performance records for conversational AI. By adding key optimizations to its AI platform and GPUs, Nvidia is aiming to become the premier provider of conversational AI services, which it says have been limited up to this point due to a broad inability to deploy large AI models in real time. Unlike the much simpler transactional AI, conversational AI uses context and nuance and the responses are instantaneous, explained Nvidia's vice president of applied deep learning research, Bryan Catanzaro, during a press briefing.


Nvidia unveiled a new AI engine that renders virtual worlds in real time – Fanatical Futurist by International Keynote Speaker Matthew Griffin

#artificialintelligence

Nvidia has announced a new Artificial Intelligence (AI) deep learning model that "aims to catapult the graphics industry into the AI Age," and the result is the first ever interactive, AI-rendered virtual world. In short, Nvidia now has an AI capable of rendering high-definition virtual environments in real time, which can be used to create Virtual Reality (VR) games and simulations, and that's big because it takes the effort and cost out of having to design and make them from scratch. To work their magic, the researchers used what they called a Conditional Generative Neural Network as a starting point and then trained a neural network to render new 3D environments, a breakthrough that will allow developers and artists of all kinds to create interactive 3D virtual worlds based on videos from the real world, dramatically lowering the cost and time it takes to produce them. "NVIDIA has been creating new ways to generate interactive graphics for 25 years – and this is the first time we can do this with a neural network," said the leader of the Nvidia research team, Bryan Catanzaro, Vice President of Applied Deep Learning at Nvidia. "Neural networks – specifically generative models like these – are going to change the way graphics are created."


Nvidia GauGAN takes rough sketches and creates 'photo-realistic' landscape images

ZDNet

Researchers at Nvidia have created a new generative adversarial network model for producing realistic landscape images from a rough sketch or segmentation map, and while it's not perfect, it is certainly a step towards allowing people to create their own synthetic scenery. The GauGAN model is initially being touted as a tool to help urban planners, game designers, and architects quickly create synthetic images. The model was trained on over a million images, including 41,000 from Flickr, with researchers stating it acts as a "smart paintbrush" as it fills in the details on the sketch. "It's like a colouring book picture that describes where a tree is, where the sun is, where the sky is," Nvidia vice president of applied deep learning research Bryan Catanzaro said. "And then the neural network is able to fill in all of the detail and texture, and the reflections, shadows and colours, based on what it has learned about real images."
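
To make the "colouring book" analogy concrete, the sketch below shows the general shape of a segmentation-conditioned generator in PyTorch: an integer label map saying where the sky, trees and water go is one-hot encoded and mapped to an RGB image. This is a minimal illustrative sketch of the conditional-generation idea, not Nvidia's GauGAN architecture; the class list and layer sizes are assumptions made up for the example.

import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 4  # illustrative label set: sky, tree, water, ground

class TinyConditionalGenerator(nn.Module):
    """Toy generator mapping a one-hot segmentation map to an RGB image."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),  # RGB values in [-1, 1]
        )

    def forward(self, label_map):
        # label_map: (batch, H, W) integer class IDs -> one-hot (batch, C, H, W)
        one_hot = F.one_hot(label_map, NUM_CLASSES).permute(0, 3, 1, 2).float()
        return self.net(one_hot)

# A rough "sketch": everything is sky (class 0) except a block of trees (class 1).
label_map = torch.zeros(1, 64, 64, dtype=torch.long)
label_map[:, 40:, 10:50] = 1
image = TinyConditionalGenerator()(label_map)
print(image.shape)  # torch.Size([1, 3, 64, 64])

A production system like GauGAN pairs a far larger generator with a discriminator and is trained adversarially on the photo collection described above; the snippet only illustrates the conditioning interface behind the "smart paintbrush" description.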


Nvidia has created the first video game demo using AI-generated graphics

#artificialintelligence

The recent boom in artificial intelligence has produced impressive results in a somewhat surprising realm: the world of image and video generation. The latest example comes from chip designer Nvidia, which today published research showing how AI-generated visuals can be combined with a traditional video game engine. The result is a hybrid graphics system that could one day be used in video games, movies, and virtual reality. "It's a new way to render video content using deep learning," Nvidia's vice president of applied deep learning, Bryan Catanzaro, told The Verge. "Obviously Nvidia cares a lot about generating graphics [and] we're thinking about how AI is going to revolutionize the field."


Nvidia AI research points to an evolution of the chip business

ZDNet

What happens as more of the world's computer tasks get handed over to neural networks? That's an intriguing prospect, of course, for Nvidia, a company selling a whole heck of a lot of chips to train neural networks. The prospect cheers Bryan Catanzaro, who is the head of applied deep learning research at Nvidia. "We would love for model-based to be more of the workload," Catanzaro told ZDNet this week during an interview at Nvidia's booth at the NeurIPS machine learning conference in Montreal. Catanzaro was the first person doing neural network work at Nvidia when he took a job there in 2011 after receiving his PhD from the University of California at Berkeley in electrical engineering and computer science.


Top 5 Deep Learning and AI Stories - October 20, 2017

#artificialintelligence

1. Forbes: How AI can transform businesses
2. GigaOm's "Voices in AI" podcast features AI luminaries
3. NVIDIA's Inception Program for AI startups adds 2,000th member
4. NVIDIA teaches the world about deep learning in finance through workshops
5. What AI can accomplish right now – and the silicon powering it all

1. HOW ARTIFICIAL INTELLIGENCE CAN TRANSFORM BUSINESSES
Answering a question originally asked on Quora, Forbes shares the leading answer by Tony Paikeday, product marketing director at NVIDIA. After providing several examples of businesses across industries that are already using AI, he shares the quickest method for adopting AI: "The fastest way to accelerate AI for business is leveraging powerful and energy-efficient GPUs."

2. GIGAOM'S "VOICES IN AI" PODCAST FEATURES AI LUMINARIES
GigaOm's "Voices in AI" podcast debuted this month, featuring episodes with leading researchers and AI luminaries – including NVIDIA's Bryan Catanzaro. In episode 13, Bryan focuses on AI and the future of work: "I like to think about artificial intelligence as making tools that can perform intellectual work."

3. NVIDIA'S INCEPTION PROGRAM FOR AI STARTUPS ADDS 2,000TH MEMBER
Less than 18 months after its launch, NVIDIA's Inception program – which helps accelerate startups pushing the frontiers of AI and data science – has signed up its 2,000th member company.