Collaborating Authors

Catanzaro Province


Fighting water wastages with IoT and machine learning

#artificialintelligence

Italy suffers from a serious problem of water wastage, linked both to how citizens are educated to use the resource and to leaks in the pipelines caused by the obsolescence and wear of the pipes, as well as malfunctioning meters. Problems in the distribution network also cause inefficiencies (in particular, interruptions in the water supply), which occur three times more frequently in the south of the country than in northern regions. Revelis, the company where I work, developed an IoT platform able to monitor a water delivery network in a district of Catanzaro (a small Italian town you probably hadn't heard of before). The project is still under development, but a few milestones have been achieved. One of its components is responsible for monitoring several tracked objects.
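The article doesn't describe Revelis's algorithms, but a common machine-learning baseline for spotting pipeline leaks from meter data is anomaly detection on flow readings. A minimal sketch (hypothetical, not the Revelis platform): flag any reading more than a few standard deviations away from the recent rolling average.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, threshold=3.0):
    """Flag flow readings that deviate sharply from the recent average.

    A reading is anomalous when it lies more than `threshold` standard
    deviations from the mean of the previous `window` readings.
    """
    anomalies = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A steady flow of ~10 L/min with one sudden spike (a possible leak).
flow = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 25.0, 10.1]
print(flag_anomalies(flow))  # → [6], the index of the spike
```

A production system would use seasonally adjusted baselines (night-time flow is the classic leak indicator), but the rolling z-score captures the core idea.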


News

#artificialintelligence

NVIDIA opened the door for enterprises worldwide to develop and deploy large language models (LLMs) by enabling them to build their own domain-specific chatbots, personal assistants and other AI applications that understand language with unprecedented levels of subtlety and nuance. The company unveiled the NVIDIA NeMo Megatron framework for training language models with trillions of parameters, the Megatron 530B customizable LLM that can be trained for new domains and languages, and NVIDIA Triton Inference Server with multi-GPU, multi-node distributed inference functionality. Combined with NVIDIA DGX systems, these tools provide a production-ready, enterprise-grade solution to simplify the development and deployment of large language models. "Large language models have proven to be flexible and capable, able to answer deep domain questions, translate languages, comprehend and summarize documents, write stories and compute programs, all without specialized training or supervision," said Bryan Catanzaro, vice president of Applied Deep Learning Research at NVIDIA. "Building large language models for new languages and domains is likely the largest supercomputing application yet, and now these capabilities are within reach for the world's enterprises."
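Multi-GPU distributed inference for models this large typically rests on tensor parallelism: each weight matrix is split column-wise across devices, every device computes its partial product independently, and the shards are concatenated. A minimal NumPy sketch of the pattern (illustrative only; it simulates devices on one machine and is not the NeMo Megatron or Triton API):

```python
import numpy as np

def column_parallel_matmul(x, weight, num_devices):
    """Split `weight` column-wise across `num_devices` simulated devices,
    compute each partial product independently, then concatenate the
    results -- the core pattern behind tensor-parallel transformer layers."""
    shards = np.array_split(weight, num_devices, axis=1)  # one shard per device
    partials = [x @ w for w in shards]                    # run in parallel in practice
    return np.concatenate(partials, axis=-1)

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8))    # a tiny activation batch
w = rng.standard_normal((8, 16))   # a layer's weight matrix

# The sharded result matches the single-device product exactly.
assert np.allclose(column_parallel_matmul(x, w, 4), x @ w)
```

Because each shard's product depends only on the full input `x`, no communication is needed until the concatenation step, which is why this decomposition scales well across GPUs and nodes.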


NVIDIA's Canvas app turns doodles into AI-generated 'photos'

Engadget

NVIDIA has launched a new app you can use to paint lifelike landscape images -- even if you have zero artistic skills and a first grader can draw better than you. The new application is called Canvas, and it can turn childlike doodles and sketches into photorealistic landscape images in real time. It's now available for download as a free beta, though you can only use it if your machine is equipped with an NVIDIA RTX GPU. Canvas is powered by the GauGAN AI painting tool, which NVIDIA Research developed and trained using 5 million images. When the company first introduced GauGAN to the world, NVIDIA VP Bryan Catanzaro described its technology as a "smart paintbrush."


NVIDIA and the battle for the future of AI chips

#artificialintelligence

THERE'S AN APOCRYPHAL story about how NVIDIA pivoted from games and graphics hardware to dominate AI chips – and it involves cats. Back in 2010, Bill Dally, now chief scientist at NVIDIA, was having breakfast with a former colleague from Stanford University, the computer scientist Andrew Ng, who was working on a project with Google. "He was trying to find cats on the internet – he didn't put it that way, but that's what he was doing," Dally says. Ng was working at the Google X lab on a project to build a neural network that could learn on its own. The neural network was shown ten million YouTube videos and learned how to pick out human faces, bodies and cats – but to do so accurately, the system required thousands of CPUs (central processing units), the workhorse processors that power computers. "I said, 'I bet we could do it with just a few GPUs,'" Dally says. GPUs (graphics processing units) are specialised for more intense workloads such as 3D rendering – and that makes them better than CPUs at powering AI. Dally turned to Bryan Catanzaro, who now leads deep learning research at NVIDIA, to make it happen.


Fantastic Futures 2019 Conference

#artificialintelligence

Stanford Libraries will host the 2nd International Conference on AI for Libraries, Archives, and Museums over three days, December 4, 5 & 6, 2019. The first 'Fantastic Futures' conference, which took place in December 2018 at the National Library of Norway in Oslo, initiated a community-focused approach to addressing the challenges and possibilities for libraries, archives, and museums in the era of artificial intelligence. The Stanford conference will expand that charge, adding to the plenary gathering a full day of workshops and a half-day 'unconference' shaped by the interests of those assembled. Wednesday, December 4, will be a day of plenary sessions to introduce attendees to a range of topics in AI, from the concerns of algorithmic bias and data privacy to the exciting developments in transforming discovery and digital content curation (see the full program). The two keynote addresses reflect Stanford Libraries' position as an academic center in close proximity to Silicon Valley: Bryan Catanzaro, the Vice President of Applied Deep Learning at Nvidia, will speak to the important contribution he thinks libraries can make in AI.


Nvidia AI research points to an evolution of the chip business

#artificialintelligence

What happens as more of the world's computer tasks get handed over to neural networks? That's an intriguing prospect, of course, for Nvidia, a company selling a whole heck of a lot of chips to train neural networks. The prospect cheers Bryan Catanzaro, who is the head of applied deep learning research at Nvidia. "We would love for model-based to be more of the workload," Catanzaro told ZDNet this week during an interview at Nvidia's booth at the NeurIPS machine learning conference in Montreal. Catanzaro was the first person doing neural network work at Nvidia when he took a job there in 2011 after receiving his PhD from the University of California at Berkeley in electrical engineering and computer science.


Global Big Data Conference

#artificialintelligence

The GPU maker says its AI platform now has the fastest training record, the fastest inference, and the largest training model of its kind to date. Nvidia is touting advancements to its artificial intelligence (AI) technology for language understanding that it said set new performance records for conversational AI. By adding key optimizations to its AI platform and GPUs, Nvidia is aiming to become the premier provider of conversational AI services, which it says have been limited up to this point due to a broad inability to deploy large AI models in real time. Unlike the much simpler transactional AI, conversational AI uses context and nuance, and its responses are instantaneous, explained Nvidia's vice president of applied deep learning research, Bryan Catanzaro, during a press briefing.


Nvidia unveiled a new AI engine that renders virtual worlds in real time – Fanatical Futurist by International Keynote Speaker Matthew Griffin

#artificialintelligence

Nvidia has announced a new Artificial Intelligence (AI) deep learning model that "aims to catapult the graphics industry into the AI Age," and the result is the first ever interactive AI-rendered virtual world. In short, Nvidia now has an AI capable of rendering high-definition virtual environments, which can be used to create Virtual Reality (VR) games and simulations, in real time. That's big because it takes the effort and cost out of having to design and make them from scratch, which has all sorts of advantages. To work their magic, the researchers used what they called a Conditional Generative Neural Network as a starting point and then trained a neural network to render new 3D environments. The breakthrough will allow developers and artists of all kinds to create new interactive 3D virtual worlds based on videos from the real world, dramatically lowering the cost and time it takes to create virtual worlds. "NVIDIA has been creating new ways to generate interactive graphics for 25 years – and this is the first time we can do this with a neural network," said the leader of the Nvidia researchers, Bryan Catanzaro, Vice President of Applied Deep Learning at Nvidia. "Neural networks – specifically – generative models like these are going to change the way graphics are created."


Nvidia GauGAN takes rough sketches and creates 'photo-realistic' landscape images

ZDNet

Researchers at Nvidia have created a new generative adversarial network model for producing realistic landscape images from a rough sketch or segmentation map, and while it's not perfect, it is certainly a step towards allowing people to create their own synthetic scenery. The GauGAN model is initially being touted as a tool to help urban planners, game designers, and architects quickly create synthetic images. The model was trained on over a million images, including 41,000 from Flickr, with researchers stating it acts as a "smart paintbrush" as it fills in the details on the sketch. "It's like a colouring book picture that describes where a tree is, where the sun is, where the sky is," Nvidia vice president of applied deep learning research Bryan Catanzaro said. "And then the neural network is able to fill in all of the detail and texture, and the reflections, shadows and colours, based on what it has learned about real images."
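The "colouring book" analogy can be made concrete: the input to a model like GauGAN is a segmentation map, an integer grid where each cell carries a class label, and the output is an RGB image. A toy stand-in (the real model is a SPADE-based GAN; here each labelled region simply gets a class colour plus noise, and the palette values are made up for illustration):

```python
import numpy as np

# Hypothetical label-to-colour palette standing in for learned texture synthesis.
PALETTE = {0: (135, 206, 235),   # sky
           1: (34, 139, 34),     # tree
           2: (30, 90, 160)}     # water

def render(label_map, seed=0):
    """Turn an integer segmentation map (H, W) into an RGB image (H, W, 3)."""
    rng = np.random.default_rng(seed)
    h, w = label_map.shape
    image = np.zeros((h, w, 3), dtype=np.float64)
    for label, colour in PALETTE.items():
        image[label_map == label] = colour   # fill each labelled region
    image += rng.normal(0, 8, image.shape)   # crude stand-in for texture detail
    return np.clip(image, 0, 255).astype(np.uint8)

sketch = np.zeros((4, 6), dtype=int)   # sky everywhere...
sketch[2:] = 2                         # ...water in the bottom half
print(render(sketch).shape)            # → (4, 6, 3)
```

Where this toy fills regions with flat colour and noise, the trained network fills them with learned detail, texture, reflections and shadows; the interface, a label map in and an image out, is the same.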