
Encord Taps Finance Micro Models for Data Annotation

#artificialintelligence

After meeting at an entrepreneur matchmaking event, Ulrik Hansen and Eric Landau teamed up to parlay their experience in financial trading systems into a platform for faster data labeling. In 2020, the pair of finance industry veterans founded Encord to adapt micromodels typical in finance to automated data annotation. Micromodels are neural networks that require less time to deploy because they're trained on less data and used for specific tasks. Encord's NVIDIA GPU-driven service promises to automate as much as 99 percent of businesses' manual data labeling with its micromodels. "Instead of building one big model that does everything, we're just combining a lot of smaller models together, and that's very similar to how a lot of these trading systems work," said Landau.
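Encord's production system is proprietary, but the pattern Landau describes, many small task-specific models with a human fallback for uncertain cases, can be sketched in a few lines. The sketch below is illustrative only: the task names, feature sizes, and confidence threshold are hypothetical, and a scikit-learn classifier stands in for whatever networks Encord actually trains.

```python
# Illustrative sketch of the micromodel pattern described in the article:
# several small, single-task models instead of one large one, with
# low-confidence predictions escalated to a human annotator.
# All task names and thresholds are hypothetical, not Encord's API.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_micromodel(n_samples=200, n_features=16):
    """Train a tiny single-task classifier on synthetic data."""
    X = rng.normal(size=(n_samples, n_features))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    return LogisticRegression().fit(X, y)

# One small model per narrow annotation task (hypothetical tasks).
micromodels = {
    "is_vehicle": make_micromodel(),
    "is_pedestrian": make_micromodel(),
}

def annotate(task, features, threshold=0.9):
    """Auto-label when the task's micromodel is confident; else defer."""
    proba = micromodels[task].predict_proba(features.reshape(1, -1))[0]
    label = int(proba.argmax())
    if proba[label] >= threshold:
        return {"label": label, "source": "micromodel"}
    return {"label": None, "source": "human_review"}

print(annotate("is_vehicle", rng.normal(size=16)))
```

Because each micromodel handles one narrow task on limited data, it can be retrained in minutes; the fraction of items routed to "human_review" is what a high auto-labeling rate like the 99 percent figure above drives down.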


NVIDIA doubles down on AI language models and inference as a substrate for the Metaverse, in data centers, the cloud and at the edge

ZDNet

GTC, NVIDIA's flagship event, is always a source of announcements around all things AI. The fall 2021 edition is no exception. Omniverse is NVIDIA's virtual world simulation and collaboration platform for 3D workflows, bringing its technologies together. Based on what we've seen, we would describe Omniverse as NVIDIA's take on the Metaverse. You can read more about Omniverse in Stephanie Condon and Larry Dignan's coverage on ZDNet.


NVIDIA Opens Large Language Model Development to Enterprises

#artificialintelligence

NVIDIA opened the door for enterprises worldwide to develop and deploy large language models (LLMs) by enabling them to build their own domain-specific chatbots, personal assistants and other AI applications that understand language with unprecedented levels of subtlety and nuance. The company unveiled the NVIDIA NeMo Megatron framework for training language models with trillions of parameters, the Megatron 530B customizable LLM that can be trained for new domains and languages, and NVIDIA Triton Inference Server with multi-GPU, multi-node distributed inference functionality. Combined with NVIDIA DGX systems, these tools provide a production-ready, enterprise-grade solution to simplify the development and deployment of large language models. "Large language models have proven to be flexible and capable, able to answer deep domain questions, translate languages, comprehend and summarize documents, write stories and compute programs, all without specialized training or supervision," said Bryan Catanzaro, vice president of Applied Deep Learning Research at NVIDIA. "Building large language models for new languages and domains is likely the largest supercomputing application yet, and now these capabilities are within reach for the world's enterprises."
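As a rough illustration of what serving such a model looks like from the client side, the sketch below queries a Triton server with the standard tritonclient Python library. The model name (megatron_530b) and tensor names (INPUT_IDS, OUTPUT) are hypothetical placeholders, not NVIDIA's published serving interface; a real NeMo Megatron deployment uses whatever names its model repository configuration declares.

```python
# Minimal Triton client sketch. Assumes a Triton server is running on
# localhost:8000; model and tensor names below are hypothetical.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Pre-tokenized input IDs (placeholder values).
token_ids = np.array([[101, 2054, 2003, 1037, 2312, 2653, 2944, 102]],
                     dtype=np.int32)
inp = httpclient.InferInput("INPUT_IDS", list(token_ids.shape), "INT32")
inp.set_data_from_numpy(token_ids)
out = httpclient.InferRequestedOutput("OUTPUT")

# Batching and, for multi-GPU/multi-node backends, distributed execution
# happen server-side; the client call is the same either way.
result = client.infer(model_name="megatron_530b", inputs=[inp], outputs=[out])
print(result.as_numpy("OUTPUT"))
```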


Deploy fast and scalable AI with NVIDIA Triton Inference Server in Amazon SageMaker

#artificialintelligence

Machine learning (ML) and deep learning (DL) are becoming effective tools for solving diverse computing problems, from image classification in medical diagnosis and conversational AI in chatbots to recommender systems in ecommerce. However, ML models with strict latency or high-throughput requirements can become prohibitively expensive to run at scale on generic computing infrastructure. To achieve performance and deliver inference at the lowest cost, such models require inference accelerators like GPUs to meet the stringent throughput, scale, and latency requirements businesses and customers expect. The deployment of trained models and accompanying code in the data center, public cloud, or at the edge is called inference serving. We are proud to announce the integration of NVIDIA Triton Inference Server in Amazon SageMaker.
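A minimal sketch of what this integration could look like with the SageMaker Python SDK is shown below, assuming a Triton model repository has already been packaged and uploaded to S3. The container image URI, S3 path, IAM role, and default model name are placeholder values (the exact image account and tag vary by region and release), not a definitive deployment recipe.

```python
# Sketch: deploying a Triton-serving model to a SageMaker endpoint.
# All URIs, ARNs, and names below are hypothetical placeholders.
import sagemaker
from sagemaker.model import Model

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

# SageMaker-provided Triton container; account/tag vary by region/release.
triton_image = ("785573368785.dkr.ecr.us-east-1.amazonaws.com/"
                "sagemaker-tritonserver:21.08-py3")

model = Model(
    image_uri=triton_image,
    model_data="s3://my-bucket/triton-model-repository.tar.gz",  # placeholder
    role=role,
    env={"SAGEMAKER_TRITON_DEFAULT_MODEL_NAME": "resnet50"},  # placeholder
    sagemaker_session=session,
)

# A GPU instance type, since Triton targets accelerated inference.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g4dn.xlarge",
)
```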