What Is Artificial Intelligence (AI)

#artificialintelligence

Natural language processing (NLP) enables an intuitive form of communication between humans and intelligent systems using human languages. NLP drives modern interactive voice response (IVR) systems by interpreting spoken language, and chatbots are the most common application of NLP in business. Advanced virtual assistants, sometimes called conversational AI agents, are powered by conversational user interfaces, NLP, and semantic and deep learning techniques. Progressing beyond chatbots, advanced virtual assistants listen to and observe behaviors, build and maintain data models, and predict and recommend actions to assist with and automate tasks that previously only humans could accomplish.


Pytorch & C++ #1

#artificialintelligence

In this series, I will try to provide examples, practices, and projects with the PyTorch C++ API. If you are new to the series, you can check the first blog post before diving in. In this story, let's play with Torch tensors to get familiar with tensor operations in C++. With this background, we will understand the following projects and practices better. All code is available in this GitHub repo.
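As a taste of the kind of tensor exercises the series covers, here is a minimal, illustrative sketch using the PyTorch C++ API (libtorch). It is not taken from the series' repo; the specific operations and shapes are just assumed examples of common first steps.

```cpp
// Minimal libtorch sketch: creating tensors and running a few common operations.
#include <torch/torch.h>
#include <iostream>

int main() {
    // Create tensors: random values and a reshaped integer range cast to float.
    torch::Tensor b = torch::rand({2, 3});
    torch::Tensor c = torch::arange(0, 6).reshape({2, 3}).to(torch::kFloat);

    // Elementwise arithmetic and matrix multiplication.
    torch::Tensor sum = b + c;
    torch::Tensor prod = b.matmul(c.transpose(0, 1));  // (2x3) x (3x2) -> (2x2)

    // Indexing and a simple reduction.
    std::cout << "first row of c:\n" << c[0] << std::endl;
    std::cout << "mean of b: " << b.mean() << std::endl;
    std::cout << "matmul result:\n" << prod << std::endl;

    // Move a tensor to the GPU if CUDA is available.
    if (torch::cuda::is_available()) {
        torch::Tensor g = sum.to(torch::kCUDA);
        std::cout << "sum now lives on: " << g.device() << std::endl;
    }
    return 0;
}
```

Building a file like this typically means linking against libtorch, for example with CMake's find_package(Torch REQUIRED); the repo linked above should be treated as the authoritative setup for the series.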


NVIDIA AI Platform Delivers Big Gains for Large Language Models

#artificialintelligence

As the size and complexity of large language models (LLMs) continue to grow, NVIDIA is today announcing updates to the NeMo Megatron framework that provide training speed-ups of up to 30%. These updates, which include two trailblazing techniques and a hyperparameter tool to optimize and scale training of LLMs on any number of GPUs, offer new capabilities to train and deploy models using the NVIDIA AI platform. BLOOM, the world's largest open-science, open-access multilingual language model, with 176 billion parameters, was recently trained on the NVIDIA AI platform, enabling text generation in 46 natural languages and 13 programming languages. The NVIDIA AI platform has also powered the Megatron-Turing NLG model (MT-NLG), one of the most powerful transformer language models, with 530 billion parameters. LLMs are among today's most important advanced technologies, involving up to trillions of parameters that learn from text.


La veille de la cybersécurité

#artificialintelligence

Deep learning is the subset of machine learning that deals primarily with neural networks. Deep learning skills are among the key skills students need today to thrive in the global economy, and they can help land prestigious job positions at FAANG companies. FAANG is an acronym for five prominent American technology companies: Facebook, Amazon, Apple, Netflix, and Google. Read on to find out more about the key deep learning skills in demand at FAANG.


Carbon Footprint Management with Data-Driven AI and IoT

#artificialintelligence

We were chosen as winners of the Climate Hackathon 2022 competition organized by Microsoft. The aim of the competition was to find new solutions that help prevent climate change by applying new technologies. We entered with a solution we had already started designing and working on, but the hackathon gave us some needed urgency to finalize it. Going forward, we are ready to continue turning the proposed solution into a marketable product that can help other companies improve their environmental sustainability. The competition had three distinct challenges, from which teams could choose one to solve.


InfoQ AI, ML and Data Engineering Trends Report 2022

#artificialintelligence

Welcome to the InfoQ podcast annual trends report on AI, ML, and data engineering topics. I am joined today by the InfoQ editorial team and also an external panelist. There have been a lot of innovations and developments happening in the AI and ML space. Before we jump into the main part of this podcast, let's start with introductions of our panelists. Rags, can you please introduce yourself?

Rags Srinivas: Glad to be here. I was here for the previous podcast last year as well. Things have changed quite a bit, but I focus mainly on big data infrastructure and the confluence of that, and there are quite a few developments happening there that I'd love to talk about when we get there. I work for DataStax as a developer advocate, and essentially, again, it's all about data, AI, infrastructure, and how to manage your costs and do it efficiently. Hopefully, we'll cover all that.

I'm Roland, I'm a machine learning engineer, and I hope to talk a lot about transformer models and large-scale foundational models. For InfoQ, I like to write about some of the latest innovations in deep learning, and I definitely want to talk about NLP and some of the multi-modal text and image models.

Srini Penchikala: Next is Daniel Dominguez.

Daniel Dominguez: Thank you for the invitation. I like to write about the metaverse, new technologies, and deep learning.


Last Week in AI #174: Cerebras sets record for largest AI model on one device, open source large language model, robotaxis paralyzed, and more!

#artificialintelligence

Cerebras Systems, with its latest WSE-2 chip, has set the record for the largest AI model ever trained on a single device. The chip, which has 850,000 cores and 2.6 trillion transistors, is much larger than the largest GPUs: it has 123x more cores, 1,000x more memory, and 12,000x more bandwidth than the largest GPU. This allowed Cerebras to train a 20-billion-parameter neural network model on a single chip. Doing so with GPUs would require complex compute-cluster engineering and management, which can be much more expensive and is typically feasible only at large tech companies.


Remote Computer Vision Engineer openings in California on August 14, 2022 – Data Science Jobs

#artificialintelligence

Samsara (NYSE: IOT) is the pioneer of the Connected Operations Cloud, which allows businesses that depend on physical operations to harness IoT (Internet of Things) data to develop actionable business insights and improve their operations. Founded in San Francisco in 2015, we now employ more than 1,800 people globally and have over 1.5 million active devices. Samsara went public in December 2021, and we're just getting started. Recent awards we've won include:

• #2 in the Financial Times' Fastest Growing Companies in the Americas list 2021
• Named a Best Place to Work by Built In 2022
• #19 in the Forbes Cloud 100 2021
• IoT Analytics Company of the Year among 2022's IoT Breakthrough winners
• Forbes Advisor named us the Best Solution for Large Companies – Fleet management software for 2022

We're driving change in industries that have yet to fully embrace digital transformation. Physical operations make up a massive slice of the global economy but haven't benefited from innovation and actionable information in the way that other sectors have.


KDD: Graph neural networks, fairness, and inclusivity

#artificialintelligence

As general chair of this year's ACM Conference on Knowledge Discovery and Data Mining (KDD), Huzefa Rangwala, a senior manager at the Amazon Machine Learning Solutions Lab, has a broad view of the topics under discussion there. Two of the most prominent, he says, are graph neural networks and fairness in AI. Graphs are data representations that can encode relationships between different data items, and graph neural networks are machine learning models that are useful for knowledge discovery because they can be used to infer graph structures. "Our world is connected in lots of ways, so you'll see graph neural networks find applications in lots of different domains, all the way from social networks and transportation networks to knowledge graphs and drug discovery," Rangwala says. The Amazon Machine Learning Solutions Lab brings the expertise of Amazon scientists and the resources of Amazon Web Services to bear on customers' machine learning problems.
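To make the message-passing idea behind graph neural networks concrete, here is a small, framework-free C++ sketch of a single propagation step in which each node averages its neighbors' features. The toy graph, the features, and the mean aggregator are illustrative assumptions, not anything drawn from the KDD program or Amazon's work.

```cpp
// Illustrative sketch: one round of message passing, the core operation behind
// graph neural networks. Each node averages its neighbors' feature vectors.
#include <cstddef>
#include <iostream>
#include <vector>

using Features = std::vector<double>;

// adjacency[i] lists the neighbors of node i; features[i] is node i's feature vector.
std::vector<Features> message_pass(const std::vector<std::vector<std::size_t>>& adjacency,
                                   const std::vector<Features>& features) {
    std::vector<Features> updated(features.size());
    for (std::size_t i = 0; i < adjacency.size(); ++i) {
        Features agg(features[i].size(), 0.0);
        // Sum the neighbors' feature vectors...
        for (std::size_t j : adjacency[i]) {
            for (std::size_t d = 0; d < agg.size(); ++d) agg[d] += features[j][d];
        }
        // ...then average them (a real GNN layer would also apply learned
        // weights and a nonlinearity here).
        if (!adjacency[i].empty()) {
            for (double& v : agg) v /= static_cast<double>(adjacency[i].size());
        }
        updated[i] = agg;
    }
    return updated;
}

int main() {
    // A tiny undirected path graph 0-1-2 with 2-D node features.
    std::vector<std::vector<std::size_t>> adjacency = {{1}, {0, 2}, {1}};
    std::vector<Features> features = {{1.0, 0.0}, {0.0, 1.0}, {1.0, 1.0}};

    std::vector<Features> updated = message_pass(adjacency, features);
    for (std::size_t i = 0; i < updated.size(); ++i) {
        std::cout << "node " << i << ": [" << updated[i][0] << ", " << updated[i][1] << "]\n";
    }
    return 0;
}
```

Stacking several such steps, each with its own learned transformation, is what lets a GNN propagate information across the kinds of connected data Rangwala describes.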


Learn about Deep Learning Accelerators on the Jetson Orin with NVIDIA

#artificialintelligence

Developers interested in learning more about the Deep Learning Accelerator (DLA) on NVIDIA's Jetson Orin will be pleased to know that NVIDIA has published a new article on its technical blog providing an overview of the DLA when used with the Jetson system, which combines a CPU and GPU into a single module and gives developers an expansive NVIDIA software stack in a small, low-power package that can be deployed at the edge. "Though the DLA doesn't have as many supported layers as the GPU, it still supports a wide variety of layers used in many popular neural network architectures. In many instances, the layer support may cover the requirements of your model. For example, the NVIDIA TAO Toolkit includes a wide variety of pre-trained models that are supported by the DLA, ranging from object detection to action recognition." "While it's important to note that the DLA throughput is typically lower than that of the GPU, it is power-efficient and allows you to offload deep learning workloads, freeing the GPU for other tasks."
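As a rough sketch of how offloading a model to the DLA might look in practice, the C++ snippet below uses TensorRT's nvinfer1 API to build an engine that targets the DLA and falls back to the GPU for unsupported layers. The ONNX file name and DLA core index are placeholders, and the overall flow is an assumption to be checked against NVIDIA's TensorRT and Jetson documentation, not code from the blog post.

```cpp
// Hedged sketch: build a TensorRT engine from an ONNX model and route it to the DLA,
// with GPU fallback for layers the DLA does not support. Paths and settings are placeholders.
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <fstream>
#include <iostream>
#include <memory>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
};

int main() {
    Logger logger;
    auto builder = std::unique_ptr<nvinfer1::IBuilder>(nvinfer1::createInferBuilder(logger));
    auto network = std::unique_ptr<nvinfer1::INetworkDefinition>(builder->createNetworkV2(
        1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH)));
    auto parser = std::unique_ptr<nvonnxparser::IParser>(
        nvonnxparser::createParser(*network, logger));
    if (!parser->parseFromFile("model.onnx",  // placeholder model path
                               static_cast<int>(nvinfer1::ILogger::Severity::kWARNING))) {
        std::cerr << "failed to parse ONNX model" << std::endl;
        return 1;
    }

    auto config = std::unique_ptr<nvinfer1::IBuilderConfig>(builder->createBuilderConfig());
    // Send layers to the DLA; the DLA runs in reduced precision, so enable FP16.
    config->setDefaultDeviceType(nvinfer1::DeviceType::kDLA);
    config->setDLACore(0);  // placeholder core index
    config->setFlag(nvinfer1::BuilderFlag::kFP16);
    // Let unsupported layers fall back to the GPU instead of failing the build.
    config->setFlag(nvinfer1::BuilderFlag::kGPU_FALLBACK);

    auto serialized = std::unique_ptr<nvinfer1::IHostMemory>(
        builder->buildSerializedNetwork(*network, *config));
    std::ofstream out("model_dla.engine", std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()),
              static_cast<std::streamsize>(serialized->size()));
    return 0;
}
```

For quick experiments on a Jetson device, the trtexec command-line tool offers a similar path with flags along the lines of --useDLACore and --allowGPUFallback, which avoids writing build code by hand.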