What is the Neural Network Libraries container available in NVIDIA GPU Cloud - World-class cloud from India

#artificialintelligence

With the applications of artificial intelligence and deep learning (DL) on the rise, organisations seek easier and faster solutions to the problems AI and deep learning present. The challenge has always been how to imitate the human brain and deploy its logic artificially. The result: neural networks, which are essentially designed on the wiring of the human brain. Neural networks can be described as a set of algorithms, loosely modelled on the human brain, that are designed to recognise patterns.
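
As a toy illustration of "a set of algorithms designed to recognise patterns", here is a minimal sketch (not from the article) of a tiny PyTorch network learning the XOR pattern; the architecture and hyperparameters are arbitrary choices.

```python
import torch
import torch.nn as nn

# Four input patterns and their XOR labels.
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

# A small multilayer perceptron: layers of "neurons" loosely
# modelled on the brain's wiring, as the article describes.
model = nn.Sequential(
    nn.Linear(2, 8),
    nn.Tanh(),
    nn.Linear(8, 1),
    nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCELoss()

for _ in range(2000):              # repeated exposure to the patterns
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()                # adjust weights to fit the pattern better
    optimizer.step()

print(model(X).round())            # expected: tensor([[0.], [1.], [1.], [0.]])
```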


NVIDIA's Kaolin: A 3D Deep Learning Library - Analytics India Magazine

#artificialintelligence

Unlike 2D data, 3D data is complex, with more parameters and features. Collecting 3D data and transforming it from one representation to another is a tedious process, which makes 3D deep learning more time-consuming and error-prone than 2D computer vision. Though well-performing models, datasets, metrics, graphics tools, and visualisation tools have been published in recent years, integrating these different approaches is a non-trivial job for researchers and practitioners. In this scenario, NVIDIA introduced a PyTorch-based library named Kaolin and has recently released its latest optimised version.
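
To make the "transforming from one representation to another" point concrete, here is a hedged sketch of a typical Kaolin workflow: loading a mesh and voxelising it on the GPU. The calls (kaolin.io.obj.import_mesh, kaolin.ops.conversions.trianglemeshes_to_voxelgrids) follow recent Kaolin releases but should be checked against your installed version; the file path and resolution are placeholders.

```python
import kaolin

mesh = kaolin.io.obj.import_mesh("model.obj")    # placeholder path
vertices = mesh.vertices.unsqueeze(0).cuda()     # batch of 1, moved to GPU
faces = mesh.faces.cuda()

# Convert the triangle mesh into a 32^3 occupancy voxel grid: the kind of
# representation-to-representation transform the article calls tedious.
voxels = kaolin.ops.conversions.trianglemeshes_to_voxelgrids(
    vertices, faces, resolution=32
)
print(voxels.shape)  # torch.Size([1, 32, 32, 32])
```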


Register For Data Science Meetup: NVIDIA RAPIDS GPU-Accelerated Data Analytics & Machine Learning Workshop, 2nd Edition

#artificialintelligence

A GPU is one of the most important components of modern artificial intelligence and deep learning architecture. Enterprises and developers are constantly on the lookout for tools that help them build and manage end-to-end data science and analytics pipelines seamlessly. RAPIDS is one such tool, incubated by NVIDIA on the strength of the company's expertise in hardware and data science. RAPIDS uses NVIDIA CUDA primitives for low-level compute optimisation, and lets developers exploit GPU parallelism and high-bandwidth memory through user-friendly Python interfaces. RAPIDS also helps with data preparation tasks in data science pipelines.
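
As a small illustration of those "user-friendly Python interfaces", here is a sketch using cuDF, the RAPIDS dataframe library, whose API mirrors pandas while executing on the GPU. The file name and column names are illustrative placeholders.

```python
import cudf

# The CSV is parsed directly into GPU memory.
df = cudf.read_csv("transactions.csv")

# Typical data-preparation steps, executed with GPU parallelism
# and high-bandwidth memory rather than on the CPU.
df = df.dropna(subset=["amount"])
summary = df.groupby("customer_id")["amount"].mean()

print(summary.head())
```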


An IoT Based Framework For Activity Recognition Using Deep Learning Technique

arXiv.org Machine Learning

Activity recognition is the ability to identify and recognise the actions or goals of an agent. The agent can be any object or entity that performs actions with end goals; it can be a single agent performing an action, or a group of agents performing actions or interacting. Human activity recognition has gained popularity due to its demand in many practical applications such as entertainment, healthcare, simulations and surveillance systems. Vision-based activity recognition has the advantage that it does not require any human intervention or physical contact with humans; instead, a set of networked cameras tracks and recognises the activities of the agent. Traditional applications for tracking or recognising human activities made use of wearable devices, but such applications require physical contact with the person. To overcome these challenges, a vision-based activity recognition system can be used, in which a camera records the video and a processor performs the recognition. The work is implemented in two stages. In the first stage, an approach to activity recognition is proposed using background subtraction of images followed by 3D convolutional neural networks, and the impact of applying background subtraction before the 3D convolutional network is reported. In the second stage, the work is extended and implemented on a Raspberry Pi, which records a stream of video and then recognises the activity involved in it. Thus, a proof of concept for activity recognition using a small, IoT-based device is provided, which can enhance the system and extend its applications in various ways, such as improved portability, networking, and other capabilities of the device.
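
The paper's two-stage pipeline can be sketched as follows; this is an illustrative reconstruction, not the authors' code, and the clip length, frame size and class count are assumptions. It applies OpenCV background subtraction to each frame, then feeds the stacked foreground masks to a small 3D convolutional network.

```python
import cv2
import numpy as np
import torch
import torch.nn as nn

subtractor = cv2.createBackgroundSubtractorMOG2()

def preprocess(frames):
    """Background-subtract a list of grayscale frames into one clip tensor."""
    masks = [subtractor.apply(f) for f in frames]       # foreground masks
    clip = np.stack(masks).astype(np.float32) / 255.0   # (T, H, W)
    return torch.from_numpy(clip)[None, None]           # (N=1, C=1, T, H, W)

class Simple3DCNN(nn.Module):
    def __init__(self, num_classes=5):                  # class count assumed
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                    # pool time and space away
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Sixteen synthetic 64x64 frames stand in for a Raspberry Pi camera stream.
frames = [np.random.randint(0, 255, (64, 64), dtype=np.uint8) for _ in range(16)]
logits = Simple3DCNN()(preprocess(frames))
print(logits.argmax(dim=1))  # index of the predicted activity class
```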


CPUs vs GPUs: Which chips will give firms the AI edge?

#artificialintelligence

Mumbai: Early this month at the Intel AI Devcon 2018 in Bengaluru, a holographic avatar called Ella listened intently to composer Kevin Doucette playing notes on his synthesizer. When he paused, she began composing her own notes, complementing his music in real time. Ella was learning about features such as tempo, scale and pitch from the music data that was being streamed in real time to an Intel Movidius Neural Compute Stick. To perform this artificial intelligence (AI) task, Intel used a class of artificial neural networks, the recurrent neural network (RNN), which depends on previous calculations to work on current ones. The Neural Compute Stick is simply a case in point that Intel, a company most people identify with the central processing units (CPUs) inside personal computers (PCs), mobiles and servers, is widening its portfolio to stay in an AI race whose strong contenders include Nvidia, Microsoft, Google, Facebook, IBM, Amazon, Apple, Alibaba and Baidu.
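
The property the article highlights, that an RNN "depends on previous calculations to work on current ones", can be shown in a few lines of PyTorch. The feature names echo the article (tempo, scale, pitch); all dimensions are assumptions.

```python
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=3, hidden_size=16, batch_first=True)

# A stream of 8 timesteps, each with 3 features (say tempo, scale, pitch).
sequence = torch.randn(1, 8, 3)

hidden = torch.zeros(1, 1, 16)     # initial state: nothing heard yet
for t in range(sequence.size(1)):
    # The hidden state is fed back in, so step t depends on all steps before it.
    _, hidden = rnn(sequence[:, t:t+1, :], hidden)

print(hidden.shape)  # torch.Size([1, 1, 16]): a running summary of the music so far
```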


Over 5,000 Indian developers in 6 cities acquire deep learning skills, prepare for AI era at NVIDIA Developer Connect 2017

#artificialintelligence

December 21, 2017 (Business Wire India): NVIDIA today completed the first edition of Developer Connect 2017 in Bangalore, having brought together the best minds in research, academia and industry across Hyderabad, Chennai, Mumbai, Pune, Delhi and Bangalore. Forty-two speaker sessions from leading experts in fields such as computer vision, sensor fusion, software development, regulation and HD mapping provided the expertise. The six-city developer roadshow drew over 5,000 attendees, who experienced some of the highest-quality workshops and demonstrations of AI and deep learning tools, designed to meet the challenges big data presents. Attendees got a closer look at NVIDIA's DGX systems, as well as the opportunity to learn more about its new Volta architecture. Both the DGX-1 and the DGX Station were on display to demonstrate the full power of these AI supercomputers. The concluding segment featured prominent speakers from organisations such as Ola, Cognitive Computing, Microsoft, Hewlett Packard Enterprise Labs, Shell India, Sony India and Aditya Imaging Information Technologies, who provided their views.


TPL and NVIDIA's Deep Learning workshop a roaring success

#artificialintelligence

Hyderabad, 15th November 2017: Times Professional Learning recently conducted a Deep Learning Workshop in Hyderabad, in association with its technology partner NVIDIA. The one-day workshop received a good response from the technology enthusiasts of Hyderabad. The instructor-led NVIDIA Deep Learning Institute (DLI) Master Class on deep learning helped students and professionals understand various aspects of machine learning and artificial intelligence (AI).


Fujitsu adds deep learning to NVIDIA GPUs

@machinelearnbot

Fujitsu today announces the addition of NVIDIA Volta graphics processing units (GPUs) to accelerate advances in artificial intelligence and support deep learning processing on its latest Primergy x86 servers. Available to customers in Europe, the Middle East, India and Africa beginning December 2017, select Primergy models are certified for the new generation of NVIDIA Tesla V100 GPU accelerators. AI and deep learning computing involves large amounts of raw data and highly demanding compute environments. Fujitsu is rising to this challenge by introducing native deep learning processing capabilities to select Fujitsu Primergy CX and RX server models. To achieve the highest possible levels of system performance, Fujitsu is introducing native support for NVIDIA GPUs via direct connection to the mainboard.


Intel, NVIDIA battle it out in data centre market - The Economic Times

#artificialintelligence

BENGALURU: Intel and NVIDIA are locked in a new battle for turf, the booming data centre market, and at the heart of this skirmish is the technology that's changing the world: artificial intelligence (AI). In the recent quarter ended April 30, NVIDIA's revenue increased by 48% year-on-year to $1.94 billion. A big revenue bump came from its data centre business, which recorded $409 million in revenue in the first quarter of this fiscal year, up 186% year-on-year. The reason for the exponential increase is the spike in demand for a specific kind of microprocessor made by NVIDIA: the graphics processing unit (GPU). Large technology companies such as Google, Amazon, Microsoft, Facebook, IBM and Alibaba have all installed NVIDIA's elite Tesla GPUs to power their data centres, performing machine learning to analyse data gathered from the cloud and derive insights.


Microsoft made its AI work on a $10 Raspberry Pi

Engadget

When you're far from a cell tower and need to figure out whether that bluebird is Sialia sialis or Sialia mexicana, no cloud server is going to help you. That's why companies are squeezing AI onto portable devices, and Microsoft has just taken that to a new extreme by putting deep learning algorithms onto a Raspberry Pi. The goal is to get AI onto "dumb" devices like sprinklers, medical implants and soil sensors to make them more useful, even if there's no supercomputer or internet connection in sight. The idea came from Microsoft Labs teams in Redmond and Bangalore, India. Ofer Dekel, who manages an AI optimization group at the Redmond lab, was trying to figure out a way to stop squirrels from eating flower bulbs and seeds from his bird feeder.