Information Technology Hardware

I built a DIY license plate reader with a Raspberry Pi and machine learning


A few months ago, I started entertaining the idea of giving my car the ability to detect and recognize objects. I mostly fancied this idea because I've seen what Teslas are capable of, and while I didn't want to buy a Tesla right away (the Model 3 is looking juicier with each passing day, I gotta say), I thought I'd try meeting my dream halfway. Below, I've documented each step of the project. If you just want to see a video of the detector in action or the GitHub link, skip to the bottom. I started by thinking about what such a system should be capable of.

Artificial Intelligence in Security Market May Set New Growth: Nvidia, Intel, Xilinx - Chronicles 99


Thanks for reading this article; you can also get individual chapter-wise sections or region-wise report versions, such as North America, Europe or Asia. About the Author: HTF Market Report is a wholly owned brand of HTF Market Intelligence Consulting Private Limited. HTF Market Report is a global research and market intelligence consulting organization uniquely positioned not only to identify growth opportunities but also to empower and inspire you to create visionary growth strategies, enabled by an extraordinary depth and breadth of thought leadership, research, tools, events and experience that help you turn goals into reality. Our understanding of the interplay between industry convergence, mega trends, technologies and market trends provides our clients with new business models and expansion opportunities. We are focused on identifying an "Accurate Forecast" in every industry we cover, so our clients can reap the benefits of being early market entrants and accomplish their goals and objectives.

BERT Fine Tuning Benchmark on Quadro RTX 8000 GPUs


For this post, we measured fine-tuning performance (training and inference) for the BERT (Bidirectional Encoder Representations from Transformers) implementation in TensorFlow using NVIDIA Quadro RTX 8000 GPUs. For testing, we used an Exxact Valence Workstation fitted with 4x Quadro RTX 8000s with NVLink, giving the system 192 GB of GPU memory. These tests measure performance for a popular use case for BERT, and for NLP in general, and are meant to show typical GPU performance for such a task. We made slight modifications to the training benchmark script to obtain the larger batch-size metrics. The script runs multiple tests on the SQuAD v1.1 dataset using batch sizes of 1, 2, 4, 8, 16, 32, and 64 for training, and 1, 2, 4, and 8 for inference.
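The sweep described above (time a fixed number of steps per batch size, report throughput) can be sketched generically. This is a minimal stand-in harness, not NVIDIA's benchmark script; `fake_train_step` is a placeholder you would replace with a real BERT fine-tuning step.

```python
import time

def benchmark(step_fn, batch_sizes, warmup=2, iters=10):
    """Time step_fn(batch_size) and report examples/sec for each batch size."""
    results = {}
    for bs in batch_sizes:
        for _ in range(warmup):            # warm-up runs exclude one-time setup costs
            step_fn(bs)
        start = time.perf_counter()
        for _ in range(iters):
            step_fn(bs)
        elapsed = time.perf_counter() - start
        results[bs] = bs * iters / elapsed  # sequences processed per second
    return results

# Hypothetical stand-in workload whose cost scales with batch size.
def fake_train_step(batch_size):
    total = 0
    for i in range(batch_size * 10_000):
        total += i
    return total

if __name__ == "__main__":
    for bs, sps in benchmark(fake_train_step, [1, 2, 4, 8]).items():
        print(f"batch {bs}: {sps:.1f} seq/s")
```

Larger batch sizes usually raise throughput until memory or compute saturates, which is why the post sweeps up to the largest batch that fits in the 48 GB per card.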



A guide showing how to train TensorFlow Lite object detection models and run them on Android, the Raspberry Pi, and more! TensorFlow Lite is an optimized framework for deploying lightweight deep learning models on resource-constrained edge devices. TensorFlow Lite models have faster inference times and require less processing power, so they can be used to obtain faster performance in real-time applications. This guide provides step-by-step instructions for how to train a custom TensorFlow object detection model, convert it into an optimized format that can be used by TensorFlow Lite, and run it on Android phones or the Raspberry Pi. The guide is broken into three major portions, each with its own dedicated README file in this repository. The repository also contains Python code for running the newly converted TensorFlow Lite model to perform detection on images, videos, or webcam feeds.
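The convert-then-run step the guide describes can be sketched with the standard `tf.lite` API. This is a minimal sketch assuming TensorFlow 2.x; the tiny Dense model is a stand-in for the guide's trained object detector, which goes through the same converter/interpreter path.

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model; in the guide this would be a trained detection model.
model = tf.keras.Sequential([tf.keras.Input(shape=(8,)),
                             tf.keras.layers.Dense(4)])

# Convert to the TensorFlow Lite flat-buffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Run inference with the TFLite interpreter (the same API is exposed
# on the Raspberry Pi and Android through the tflite runtime bindings).
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.ones((1, 8), dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])
print(result.shape)  # (1, 4)
```

The resulting `tflite_model` bytes are what you would write to a `.tflite` file and copy to the edge device.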

The new "6 x Tesla T4" configuration is now available for rent at LeaderGPU


The Tesla T4 shows excellent results in machine learning, analytics and rendering; the card has been specifically designed for inference. The range of server offerings from LeaderGPU has recently been updated with another powerful option that will be appreciated by rendering specialists and experts in the field of artificial intelligence and neural networks. Among the advantages of the Tesla T4 is its impressive single-precision HPC performance. The Tesla T4 is based on the Turing architecture and includes 2560 CUDA cores and 320 Tensor cores.

How Machine Learning & Data Storage Could Help Save Plant Species


Lamb-succory (Arnoseris minima), Davall's sedge (Carex davalliana) and red helleborine (Cephalanthera rubra) are plants, native to the United Kingdom, that are endangered or already extinct. The disappearances of these species might seem inconsequential in the grand scheme of things, but they're part of a global trend: a decrease in plant (and animal) biodiversity. Biodiversity is a critical component of the survival of any ecosystem. The variety of traits found in each plant (like resistance to a certain type of insect, or a tendency to wilt) is critical to the resilience of all species against shocks and stresses -- whether it be the arrival of an invasive species, a natural disaster or even climate change. Luckily, the growing availability of data storage and increasingly sophisticated machine learning techniques might be able to help.

Facebook AI Researchers Achieve a 107x Speedup for Training Virtual Agents – NVIDIA Developer News Center


Navigating a new indoor space without any prior knowledge or even a map is a challenging task for a human, let alone a robot. To help develop intelligent machines that interact more effectively with complex 3D environments, Facebook researchers developed a GPU-accelerated deep reinforcement learning model that achieves near 100 percent success in navigating a variety of virtual environments without a pre-provided map. To achieve this breakthrough, the team focused their work on developing an efficient approach to scaling RL models, which require a significant number of training samples, using multi-node distribution. "A single parameter server and thousands of (typically CPU) workers may be fundamentally incompatible with the needs of modern computer vision and robotics communities," the researchers explained in their post, Near-perfect point-goal navigation from 2.5 billion frames of experience. "Unlike Gym or Atari, 3D simulators require GPU acceleration…. The desired agents operate from high-dimensional inputs (pixels) and use deep networks, such as ResNet50, which strain the parameter server. Thus, existing distributed RL architectures do not scale and there is a need to develop a new distributed architecture."

Getting Started with TensorFlow and Keras – Digi-Key Electronics


In this tutorial, we show you how to configure TensorFlow with Keras on a computer and build a simple linear regression model. If you have access to a modern NVIDIA graphics card (GPU), you can enable tensorflow-gpu to take advantage of the parallel processing afforded by CUDA. The field of Artificial Intelligence (AI) has been around for quite some time. As we move to build an understanding and use cases for Edge AI, we first need to understand some of the popular frameworks for building machine learning models on personal computers (and servers!). These models can then be deployed to edge devices, such as single-board computers (like the Raspberry Pi) and microcontrollers.
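The tutorial's "simple linear regression model" boils down to a single Dense unit with no activation, which learns y = w*x + b. This is a minimal sketch of that idea, not Digi-Key's exact code; the synthetic y = 2x + 1 data is an assumption for illustration.

```python
import numpy as np
import tensorflow as tf

# Synthetic data for y = 2x + 1 with a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(256, 1)).astype("float32")
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.05, size=(256, 1)).astype("float32")

# A single Dense unit with no activation *is* linear regression: y = w*x + b.
model = tf.keras.Sequential([tf.keras.Input(shape=(1,)),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer=tf.keras.optimizers.Adam(0.1), loss="mse")
model.fit(x, y, epochs=100, batch_size=32, verbose=0)

# The learned kernel and bias should land near the true w=2, b=1.
w, b = (v.numpy().squeeze() for v in model.weights)
print(f"learned w={w:.2f}, b={b:.2f}")
```

If `tensorflow-gpu` (or a CUDA-enabled TensorFlow build) is installed, the same script runs on the GPU with no code changes.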

CodeReef: an open platform for portable MLOps, reusable automation actions and reproducible benchmarking

We present CodeReef: an open platform to share all the components necessary to enable cross-platform MLOps (MLSysOps), i.e. automating the deployment of ML models across diverse systems in the most efficient way. We also introduce the CodeReef solution: a way to package and share models as non-virtualized, portable, customizable and reproducible archive files. Such ML packages include a JSON meta-description of the model with all dependencies, Python APIs, CLI actions and portable workflows necessary to automatically build, benchmark, test and customize the model across diverse platforms, AI frameworks, libraries, compilers and datasets. We demonstrate several CodeReef solutions that automatically build, run and measure object detection based on SSD-MobileNets, TensorFlow and the COCO dataset from the latest MLPerf inference benchmark across a wide range of platforms, from the Raspberry Pi, Android phones and IoT devices to data centers. Our long-term goal is to help researchers share their new techniques as production-ready packages along with research papers, participate in collaborative and reproducible benchmarking, compare different ML/software/hardware stacks and select the most efficient ones on a Pareto frontier using online CodeReef dashboards.
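To make the "JSON meta-description with all dependencies and actions" concrete, here is a hypothetical sketch of what such a package manifest could look like. The field names and script paths are illustrative assumptions, not CodeReef's actual schema.

```json
{
  "name": "ssd-mobilenet-coco-benchmark",
  "framework": "tensorflow",
  "model": "SSD-MobileNet",
  "dataset": "coco",
  "dependencies": ["tensorflow", "numpy", "pillow"],
  "actions": {
    "build": "scripts/build.sh",
    "benchmark": "scripts/run_benchmark.sh",
    "test": "scripts/test_accuracy.sh"
  },
  "platforms": ["linux-x86_64", "linux-aarch64", "android"]
}
```

A manifest like this is what lets the same archive be rebuilt and re-benchmarked automatically on anything from a Raspberry Pi to a data-center node.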

Join Jetson Developer Days Session


For embedded enthusiasts, we are offering a focused agenda at GTC 2020. Be the first to learn about our newest AI products and developer tools at NVIDIA Jetson Developer Days. With sessions and tutorials for all experience levels, this is the perfect place to learn more about AI and its applications.