Train a Custom Object Detector with Detectron2 and FiftyOne
Combine the dataset curation of FiftyOne with the model training of Detectron2 to easily train custom detection models

Image 71df582bfb39b541 from the Open Images V6 dataset (CC-BY 2.0) visualized in FiftyOne

In recent years, every aspect of the Machine Learning (ML) lifecycle has had tooling developed to make it easier to bring a custom model from an idea to a reality. The most exciting part is that the community has a propensity for open-source tools, like PyTorch and TensorFlow, which make the model development process more transparent and replicable.

In this post, we take a look at how to integrate two open-source tools that tackle different parts of an ML project: FiftyOne and Detectron2. Detectron2 is a library developed by Facebook AI Research designed to let you easily train state-of-the-art detection and segmentation algorithms on your own data. FiftyOne is a toolkit designed to let you easily visualize your data, curate high-quality datasets, and analyze your model results.

Together, you can use FiftyOne to curate a custom dataset, use Detectron2 to train a model on your FiftyOne dataset, then evaluate the Detectron2 model results back in FiftyOne to learn how to improve your dataset, continuing the cycle until you have a high-performing model.
This post closely follows the official Detectron2 tutorial, augmenting it to show how to work with FiftyOne datasets and evaluations.

Follow along in Colab!

Check out this notebook to follow along with this post right in your browser.

Screenshot of Colab notebook (image by author)

Setup

To start, we'll need to install FiftyOne and Detectron2.

```shell
# Install FiftyOne
pip install fiftyone

# Install Detectron2 from source (other options available)
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
# (add --user if you don't have permission)

# Or, to install it from a local clone:
git clone https://github.com/facebookresearch/detectron2.git
python -m pip install -e detectron2

# On macOS, you may need to prepend the above commands with a few environment variables:
CC=clang CXX=clang++ ARCHFLAGS="-arch x86_64" python -m pip install ...
```

Now let's import FiftyOne and Detectron2 in Python.

Prepare the Dataset

In this post, we show how to use a custom FiftyOne dataset to train a Detectron2 model. We'll train a license plate segmentation model from an existing model pre-trained on the COCO dataset, available in Detectron2's model zoo.

Since the COCO dataset doesn't have a "Vehicle registration plate" category, we will use segmentations of license plates from the Open Images V6 dataset in the FiftyOne Dataset Zoo to train the model to recognize this new category.

Note: Images in the Open Images V6 dataset are under the CC-BY 2.0 license.

For this example, we will just use some of the samples from the official "validation" split of the dataset.
To improve model performance, we could always add more data from the official "train" split as well, but that would take longer to train, so we'll stick to the "validation" split for this walkthrough.

Specifying classes when downloading a dataset from the zoo ensures that only samples containing at least one of the given classes are present. However, these samples may still contain other labels, so we can use the powerful filtering capability of FiftyOne to keep only the "Vehicle registration plate" labels. We will also untag these samples as "validation" and create our own splits out of them.

Next, we need to parse the dataset from FiftyOne's format to Detectron2's format so that we can register it in the relevant Detectron2 catalogs for training. This is the most important code snippet for integrating FiftyOne and Detectron2.

Note: In this example, we are specifically parsing the segmentations into bounding boxes and polylines. This function may require tweaks depending on the model being trained and the data it expects.

Let's visualize some of the samples to make sure everything is being loaded properly:

Visualizing the Open Images V6 training dataset in FiftyOne (Image by author)

Load the Model and Train!

Following the official Detectron2 tutorial, we now fine-tune a COCO-pretrained R50-FPN Mask R-CNN model on the FiftyOne dataset.
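The parsing described above hinges on one coordinate conversion: FiftyOne stores boxes as relative [top-left-x, top-left-y, width, height] values in [0, 1], while Detectron2 expects absolute pixel coordinates (XYXY_ABS mode). A minimal, library-free sketch of that step follows; the helper names and the flat `detections` input are illustrative, not the actual snippet's code:

```python
def rel_box_to_xyxy_abs(rel_box, img_width, img_height):
    """Convert a FiftyOne-style relative [x, y, w, h] box to
    Detectron2-style absolute [x1, y1, x2, y2] pixel coordinates."""
    x, y, w, h = rel_box
    return [
        x * img_width,
        y * img_height,
        (x + w) * img_width,
        (y + h) * img_height,
    ]

def sample_to_record(filepath, img_width, img_height, detections, image_id):
    """Build one Detectron2-format dict from (hypothetical) sample fields."""
    annotations = [
        {
            "bbox": rel_box_to_xyxy_abs(det["bounding_box"], img_width, img_height),
            "bbox_mode": 0,  # corresponds to BoxMode.XYXY_ABS
            "category_id": 0,  # single class: "Vehicle registration plate"
        }
        for det in detections
    ]
    return {
        "file_name": filepath,
        "image_id": image_id,
        "height": img_height,
        "width": img_width,
        "annotations": annotations,
    }
```

A list of such records, returned by a function registered with Detectron2's `DatasetCatalog`, is what the training loop consumes.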
This will take a couple of minutes to run if you are using the linked Colab notebook.

```shell
# Look at training curves in tensorboard:
tensorboard --logdir output
```

Tensorboard training metrics visualization (Image by author)

Inference & evaluation using the trained model

Now that the model is trained, we can run it on the validation split of our dataset and see how it performs! To start,
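When bringing predictions back into FiftyOne for evaluation, the conversion runs the other way: Detectron2 outputs absolute pixel boxes, which must be converted back to FiftyOne's relative format before they can be attached to samples. A minimal sketch (the helper name is my own):

```python
def xyxy_abs_to_rel_box(xyxy, img_width, img_height):
    """Convert a Detectron2-style absolute [x1, y1, x2, y2] pixel box back
    to FiftyOne's relative [top-left-x, top-left-y, width, height] format."""
    x1, y1, x2, y2 = xyxy
    return [
        x1 / img_width,
        y1 / img_height,
        (x2 - x1) / img_width,
        (y2 - y1) / img_height,
    ]
```

Applying this to each predicted box (together with its class label and confidence score) yields detections that FiftyOne's evaluation utilities can compare against the ground truth.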
Object Detection at the Edge with TF Lite Model Maker
Originally published on Towards AI, the World's Leading AI and Technology News and Media Company. If you are building an AI-related product or service, we invite you to consider becoming an AI sponsor. At Towards AI, we help scale AI and technology startups. Let us help you unleash your technology to the masses. Do you ever wonder what the easiest and fastest way is to train an object detection network on a custom dataset?
Use Google Colab Like A Pro
Regardless of whether you're a Free, Pro, or Pro+ user, we all love Colab for the resources and ease of sharing it makes available to all of us.
GitHub - salesforce/Merlion: Merlion: A Machine Learning Framework for Time Series Intelligence
Merlion is a Python library for time series intelligence. It provides an end-to-end machine learning framework that includes loading and transforming data, building and training models, post-processing model outputs, and evaluating model performance. It supports various time series learning tasks, including forecasting and anomaly detection for both univariate and multivariate time series. The library aims to provide engineers and researchers with a one-stop solution for rapidly developing models for their specific time series needs and benchmarking them across multiple time series datasets. The table below provides a visual overview of how Merlion's key features compare to other libraries for time series anomaly detection and/or forecasting.
TensorFlow Turns 5: Here Are Top Libraries Released Over The Years
TensorFlow is one of Google's greatest gifts to the machine learning community. An end-to-end open-source framework for machine learning with a comprehensive ecosystem of tools, libraries, and community resources, TensorFlow lets researchers push the state of the art in ML and lets developers easily build and deploy ML-powered applications. Ever since its public release in November 2015, TensorFlow has grown to become one of the most popular deep learning frameworks. This month, TensorFlow turned five, and in this article, we take a look at its popular libraries. The TensorFlow Lattice library implements constrained and interpretable lattice-based models that enable users to inject domain knowledge into the learning process through common-sense constraints.
Implementation of Google Assistant & Amazon Alexa on Raspberry Pi
Arya, Shailesh D., Patel, Samir
This paper investigates the implementation of the voice-enabled Google Assistant and Amazon Alexa on a Raspberry Pi. Virtual assistants are a new trend in how we interact with and perform computations on physical devices. A voice-enabled system is one that takes voice as input, decodes or understands the meaning of that input, and generates an appropriate voice output. In this paper, we develop a smart speaker prototype that provides the functionality of both assistants on the same Raspberry Pi. Users can invoke either virtual assistant by saying its hot words and can leverage the best services of both ecosystems. The paper also explains the architecture of Google Assistant and Amazon Alexa and how both assistants work. Later, this system can be used to control smart home IoT devices.
Introduction to computer vision with OpenCV
For a very long time, computer scientists and engineers have been working to make computers perform tasks achievable by humans. Artificial intelligence comes close to achieving this. Among its branches, computer vision is one of the most advanced and has had a great impact for good. So what exactly is computer vision? Computer vision is simply a branch of computer science that deals with making computers see or perceive the world the way the human eye does.
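As a tiny taste of what "making computers see" means in practice, here is the standard luminance formula used for RGB-to-grayscale conversion, written in plain Python for illustration (OpenCV applies this to whole images via cv2.cvtColor):

```python
def rgb_to_gray(r, g, b):
    """Convert one RGB pixel to a grayscale intensity using the standard
    ITU-R BT.601 luminance weights (the green channel dominates because
    the human eye is most sensitive to green light)."""
    return 0.299 * r + 0.587 * g + 0.114 * b
```

Running every pixel of an image through a function like this is one of the simplest possible computer vision operations, and it is often the first preprocessing step before detection or recognition.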
Facial Recognition with Python and the face_recognition library
In this Python tutorial, you'll learn how to do facial recognition with Python and the face_recognition library. Welcome to a tutorial for implementing the face recognition package for Python. The purpose of this package is to make facial recognition (identifying a face) fairly simple. Whether it's for security, smart homes, or something else entirely, the range of applications for facial recognition is quite large, so let's learn how we can use this technology. To begin, we need to install everything. The installation instructions differ between Windows and Linux for some dependencies, followed by a common part for both.
How To Create Your First Artificial Neural Network In Python
All machine learning beginners and enthusiasts need some hands-on experience with Python, especially with creating neural networks. This tutorial aims to equip anyone with zero coding experience to understand and create an artificial neural network in Python, provided you have a basic understanding of how an ANN works. Before dipping your hands in the code jar, be aware that we will not be using any specific dataset, with the aim of keeping the concept general. The code can be used as a template for creating simple neural networks to get you started with machine learning. We will use the Keras API with a TensorFlow or Theano backend to create our neural network.
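Before reaching for Keras, it helps to see what a single dense (fully connected) layer actually computes: a weighted sum per neuron, followed by a nonlinear activation. A toy, dependency-free sketch of that forward pass (not the tutorial's code; names are illustrative):

```python
import math

def dense_forward(inputs, weights, biases):
    """Forward pass of one fully connected layer with sigmoid activation.
    `weights` holds one weight list per neuron; `biases` one bias per neuron."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        # Weighted sum of the inputs, plus the neuron's bias
        z = sum(x * w for x, w in zip(inputs, neuron_weights)) + bias
        # Sigmoid activation squashes the result into (0, 1)
        outputs.append(1.0 / (1.0 + math.exp(-z)))
    return outputs
```

A Keras `Dense` layer does exactly this (vectorized and with a configurable activation); stacking several such layers gives you the ANN the tutorial builds.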
Jupyter Notebook -- Forget CSV, fetch data from DB with Python
If you read a book, article, or blog about machine learning, chances are high it will use training data from a CSV file. There is nothing wrong with CSV, but let's think about whether it is really practical. Wouldn't it be better to read data directly from the DB? Often you can't feed business data directly into ML training; it needs pre-processing first, such as encoding categorical data or calculating new data features. The data preparation/transformation step can be done quite easily in SQL while fetching the original business data. Another advantage of reading data directly from the DB is that when the data changes, it is easier to automate the ML model re-training process.
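A minimal sketch of the idea, using an in-memory SQLite database as a stand-in for a real business DB (the table, columns, and the 0/1 feature are made up for illustration):

```python
import sqlite3

# In-memory SQLite database stands in for a real business DB
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (product TEXT, category TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("widget", "hardware", 9.99), ("gizmo", "hardware", 19.99), ("ebook", "digital", 4.99)],
)

# Do simple feature preparation in SQL while fetching, instead of
# post-processing a CSV afterwards: encode the category as a 0/1 feature
rows = conn.execute(
    """
    SELECT amount,
           CASE WHEN category = 'digital' THEN 1 ELSE 0 END AS is_digital
    FROM orders
    """
).fetchall()

print(rows)  # (amount, is_digital) tuples, ready to feed into ML training
```

When the underlying table changes, re-running this query re-generates fresh training data automatically, which is exactly the re-training advantage described above.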