Custom Vision
GitHub - elbruno/CustomVisionAndAzureFunctions: Step by Step on how to create an object recognition model using Custom Vision, export the model and run the model in an Azure Function
These little ones are extremely funny, and they literally don't care about the cold. So I decided to help them and build an automatic feeder using Azure IoT, a Wio Terminal, and possibly a few more devices. You can check out the Azure IoT project here: Azure IoT - Squirrel Feeder. Once the feeder was ready, I decided to add a new feature to the scenario: detecting when a squirrel is near the feeder. Azure Custom Vision is an image recognition service that lets you build, deploy, and improve your own image identifier models.
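As a rough sketch of how the detection step might consume the service's output, the helper below filters an object-detection response by tag and probability. The JSON shape mirrors the Custom Vision Prediction API response format, but the sample data, tag name, and threshold here are illustrative assumptions, not real model output from the feeder project.

```python
# Sketch: filter Custom Vision object-detection predictions by probability.
# The JSON shape follows the Custom Vision Prediction API response; the
# sample below is made-up data for illustration.

def squirrels_detected(prediction_json, tag="squirrel", threshold=0.75):
    """Return bounding boxes for predictions of `tag` at or above `threshold`."""
    return [
        p["boundingBox"]
        for p in prediction_json.get("predictions", [])
        if p["tagName"] == tag and p["probability"] >= threshold
    ]

sample = {
    "predictions": [
        {"tagName": "squirrel", "probability": 0.91,
         "boundingBox": {"left": 0.1, "top": 0.2, "width": 0.3, "height": 0.3}},
        {"tagName": "bird", "probability": 0.88,
         "boundingBox": {"left": 0.5, "top": 0.1, "width": 0.2, "height": 0.2}},
        {"tagName": "squirrel", "probability": 0.40,
         "boundingBox": {"left": 0.7, "top": 0.6, "width": 0.1, "height": 0.1}},
    ]
}

boxes = squirrels_detected(sample)
print(len(boxes))  # only one squirrel clears the 0.75 threshold
```

In a real deployment the JSON would come from an HTTP call to the project's prediction endpoint rather than a hard-coded sample.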
Building Image Classifiers made easy with Azure Custom Vision
In our previous blog, we outlined that supervised machine learning (ML) models need labeled data, but the majority of data collected in raw form lacks labels. So the first step before building an ML model is to have the raw data labeled by domain experts. To that end, we outlined how Doccano is an easy tool for collaborative text annotation. However, not all the data that gets collected is text; many times we end up with a bunch of images, but the end goal is still to build a supervised ML model. As stated previously, the first step is to tag these images with specific labels.
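One common convention for organizing tagged images (assumed here for illustration; the blog does not prescribe a layout) is to use the folder name as the class label, e.g. `dataset/cat/img001.jpg`. A minimal sketch of turning such a layout into a label map:

```python
# Sketch: derive image labels from a folder layout where each class has
# its own subfolder, e.g. dataset/cat/*.jpg, dataset/dog/*.png.
# The folder-name-as-label convention is an assumption, not part of the post.
from pathlib import Path

def label_images(root):
    """Map each image path under `root` to its parent folder's name as label."""
    root = Path(root)
    return {
        str(p): p.parent.name
        for p in root.rglob("*")
        if p.suffix.lower() in {".jpg", ".jpeg", ".png"}
    }
```

The resulting dictionary can then feed a training pipeline or be uploaded, images plus tags, to a service such as Custom Vision.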
December 2019: "Top 40" New R Packages
One hundred fifty-two packages made it to CRAN in December. Here are my "Top 40" picks in ten categories: Data, Genomics, Machine Learning, Mathematics, Medicine, Science, Statistics, Time Series, Utilities, and Visualization. Look here for more information as well as the vignette. Loads and creates spatial data, including layers and tools relevant to the activities of the Commission for the Conservation of Antarctic Marine Living Resources (CCAMLR). Have a look at the vignette.
Combine the Power of Video Indexer and Computer Vision
We are pleased to introduce the ability to export high-resolution keyframes from Azure Media Service's Video Indexer. Whereas keyframes were previously exported at a reduced resolution compared to the source video, high-resolution keyframe extraction gives you original-quality images and allows you to make use of the image-based artificial intelligence models provided by the Microsoft Computer Vision and Custom Vision services to gain even more insights from your video. This unlocks a wealth of pre-trained and custom model capabilities. You can use the keyframes extracted from Video Indexer, for example, to identify logos for monetization and brand-safety needs, to add scene descriptions for accessibility needs, or to accurately identify very specific objects relevant to your organization, such as a type of car or a place. Let's look at some of the use cases we can enable with this new introduction.
Machine Learning in iOS: Azure Custom Vision and CoreML
This is Part 2 of my Machine Learning in iOS tutorials; check out Part 1 first. You may have read my previous article about Machine Learning in iOS: IBM Watson and CoreML, so you know that machine learning can be intimidating, with lots of concepts and frameworks to learn, not to mention that we need to understand algorithms in Python. Speaking of image labelling, you might have a bunch of images and want to train the machine to understand and classify them. Training your own custom deep learning models can be challenging.
Microsoft and Qualcomm accelerate AI with Vision AI Developer Kit
Artificial intelligence (AI) workloads involve megabytes of data and potentially billions of calculations. With advancements in hardware, it is now possible to run time-sensitive AI workloads on the edge while also sending outputs to the cloud for downstream applications. AI scenarios processed on the edge can facilitate important business scenarios, such as verifying that every person on a construction site is wearing a hardhat, or detecting whether items are out of stock on a store shelf. The combination of hardware, software, and AI models needed to support these scenarios can be difficult to organize. To remove this barrier, we announced a developer kit with Qualcomm last year to accelerate AI inferencing at the intelligent edge.
Fight Blight with ArcGIS and Artificial Intelligence
In order for communities to fight blight, it is essential to know the condition of properties. Historically, performing property condition surveys required inspectors to physically visit properties, fill out forms, and take pictures. For many communities, this was a slow and expensive process. The Property Condition Survey configuration available with ArcGIS significantly reduces the time and cost of performing property surveys by applying artificial intelligence (AI) and machine learning (ML) to street-level property photos and automatically calculating a blight probability for every property. Once blight probabilities are calculated, communities can take action against individual properties to fix blight, or make policy changes to reverse neighborhoods that are blighted or trending that way.
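Once every property carries a blight probability, the next step is typically to turn those scores into a prioritized work list. A minimal sketch of that step follows; the field name and 0.6 threshold are illustrative assumptions, not values from the ArcGIS configuration.

```python
# Sketch: flag properties whose blight probability clears a threshold and
# rank them worst-first. Field name and threshold are made up for this example.

def prioritize(properties, threshold=0.6):
    """Return properties at or above `threshold`, highest probability first."""
    flagged = [p for p in properties if p["blight_probability"] >= threshold]
    return sorted(flagged, key=lambda p: p["blight_probability"], reverse=True)
```

A community could feed the ranked list straight into an inspection or code-enforcement queue.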
Serverless AI in my backyard
Recently I added some cool stuff to my home automation platform using computer vision, serverless code, and MQTT. When you have kids, you know they leave doors open. That doesn't have to be a big problem, except when there are unlocked bikes in the backyard. After all, we don't want them to be stolen! I have a home automation platform running, but I don't want to equip my bikes with sensors.
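To give a feel for the glue between the vision step and MQTT, here is a sketch of the message a serverless function might publish when the camera spots something. The topic layout and payload fields are hypothetical, invented for this example rather than taken from the author's setup.

```python
# Sketch: build an MQTT topic and JSON payload for a vision detection.
# Topic root and payload fields are illustrative assumptions; publish the
# result with any MQTT client, e.g. paho-mqtt.
import json
from datetime import datetime, timezone

def detection_message(label, probability, topic_root="home/backyard"):
    """Return (topic, payload) for a single detection event."""
    payload = {
        "label": label,
        "probability": round(probability, 2),
        "detectedAt": datetime.now(timezone.utc).isoformat(),
    }
    return f"{topic_root}/{label}", json.dumps(payload)
```

The home automation platform can then subscribe to `home/backyard/#` and raise an alert when a bike is visible while the door is open.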
Microsoft empowers developers with new and updated Cognitive Services
The blog post was authored by Andy Hickl, Principal Group Program Manager, Microsoft Cognitive Services. Today at the Build 2018 conference, we are unveiling several exciting new innovations for Microsoft Cognitive Services on Azure. At Microsoft, we believe any developer should be able to integrate the best AI has to offer into their apps and services. That's why we started Microsoft Cognitive Services three years ago, and why we continue to invest in AI services on Azure today. Microsoft Cognitive Services make it easy for developers to add high-quality vision, speech, language, knowledge, and search technologies to their apps -- with only a few lines of code.