If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence (AAAI) offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
The Pandas library comes in handy when performing data-related operations, and anyone starting their data science journey has to build a good understanding of it. Pandas can handle a significant amount of data and process it efficiently, but at its core it still runs on CPUs. Parallel processing can speed things up, yet it is still not efficient enough for very large datasets.
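The kinds of single-machine, CPU-bound operations meant here can be sketched with a toy example (the table and column names are illustrative, not from any real dataset):

```python
import pandas as pd

# A toy sales table to illustrate common pandas data operations.
df = pd.DataFrame({
    "city": ["Paris", "Paris", "Lyon", "Lyon"],
    "sales": [100, 150, 80, 120],
})

# Grouping, aggregation, and sorting all run on a single CPU core
# by default -- convenient for moderate data, a bottleneck at scale.
totals = df.groupby("city")["sales"].sum().sort_values(ascending=False)
print(totals)
```

Each of these steps materializes intermediate results in memory, which is exactly why pandas struggles once data outgrows a single machine.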
The Azure Machine Learning (AML) team is excited to announce the public preview release of Azure Arc-enabled Machine Learning (ML). All customers of Azure Arc-enabled Kubernetes can now deploy the AzureML extension and bring AML to the edge using Kubernetes on their hardware of choice. The design of Azure Arc-enabled ML helps IT operators leverage native Kubernetes concepts such as namespaces, node selectors, and resource requests/limits for ML compute utilization and optimization. By letting the IT operator manage the ML compute setup, Azure Arc-enabled ML creates a seamless AML experience for data scientists, who do not need to learn or use Kubernetes directly. Data scientists can now focus on models and work with tools such as AML Studio, the AML 2.0 CLI, and the AML Python SDK, productivity tools like Jupyter notebooks, and ML frameworks like TensorFlow and PyTorch.
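The Kubernetes concepts mentioned above look roughly like the following pod spec. This is a generic, hypothetical example (the names, namespace, and image are placeholders), not a manifest produced by the AzureML extension itself:

```yaml
# Hypothetical pod spec for an ML training workload; all names and the
# image are illustrative placeholders, not AzureML extension output.
apiVersion: v1
kind: Pod
metadata:
  name: train-job
  namespace: ml-team-a        # namespaces isolate teams' ML workloads
spec:
  nodeSelector:
    accelerator: nvidia-gpu   # pin the workload to GPU nodes
  containers:
    - name: trainer
      image: example.azurecr.io/trainer:latest   # placeholder image
      resources:
        requests:
          cpu: "4"
          memory: 16Gi
        limits:
          cpu: "8"
          memory: 32Gi
          nvidia.com/gpu: "1"
```

Requests reserve capacity for scheduling while limits cap consumption, which is how an IT operator controls ML compute utilization without the data scientist ever touching Kubernetes.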
Imagine you are buying a new car and have shortlisted three models to choose from. How would you go about it? There are so many aspects to consider: power, safety, price, service network, looks, brand, colour, space, and so on. How would you make a balanced decision? Would you use some tool for the evaluation?
The two frameworks I use for TinyML inference are TensorFlow Lite for Microcontrollers and GLOW (more specifically, the GLOW ahead-of-time (AOT) compiler). Since I haven't really seen a comparison between the two, I decided to compare their implementations and run some benchmarks. While both frameworks offer tools to quantize models, I will focus on the inference engine, since the two frameworks take very different approaches to inference. TensorFlow converts the model to a FlatBuffer containing the weights and the serialized sequence of operations required to perform the inference; this file is used together with a library running on the target MCU that interprets the FlatBuffer.
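The interpreter approach can be illustrated with a toy sketch: the "model file" is pure data (weights plus an ordered op list), and a generic runtime walks it. This is only an analogy in plain Python; the real format is a FlatBuffer schema, not a dict, and the real runtime is the TFLM C++ library:

```python
# Toy analogy of the interpreter approach used by TensorFlow Lite for
# Microcontrollers: the model is serialized data (weights + ops), and a
# generic runtime executes whatever op list the model carries.
model = {
    "weights": {"w": 2.0, "b": 1.0},
    "ops": [("mul", "w"), ("add", "b")],  # encodes y = w*x + b
}

def interpret(model, x):
    """Generic runtime: steps through the serialized op list."""
    for op, name in model["ops"]:
        value = model["weights"][name]
        x = x * value if op == "mul" else x + value
    return x

print(interpret(model, 3.0))  # 2*3 + 1 = 7.0
```

The key property this mimics is that the runtime is model-agnostic: swapping models means swapping data files, not recompiling code, which is exactly where the interpreter approach differs from GLOW's ahead-of-time compilation.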
This blog was written in collaboration with Hua Ai, Data Science Manager at Delta Air Lines. In this piece, Hua and Aparna Dhinakaran, CPO and co-founder of Arize AI, discuss how to monitor and troubleshoot model drift. As an ML practitioner, you probably have heard of drift. In this piece, we will dive into what drift is, why it's important to keep track of, and how to troubleshoot and resolve the underlying issue when drift occurs. First things first, what is drift?
GasHis-Transformer is a model for gastric histopathological image classification (GHIC), which automatically classifies microscopic images of the stomach into normal and abnormal cases for gastric cancer diagnosis, as shown in the figure. It is a multi-scale image classification model that combines the strengths of the Vision Transformer (ViT), which captures global information, and CNNs, which capture local information. GasHis-Transformer consists of two modules, the Global Information Module (GIM) and the Local Information Module (LIM), as shown in the figure below. It achieves high classification performance on the test data of a gastric histopathology dataset, with an estimated precision, recall, F1-score, and accuracy of 98.0%, 100.0%, 96.0%, and 98.0%, respectively.
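The two-branch idea can be sketched in a few lines of NumPy. This is a crude stand-in to show the global/local split and feature fusion only; the shapes, pooling rules, and fusion are illustrative assumptions, not the published GIM/LIM architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
patches = rng.standard_normal((64, 32))  # 64 image patches, 32 features each

def global_branch(p):
    # Crude "global information" (GIM stand-in): every patch
    # contributes to one summary over the whole image.
    return p.mean(axis=0)

def local_branch(p):
    # Crude "local information" (LIM stand-in): aggregate each
    # patch only with its immediate neighbour.
    local = (p[:-1] + p[1:]) / 2
    return local.mean(axis=0)

# Fuse the global and local features before the classification head.
fused = np.concatenate([global_branch(patches), local_branch(patches)])
print(fused.shape)  # 32 global + 32 local features
```

The point of the sketch is the fusion step: the classifier sees both a whole-image summary and neighbourhood-level detail, which is the intuition behind pairing ViT with a CNN.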
In this project I use transfer learning from InceptionV3, one of the convolutional neural networks commonly used for image analysis. I chose transfer learning to minimize training time and increase the model's accuracy, since the network already has pretrained convolutional layers that extract and break down image features. Here is a snippet of code to import InceptionV3 into the repository. I set the input image size to (256, 256, 3), meaning the image should be 256x256 pixels in RGB format.
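A snippet along these lines, presumably via the Keras applications API, would look like the following. Note that `weights=None` is used here only so the sketch builds the architecture without the large pretrained-weights download; the transfer-learning setup described above would pass `weights="imagenet"`:

```python
import tensorflow as tf

# Load InceptionV3 as a feature extractor for transfer learning.
# input_shape=(256, 256, 3): 256x256 RGB images, as described above.
# Use weights="imagenet" for the pretrained convolutional layers;
# weights=None here only avoids the download in this sketch.
base_model = tf.keras.applications.InceptionV3(
    input_shape=(256, 256, 3),
    include_top=False,   # drop the ImageNet classification head
    weights=None,
)
base_model.trainable = False  # freeze the convolutional layers
```

With the base frozen, only a small custom classification head on top needs training, which is what keeps training time down.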
Advances in machine learning, data management, and cloud computing are having a significant impact on the market for drone-based mapping and intelligence gathering. Even as satellite-based imaging gains steam, drones appear to be extending their lead closer to Earth. We are in the midst of a renaissance in drone-based aerial intelligence. From counting the number of koalas in the Australian outback to detecting enemy combatants inside of buildings, drones seem to be everywhere at the moment. The surge in drone use is great news for Krishnan Hariharan, the CEO of Kespry, a 30-person California drone AI startup.
Adrian Rosebrock, a well-known CV researcher, states in his "Gentle guide to deep learning object detection" that "object detection, regardless of whether performed via deep learning or other computer vision techniques, builds on image classification and seeks to localize precisely an area where an object appears". One approach to building a custom object detector, as he suggests, is to choose any classifier and precede it with an algorithm that selects and provides regions of the image that may contain an object. Within this method, you are free to decide whether to use a traditional ML algorithm for image classification (with or without a CNN as a feature extractor) or to train a simple neural network to handle arbitrarily large datasets. Despite its proven effectiveness, this two-stage object detection paradigm, known as R-CNN, still relies on heavy computation and is not suitable for real-time applications. The same post notes that "another approach is to treat a pre-trained classification network as a base (backbone) network in a multi-component deep learning object detection framework (such as Faster R-CNN, SSD, or YOLO)".
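The two-stage paradigm can be sketched as a pair of functions: stage one proposes candidate regions, stage two classifies each one. Both stages below are hypothetical placeholders (a naive sliding window and a dummy classifier), not actual R-CNN code, but they show why the approach is expensive: the classifier runs once per proposed region:

```python
# Toy sketch of two-stage object detection: propose regions, then
# classify each region independently. Placeholder logic throughout.

def propose_regions(image_w, image_h, step=32, size=64):
    """Stage 1: naive sliding-window proposals.
    (R-CNN used selective search instead of a dense grid.)"""
    return [
        (x, y, size, size)
        for x in range(0, image_w - size + 1, step)
        for y in range(0, image_h - size + 1, step)
    ]

def classify(region):
    """Stage 2 placeholder: any classifier could sit here
    (e.g. CNN features + SVM in the original R-CNN)."""
    x, y, w, h = region
    return "object" if x == y else "background"  # dummy decision rule

# The cost driver: one classifier invocation per proposed region.
detections = [r for r in propose_regions(128, 128) if classify(r) == "object"]
print(len(detections))
```

Single-shot frameworks such as SSD and YOLO avoid exactly this per-region loop by predicting boxes and classes in one forward pass over the image.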
This article is part of "Deconstructing artificial intelligence," a series of posts that explore the details of how AI applications work (In partnership with Paperspace). Deep neural networks have gained fame for their capability to process visual information. And in the past few years, they have become a key component of many computer vision applications. Among the key problems neural networks can solve is detecting and localizing objects in images. Object detection is used in many different domains, including autonomous driving, video surveillance, and healthcare.