Modern CNNs for IoT Based Farms

arXiv.org Machine Learning

The recent introduction of ICT in agriculture has brought a number of changes to the way farming is done. This means the use of the Internet of Things (IoT), Cloud Computing (CC), Big Data (BD) and automation to gain better control over the process of farming. As the use of these technologies on farms has grown exponentially, with massive data production, there is a need to develop and use state-of-the-art tools in order to gain more insight from the data within a reasonable time. In this paper, we present an initial understanding of Convolutional Neural Networks (CNNs), the recent state-of-the-art CNN architectures and their underlying complexities. Then we propose a classification taxonomy tailored for agricultural applications of CNNs. Finally, we present a comprehensive review of research dedicated to applications of state-of-the-art CNNs in agricultural production systems. Our contribution is two-fold. First, for end users of agricultural deep learning tools, our benchmarking findings can serve as a guide to selecting an appropriate architecture. Second, for agricultural software developers of deep learning tools, our in-depth analysis explains the complexities of state-of-the-art CNNs and points out possible future directions for further optimizing running performance.
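To make the benchmarking idea concrete, the sketch below times a few widely used CNN architectures on a dummy batch, the kind of comparison an end user might run when choosing a network for a farm vision task. The specific architectures, batch size and input resolution are illustrative assumptions, not the paper's benchmark setup; PyTorch and torchvision are assumed available.

```python
# Illustrative sketch (not the paper's benchmark code): compare parameter counts
# and rough CPU inference time of a few state-of-the-art CNN architectures.
import time
import torch
import torchvision.models as models

architectures = {
    "resnet50": models.resnet50(),        # architecture choices are assumptions
    "densenet121": models.densenet121(),
    "mobilenet_v2": models.mobilenet_v2(),
}

batch = torch.randn(8, 3, 224, 224)  # dummy batch of 224x224 RGB crops

for name, net in architectures.items():
    net.eval()
    params = sum(p.numel() for p in net.parameters())
    with torch.no_grad():
        start = time.perf_counter()
        net(batch)
        elapsed = time.perf_counter() - start
    print(f"{name}: {params / 1e6:.1f}M parameters, {elapsed * 1000:.0f} ms per batch")
```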


Disaster Monitoring using Unmanned Aerial Vehicles and Deep Learning

arXiv.org Artificial Intelligence

Monitoring of disasters is crucial for mitigating their effects on the environment and the human population, and can be facilitated by the use of unmanned aerial vehicles (UAVs) equipped with camera sensors that produce aerial photos of the areas of interest. A modern technique for recognizing events in aerial photos is deep learning. In this paper, we present the state-of-the-art work related to the use of deep learning techniques for disaster identification. We demonstrate the potential of this technique to identify disasters with high accuracy, by means of a relatively simple deep learning model. Based on a dataset of 544 images (containing disaster images such as fires, earthquakes, collapsed buildings, tsunamis and flooding, as well as non-disaster scenes), our results show an accuracy of 91%, indicating that deep learning, combined with UAVs equipped with camera sensors, has the potential to identify disasters with high accuracy.
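As an illustration only, the following is a minimal sketch of the kind of "relatively simple" convolutional classifier the abstract alludes to for aerial photos; the layer sizes, input resolution and two-way disaster/non-disaster labelling are assumptions, not the authors' exact model.

```python
# Minimal CNN sketch for classifying aerial photos (assumed: 2 classes,
# disaster vs. non-disaster; 128x128 RGB input).
import torch
import torch.nn as nn

num_classes = 2  # assumption; could also be one class per disaster type

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, num_classes),
)

logits = model(torch.randn(1, 3, 128, 128))  # one dummy aerial photo
print(logits.shape)  # torch.Size([1, 2])
```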


DeepWeeds: A Multiclass Weed Species Image Dataset for Deep Learning

arXiv.org Machine Learning

Robotic weed control has seen increased research in the past decade, given its potential for boosting productivity in agriculture. The majority of works focus on developing robotics for arable croplands, ignoring the significant weed management problems facing rangeland stock farmers. Perhaps the greatest obstacle to the widespread uptake of robotic weed control is the robust detection of weed species in their natural environment. The unparalleled successes of deep learning make it an ideal candidate for recognising various weed species in the highly complex Australian rangeland environment. This work contributes the first large, public, multiclass image dataset of weed species from the Australian rangelands, allowing for the development of robust detection methods to make robotic weed control viable. The DeepWeeds dataset consists of 17,509 labelled images of eight nationally significant weed species native to eight locations across northern Australia. This paper also presents a baseline for classification performance on the dataset using the benchmark deep learning models Inception-v3 and ResNet-50, which achieved average classification accuracies of 87.9% and 90.5%, respectively. This strong result bodes well for future field implementation of robotic weed control methods in the Australian rangelands.
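The snippet below sketches a transfer-learning baseline in the spirit of the ResNet-50 benchmark: replace the ImageNet classification head with one for the weed classes and fine-tune. The class count (eight weed species plus an assumed negative class), optimizer, learning rate and use of ImageNet pretraining are illustrative assumptions rather than the paper's training recipe.

```python
# Hedged transfer-learning sketch: fine-tune a ResNet-50 for weed classification.
import torch
import torch.nn as nn
import torchvision.models as models

num_classes = 9  # assumption: eight weed species plus a negative/background class

net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)  # ImageNet init (assumed)
net.fc = nn.Linear(net.fc.in_features, num_classes)  # new classification head

optimizer = torch.optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 crops.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, num_classes, (4,))
loss = criterion(net(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```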



Algorithms for Semantic Segmentation of Multispectral Remote Sensing Imagery using Deep Learning

arXiv.org Artificial Intelligence

Deep convolutional neural networks (DCNNs) have been used to achieve state-of-the-art performance on many computer vision tasks (e.g., object recognition, object detection, semantic segmentation), thanks to large repositories of annotated image data. Comparably large labeled datasets for other sensor modalities, e.g., multispectral imagery (MSI), are not available due to the cost and manpower required to produce them. In this paper, we adapt state-of-the-art DCNN frameworks from computer vision for semantic segmentation of MSI. To overcome label scarcity for MSI data, we substitute generated synthetic MSI for real MSI in order to initialize a DCNN framework. We evaluate our network initialization scheme on the new RIT-18 dataset that we present in this paper. This dataset contains very-high-resolution MSI collected by an unmanned aircraft system. The models initialized with synthetic imagery were less prone to over-fitting and provide a state-of-the-art baseline for future work.
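A hedged sketch of the initialization scheme described above: a small fully convolutional segmenter is first trained on synthetic MSI, and its weights are then used to initialize training on the scarce real labelled MSI. The architecture, six-band input, 18-class output and randomly generated stand-in data are assumptions for illustration, not the RIT-18 baseline implementation.

```python
# Illustrative two-phase initialization: pretrain on synthetic MSI, then
# reuse the weights to initialize fine-tuning on real MSI (all data here are
# random stand-ins; band and class counts are assumptions).
import torch
import torch.nn as nn

def build_segmenter(in_bands=6, num_classes=18):
    # Small fully-convolutional net producing per-pixel class scores.
    return nn.Sequential(
        nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, num_classes, 1),
    )

# Phase 1: pretrain on (here randomly generated) synthetic MSI tiles and labels.
pretrain_net = build_segmenter()
optimizer = torch.optim.Adam(pretrain_net.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
synthetic_tiles = torch.randn(4, 6, 128, 128)
synthetic_labels = torch.randint(0, 18, (4, 128, 128))
loss = criterion(pretrain_net(synthetic_tiles), synthetic_labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()

# Phase 2: initialize a fresh network from the synthetic-data weights,
# then fine-tune on the (scarce) real labelled MSI.
finetune_net = build_segmenter()
finetune_net.load_state_dict(pretrain_net.state_dict())
real_tiles = torch.randn(2, 6, 128, 128)       # stand-in for real MSI
per_pixel_logits = finetune_net(real_tiles)    # shape: (2, 18, 128, 128)
```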