Collaborating Authors

I'm out of the layers -- how to make a custom TensorFlow 2 layer.


TensorFlow 2 made the machine learning framework far easier to use while retaining the flexibility to build custom models. One of its new features is building new layers through the integrated Keras API, and easily debugging them with eager execution. In this article, you will learn how to build custom neural network layers in the TensorFlow 2 framework. In writing this article, I assume you have a basic understanding of object-oriented programming in Python 3. Ideally, review __init__, __call__, class inheritance, and method overriding before reading on. Let's start from a template on which you will build most of your layers.
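A minimal sketch of such a template, following the standard tf.keras.layers.Layer subclassing pattern (the layer name MyDense and its units parameter are illustrative choices, not from the original article):

```python
import tensorflow as tf

class MyDense(tf.keras.layers.Layer):
    """A minimal custom-layer template: a dense layer with ReLU."""

    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        # Weights are created lazily here, once the input shape is known.
        self.w = self.add_weight(
            name="w",
            shape=(input_shape[-1], self.units),
            initializer="glorot_uniform",
            trainable=True,
        )
        self.b = self.add_weight(
            name="b",
            shape=(self.units,),
            initializer="zeros",
            trainable=True,
        )

    def call(self, inputs):
        # The forward pass; runs eagerly, so it can be stepped through directly.
        return tf.nn.relu(tf.matmul(inputs, self.w) + self.b)

# Thanks to eager execution, the layer can be called and inspected like a function:
layer = MyDense(4)
out = layer(tf.ones((2, 3)))
print(out.shape)  # (2, 4)
```

Because eager execution is on by default in TensorFlow 2, you can set a breakpoint inside call and examine tensors as ordinary values, which is what makes debugging custom layers straightforward.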

1D Convolutional Neural Network Models for Sleep Arousal Detection Machine Learning

Sleep arousals transition the depth of sleep to a more superficial stage. The occurrence of such events is often considered a protective mechanism that alerts the body to harmful stimuli. Thus, accurate sleep arousal detection can lead to an enhanced understanding of the underlying causes and influence the assessment of sleep quality. Previous studies and guidelines have suggested that sleep arousals are linked mainly to abrupt frequency shifts in EEG signals, but the proposed rules have been shown to be insufficient for a comprehensive characterization of arousals. This study investigates the application of five recent convolutional neural networks (CNNs) for sleep arousal detection and performs comparative evaluations to determine the best model for this task. The investigated state-of-the-art CNN models were originally designed for image or speech processing. A detailed set of evaluations is performed on the benchmark dataset provided by the PhysioNet/Computing in Cardiology Challenge 2018, and the results show that the best 1D CNN model achieved averages of 0.31 and 0.84 for the areas under the precision-recall and ROC curves, respectively.

End-to-end Training for Whole Image Breast Cancer Diagnosis using An All Convolutional Design Machine Learning

We develop an end-to-end training algorithm for whole-image breast cancer diagnosis based on mammograms. It requires lesion annotations only at the first stage of training; after that, a whole-image classifier can be trained using only image-level labels. This greatly reduces the reliance on lesion annotations. Our approach is implemented using an all convolutional design that is simple yet provides superior performance in comparison with previous methods. On DDSM, our best single model achieves a per-image AUC score of 0.88, and three-model averaging increases the score to 0.91. On INbreast, our best single model achieves a per-image AUC score of 0.96. Using DDSM as a benchmark, our models compare favorably with the current state of the art. We also demonstrate that a whole-image model trained on DDSM can be easily transferred to INbreast without using its lesion annotations and with only a small amount of training data. Code and model availability:

Towards Ophthalmologist Level Accurate Deep Learning System for OCT Screening and Diagnosis Artificial Intelligence

Abstract--In this work, we propose an advanced AI-based grading system for OCT images. The proposed system is a very deep, fully convolutional attentive classification network trained end-to-end with advanced transfer learning and online quasi-random augmentation, and it outputs confidence values for disease prevalence during inference. It is a fully automated retinal OCT analysis AI system capable of understanding pathological lesions without any offline preprocessing/postprocessing steps or manual feature extraction. We present state-of-the-art performance on the publicly available Mendeley OCT dataset. I. INTRODUCTION Sight-threatening retinal diseases are among the most prevalent diseases across age groups.

Generating Large Images from Latent Vectors - Part Two


In a previous post, we looked at a generative algorithm that can produce images of digits at arbitrarily high resolutions while training on a set of low-resolution images, such as MNIST or CIFAR-10. This post explores several changes to the previous model to produce more interesting results. Specifically, we removed the pixel-by-pixel reconstruction loss in the Variational Autoencoder. The discriminator network used to detect fake images is replaced by a classifier network. The generator network used previously was a relatively large network consisting of 4 layers of 128 fully connected nodes, and we explore replacing it with a much deeper network of 96 layers, but with only 6 nodes in each layer.
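The "deep and narrow" generator described above can be sketched as a stack of many small fully connected layers. This is an illustrative reconstruction, not the post's actual code; the function name, latent size, and activation choices are assumptions:

```python
import tensorflow as tf

def make_deep_narrow_generator(input_dim=8, depth=96, width=6):
    """Sketch of a generator with many narrow layers instead of a few wide ones."""
    inputs = tf.keras.Input(shape=(input_dim,))
    x = inputs
    # 96 fully connected layers of only 6 units each, per the description above.
    for _ in range(depth):
        x = tf.keras.layers.Dense(width, activation="tanh")(x)
    # A single sigmoid output, e.g. one pixel intensity per queried input.
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs)

gen = make_deep_narrow_generator()
print(gen.count_params())
```

Compared with the earlier 4-layer, 128-node design, this trades width for depth: each layer is tiny, but composing 96 of them lets the network represent highly curved functions of its input.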