Dense Deformation Network for High Resolution Tissue Cleared Image Registration
Nazib, Abdullah, Fookes, Clinton, Perrin, Dimitri
The recent application of deep learning in various areas of medical image analysis has brought excellent performance gains. Deep learning technologies applied to medical image registration have successfully outperformed traditional optimization-based registration algorithms in both registration time and accuracy. In this paper, we present a densely connected convolutional architecture for deformable image registration. The training of the network is unsupervised and requires neither ground-truth deformations nor synthetic deformations as labels. The proposed architecture is trained and tested on two versions of tissue-cleared data, at 10\% and 25\% of the resolution of the original high-resolution dataset, and demonstrates registration performance comparable to the state-of-the-art ANTs registration method. The proposed method is also compared with the deep-learning-based Voxelmorph registration method. Due to memory limitations, the original Voxelmorph can work with tissue-cleared data at no more than 15\% resolution. For a rigorous experimental comparison, we developed a patch-based version of the Voxelmorph network and trained it at 10\% and 25\% resolution. At both resolutions, the proposed DenseDeformation network outperformed Voxelmorph in registration accuracy.
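The unsupervised training objective described above (no ground-truth deformation, only image similarity) can be illustrated with a minimal sketch. This is not the paper's implementation; the 1-D warp, the MSE similarity term, the smoothness penalty, and the weight `lam` are illustrative assumptions standing in for the 3-D network loss.

```python
import numpy as np

def warp_1d(image, displacement):
    # Warp a 1-D image by a per-voxel displacement field
    # (nearest-neighbour resampling, edges clamped).
    idx = np.clip(np.arange(image.size) + np.round(displacement).astype(int),
                  0, image.size - 1)
    return image[idx]

def unsupervised_loss(moving, fixed, displacement, lam=0.1):
    # Image similarity (MSE between warped moving image and fixed image)
    # plus a smoothness penalty on the displacement field: no deformation
    # labels are needed, which is the core of unsupervised registration.
    warped = warp_1d(moving, displacement)
    similarity = np.mean((warped - fixed) ** 2)
    smoothness = np.mean(np.diff(displacement) ** 2)
    return similarity + lam * smoothness
```

Minimizing this loss over the displacement field (here by a network predicting the field) drives the warped moving image toward the fixed image while keeping the deformation smooth.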
Memory Augmented Deep Generative models for Forecasting the Next Shot Location in Tennis
Fernando, Tharindu, Denman, Simon, Sridharan, Sridha, Fookes, Clinton
Considering the fact that present-day ball speeds exceed 130 mph, the time required by the receiver to make a decision regarding the opponent's intention and initiate a response could exceed the flight time of the ball [1], [2], [3], [4]. Several studies have shown that this reactive ability is the product of pattern recognition skills that are obtained through a "biological probabilistic engine", which derives theories regarding opponents' intentions from the partial information available [1], [5], [6]. For instance, it has been shown that expert tennis players are better at detecting events in advance [1], [7] and possess better knowledge/expertise of situational probabilities [3]. Further investigation of human neurological structures has revealed that those capabilities arise from a bottom-up computational process [1] within the human brain, from sensory memory to the experiences stored in episodic memory [8], [9] and the knowledge derived in semantic memory [9], [10]. Despite the growing interest among researchers in the machine learning domain in better understanding the factors influencing decision making in fast-ball sports, there have been very few studies transferring the observations of the underlying neural mechanisms to neural modelling in machine learning. Current state-of-the-art methodologies try to capture the underlying semantics through a handful of handcrafted features, without paying attention to essential mechanisms in the human brain, where expertise and observations are stored and knowledge is derived.
Multi-component Image Translation for Deep Domain Generalization
Rahman, Mohammad Mahfujur, Fookes, Clinton, Baktashmotlagh, Mahsa, Sridharan, Sridha
Domain adaptation (DA) and domain generalization (DG) are two closely related methods which are both concerned with the task of assigning labels to an unlabeled data set. The only dissimilarity between these approaches is that DA can access the target data during the training phase, while in DG the target data is totally unseen during training. The task of DG is challenging as we have no prior knowledge of the target samples. If DA methods are applied directly to DG by simply excluding the target data from training, poor performance will result for a given task. In this paper, we tackle the domain generalization challenge in two ways. In our first approach, we propose a novel deep domain generalization architecture utilizing synthetic data generated by a Generative Adversarial Network (GAN). The discrepancy between the generated images and synthetic images is minimized using existing domain discrepancy metrics such as maximum mean discrepancy or correlation alignment. In our second approach, we introduce a protocol for applying DA methods to a DG scenario by excluding the target data from the training phase, splitting the source data into training and validation parts, and treating the validation data as target data for DA. We conduct extensive experiments on four cross-domain benchmark datasets. Experimental results show that our proposed model outperforms the current state-of-the-art methods for DG.
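The domain discrepancy metrics mentioned above can be made concrete with a small sketch of maximum mean discrepancy (MMD). This is a generic illustration, not the paper's code; the Gaussian RBF kernel and the bandwidth parameter `gamma` are illustrative choices.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Gaussian RBF kernel matrix between sample sets x and y
    # (rows are samples), k(a, b) = exp(-gamma * ||a - b||^2).
    d2 = (np.sum(x**2, axis=1)[:, None]
          + np.sum(y**2, axis=1)[None, :]
          - 2.0 * x @ y.T)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    # Biased estimate of the squared maximum mean discrepancy between
    # the distributions that generated x and y; near zero when the two
    # sample sets come from the same distribution.
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())
```

In a training loop, a term like `mmd2(features_a, features_b)` would be added to the classification loss so that the feature extractor learns domain-invariant representations; correlation alignment plays the same role using second-order feature statistics instead of a kernel mean embedding.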