Placental Vessel Segmentation and Registration in Fetoscopy: Literature Review and MICCAI FetReg2021 Challenge Findings
Bano, Sophia, Casella, Alessandro, Vasconcelos, Francisco, Qayyum, Abdul, Benzinou, Abdesslam, Mazher, Moona, Meriaudeau, Fabrice, Lena, Chiara, Cintorrino, Ilaria Anita, De Paolis, Gaia Romana, Biagioli, Jessica, Grechishnikova, Daria, Jiao, Jing, Bai, Bizhe, Qiao, Yanyan, Bhattarai, Binod, Gaire, Rebati Raman, Subedi, Ronast, Vazquez, Eduard, Płotka, Szymon, Lisowska, Aneta, Sitek, Arkadiusz, Attilakos, George, Wimalasundera, Ruwan, David, Anna L, Paladini, Dario, Deprest, Jan, De Momi, Elena, Mattos, Leonardo S, Moccia, Sara, Stoyanov, Danail
Fetoscopy laser photocoagulation is a widely adopted procedure for treating Twin-to-Twin Transfusion Syndrome (TTTS). The procedure involves photocoagulation of pathological anastomoses to regulate blood exchange between the twins. It is particularly challenging due to the limited field of view, poor manoeuvrability of the fetoscope, poor visibility, and variability in illumination. These challenges may lead to increased surgery time and incomplete ablation. Computer-assisted intervention (CAI) can provide surgeons with decision support and context awareness by identifying key structures in the scene and expanding the fetoscopic field of view through video mosaicking. Research in this domain has been hampered by the lack of high-quality data to design, develop and test CAI algorithms. Through the Fetoscopic Placental Vessel Segmentation and Registration (FetReg2021) challenge, which was organized as part of the MICCAI 2021 Endoscopic Vision challenge, we released the first large-scale multi-centre TTTS dataset for the development of generalized and robust semantic segmentation and video mosaicking algorithms. For this challenge, we released a dataset of 2060 images, pixel-annotated for vessel, tool, fetus and background classes, from 18 in-vivo TTTS fetoscopy procedures and 18 short video clips. Seven teams participated in this challenge and their model performance was assessed on an unseen test dataset of 658 pixel-annotated images from 6 fetoscopic procedures and 6 short clips. The challenge provided an opportunity for creating generalized solutions for fetoscopic scene understanding and mosaicking. In this paper, we present the findings of the FetReg2021 challenge alongside a detailed literature review of CAI in TTTS fetoscopy. Through this challenge, its analysis and the release of multi-centre fetoscopic data, we provide a benchmark for future research in this field.
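For readers building segmentation models against a dataset like this, the standard evaluation is per-class Intersection-over-Union. A minimal sketch follows; the class-to-index mapping is an assumption for illustration, not the official FetReg labelling:

```python
import numpy as np

# Assumed class-to-index mapping for illustration (not the official FetReg labels).
CLASSES = {0: "background", 1: "vessel", 2: "tool", 3: "fetus"}

def per_class_iou(pred, target):
    """Intersection-over-Union for each class, given integer label maps."""
    ious = {}
    for c, name in CLASSES.items():
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        ious[name] = inter / union if union > 0 else float("nan")
    return ious

# Toy example with random label maps standing in for prediction and ground truth.
rng = np.random.default_rng(0)
pred = rng.integers(0, 4, size=(448, 448))
target = rng.integers(0, 4, size=(448, 448))
print(per_class_iou(pred, target))
```

The mean of the per-class values (skipping NaN entries for classes absent from an image) gives the mIoU figure commonly reported in such challenges.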
FetReg: Placental Vessel Segmentation and Registration in Fetoscopy Challenge Dataset
Bano, Sophia, Casella, Alessandro, Vasconcelos, Francisco, Moccia, Sara, Attilakos, George, Wimalasundera, Ruwan, David, Anna L., Paladini, Dario, Deprest, Jan, De Momi, Elena, Mattos, Leonardo S., Stoyanov, Danail
Fetoscopy laser photocoagulation is a widely used procedure for the treatment of Twin-to-Twin Transfusion Syndrome (TTTS), which occurs in monochorionic multiple pregnancies due to placental vascular anastomoses. This procedure is particularly challenging due to the limited field of view, poor manoeuvrability of the fetoscope, poor visibility due to fluid turbidity, variability in the light source, and the unusual position of the placenta. These difficulties may lead to increased procedural time and incomplete ablation, resulting in persistent TTTS. Computer-assisted intervention may help overcome these challenges by expanding the fetoscopic field of view through video mosaicking and providing better visualization of the vessel network. However, research and development in this domain remain limited due to the unavailability of high-quality data that encode intra- and inter-procedure variability. Through the Fetoscopic Placental Vessel Segmentation and Registration (FetReg) challenge, we present a large-scale multi-centre dataset for the development of generalized and robust semantic segmentation and video mosaicking algorithms for the fetal environment, with a focus on creating drift-free mosaics from long-duration fetoscopy videos. In this paper, we provide an overview of the FetReg dataset, challenge tasks, evaluation metrics and baseline methods for both segmentation and registration. Baseline results on the FetReg dataset show that it poses interesting challenges, offering a large opportunity for the creation of novel methods and models through a community effort guided by the FetReg challenge.
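To see why drift is the central difficulty in long-duration mosaicking, consider how pairwise registration errors compound when frame-to-frame homographies are chained. The sketch below is purely illustrative; the per-pair noise model is an assumption, not the challenge's error characterization:

```python
import numpy as np

def compose_homographies(pairwise):
    """Chain pairwise homographies H(i -> i+1) into absolute H(0 -> i)."""
    absolute = [np.eye(3)]
    for H in pairwise:
        absolute.append(absolute[-1] @ H)
    return absolute

# Simulate 200 frames of ~2 px camera motion with small per-pair estimation noise.
rng = np.random.default_rng(1)
pairwise = []
for _ in range(200):
    H = np.eye(3)
    H[0, 2] = 2.0 + rng.normal(scale=0.1)  # true shift plus registration error
    pairwise.append(H)

final = compose_homographies(pairwise)[-1]
# Sub-pixel per-pair errors still sum to a visible offset over a long sequence.
print("accumulated x-translation:", final[0, 2])
```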
A translational pathway of deep learning methods in GastroIntestinal Endoscopy
Ali, Sharib, Dmitrieva, Mariia, Ghatwary, Noha, Bano, Sophia, Polat, Gorkem, Temizel, Alptekin, Krenzer, Adrian, Hekalo, Amar, Guo, Yun Bo, Matuszewski, Bogdan, Gridach, Mourad, Voiculescu, Irina, Yoganand, Vishnusai, Chavan, Arnav, Raj, Aryan, Nguyen, Nhan T., Tran, Dat Q., Huynh, Le Duy, Boutry, Nicolas, Rezvy, Shahadate, Chen, Haijian, Choi, Yoon Ho, Subramanian, Anand, Balasubramanian, Velmurugan, Gao, Xiaohong W., Hu, Hongyu, Liao, Yusheng, Stoyanov, Danail, Daul, Christian, Realdon, Stefano, Cannizzaro, Renato, Lamarque, Dominique, Tran-Nguyen, Terry, Bailey, Adam, Braden, Barbara, East, James, Rittscher, Jens
The Endoscopy Computer Vision Challenge (EndoCV) is a crowd-sourcing initiative to address pressing problems in developing reliable computer-aided detection and diagnosis systems for endoscopy, and to suggest a pathway for the clinical translation of these technologies. Whilst endoscopy is a widely used diagnostic and treatment tool for hollow organs, endoscopists face several core challenges, mainly: 1) the presence of multi-class artefacts that hinder visual interpretation, and 2) difficulty in identifying subtle precancerous precursors and cancer abnormalities. Artefacts often affect the robustness of deep learning methods applied to the gastrointestinal tract, as they can be confused with tissue of interest. The EndoCV2020 challenges were designed to address research questions in these remits. In this paper, we present a summary of the methods developed by the top 17 teams and provide an objective comparison of state-of-the-art methods and the participants' methods for two sub-challenges: i) artefact detection and segmentation (EAD2020), and ii) disease detection and segmentation (EDD2020). Multi-centre, multi-organ, multi-class, and multi-modal clinical endoscopy datasets were compiled for both the EAD2020 and EDD2020 sub-challenges. The out-of-sample generalisation ability of the detection algorithms was also evaluated. Whilst most teams focused on accuracy improvements, only a few methods hold credibility for clinical usability. The best-performing teams provided solutions to tackle class imbalance and variability in size, origin, modality and occurrence by exploring data augmentation, data fusion, and optimal class thresholding techniques.
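Of the techniques mentioned above, optimal class thresholding is the simplest to illustrate: sweep a confidence threshold per class on a validation split and keep the value that maximises F1. The sketch below uses synthetic scores and is a generic illustration, not any team's submission:

```python
import numpy as np

def best_threshold(scores, labels, grid=np.linspace(0.05, 0.95, 19)):
    """Pick the confidence threshold that maximises F1 for one class."""
    best_t, best_f1 = 0.5, -1.0
    for t in grid:
        pred = scores >= t
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        denom = 2 * tp + fp + fn
        f1 = 2 * tp / denom if denom > 0 else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1

# Synthetic validation scores for a single artefact class.
rng = np.random.default_rng(2)
labels = rng.integers(0, 2, size=500)
scores = np.clip(labels * 0.3 + rng.random(500) * 0.7, 0, 1)
print(best_threshold(scores, labels))
```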
Real-Time Segmentation of Non-Rigid Surgical Tools based on Deep Learning and Tracking
García-Peraza-Herrera, Luis C., Li, Wenqi, Gruijthuijsen, Caspar, Devreker, Alain, Attilakos, George, Deprest, Jan, Poorten, Emmanuel Vander, Stoyanov, Danail, Vercauteren, Tom, Ourselin, Sébastien
Real-time tool segmentation is an essential component in computer-assisted surgical systems. We propose a novel real-time automatic method based on Fully Convolutional Networks (FCN) and optical flow tracking. Our method exploits the ability of deep neural networks to produce accurate segmentations of highly deformable parts, along with the high speed of optical flow. Furthermore, the pre-trained FCN can be fine-tuned on a small number of medical images without the need to hand-craft features. We validated our method on existing and new benchmark datasets, covering both ex vivo and in vivo real clinical cases in which different surgical instruments are employed. Two versions of the method are presented: non-real-time and real-time. The former, using only deep learning, achieves a balanced accuracy of 89.6% on a real clinical dataset, outperforming the (non-real-time) state of the art by 3.8 percentage points. The latter, a combination of deep learning with optical flow tracking, yields an average balanced accuracy of 78.2% across all the validated datasets.
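The tracking half of such a pipeline can be sketched as dense optical flow warping the most recent FCN mask onto the next frame. The snippet below uses OpenCV's Farnebäck flow; the specific flow algorithm and parameters are assumptions, as the abstract does not fix them:

```python
import cv2
import numpy as np

def propagate_mask(prev_gray, next_gray, prev_mask):
    """Warp a binary tool mask from the previous frame onto the next one."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    h, w = prev_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Backward warp (approximation: flow is sampled at the destination pixel).
    map_x = (grid_x - flow[..., 0]).astype(np.float32)
    map_y = (grid_y - flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_mask, map_x, map_y, interpolation=cv2.INTER_NEAREST)
```

In a combined pipeline, the slower network would refresh the mask every few frames while this cheap propagation covers the frames in between, which is how the real-time variant trades some accuracy for speed.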
More unlabelled data or label more data? A study on semi-supervised laparoscopic image segmentation
Fu, Yunguan, Robu, Maria R., Koo, Bongjin, Schneider, Crispin, van Laarhoven, Stijn, Stoyanov, Danail, Davidson, Brian, Clarkson, Matthew J., Hu, Yipeng
Improving a semi-supervised image segmentation task has the option of adding more unlabelled images, labelling the unlabelled images, or combining both, as neither image acquisition nor expert labelling can be considered trivial in most clinical applications. With a laparoscopic liver image segmentation application, we investigate the impact on performance of altering the quantities of labelled and unlabelled training data, using a semi-supervised segmentation algorithm based on the mean teacher learning paradigm. We first report significantly higher segmentation accuracy compared with supervised learning. Interestingly, this comparison reveals that the training strategy adopted in the semi-supervised algorithm is also responsible for the observed improvement, in addition to the added unlabelled data. We then compare different combinations of labelled and unlabelled data set sizes for training semi-supervised segmentation networks, to provide a quantitative example of the practically useful trade-off between the two data planning strategies in this surgical guidance application.
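The mean teacher paradigm referenced here trains a student network with a supervised loss on labelled images plus a consistency loss against a teacher whose weights are an exponential moving average (EMA) of the student's. A minimal PyTorch sketch, with a toy stand-in network and placeholder loss weights rather than the paper's configuration:

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

student = nn.Conv2d(3, 2, kernel_size=3, padding=1)  # stand-in for a segmentation net
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)  # teacher is never updated by gradients
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

def ema_update(student, teacher, alpha=0.99):
    """Teacher weights track an exponential moving average of student weights."""
    with torch.no_grad():
        for ps, pt in zip(student.parameters(), teacher.parameters()):
            pt.mul_(alpha).add_(ps, alpha=1 - alpha)

# One illustrative step: supervised loss on labelled data,
# consistency loss between student and (noised-input) teacher on unlabelled data.
x_lab, y_lab = torch.randn(2, 3, 64, 64), torch.randint(0, 2, (2, 64, 64))
x_unlab = torch.randn(2, 3, 64, 64)

sup = F.cross_entropy(student(x_lab), y_lab)
with torch.no_grad():
    t_out = teacher(x_unlab + 0.1 * torch.randn_like(x_unlab))
cons = F.mse_loss(torch.softmax(student(x_unlab), 1), torch.softmax(t_out, 1))

loss = sup + 0.1 * cons  # consistency weight is a placeholder
opt.zero_grad()
loss.backward()
opt.step()
ema_update(student, teacher)
```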
Deep Sequential Mosaicking of Fetoscopic Videos
Bano, Sophia, Vasconcelos, Francisco, Amo, Marcel Tella, Dwyer, George, Gruijthuijsen, Caspar, Deprest, Jan, Ourselin, Sebastien, Poorten, Emmanuel Vander, Vercauteren, Tom, Stoyanov, Danail
Twin-to-twin transfusion syndrome treatment requires fetoscopic laser photocoagulation of placental vascular anastomoses to regulate blood flow to both fetuses. The limited field-of-view (FoV) and low visual quality during fetoscopy make it challenging to identify all vascular connections. Mosaicking can align multiple overlapping images to generate an image with increased FoV; however, existing techniques perform poorly in fetoscopy because of the low visual quality and texture paucity, and consequently fail on longer sequences as drift accumulates over time. Deep learning techniques can help overcome these challenges. We therefore present a new generalized Deep Sequential Mosaicking (DSM) framework for fetoscopic videos captured in different settings, such as simulation, phantom, and real environments. DSM extends an existing deep image-based homography model to sequential data through controlled data augmentation and outlier rejection. Unlike existing methods, DSM can handle visual variations due to specular highlights and reflections across adjacent frames, thereby reducing the accumulated drift. We perform experimental validation and comparison on 5 diverse fetoscopic videos to demonstrate the robustness of our framework.
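Deep image-based homography models of the kind DSM extends are commonly trained on synthetically warped patch pairs, where the regression target is the displacement of the four patch corners. A minimal sketch of that data generation; the perturbation range rho and patch size are assumed values, not DSM's controlled-augmentation settings:

```python
import cv2
import numpy as np

def make_homography_pair(image, patch=128, rho=16, rng=None):
    """Sample a patch and a warped copy; the label is the four corner offsets."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = image.shape[:2]
    x = rng.integers(rho, w - patch - rho)
    y = rng.integers(rho, h - patch - rho)
    corners = np.float32([[x, y], [x + patch, y],
                          [x + patch, y + patch], [x, y + patch]])
    # Controlled perturbation: move each corner by at most rho pixels.
    offsets = rng.uniform(-rho, rho, size=(4, 2)).astype(np.float32)
    H = cv2.getPerspectiveTransform(corners, corners + offsets)
    # Warping by inv(H) and cropping the same window yields a pair related by H.
    warped = cv2.warpPerspective(image, np.linalg.inv(H), (w, h))
    patch_a = image[y:y + patch, x:x + patch]
    patch_b = warped[y:y + patch, x:x + patch]
    return patch_a, patch_b, offsets  # offsets are the regression target
```

The four-point parameterization keeps the regression target bounded and well-scaled, which is why it is the standard choice for deep homography estimation.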
DeepPhase: Surgical Phase Recognition in CATARACTS Videos
Zisimopoulos, Odysseas, Flouty, Evangello, Luengo, Imanol, Giataganas, Petros, Nehme, Jean, Chow, Andre, Stoyanov, Danail
Automated surgical workflow analysis and understanding can assist surgeons in standardizing procedures and can enhance post-surgical assessment, indexing, and interventional monitoring. Video-based computer-assisted intervention (CAI) systems can perform workflow estimation by recognizing surgical instruments and linking them to an ontology of procedural phases. In this work, we adopt a deep learning paradigm to detect surgical instruments in cataract surgery videos; the detections in turn feed a recurrent network that encodes the temporal aspects of phase steps for phase classification. Our models achieve results comparable to the state of the art for surgical tool detection and phase recognition, with accuracies of 99% and 78%, respectively.
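The two-stage design described here, per-frame CNN features feeding a recurrent phase classifier, can be sketched in a few lines of PyTorch. The backbone, feature sizes, and phase count below are placeholders, not the paper's architecture:

```python
import torch
import torch.nn as nn

class PhaseRecogniser(nn.Module):
    """Per-frame CNN features -> LSTM -> per-frame surgical phase logits."""
    def __init__(self, feat_dim=128, hidden=64, num_phases=10):
        super().__init__()
        # Small stand-in CNN; the paper uses a deeper tool-recognition network.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim))
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_phases)

    def forward(self, clip):                  # clip: (batch, time, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1))  # (batch*time, feat_dim)
        seq, _ = self.rnn(feats.view(b, t, -1))
        return self.head(seq)                 # (batch, time, num_phases)

logits = PhaseRecogniser()(torch.randn(2, 8, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 8, 10])
```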