Join the audience for an AI in Medical Physics Week live webinar at 3 p.m. BST on 23 June 2022, based on IOP Publishing's special issue Focus on Machine Learning Models in Medical Imaging. The webinar will give an overview of the role of artificial intelligence (AI) in the automatic delineation (contouring) of organs in preclinical cancer research models, and will show how AI can increase efficiency in preclinical research. Speaker: Frank Verhaegen is head of radiotherapy physics research at Maastro Clinic and professor at the University of Maastricht, both located in the Netherlands. He is also a co-founder of the company SmART Scientific Solutions BV, which develops research software for preclinical cancer research.
How much is too much? How much is enough? These are questions that cut to the heart of a complex issue currently preoccupying senior medical physicists when it comes to the training and continuing professional development (CPD) of the radiotherapy physics workforce. What's exercising management and educators specifically is the extent to which the core expertise and domain knowledge of radiotherapy physicists should evolve to reflect – and, in so doing, best support – the relentless progress of artificial intelligence (AI) and machine-learning technologies within the radiation oncology workflow. In an effort to bring a degree of clarity and consensus to the collective conversation, the ESTRO 2022 Annual Congress in Copenhagen last month featured a dedicated workshop session entitled "Every radiotherapy physicist should know about AI/machine learning…but how much?" With several hundred delegates packed into Room D5 at the Bella Center, speakers were tasked by the session moderators with defending a range of "optimum scenarios" for aligning the know-how of medical physicists with emerging AI/machine-learning opportunities in the radiotherapy clinic.
Which innovations will have the greatest impact in radiotherapy by 2030? That was the question posed in the closing session of last week's ESTRO 2022 congress; and five experts stepped up to respond. As often seen in debate-style ESTRO sessions, competition was intense and gimmicks were plentiful, with all talk titles based on movies and a definite sci-fi twist. Before battle commenced, the audience voted for their preferred innovation based on the presentation titles. This opening vote put personalized inter-fraction adaptation as the winner.
This article was originally published in the July/August edition of CERN Courier magazine. Today, the tools of experimental particle physics are ubiquitous in hospitals and biomedical research. Particle beams damage cancer cells; high-performance computing infrastructures accelerate drug discoveries; computer simulations of how particles interact with matter are used to model the effects of radiation on biological tissues; and a diverse range of particle-physics-inspired detectors, from wire chambers to scintillating crystals to pixel detectors, all find new vocations imaging the human body. CERN has actively pursued medical applications of its technologies as far back as the 1970s. At that time, knowledge transfer happened – mostly serendipitously – through the initiative of individual researchers.
Artificial intelligence (AI), especially deep learning, requires vast amounts of data for training, testing, and validation. Collecting these data and the corresponding annotations requires the implementation of imaging biobanks that provide access to the data in a standardized way. Such biobanks demand careful design and implementation based on current standards and guidelines, and must comply with current legal restrictions. However, proper imaging data collections alone are not sufficient to train, validate, and deploy AI: resource demands are high and call for a carefully designed hybrid implementation of AI pipelines, both on-premise and in the cloud. This chapter aims to help the reader make technical decisions about the AI environment by providing a technical background on the concepts and implementation aspects involved in data storage, cloud usage, and AI pipelines.
Sen, Jaydip, Mehtab, Sidra, Sen, Rajdeep, Dutta, Abhishek, Kherwa, Pooja, Ahmed, Saheel, Berry, Pranay, Khurana, Sahil, Singh, Sonali, Cadotte, David W., Anderson, David W., Ost, Kalum J., Akinbo, Racheal S., Daramola, Oladunni A., Lainjo, Bongs
Recent times are witnessing rapid development in machine learning algorithm systems, especially in reinforcement learning, natural language processing, computer and robot vision, image processing, speech, and emotional processing and understanding. In tune with the increasing importance and relevance of machine learning models, algorithms, and their applications, and with the emergence of more innovative use cases of deep learning and artificial intelligence, the current volume presents a few innovative research works and their applications in the real world, such as stock trading, medical and healthcare systems, and software automation. The chapters in the book illustrate how machine learning and deep learning algorithms and models are designed, optimized, and deployed. The volume will be useful for advanced graduate and doctoral students, researchers, faculty members of universities, practicing data scientists and data engineers, professionals, and consultants working on the broad areas of machine learning, deep learning, and artificial intelligence.
The TriRhenaTech alliance presents the accepted papers of the 'Upper-Rhine Artificial Intelligence Symposium' held on 27 October 2021 in Kaiserslautern, Germany. Topics of the conference are applications of artificial intelligence in life sciences, intelligent systems, Industry 4.0, mobility, and others. The TriRhenaTech alliance is a network of universities in the Upper-Rhine Trinational Metropolitan Region comprising the German universities of applied sciences in Furtwangen, Kaiserslautern, Karlsruhe, Offenburg and Trier, the Baden-Wuerttemberg Cooperative State University Loerrach, the French university network Alsace Tech (comprising 14 'grandes écoles' in the fields of engineering, architecture and management) and the University of Applied Sciences and Arts Northwestern Switzerland. The alliance's common goal is to reinforce the transfer of knowledge, research, and technology, as well as the cross-border mobility of students.
Segmentation of head and neck (H&N) tumours and prediction of patient outcome are crucial for patients' disease diagnosis and treatment monitoring. Current development of robust deep learning models is hindered by the lack of large multi-centre, multi-modal data with quality annotations. The MICCAI 2021 HEad and neCK TumOR (HECKTOR) segmentation and outcome prediction challenge creates a platform for comparing segmentation methods for the primary gross target volume on fluoro-deoxyglucose (FDG)-PET and computed tomography images, and for prediction of progression-free survival in H&N oropharyngeal cancer. For the segmentation task, we proposed a new network based on an encoder-decoder architecture with full inter- and intra-skip connections to take advantage of low-level and high-level semantics at full scales. Additionally, we used Conditional Random Fields as a post-processing step to refine the predicted segmentation maps. We trained multiple neural networks for tumour volume segmentation, and these segmentations were ensembled, achieving an average Dice Similarity Coefficient of 0.75 in cross-validation and 0.76 on the challenge testing data set. For the progression-free survival prediction task, we proposed a Cox proportional hazards regression combining clinical, radiomic, and deep learning features. Our survival prediction model achieved a concordance index of 0.82 in cross-validation and 0.62 on the challenge testing data set.
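The Dice Similarity Coefficient reported above (0.75 in cross-validation, 0.76 on the test set) is the standard overlap measure between a predicted and a reference segmentation mask. As a minimal sketch of the metric itself (the masks below are illustrative toy arrays, not challenge data):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice Similarity Coefficient between two binary masks:
    2 * |pred ∩ truth| / (|pred| + |truth|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy 2D "tumour" masks: two overlapping 4x4 squares (16 voxels each,
# 9 voxels in common), so Dice = 2*9 / (16+16) = 0.5625.
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True
print(f"{dice_coefficient(a, b):.4f}")  # 0.5625
```

A Dice of 1.0 means perfect overlap and 0.0 means none; in multi-model ensembling such as that described above, per-voxel predictions are typically averaged or majority-voted before this metric is computed.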
The field of artificial intelligence (AI), regarded as one of the most enigmatic areas of science, has witnessed exponential growth in the past decade including a remarkably wide array of applications, having already impacted our everyday lives. Advances in computing power and the design of sophisticated AI algorithms have enabled computers to outperform humans in a variety of tasks, especially in the areas of computer vision and speech recognition. Yet, AI's path has never been smooth, having essentially fallen apart twice in its lifetime ('winters' of AI), both after periods of popular success ('summers' of AI). We provide a brief rundown of AI's evolution over the course of decades, highlighting its crucial moments and major turning points from inception to the present. In doing so, we attempt to learn, anticipate the future, and discuss what steps may be taken to prevent another 'winter'.
"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. To investigate how an artificial intelligence (AI) system performs on digital mammography (DM) from a screening population with ground truth defined by digital breast tomosynthesis (DBT), and whether AI could detect breast cancers on DM that had originally only been detected on DBT. In this secondary analysis of data from a prospective study, DM examinations from 14768 women (mean age, 57 years), examined with both DM and DBT with independent double reading in the Malmö Breast Tomosynthesis Screening Trial (MBTST; ClinicalTrials.gov