The current availability of ever-increasing computational power, highly developed pattern-recognition algorithms and advanced image-processing software working at very high speeds has led to the emergence of computer-based systems that are trained to perform complex tasks in bioinformatics, medical imaging and medical robotics. Access to 'big data' enables the 'cognitive' computer to scan billions of bits of unstructured information, extract the relevant information and recognize complex patterns with increasing confidence. Computer-based decision-support systems based on machine learning (ML) have the potential to revolutionize medicine by performing complex tasks that are currently assigned to specialists: improving diagnostic accuracy, increasing throughput, improving clinical workflow, decreasing human-resource costs and improving treatment choices. These characteristics could be especially helpful in the management of prostate cancer, with growing applications in diagnostic imaging, surgical interventions, skills training and assessment, digital pathology and genomics. Medicine must adapt to this changing world, and urologists, oncologists, radiologists and pathologists, as high-volume users of imaging and pathology, need to understand this burgeoning science and acknowledge that the development of highly accurate AI-based decision-support applications of ML will require collaboration between data scientists, computer researchers and engineers.
Rundo, Leonardo; Han, Changhee; Zhang, Jin; Hataya, Ryuichiro; Nagano, Yudai; Militello, Carmelo; Ferretti, Claudio; Nobile, Marco S.; Tangherloni, Andrea; Gilardi, Maria Carla; Vitabile, Salvatore; Nakayama, Hideki; Mauri, Giancarlo
Prostate cancer is the most common cancer among US men. However, prostate imaging is still challenging despite the advances in multi-parametric Magnetic Resonance Imaging (MRI), which provides both morphologic and functional information pertaining to the pathological regions. Along with whole prostate gland segmentation, distinguishing between the Central Gland (CG) and Peripheral Zone (PZ) can guide differential diagnosis, since the frequency and severity of tumors differ in these regions; however, their boundary is often weak and fuzzy. This work presents a preliminary study on Deep Learning to automatically delineate the CG and PZ, aiming at evaluating the generalization ability of Convolutional Neural Networks (CNNs) on two multi-centric MRI prostate datasets. In particular, we compared three CNN-based architectures: SegNet, U-Net, and pix2pix. In this context, the segmentation performance achieved with and without pre-training was compared using 4-fold cross-validation. In general, U-Net outperforms the other methods, especially when training and testing are performed on multiple datasets.
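The evaluation protocol described above, comparing architectures in 4-fold cross-validation on segmentation masks, can be sketched generically. The Dice coefficient, the random fold split, and the `train_fn`/`predict_fn` hooks below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = np.asarray(pred).astype(bool), np.asarray(truth).astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def four_fold_cv(images, masks, train_fn, predict_fn, k=4, seed=0):
    """Evaluate a segmentation model with k-fold cross-validation.

    `train_fn(train_imgs, train_masks)` returns a fitted model;
    `predict_fn(model, img)` returns a predicted binary mask.
    Returns the mean Dice score of each held-out fold.
    """
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(images)), k)
    scores = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        model = train_fn([images[t] for t in train_idx],
                         [masks[t] for t in train_idx])
        fold = [dice_coefficient(predict_fn(model, images[t]), masks[t])
                for t in test_idx]
        scores.append(float(np.mean(fold)))
    return scores
```

Swapping different `train_fn` implementations (e.g. SegNet vs. U-Net wrappers) into the same harness is what makes the per-architecture comparison fair.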
With more and more VR content popping up every day, viewers need the latest hardware and software to experience it all. The app is currently in development and would give audiences a centralized resource for all 360-degree and VR content. Vergara also gives an assessment of the current state of the VR industry, pointing to promising earnings reports from the primary VR content and hardware manufacturers as indicators that VR is becoming mainstream. However, he warns that any advancements to the technology are moot without a growing and prestigious content library.
In this paper, we address the problem of synthesizing multi-parameter magnetic resonance imaging (mp-MRI) data, i.e. Apparent Diffusion Coefficient (ADC) and T2-weighted (T2w) images, containing clinically significant (CS) prostate cancer (PCa), via semi-supervised adversarial learning. Specifically, our synthesizer generates mp-MRI data in a sequential manner: first generating ADC maps from 128-d latent vectors, then translating them to T2w images. The synthesizer is trained in a semi-supervised manner. In the supervised training process, a limited amount of paired ADC-T2w images and the corresponding ADC encodings are provided, and the synthesizer learns the paired relationship by explicitly minimizing the reconstruction losses between synthetic and real images. To avoid overfitting the limited ADC encodings, an unlimited amount of random latent vectors and unpaired ADC-T2w images are utilized in the unsupervised training process for learning the marginal image distributions of real images. To improve the robustness of synthesis, we decompose the difficult task of generating full-size images into several simpler tasks which generate sub-images only. A StitchLayer is then employed to fuse the sub-images together in an interlaced manner into a full-size image. To ensure that the synthetic images indeed contain distinguishable CS PCa lesions, we propose to also maximize an auxiliary Jensen-Shannon divergence (JSD) distance between CS and non-CS images. Experimental results show that our method can effectively synthesize a large variety of mp-MRI images which contain meaningful CS PCa lesions, display good visual quality and have the correct paired relationship. Compared to state-of-the-art synthesis methods, our method achieves a significant improvement in terms of both visual and quantitative evaluation metrics.
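The interlaced fusion performed by the StitchLayer can be illustrated with a toy reimplementation: each of the r*r sub-images supplies every r-th pixel of the full-size output, offset by its grid position (the same idea as a pixel shuffle). The layout below is an assumption based on the description, not the authors' code:

```python
import numpy as np

def stitch_interlaced(sub_images):
    """Fuse r*r sub-images of shape (H, W) into one (r*H, r*W) image
    by interlacing: sub-image (i, j) fills every r-th row/column of
    the output, offset by (i, j). Illustrative sketch only."""
    r = int(np.sqrt(len(sub_images)))
    assert r * r == len(sub_images), "need a square number of sub-images"
    h, w = sub_images[0].shape
    full = np.empty((r * h, r * w), dtype=sub_images[0].dtype)
    for idx, sub in enumerate(sub_images):
        i, j = divmod(idx, r)          # grid position of this sub-image
        full[i::r, j::r] = sub         # strided placement = interlacing
    return full
```

Because neighboring output pixels come from different sub-images, each sub-generator only has to model a coarse, spatially consistent view of the image, which is what makes the decomposed generation task simpler.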
A fuzzy expert system (FES) for the prediction of prostate cancer (PC) is presented in this article. Age, prostate-specific antigen (PSA), prostate volume (PV) and $\%$ Free PSA ($\%$FPSA) are fed as inputs into the FES, and prostate cancer risk (PCR) is obtained as the output. The output is calculated from knowledge-based rules using a Mamdani-type inference method. If PCR $\ge 50\%$, the patient is advised to undergo a biopsy for confirmation. The efficacy of the designed FES is tested against a clinical data set. The true prediction rate for all patients turns out to be $68.91\%$, whereas for positive biopsy cases alone it rises to $73.77\%$. This simple yet effective FES can be used as a supportive tool for decision making in medical diagnosis.
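A Mamdani-style inference step of the kind described can be sketched in miniature. The two-input rule base, the membership cut points and the output centroids below are all illustrative assumptions (the actual FES uses four inputs and its own knowledge-based rule set):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical membership functions -- the cut points are assumptions,
# not the values used in the cited FES.
def psa_high(psa):   return 1.0 if psa >= 20 else tri(psa, 4, 20, 100)
def psa_low(psa):    return tri(psa, -4, 0, 6)
def fpsa_low(f):     return tri(f, -10, 0, 20)
def fpsa_high(f):    return 1.0 if f >= 40 else tri(f, 15, 40, 100)

def prostate_cancer_risk(psa, free_psa_pct):
    """Tiny two-input Mamdani-style sketch: fire each rule with min
    (fuzzy AND), then defuzzify via a weighted average of assumed
    output-set centroids (high risk 75%, low risk 25%)."""
    # Rule 1: IF PSA is high AND %FPSA is low THEN risk is high
    w_high = min(psa_high(psa), fpsa_low(free_psa_pct))
    # Rule 2: IF PSA is low AND %FPSA is high THEN risk is low
    w_low = min(psa_low(psa), fpsa_high(free_psa_pct))
    if w_high + w_low == 0:
        return 50.0  # no rule fires: indeterminate
    return (75.0 * w_high + 25.0 * w_low) / (w_high + w_low)

def advise_biopsy(psa, free_psa_pct):
    """Mirror the paper's decision rule: biopsy advised when PCR >= 50%."""
    return prostate_cancer_risk(psa, free_psa_pct) >= 50.0
```

A full Mamdani system would aggregate clipped output membership functions and take the centroid of the aggregate; the weighted-average shortcut above keeps the sketch short while preserving the rule-firing logic.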
Millions of women over the last two decades have undergone vaginal mesh surgery, but it has recently become clear just how many have experienced severe complications. In our main interview this week, we hear from Sohier Elneil, one of the few surgeons in the UK qualified to remove mesh. Here, Kath Sansom shares her story of what it's like to undergo the treatment and the impact it had on her life. She had a mesh sling implanted in March 2015 to treat mild stress urinary incontinence. It was removed seven months later.
It takes years to train doctors, especially those who handle serious and complicated medical issues -- pathologists, cardiologists, dermatologists and the rest -- which is why there's always a shortage of these lifesaving experts. Thanks to artificial intelligence, machines can now be trained to help fill that shortage. In fact, we already have AI tools that can diagnose pneumonia, fungal infections, depression and certain eye conditions, all with an average accuracy rate of over 92 percent. And the list keeps expanding: Chinese researchers have managed to develop a new system that diagnoses prostate cancer as accurately as pathologists do.
The researcher who invented Viagra and a colleague at Cambridge University have become the latest to join the ranks of drug developers using artificial intelligence and attracting attention from venture capitalists. Cambridge, UK-based Healx said Thursday that it had raised $10 million in a Series A funding round, led by London-based venture capital firm Balderton Capital. Fellow British venture capital firm Amadeus Capital Partners and Jonathan Milner – founder of life sciences supplier Abcam – also participated. Cambridge Rare Diseases Network founder Tim Guilliams and David Brown – who invented Pfizer's erectile dysfunction drug, which is now available as a generic – are the founders of Healx. The company uses the HealNet database, which maps more than 1 billion disease, patient and drug interactions and was built and maintained using machine learning techniques.
Like all everyday miracles of technology, the longer you watch a robot perform surgery on a human being, the more it begins to look like an inevitable natural wonder. Earlier this month I was in an operating theatre at University College Hospital in central London watching a 59-year-old man from Potters Bar having his cancerous prostate gland removed by the four dexterous metal arms of an American-made machine, in what is likely a glimpse of the future of most surgical procedures. The robot was being controlled by Greg Shaw, a consultant urologist and surgeon sitting in the far corner of the room with his head under the black hood of a 3D monitor, like a Victorian wedding photographer. Shaw was directing the arms of the remote surgical tool with a fluid mixture of joystick control, foot-pedal pressure and amplified instruction to his theatre team standing at the patient's side. The surgeon, 43, has performed a thousand such procedures, which are particularly useful for pelvic operations; those, he says, in which you are otherwise "looking down a deep, dark hole with a flashlight".