The IBM approach trains the AI on anonymized mammography images linked to biomarkers (such as reproductive history) and clinical data, allowing the creation of an algorithm with comparatively high accuracy. It can reduce the chance of a misdiagnosis by establishing connections between traits you wouldn't spot in imagery alone, such as iron deficiencies and thyroid function. IBM even pulls in data from biopsies, lab tests, cancer registries, and codes from other diagnoses and procedures. You wouldn't want to rely solely on the algorithm to make predictions, especially when it correctly interprets just 77 percent of non-cancerous instances. However, the accuracy is good enough that it could serve as a "second set of eyes," according to IBM.
In a study published today in the peer-reviewed journal Radiology, an IBM Research team describes a new artificial intelligence (AI) model that can predict breast cancer malignancy and identify normal digital mammography exams as accurately as radiologists. Mammography, a low-dose x-ray procedure for imaging the breast, is considered the best breast cancer screening test available, according to the American Cancer Society. However, mammograms are not always accurate. According to a 10-year U.S. study published in the New England Journal of Medicine, 23.8 percent of participants had at least one false-positive mammogram, in which breast cancer was flagged but not actually present. Furthermore, the American Cancer Society estimates that one in five screening mammograms is a false negative that fails to detect existing breast cancer.
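The two error rates above map onto the standard screening metrics: false positives lower specificity, while false negatives lower sensitivity. As a minimal sketch with purely illustrative counts (not data from either study; the 77-percent and one-in-five figures are used only to pick round numbers):

```python
def screening_metrics(tp, fn, tn, fp):
    """Sensitivity: fraction of true cancers the screen detects.
    Specificity: fraction of healthy exams correctly read as normal."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Illustrative counts only: 80 of 100 cancers caught (~1 in 5 missed),
# 770 of 1000 normal exams correctly cleared (~77 percent).
sens, spec = screening_metrics(tp=80, fn=20, tn=770, fp=230)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
# → sensitivity=0.80, specificity=0.77
```

A "second set of eyes" model adds value when its errors are uncorrelated with the radiologist's, so combining the two reads can raise both metrics.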
Breast cancer is the second leading cause of cancer-related death among women in the US. Early detection, through routine annual screening mammography, is the best first line of defense against breast cancer. However, these screening mammograms require interpretation by expert radiologists. A radiologist can spend up to 10 hours a day working through mammograms, experiencing both eye strain and mental fatigue in the process. Modern computer vision models, built principally on Convolutional Neural Networks (CNNs), have seen incredible progress in recent years.
Biopharmas are warming up to artificial intelligence (AI), but a series of challenges will need to be addressed before it becomes widely used by drug developers, a panel of industry executives agreed. Speaking at the 2019 Annual Meeting of NewYorkBIO in New York City yesterday, panelists identified those challenges as finding more and better data, integrating data from multiple sources, and creating partnerships to gather and analyze that data. The panel also cited challenges that go beyond data, such as attracting a new generation of professionals capable of applying AI and related technologies such as machine learning, and adapting biopharmas to the new technologies. Those observations are in line with a study released today by The Pistoia Alliance, a global not-for-profit organization of more than 150 members established by executives from AstraZeneca, GlaxoSmithKline (GSK), Novartis, and Pfizer. The Alliance surveyed 190 life sciences professionals in the US and Europe; 52% cited access to data, and 44% a lack of skills, as the two key barriers to adoption of AI and machine learning.
Lung and pancreatic cancers are often difficult to treat, especially once the cancer has spread to other organs. Today, the five-year survival rate for lung cancer that has spread is 5%. Now a group of researchers has developed a system that uses computers to improve a patient's odds. One of the biggest challenges in cancer treatment is catching it at an early stage, before it has spread. "Something around 30% to 40% of cancers is missed during the early stages of screening," said Naji Khosravan, a PhD candidate at the Center for Research in Computer Vision (CRCV), University of Central Florida.
Automated classifiers may be better than physicians at diagnosing pigmented skin lesions, but human supervision is still needed, researchers found. The machine-learning algorithms achieved a mean of 2.01 more correct diagnoses than the human readers (19.92 vs 17.91; P<0.0001), reported Harald Kittler, MD, of the Medical University of Vienna in Austria, and colleagues in The Lancet Oncology. When the top three machine-learning algorithms were compared with 27 human experts with over a decade of experience, the algorithms still outperformed the experts (25.43 vs 18.78; P<0.0001), the investigators found. Notably, the difference between the top three algorithms and the experts was significantly smaller for images gathered from centers that did not contribute images to the training set than for other image sets, although the human readers still underperformed (11.4% vs 3.6%; P<0.0001), the researchers wrote. Because machine-learning classifiers performed better than experienced human readers in the diagnosis of pigmented skin lesions, the investigators suggested that machine learning should have a more important role in clinical practice.
Beyond the hype surrounding #ArtificialIntelligence, #MachineLearning and #DeepLearning, there are several serious research groups working on developing AI-based technologies to improve healthcare. At ASCO19, several academic and industry groups presented the outcomes of their research in developing tools for improving oncology care. The range of presentations was wide: identifying new biological targets, algorithm-driven diagnosis (including staging) and prognosis, treatment-response analysis, and predicting patient outcomes. Beyond the classical AI discovery-to-outcomes applications, a few groups are working on socializing oncology care, i.e., AI-based tools are being developed to help improve patient education, compliance, adherence, and satisfaction with cancer treatments.
Since its founding in 1910, the Japanese company Hitachi has been at the forefront of innovation, with a philosophy of contributing to society through "the development of superior, original technology and products." Today, Hitachi is a multinational conglomerate that offers operational products and services as well as IT-related digital technologies such as artificial intelligence and big data analysis. Its artificial intelligence and machine learning technologies affect not only its own services and products but also how other industries, such as healthcare, shipping, and finance, operate. Announced in 2015, H is Hitachi's generalized artificial intelligence technology, designed to be applied to many applications rather than built for a single one. H supports a wide range of applications and can generate hypotheses from data itself as well as select the best among options given to it by humans.
Academics and researchers typically accuse the media of scaremongering and painting dystopian scenarios, especially in its coverage of Artificial Intelligence (AI)-powered algorithms. They have a point, especially as machine learning and deep learning algorithms power most of the software and smart devices we use in our daily lives, be they smartphones, cameras, Internet of Things (IoT) devices or voice assistants. Smartphone penetration and advances in image recognition, for instance, are turning phones into powerful at-home diagnostic tools, while these cutting-edge algorithms are helping doctors, researchers and technology companies revolutionize healthcare. AI in conjunction with IoT (sensors and wearables), robotics, virtual reality (VR) and augmented reality (AR) is also playing an important role. Further, researchers use AI systems to help radiologists improve their ability to diagnose and track prostate cancer.
Network Embedding (NE) methods, which map network nodes to low-dimensional feature vectors, have wide applications in network analysis and bioinformatics. Many existing NE methods rely only on network structure, overlooking other information associated with the nodes, e.g., text describing the nodes. Recent attempts to combine the two sources of information only consider local network structure. We extend NODE2VEC, a well-known NE method that considers broader network structure, to also consider textual node descriptors using recurrent neural encoders. Our method is evaluated on link prediction in two networks derived from UMLS. Experimental results demonstrate the effectiveness of the proposed approach compared to previous work.
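As a rough illustration of the random-walk sampling that NODE2VEC builds on, here is a minimal sketch using uniform walks (the special case p = q = 1, i.e. DeepWalk-style sampling) over a toy graph with hypothetical node names; the actual method biases walks with return and in-out parameters and, in the extension described above, pairs them with recurrent encoders over textual node descriptors:

```python
import random

# Toy graph as an adjacency list (hypothetical node names for illustration).
graph = {
    "aspirin": ["inflammation", "pain"],
    "ibuprofen": ["inflammation", "pain", "fever"],
    "inflammation": ["aspirin", "ibuprofen"],
    "pain": ["aspirin", "ibuprofen"],
    "fever": ["ibuprofen"],
}

def random_walks(graph, walks_per_node=10, walk_length=5, seed=42):
    """Generate uniform random walks (node2vec with p = q = 1)."""
    rng = random.Random(seed)
    walks = []
    for start in graph:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_length:
                # Step to a uniformly chosen neighbor of the current node.
                walk.append(rng.choice(graph[walk[-1]]))
            walks.append(walk)
    return walks

walks = random_walks(graph)
# Each walk is treated as a "sentence" of nodes; node2vec feeds these to a
# skip-gram model (e.g. gensim's Word2Vec) to learn low-dimensional vectors.
print(len(walks), walks[0])
```

The learned vectors then feed downstream tasks such as the link-prediction evaluation described above, where a candidate edge is scored by the similarity of its endpoint embeddings.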