We use the term imaging sciences to refer to the overarching spectrum of scientific and technological contexts that involve images in digital format: among others, image and video processing, scientific visualization, computer graphics, animation in games and simulators, and remote sensing imagery, together with the wide set of associated application areas that have become ubiquitous during the last decade in science, art, human-computer interaction, entertainment, social networks, and beyond. As an area that combines mathematics, engineering, and computer science, the discipline arose in a few universities in Argentina, mostly as elective classes and small research projects in electrical engineering or computer science departments. Only in the mid-2000s did initiatives appear that aimed to generate joint activities and to give the discipline identity and visibility. In this short paper, we present a brief history of the three laboratories with the most relevant research and development (R&D) activities in the discipline in Argentina: the Imaging Sciences Laboratory of the Universidad Nacional del Sur, the PLADEMA Institute at the Universidad Nacional del Centro de la Provincia de Buenos Aires, and the Image Processing Laboratory at the Universidad Nacional de Mar del Plata. The Imaging Sciences Laboratory of the Electrical and Computer Engineering Department of the Universidad Nacional del Sur, Bahía Blanca, began its activities in the 1990s as a pioneer in Argentina and Latin America in research and teaching in computer graphics and visualization.
Within the last decade, some of the most notable breakthroughs in artificial intelligence (AI) have come in the form of computer vision. Essentially giving robotic systems 'eyesight', the ability to identify and classify objects using image or video recognition, the technology has been put to use in everything from facial recognition systems and quality control in manufacturing to spotting anomalies in MRI scans and self-driving vehicle systems. And while computer vision applications are still comparatively nascent, the 'breakthrough' AI applications of the decade ahead might well come in the form of advances in language-based applications. AI research and deployment company OpenAI this year developed GPT-3, the largest language model ever created. The software can generate human-like text on demand and is set to be turned into a commercial product later this year, offered to businesses as a paid-for subscription via the cloud.
Google and startups like Qure.ai, Aidoc, and DarwinAI are developing AI and machine learning systems that classify chest X-rays to help identify conditions like fractures and collapsed lungs. Several hospitals, including Mount Sinai, have piloted computer vision algorithms that analyze scans from patients with the novel coronavirus. But research from the University of Toronto, the Vector Institute, and MIT reveals that chest X-ray datasets used to train diagnostic models exhibit imbalance, biasing them against certain gender, socioeconomic, and racial groups. Partly due to a reticence to release code, datasets, and techniques, much of the data used today to train AI algorithms for diagnosing diseases may perpetuate inequalities. A team of U.K. scientists found that almost all eye disease datasets come from patients in North America, Europe, and China, meaning eye disease-diagnosing algorithms are less certain to work well for racial groups from underrepresented countries.
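The kind of imbalance the Toronto, Vector Institute, and MIT researchers describe can be quantified directly from a labeled dataset. As a minimal sketch (the group labels and data here are hypothetical, not from any of the cited studies), the following computes per-group positive rates and the gap between them:

```python
from collections import defaultdict

def group_positive_rates(records):
    """Fraction of positive diagnoses per demographic group.

    records: iterable of (group, label) pairs, where label is 0 or 1.
    Returns a dict mapping group -> positive rate.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, label in records:
        counts[group][0] += label
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Hypothetical chest X-ray dataset: (sex, finding present?)
data = [("F", 1), ("F", 0), ("F", 0), ("F", 0),
        ("M", 1), ("M", 1), ("M", 1), ("M", 0)]

rates = group_positive_rates(data)
disparity = max(rates.values()) - min(rates.values())
# A large disparity means a model trained on this data sees the
# finding far more often in one group than another.
```

A large gap in label prevalence across groups is one simple signal that a model trained naively on the data may perform unevenly across those groups.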
Radiology extenders who read chest X-rays save attending radiologists more time during the day than radiology residents do, potentially streamlining workflow and alleviating provider burnout. At least that has been the experience for researchers at the University of Pennsylvania. Radiologists in their department read more cases per hour when the drafts came from radiology extenders than from residents, resulting in nearly an hour (51 minutes) of provider time saved each day. The authors shared their experience on Oct. 13 in the Journal of the American College of Radiology. "Interpreting these radiographs entails a disproportionate amount of work (e.g., retrieving patient history, completing standard dictation templates, and ensuring proper communication of important findings before finalization of reports). Given low reimbursement rates for these studies, economic necessities push radiologists to provide faster interpretations, contributing to burnout," said the team led by Arijitt Borthakur, MBA, Ph.D., senior research investigator in the Perelman School of Medicine radiology department.
In research published today in Patterns, a team of engineers led by Wang demonstrated how a deep learning algorithm can be applied to a conventional computed tomography (CT) scan in order to produce images that would typically require a higher level of imaging technology known as dual-energy CT. Wenxiang Cong, a research scientist at Rensselaer, is first author on this paper. Wang and Cong were also joined by coauthors from Shanghai First-Imaging Tech, and researchers from GE Research. "We hope that this technique will help extract more information from a regular single-spectrum X-ray CT scan, make it more quantitative, and improve diagnosis," said Wang, who is also the director of the Biomedical Imaging Center within the Center for Biotechnology and Interdisciplinary Studies (CBIS) at Rensselaer. Conventional CT scans produce images that show the shape of tissues within the body, but they don't give doctors sufficient information about the composition of those tissues.
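The team's method uses a deep network, but the underlying idea, learning a mapping from one measured input channel to two energy-dependent output channels, can be illustrated with a toy regression. The sketch below fits an affine mapping to synthetic data with NumPy least squares; it is purely illustrative and is not the authors' model or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 1,000 "single-energy" attenuation values.
x = rng.uniform(0.0, 1.0, size=(1000, 1))

# Pretend ground truth: two energy channels, each an affine function
# of the single-energy value plus a little noise (purely illustrative).
true_w = np.array([[0.8, 1.3]])
true_b = np.array([0.1, -0.2])
y = x @ true_w + true_b + rng.normal(0.0, 0.01, size=(1000, 2))

# Fit the mapping with least squares; design matrix is [x, 1]
# so the intercept is learned along with the slope.
A = np.hstack([x, np.ones((1000, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict both "dual-energy" channels for a new single-energy input.
x_new = np.array([[0.5]])
pred = np.hstack([x_new, np.ones((1, 1))]) @ coef
```

A deep network replaces the affine map with a learned nonlinear function of the full image, but the training setup, pairs of single-energy inputs and dual-energy targets, is analogous.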
Teledyne DALSA, a Teledyne Technologies company and global leader in digital X-ray image sensing technology, is pleased to announce its presence at the CMEF 2020 Technical Exhibition, October 19-22, Shanghai, PR of China, in booth #4.1C27. Teledyne DALSA's full portfolio of groundbreaking CMOS imaging technology delivers superior resolution in real time, with switchable saturation dose, high dynamic range, calibration stability, and unsurpassed low-dose signal-to-noise performance. The innovative Xineos CMOS X-ray detectors deliver superior quality and performance, and provide the best digital imaging solutions for mobile C-arm surgical systems, diagnostic 2D and 3D mammography, CBCT, and other demanding X-ray imaging applications. Teledyne DALSA will showcase its proprietary software for generic tomosynthesis reconstruction, which supports various imaging geometries and scanning approaches. The new software technology enables X-ray tomosynthesis imaging modalities that speed the integration of X-ray detectors and deliver new opportunities for system-level innovation.
"A Google Maps for surgeons" is how Perimeter Medical Imaging AI Inc. (TSXV: PINK) President and CFO Jeremy Sobotta described the AI software currently being developed by the company to complement its FDA-cleared medical imaging system at a recent investment conference. Perimeter is a medical technology company working to transform cancer surgery by creating ultra-high-resolution, real-time, advanced imaging tools to address unmet medical needs. The imaging tools have already been developed and are approved in ophthalmology and cardiology (optical coherence tomography or OCT). Perimeter is using this imaging technology (OTIS or Optical Tissue Imaging Console) to assess the tissues surrounding the known cancerous target area to determine whether more tissue should be removed during the ongoing surgery. The imaging technology has the ability to rapidly image large and complex surfaces.
Artificial intelligence (AI) algorithms designed to independently interpret mammograms might be gaining ground when it comes to radiologist confidence, but the public at large does not share this sentiment. In fact, nearly 80 percent of women say they oppose letting AI interpret their studies without a radiologist being directly involved somehow. In a study published Oct. 12 in the Journal of the American College of Radiology, a team of investigators from the University of Groningen, in The Netherlands, revealed that, when asked whether they were comfortable with having AI alone analyze their mammograms for signs of breast cancer, 78 percent of women surveyed said no. "Despite recent breakthroughs in the diagnostic performance of AI algorithms for the interpretation of screening mammograms, the general population currently does not support a fully independent use of such systems without involving a radiologist," said the team led by Yfke Ongena, Ph.D., assistant professor with the university's Center of Language and Cognition. Given those advancements and study results that show AI can outperform radiologists, this response is surprising, they said. But it shows that radiology as an industry should pump the brakes on implementing any AI algorithms as the sole interpreter of mammograms, as well as actively work to educate women about the capabilities and efficacy of these algorithms.
Improving radiologist efficiency and preventing burnout is a primary goal for healthcare providers. A nationwide study published in Mayo Clinic Proceedings in 2015 showed radiologist burnout at a concerning 61%. In addition, the report concludes that "burnout and satisfaction with work-life balance in US physicians worsened from 2011 to 2014. More than half of US physicians are now experiencing professional burnout." As technologists, we're looking for ways to put new and innovative solutions in the hands of physicians to make them more efficient, reduce burnout, and improve care quality.
"The first time I saw an obstetrics ultrasound exam, I thought it was a little bit like magic," said Dr. Susanne Johnson, Associate Specialist in Gynecology at Princess Anne Hospital in the United Kingdom. "I thought to myself, 'now that's something I want to do.'" Now as a gynecologist who specializes in helping diagnose some of the most difficult gynecological diseases and teaching others her skills, Dr. Johnson is using advanced ultrasound technology that has a new bit of'magic' built-in: automated tools and artificial intelligence. "Every patient I have is a bit of a puzzle and my job is to try and develop a hypothesis of what's the problem. Then, I can use ultrasound to try and prove it and point the patient in the right direction to get the care they need," said Dr. Johnson.