
Deep learning microscope for rapid tissue imaging

AIHub

When surgeons remove cancer, one of the first questions is, "Did they get it all?" Researchers from Rice University and the University of Texas MD Anderson Cancer Center have created a new microscope that can quickly and inexpensively image large tissue sections, potentially during surgery, to find the answer. The microscope can rapidly image relatively thick pieces of tissue with cellular resolution, and could allow surgeons to inspect the margins of tumors within minutes of their removal. It was created by engineers and applied physicists at Rice and is described in a study published in the Proceedings of the National Academy of Sciences. "The main goal of the surgery is to remove all the cancer cells, but the only way to know if you got everything is to look at the tumor under a microscope," said Rice's Mary Jin, a Ph.D. student in electrical and computer engineering and co-lead author of the study.


Hitting the Books: AI doctors and the dangers of tiered medical care

Engadget

Healthcare is a human right, however, nobody said all coverage is created equal. Artificial intelligence and machine learning systems are already making impressive inroads into the myriad fields of medicine -- from IBM's Watson: Hospital Edition and Amazon's AI-generated medical records to machine-formulated medications and AI-enabled diagnoses. But in the excerpt below from Frank Pasquale's New Laws of Robotics we can see how the promise of faster, cheaper, and more efficient medical diagnoses generated by AI/ML systems can also serve as a double-edged sword, potentially cutting off access to cutting-edge, high quality care provided by human doctors. Excerpted from New Laws of Robotics: Defending Human Expertise in the Age of AI by Frank Pasquale, published by The Belknap Press of Harvard University Press. We might once have categorized a melanoma simply as a type of skin cancer.


Learned Block Iterative Shrinkage Thresholding Algorithm for Photothermal Super Resolution Imaging

arXiv.org Artificial Intelligence

Block-sparse regularization is well established in active thermal imaging and is used for multiple-measurement-based inverse problems. The main bottleneck of this method is the choice of regularization parameters, which differs for each experiment. To avoid time-consuming manual selection of regularization parameters, we propose a learned block-sparse optimization approach using an iterative algorithm unfolded into a deep neural network. More precisely, we show the benefits of using a learned block iterative shrinkage thresholding algorithm that is able to learn the choice of regularization parameters. In addition, this algorithm enables the determination of a suitable weight matrix to solve the underlying inverse problem. In this paper we therefore present the algorithm and compare it with state-of-the-art block iterative shrinkage thresholding using synthetically generated test data and experimental test data from active thermography for defect reconstruction. Our results show that the learned block-sparse optimization approach yields smaller normalized mean square errors for a small fixed number of iterations than its non-learned counterpart. This new approach thus improves convergence speed and needs only a few iterations to generate an accurate defect reconstruction in photothermal super resolution imaging.
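The non-learned baseline that the paper unfolds can be sketched as a classical block iterative shrinkage thresholding loop. Below is a minimal NumPy version for a row-sparse (block-sparse) multiple-measurement problem; the hand-picked parameter `lam` is exactly what the learned variant replaces with trained per-layer values. Variable names and problem setup are illustrative, not taken from the paper:

```python
import numpy as np

def block_ista(A, Y, lam=0.1, n_iter=100):
    """Block ISTA for min_X 0.5*||A X - Y||_F^2 + lam * sum_i ||X_i||_2,
    where X_i are the rows of X (row-sparsity = block-sparsity across
    multiple measurements). `lam` must be tuned by hand per experiment;
    a learned (unfolded) variant would instead train it per layer."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-term gradient
    t = 1.0 / L                            # step size
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        G = X - t * A.T @ (A @ X - Y)      # gradient step on the data term
        norms = np.linalg.norm(G, axis=1, keepdims=True)
        # block (row-wise) soft-thresholding: shrink each row's norm by t*lam
        scale = np.maximum(0.0, 1.0 - t * lam / np.maximum(norms, 1e-12))
        X = scale * G
    return X
```

The row-wise proximal step is what makes the penalty "block"-sparse: whole rows are driven to zero together rather than individual entries.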


New method uses artificial intelligence to study live cells

#artificialintelligence

Image: Time-lapse gradient light interference microscopy (GLIM), left, and phase imaging with computational specificity, imaged over seven days. Researchers at the University of Illinois Urbana-Champaign have developed a new technique that combines label-free imaging with artificial intelligence to visualize unlabeled live cells over a prolonged time. This technique has potential applications in studying cell viability and pathology. The study "Phase imaging with computational specificity (PICS) for measuring dry mass changes in sub-cellular compartments" was published in Nature Communications. "Our lab specializes in label-free imaging, which allows us to visualize cells without using toxic chemicals," said Gabriel Popescu, a professor of electrical and computer engineering and the director of the Quantitative Light Imaging Laboratory at the Beckman Institute for Advanced Science and Technology.


Photoacoustic Image Reconstruction Beyond Supervised to Compensate Limit-view and Remove Artifacts

arXiv.org Artificial Intelligence

Photoacoustic computed tomography (PACT) reconstructs the initial pressure distribution from raw PA signals. Standard reconstruction from limited-view signals always induces artifacts, caused by the limited angular coverage of the transducers, their finite bandwidth, and uncertain heterogeneous biological tissue. Recently, supervised deep learning has been used to overcome the limited-view problem, but it requires ground-truth. However, even full-view sampling still induces artifacts, so it cannot serve as clean training data. This creates a dilemma: perfect ground-truth cannot be acquired in practice. To reduce the dependence on the quality of the ground-truth, in this paper we propose, for the first time, a beyond-supervised reconstruction framework (BSR-Net) based on deep learning that compensates for the limited-view issue by feeding in limited-view, position-wise data. A quarter of the position-wise data is fed into the model, which outputs a group of full-view data. Specifically, our method introduces a residual structure that generates a beyond-supervised reconstruction result whose artifacts are drastically reduced compared to the ground-truth. Moreover, two novel losses are designed to restrain the artifacts. Numerical and in-vivo results demonstrate the ability of our method to reconstruct the full-view image without artifacts.
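One plausible reading of such a residual structure is that the network estimates the artifact component, and the artifact-reduced image is obtained by subtracting that estimate from the standard reconstruction. The sketch below is a hypothetical illustration of that idea only, not the paper's actual architecture; `artifact_net` is a placeholder for a trained model:

```python
import numpy as np

def remove_artifacts(recon, artifact_net):
    """Residual-structure sketch (assumed reading): `artifact_net` predicts the
    artifact component of a standard limited-view reconstruction, and the
    output image is the reconstruction minus that predicted residual."""
    return recon - artifact_net(recon)
```

Learning the residual rather than the clean image directly is a common choice when the input and target are largely identical and differ only by a structured corruption.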


Hover secures $60M for 3D imaging to assess and fix properties – TechCrunch

#artificialintelligence

The U.S. property market has proven to be more resilient than you might have assumed it would be in the midst of a coronavirus pandemic, and today a startup that's built a computer vision tool to help owners assess and fix those properties more easily is announcing a significant round of funding as it sees a surge of growth in usage. Hover -- which has built a platform that uses eight basic smartphone photos to patch together a 3D image of your home that can then be used by contractors, insurance companies and others to assess a repair, price out the job and then order the parts to do the work -- has raised $60 million in new funding. The Series D values the company at $490 million post-money, and significantly, it included a number of strategic investors. Three of the biggest insurance companies in the U.S. -- Travelers, State Farm Ventures and Nationwide -- led the round, with building materials giant Standard Industries, and other unnamed building tech firms, also participating. Past financial backers Menlo Ventures, GV (formerly Google Ventures) and Alsop Louie Partners, as well as new backer Guidewire Software, were also in this round.


Medical imaging, AI, and the cloud: what's next? - Microsoft Industry Blogs

#artificialintelligence

Today marks the start of RSNA 2020, the annual meeting of the Radiological Society of North America. I participated in my first RSNA 35 years ago and I am super excited--as I am every year--to reconnect with my radiology colleagues and friends and learn about the latest medical and scientific advances in our field. Of course, RSNA will be very different this year. Instead of traveling to Chicago to attend sessions and presentations, and wander the exhibits, I'll experience it all online. While I will miss the fun, excitement, and opportunities to connect that come with being there in person, I am amazed by what a rich and comprehensive conference the organizers of RSNA 2020 have put together using the advanced digital tools that we have at hand now.


Robot nurse with a human-like face performs coronavirus tests and reminds patients to wear a mask

Daily Mail - Science & tech

A young Egyptian engineer has invented a remote-controlled robot that can take patients' temperatures, test for COVID-19 and even reprimand those not wearing a mask. With a human-like face and robotic arms, 'Cira-03' is capable of drawing blood and performing EKGs and x-rays, then displaying the test results on a screen on its chest. Cira-03 tests patients for coronavirus by cupping their chin and then extending an arm with a swab into their mouth. While the goal was to limit the exposure of healthcare workers, its inventor, El-Komy, also wanted to put patients at ease in a harrowing situation. 'I tried to make the robot seem more human, so that the patient doesn't fear it,' El-Komy said.


Multi-task MR Imaging with Iterative Teacher Forcing and Re-weighted Deep Learning

arXiv.org Artificial Intelligence

Noise, artifacts, and loss of information caused by magnetic resonance (MR) reconstruction may compromise the final performance of downstream applications. In this paper, we develop a re-weighted multi-task deep learning method that learns prior knowledge from an existing big dataset and then utilizes it to assist simultaneous MR reconstruction and segmentation from under-sampled k-space data. The multi-task deep learning framework is equipped with two network sub-modules, which are integrated and trained by our designed iterative teacher forcing scheme (ITFS) under a dynamic re-weighted loss constraint (DRLC). The ITFS is designed to avoid error accumulation by injecting the fully-sampled data into the training process. The DRLC is proposed to dynamically balance the contributions of the reconstruction and segmentation sub-modules so as to jointly promote multi-task accuracy. The proposed method has been evaluated on two open datasets and one in-vivo in-house dataset and compared to six state-of-the-art methods. Results show that the proposed method possesses encouraging capabilities for simultaneous and accurate MR reconstruction and segmentation.
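As a rough illustration of dynamic loss re-weighting, the two task losses can be combined with weights that adapt to their current magnitudes so that neither sub-module dominates training. The exact DRLC weighting rule in the paper may well differ; this pure-Python sketch is only an assumption:

```python
def dynamic_reweighted_loss(l_rec, l_seg, eps=1e-8):
    """Hypothetical dynamic re-weighting sketch: each task is weighted by the
    other task's share of the total loss, so the currently larger loss is
    down-weighted and the smaller one up-weighted. (Illustrative only; the
    paper's DRLC may use a different rule.)"""
    total = l_rec + l_seg + eps
    w_rec = l_seg / total   # reconstruction weight shrinks when l_rec dominates
    w_seg = l_rec / total   # segmentation weight shrinks when l_seg dominates
    return w_rec * l_rec + w_seg * l_seg
```

With equal losses this reduces to a plain average of the two terms; when one loss is much larger, the combined value stays bounded by roughly twice the smaller loss, which keeps either task from swamping the shared gradient.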


A survey on artificial intelligence in chest imaging of COVID-19

#artificialintelligence

In this review article, the authors Yun Chen, Gongfa Jiang, Yue Li, Yutao Tang, Yanfang Xu, Siqi Ding, Yanqi Xin and Yao Lu, from Xiangtan University, Xiangtan, China, and Sun Yat-sen University, Guangzhou, China, consider the application of artificial intelligence imaging analysis methods to COVID-19 clinical diagnosis. The world is facing a major health threat from the outbreak of COVID-19. Given the important role of typical imaging findings in this disease, intelligent medical imaging analysis is urgently needed to make full use of chest images in COVID-19 diagnosis and management. The authors review artificial intelligence (AI) assisted chest imaging analysis methods for COVID-19 that provide accurate, fast, and safe imaging solutions. In particular, medical images from X-ray and CT scans are used to demonstrate that AI techniques based on deep learning can be applied to COVID-19 diagnosis.