Collaborating Authors: Denker


Score-Based Generative Models for PET Image Reconstruction

Singh, Imraj RD, Denker, Alexander, Barbano, Riccardo, Kereta, Željko, Jin, Bangti, Thielemans, Kris, Maass, Peter, Arridge, Simon

arXiv.org Artificial Intelligence

Score-based generative models have demonstrated highly promising results for medical image reconstruction tasks in magnetic resonance imaging or computed tomography. However, their application to Positron Emission Tomography (PET) is still largely unexplored. PET image reconstruction involves a variety of challenges, including Poisson noise with high variance and a wide dynamic range. To address these challenges, we propose several PET-specific adaptations of score-based generative models. The proposed framework is developed for both 2D and 3D PET. In addition, we provide an extension to guided reconstruction using magnetic resonance images. We validate the approach through extensive 2D and 3D in-silico experiments with a model trained on patient-realistic data without lesions, and evaluate on data without lesions as well as out-of-distribution data with lesions. This demonstrates the proposed method's robustness and significant potential for improved PET reconstruction.
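The Poisson noise model the abstract refers to can be made concrete with a toy sketch of the PET data-fidelity gradient (this is a generic Poisson log-likelihood, not the paper's score-based method; the system matrix, image size, and background term below are invented for illustration):

```python
import numpy as np

# Toy PET forward model: y ~ Poisson(A @ x + b), with system matrix A,
# activity image x, and additive background b (scatter/randoms).
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(32, 16))   # hypothetical system matrix
x_true = rng.uniform(0.5, 2.0, size=16)    # hypothetical activity image
b = 0.1 * np.ones(32)                      # flat background term

y_bar = A @ x_true + b                     # expected counts
y = y_bar.copy()                           # noise-free data for the sanity check

def poisson_loglik_grad(x, y, A, b):
    """Gradient of sum_i [y_i * log((Ax+b)_i) - (Ax+b)_i] w.r.t. x."""
    ybar = A @ x + b
    return A.T @ (y / ybar - 1.0)

g = poisson_loglik_grad(x_true, y, A, b)
print(np.max(np.abs(g)))  # 0.0: the gradient vanishes at the true image
```

In guided-reconstruction schemes, a gradient of this form is typically combined with a learned prior (here, a score model) during sampling.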


HITECH CHESS REPORT

AI Magazine

In response to this need, Shelby Lyman, the host of past Public Broadcasting Service (PBS) series on world chess championship matches, organized the AGS Challenge Match at the New School for Social Research in New York City. Funding for this event was provided by AGS Computers, Inc., a New Jersey-based software firm. The match was held September 22-25, with one game played each day, and was widely covered by the international press. Participating were Hitech, at 2407 then the highest-rated computer in the world, and International Grandmaster Arnold S. Denker, a former U.S. champion. Denker's rating of 2410 was comparable to that of Hitech.


Desktop Assistant Guesses Your Needs

AITopics Original Links

In a small, dark, room off a long hallway within a sprawling complex of buildings in Silicon Valley, an array of massive flat-panel displays and video cameras track Grit Denker's every move. Denker, a senior computer scientist at the nonprofit R&D institute SRI, is showing off Bright, an intelligent assistant that could someday know what information you need before you even ask. Initially, Bright is meant to cut down on the cognitive overload faced by workers in high-stress, data-intensive jobs like emergency response and network security. Bright may, for instance, aid network administrators in trying to stop the spread of a fast-moving virus by quickly providing crucial infection information, or help 911 operators send the right kind of assistance to the scene of an accident. But like many other technologies developed at SRI, such as the digital personal assistant Siri (now owned by Apple), Bright could eventually trickle down to laptops and smartphones.


Handwritten Digit Recognition with a Back-Propagation Network

LeCun, Yann, Boser, Bernhard E., Denker, John S., Henderson, Donnie, Howard, R. E., Hubbard, Wayne E., Jackel, Lawrence D.

Neural Information Processing Systems

We present an application of back-propagation networks to handwritten digit recognition. Minimal preprocessing of the data was required, but the architecture of the network was highly constrained and specifically designed for the task. The input of the network consists of normalized images of isolated digits. The method has a 1% error rate and about a 9% reject rate on zip code digits provided by the U.S. Postal Service.

1 INTRODUCTION The main point of this paper is to show that large back-propagation (BP) networks can be applied to real image-recognition problems without a large, complex preprocessing stage requiring detailed engineering. Unlike most previous work on the subject (Denker et al., 1989), the learning network is directly fed with images, rather than feature vectors, thus demonstrating the ability of BP networks to deal with large amounts of low-level information. Previous work performed on simple digit images (Le Cun, 1989) showed that the architecture of the network strongly influences the network's generalization ability. Good generalization can only be obtained by designing a network architecture that contains a certain amount of a priori knowledge about the problem. The basic design principle is to minimize the number of free parameters that must be determined by the learning algorithm, without overly reducing the computational power of the network.
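The design principle in the final sentence, minimizing free parameters via constrained, weight-sharing architectures, can be illustrated with a back-of-the-envelope count (the layer sizes below are illustrative, not the paper's actual architecture):

```python
# Compare free parameters of a shared-kernel convolutional layer against a
# fully-connected layer producing an output of the same size.
in_h = in_w = 16          # normalized input image (illustrative size)
k = 5                     # shared convolution kernel, k x k
n_maps = 4                # hypothetical number of feature maps
out_h = out_w = in_h - k + 1   # valid convolution output size

# Weight sharing: each map reuses one k x k kernel (plus a bias) everywhere.
conv_params = n_maps * (k * k + 1)

# Dense alternative: every output unit gets its own weight per input pixel.
fc_params = (in_h * in_w) * (n_maps * out_h * out_w)

print(conv_params, fc_params)  # 104 vs. 147456
```

The three-orders-of-magnitude gap is exactly the kind of parameter reduction that constrained architectures buy without discarding useful computational structure.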


Optimal Brain Damage

LeCun, Yann, Denker, John S., Solla, Sara A.

Neural Information Processing Systems

We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and/or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application.

1 INTRODUCTION Most successful applications of neural network learning to real-world problems have been achieved using highly structured networks of rather large size [for example (Waibel, 1989; Le Cun et al., 1990a)]. As applications become more complex, the networks will presumably become even larger and more structured.
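The second-derivative tradeoff described above can be sketched in a few lines. Under the paper's diagonal-Hessian approximation, deleting weight w_k changes the loss by roughly the saliency s_k = h_kk * w_k^2 / 2; the weights and Hessian values below are made up for illustration:

```python
import numpy as np

# Optimal Brain Damage saliency sketch (diagonal-Hessian assumption).
w = np.array([0.8, -0.05, 1.2, 0.01, -0.6])   # trained weights (made up)
h = np.array([2.0,  3.0,  0.5, 4.0,   1.0])   # diagonal Hessian estimates (made up)

saliency = 0.5 * h * w**2         # estimated loss increase per deleted weight
order = np.argsort(saliency)      # least important weights first
pruned = w.copy()
pruned[order[:2]] = 0.0           # remove the two lowest-saliency weights

print(order[:2], pruned)  # [3 1] [ 0.8  0.   1.2  0.  -0.6]
```

Note that saliency, not weight magnitude, decides what gets pruned: a small weight sitting on a steep (high-curvature) direction can matter more than a larger one on a flat direction.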


Neural Network Recognizer for Hand-Written Zip Code Digits

Denker, John S., Gardner, W. R., Graf, Hans Peter, Henderson, Donnie, Howard, R. E., Hubbard, W., Jackel, L. D., Baird, Henry S., Guyon, Isabelle

Neural Information Processing Systems

This paper describes the construction of a system that recognizes hand-printed digits, using a combination of classical techniques and neural-net methods. The system has been trained and tested on real-world data, derived from zip codes seen on actual U.S. Mail. The system rejects a small percentage of the examples as unclassifiable, and achieves a very low error rate on the remaining examples. The system compares favorably with other state-of-the-art recognizers. While some of the methods are specific to this task, it is hoped that many of the techniques will be applicable to a wide range of recognition tasks.