DUG-RECON: A Framework for Direct Image Reconstruction Using Convolutional Generative Networks

Total Pages: 16

File Type: PDF, Size: 1020 KB

DUG-RECON: A Framework for Direct Image Reconstruction using Convolutional Generative Networks

V.S.S. Kandarpa∗, Alexandre Bousse, Didier Benoit and Dimitris Visvikis
All the authors are affiliated with LaTIM, INSERM, UMR 1101, Université de Bretagne Occidentale. ∗(email: [email protected])
arXiv:2012.02000v1 [physics.med-ph] 3 Dec 2020

Abstract—This paper explores convolutional generative networks as an alternative to iterative reconstruction algorithms in medical image reconstruction. The task of medical image reconstruction involves mapping projection-domain data collected from the detector to the image domain. This mapping is typically done through iterative reconstruction algorithms, which are time consuming and computationally expensive. Trained deep learning networks provide faster outputs, as proven in various tasks across computer vision. In this work we propose a direct reconstruction framework built exclusively with deep learning architectures. The proposed framework consists of three segments, namely denoising, reconstruction and super resolution. The denoising and the super resolution segments act as processing steps. The reconstruction segment consists of a novel double U-Net generator (DUG) which learns the sinogram-to-image transformation. The entire network was trained on positron emission tomography (PET) and computed tomography (CT) images. The reconstruction framework approximates the two-dimensional (2-D) mapping from the projection domain to the image domain. The architecture proposed in this proof-of-concept work is a novel approach to direct image reconstruction; further improvement is required to implement it in a clinical setting.

Index Terms—Medical Image Reconstruction, Deep Learning, Generative Adversarial Networks

I. INTRODUCTION

The use of deep learning in medical imaging has been on the rise over the last few years. It has been widely used in various tasks across medical imaging such as image segmentation [1]–[5], image denoising [6]–[9] and image analysis [10]–[12]. The utilization of deep learning for image reconstruction is a more challenging task. Image reconstruction using deep learning corresponds to the task of mapping raw projection data retrieved from the detector to image-domain data. One can broadly identify three different categories of approaches for the implementation of deep learning within the framework of medical image reconstruction:

(i) methods that use deep learning as an image processing step that improves the quality of the raw data and/or the reconstructed image [13], [14];
(ii) methods that embed deep-learning image processing techniques in the iterative reconstruction framework to accelerate convergence or to improve image quality [15]–[17];
(iii) direct reconstruction with deep learning alone, without any use of traditional reconstruction methods [18]–[20].

The use of deep learning for the development of either data corrections or post-reconstruction image-based approaches (i) has shown potential to improve the quality of reconstructed images. An example of data correction that improves the raw data through scatter correction is proposed in [14]: a modified U-Net is used to estimate scatter and correct the raw data in order to improve computed tomography (CT) images. Denoising of reconstructed positron emission tomography (PET) images with a deep convolutional network was done in [13]. The authors used a perceptual loss along with the mean squared error (MSE) to preserve the qualitative and quantitative accuracy of the reconstructed images. The network was initially trained on simulated data and then fine-tuned on real patient data. Despite improving the reconstructed output, the above-mentioned methods do not directly intervene in the reconstruction process. This can be done using the two distinct frameworks (ii) and (iii).

The first one involves the incorporation of a deep neural network into an unrolled iterative algorithm, where a trained neural network accelerates convergence by improving the intermediate estimates in the iterations [15]–[17]. The paper by Gong et al. used a modified U-Net to represent images within the iterative reconstruction framework for PET images. The deep learning architecture was trained on low-dose reconstructed images as input and high-dose reconstructed images as output. The work by Xie et al. further extended this work by replacing the U-Net with a generative adversarial network (GAN) for image representation within the iterative framework. Kim et al. incorporated a trained denoising convolutional neural network (DnCNN) along with a novel local linear fitting function into the iterative algorithm. The DnCNN, which is trained on data with multiple noise levels, improves the image estimate at each iteration. They used simulated and real patient data in their study.
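To make the unrolled-iterative idea of framework (ii) concrete, a minimal sketch is given below: a standard MLEM update alternated with a denoising step standing in for the trained network. The toy system matrix, image size and the Gaussian filter used in place of a learned model are illustrative assumptions, not the setup of [15]–[17].

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Toy 2-D problem: a 16x16 image and a random nonnegative "system matrix".
side = 16
m, n = side * side, 600                      # voxels, detector bins (arbitrary sizes)
A = rng.random((n, m))
x_true = rng.random(m)
y = rng.poisson(A @ x_true).astype(float)    # Poisson-distributed measurement

def denoiser(x_flat):
    """Stand-in for the trained network: a plain Gaussian filter."""
    return gaussian_filter(x_flat.reshape(side, side), sigma=1.0).ravel()

x = np.ones(m)
sens = A.T @ np.ones(n)                      # sensitivity image A^T 1
for _ in range(10):                          # one "unrolled" block per pass
    x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens   # MLEM update
    x = np.maximum(denoiser(x), 0.0)                      # learned-image step (placeholder)
```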
The second framework, referred to as direct reconstruction, is based on performing the whole reconstruction process (replacing the classical framework) with a deep learning algorithm, using raw data as input and reconstructed images as output [18]–[20]. In contrast to (i) and (ii), which have been extensively investigated, direct reconstruction (iii) using deep learning has been much less explored.

There have been to date three particular approaches that are relevant to this strategy. The deep learning architecture proposed by Zhu et al. [19], called AUTOMAP, uses fully connected (FC) layers (which encode the raw data information) followed by convolutional layers. The first three layers in this architecture are FC layers with dimensions 2n², n² and n² respectively, where n × n is the dimension of the input image. AUTOMAP requires the estimation of a huge number of parameters, which makes it computationally intensive. Although initially developed for magnetic resonance imaging (MRI), AUTOMAP has been claimed to work on other imaging modalities too. Brain images encoded with different sensor-domain sampling strategies and with varying levels of additive white Gaussian noise were reconstructed with AUTOMAP.
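As a rough, illustrative estimate of that parameter count (not a figure taken from [19]), the sketch below tallies the weights of three AUTOMAP-style FC layers with widths 2n², n² and n² for an assumed 128 × 128 image; the 2n²-wide flattened input and the omission of bias terms are simplifying assumptions.

```python
# Weight count for the first three fully connected layers of an AUTOMAP-like
# network with widths 2n^2, n^2 and n^2, assuming the flattened raw-data input
# also has 2n^2 entries (real + imaginary parts) and ignoring bias terms.
n = 128                                        # assumed image side length
widths = [2 * n**2, 2 * n**2, n**2, n**2]      # input, FC1, FC2, FC3
fc_weights = sum(a * b for a, b in zip(widths[:-1], widths[1:]))
print(f"{fc_weights:,} weights in the FC layers alone")   # roughly 1.9 billion for n = 128
```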
Within the same concept of FC-layer architectures, a three-stage image reconstruction pipeline called DirectPET has been proposed to reduce the associated computational issues [18]. The first stage down-samples the sinogram data, following which a unique Radon transform layer encodes the transformation from sinogram to image space. Finally, the estimated image is improved using a super resolution block. This work was applied to full-body PET images and remains the only approach that can reconstruct multiple slices simultaneously (up to 16 images). DeepPET is another approach, implemented on simulated images, that uses an encoder-decoder architecture based on the neural network proposed by the Visual Geometry Group [20]. Using realistic simulated data, the authors demonstrated a network that could reconstruct images faster, and with an image quality (in terms of root mean squared error) comparable to that of conventional iterative reconstruction techniques.

In our work we explore the use of U-Net based deep learning architectures [1] to perform a direct reconstruction from the sinogram to the image domain using real patient datasets. Our aim is to reduce the number of trainable parameters while exploring a novel strategy for direct image reconstruction using generative networks. More specifically, our approach consists of a three-stage deep-learning pipeline comprising denoising, image reconstruction and super resolution segments. Our experiments included training the deep learning pipeline on PET and CT sinogram-image pairs. A single pass through the trained network transforms the noisy sinograms into reconstructed images. The reconstruction of both PET and CT datasets was considered and is presented in the following sections. At this stage the work presented is a proof of concept and needs further improvement before being applied in a clinical setting.

II. MATERIALS AND METHODS

A. Image Reconstruction Model

In medical imaging, image reconstruction corresponds to the task of reconstructing an image $x \in \mathbb{R}^m$ from a scanner measurement $y \in \mathbb{R}^n$, where $m$ is the number of voxels defining the image and $n$ is the number of detectors in the scanner. In CT, the image $x = \mu$ corresponds to the X-ray attenuation, measured by the proportion of X-rays scattered or absorbed as they pass through the object. In PET, $x = \lambda$ is the distribution of a radiotracer delivered to the patient by injection, and is measured through the detection of pairs of $\gamma$-rays emitted in opposite directions (indirectly from the positron-emitting radiotracer).

The measurement $y$ is a random vector modeling the number of detections (photon counting) at each of the $n$ detector bins, and follows a Poisson distribution with independent entries:

$y \sim \mathrm{Poisson}(\bar{y}(x))$   (1)

where $\bar{y}(x) \in \mathbb{R}^n$ is the expected number of counts (noiseless), which is a function of the image $x$. In a simplified setting, the expected number of counts in CT is

$\bar{y}(\mu) = \exp(-L\mu)$   (2)

where $L \in \mathbb{R}^{n \times m}$ is a system matrix such that each entry $[L]_{i,j}$ represents the contribution of the $j$-th image voxel to the $i$-th detector. In PET, the expected number of counts is (also in a simplified setting)

$\bar{y}(\lambda) = P\lambda$   (3)

where $P \in \mathbb{R}^{n \times m}$ is a system matrix such that each entry $[P]_{i,j}$ represents the probability that a photon pair emitted from voxel $j$ is detected at the $i$-th detector bin. For simplification we assume $P = L$, which is a reasonable assumption in a non-time-of-flight setting. Image reconstruction is achieved by finding a suitable image $\hat{x} = \hat{\mu}$ or $\hat{\lambda}$ that approximately solves

$y = \bar{y}(x)$.   (4)

Filtered-backprojection (FBP) techniques (see [21] for a review) can efficiently solve (4), but they are vulnerable to noise as the system matrix $P$ is ill-conditioned. Since the 80's, model-based iterative reconstruction (MBIR) techniques [22], [23] became the standard approach. They consist in iteratively approximating a solution $\hat{x}$ such that $\bar{y}(\hat{x})$ maximizes the likelihood of the measurement $y$. As they model the stochasticity of the system, they are more robust to noise compared with FBP, and can be complemented with a penalty term for additional control over the noise [24].

[Figure: diagram with labels "Projection Domain", "Data Corrections with Deep Learning", "Iterative", "Unrolled Iterative Deep Learning"]
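The forward models (1)–(3) can be mimicked numerically. The sketch below draws Poisson counts for a toy CT and PET acquisition with a random system matrix; the matrix, the image values and the blank-scan scaling I0 added to the CT model are illustrative assumptions rather than the simulation setup used in this work.

```python
import numpy as np

rng = np.random.default_rng(1)

m, n = 32 * 32, 1800                 # voxels (toy 32x32 image) and detector bins
L = rng.random((n, m)) * 1e-3        # toy system matrix; we also take P = L

# CT, eq. (2): x = mu (attenuation). A blank-scan count level I0 is added here so
# the Beer-Lambert expectation yields non-trivial count statistics.
mu = rng.random(m) * 0.02
I0 = 1.0e4
y_ct = rng.poisson(I0 * np.exp(-L @ mu))

# PET, eq. (3): x = lambda (activity); expected counts are linear in the image.
lam = rng.random(m) * 5.0
y_pet = rng.poisson(L @ lam)         # eq. (1): y ~ Poisson(ybar(x))
```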
Recommended publications
  • Optimizing Oncologic FDG-PET/CT Scans to Decrease Radiation Exposure
    Optimizing Oncologic FDG-PET/CT Scans to Decrease Radiation Exposure
    Esma A. Akin, MD, George Washington University Medical Center, Washington, DC; Drew A. Torigian, MD, MA, University of Pennsylvania Medical Center, Philadelphia, PA; Patrick M. Colletti, MD, University of Southern California Medical Center, Los Angeles, CA; Don C. Yoo, MD, The Warren Alpert Medical School of Brown University, Providence, RI (Updated: April 2017)
    The development of dual-modality positron emission tomography/computed tomography (PET/CT) systems with near-simultaneous acquisition capability has addressed the limited spatial resolution of PET and has improved the accurate anatomical localization of sites of radiotracer uptake detected on PET. PET/CT also allows for CT-based attenuation correction of the emission scan without the need for an external positron-emitting source for a transmission scan. This not only addresses the limitations of the use of noisy transmission data, thereby improving the quality of the attenuation-corrected emission scan, but also significantly decreases scanning time. However, this comes at the expense of increased radiation dose to the patient compared to either PET or CT alone.
    Optimizing PET protocols: The radiation dose results both from the injected radiotracer 18F-2-fluoro-2-deoxy-D-glucose (FDG), which contributes ~7 mSv from an injected dose of 10 mCi (given the effective dose of 0.019 mSv/MBq (0.070 rem/mCi) for FDG), and from the external dose of the CT component, which can run as high as 25 mSv [1]. This brings the total dose of FDG-PET/CT to a range of ~8 mSv up to 30 mSv, depending on the type of study performed as well as the anatomical region and number of body parts imaged, although several recent studies have reported a typical average dose of ~14 mSv for skull base-to-thigh FDG-PET/CT examinations [2-8].
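    As a back-of-the-envelope check of the figures quoted above (0.019 mSv/MBq for FDG and a 10 mCi injection), with the CT component chosen here only as an assumed example within the quoted range, one might compute:

```python
# Effective dose estimate for an FDG-PET/CT study using the coefficients quoted in
# the text; the CT dose value below is an assumed example, not a quoted figure.
MBQ_PER_MCI = 37.0                        # 1 mCi = 37 MBq
fdg_coeff_msv_per_mbq = 0.019             # effective dose coefficient for FDG

injected_mci = 10.0
pet_dose_msv = injected_mci * MBQ_PER_MCI * fdg_coeff_msv_per_mbq   # ~7.0 mSv
ct_dose_msv = 7.0                         # assumed CT component (text quotes up to ~25 mSv)
total_msv = pet_dose_msv + ct_dose_msv
print(f"PET ~{pet_dose_msv:.1f} mSv, total ~{total_msv:.1f} mSv")
```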
  • Highly Scalable Image Reconstruction Using Deep Neural Networks with Bandpass Filtering
    Highly Scalable Image Reconstruction using Deep Neural Networks with Bandpass Filtering
    Joseph Y. Cheng, Feiyu Chen, Marcus T. Alley, John M. Pauly, Member, IEEE, and Shreyas S. Vasanawala
    Abstract—To increase the flexibility and scalability of deep neural networks for image reconstruction, a framework is proposed based on bandpass filtering. For many applications, sensing measurements are performed indirectly. For example, in magnetic resonance imaging, data are sampled in the frequency domain. The introduction of bandpass filtering enables leveraging known imaging physics while ensuring that the final reconstruction is consistent with actual measurements to maintain reconstruction accuracy. We demonstrate this flexible architecture for reconstructing subsampled datasets of MRI scans. The resulting high subsampling rates increase the speed of MRI acquisitions and enable the visualization of rapid hemodynamics.
    Index Terms—Magnetic resonance imaging (MRI), compressive sensing, image reconstruction - iterative methods, machine learning, image enhancement/restoration (noise and artifact reduction).
    … high-density receiver coil arrays for "parallel imaging" [2]–[6]. Also, image sparsity can be leveraged to constrain the reconstruction problem for compressed sensing [7]–[9]. With the use of nonlinear sparsity priors, these reconstructions are performed using iterative solvers [10]–[12]. Though effective, these algorithms are time consuming and are sensitive to tuning parameters which limit their clinical utility. We propose to use CNNs for image reconstruction from subsampled acquisitions in the spatial-frequency domain, and transform this approach to become more tractable through the use of bandpass filtering. The goal of this work is to enable an additional degree of freedom in optimizing the computation speed of reconstruction algorithms without compromising reconstruction accuracy.
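    The bandpass idea can be illustrated on toy data: the sketch below splits a 2-D k-space into rectangular bands, reconstructs each band by inverse FFT and verifies that the band images sum back to the original image. The image content and band layout are arbitrary assumptions, and no neural network is involved.

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.random((64, 64))                        # stand-in image
kspace = np.fft.fftshift(np.fft.fft2(img))        # fully sampled k-space

# Partition k-space into 4x4 rectangular bands and reconstruct each band alone.
bands = []
for i in range(4):
    for j in range(4):
        mask = np.zeros_like(kspace)
        mask[i * 16:(i + 1) * 16, j * 16:(j + 1) * 16] = 1.0
        bands.append(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))

recombined = np.real(sum(bands))
print(np.allclose(recombined, img))               # True: the bands tile k-space exactly
```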
  • Medical Image Reconstruction: A Brief Overview of Past Milestones and Future Directions (arXiv:1707.05927v1 [physics.med-ph], 19 Jul 2017)
    Medical image reconstruction: a brief overview of past milestones and future directions
    Jeffrey A. Fessler, University of Michigan, July 20, 2017
    At ICASSP 2017, I participated in a panel on "Open Problems in Signal Processing" led by Yonina Eldar and Alfred Hero. Afterwards the editors of the IEEE Signal Processing Magazine asked us to write a "perspectives" column on this topic. I prepared the text below but later found out that equations or citations are not the norm in such columns. Because I had already gone to the trouble to draft this version with citations, I decided to post it on arXiv in case it is useful for others.
    Medical image reconstruction is the process of forming interpretable images from the raw data recorded by an imaging system. Image reconstruction is an important example of an inverse problem where one wants to determine the input to a system given the system output. The following diagram illustrates the data flow in a medical imaging system: Object x → System (sensor) → Data y → Image reconstruction (estimator) → x̂ → Image processing → Images.
    Until recently, there have been two primary methods for image reconstruction: analytical and iterative. Analytical methods for image reconstruction use idealized mathematical models for the imaging system. Classical examples are the filtered back-projection method for tomography [1–3] and the inverse Fourier transform used in magnetic resonance imaging (MRI) [4]. Typically these methods consider only the geometry and sampling properties of the imaging system, and ignore the details of the system physics and measurement noise. These reconstruction methods have been used extensively because they require modest computation.
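    As a minimal example of the analytical route, the following sketch forward-projects a toy phantom and reconstructs it with filtered back-projection using scikit-image; the phantom, the angles and the library choice are assumptions made only for illustration.

```python
import numpy as np
from skimage.transform import radon, iradon

# Toy phantom: an off-center disc inside a 128x128 image.
size = 128
yy, xx = np.mgrid[:size, :size]
phantom = ((xx - 70) ** 2 + (yy - 60) ** 2 < 20 ** 2).astype(float)

theta = np.linspace(0.0, 180.0, 180, endpoint=False)   # projection angles in degrees
sinogram = radon(phantom, theta=theta)                  # forward projection
fbp = iradon(sinogram, theta=theta)                     # filtered back-projection
print(np.abs(fbp - phantom).mean())                     # small mean reconstruction error
```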
  • Positron Emission Tomography with Three-Dimensional Reconstruction
    SE9700024
    Positron Emission Tomography with Three-dimensional Reconstruction
    Kjell Erlandsson, Radiation Physics Department, The Jubileum Institute, Lund University, Sweden, 1996
    Doctoral Dissertation 1996. Radiation Physics Department, The Jubileum Institute, Lund University. Printed in Sweden by KF-Sigma, Lund, 1996. ISBN 91-628-2189-X. Copyright © Kjell Erlandsson (pages 1-67). All rights reserved.
    "I'm guided by the beauty of our weapons." Leonard Cohen
    LIST OF ORIGINAL PAPERS
    This thesis is based on the following papers, which will be referred to in the text by their Roman numerals:
    I. Sandell A, Erlandsson K, Ohlsson T, Strand S-E, "A PET Scanner Based on Rotating Scintillation Cameras: Development and Experience", submitted to J Nucl Med, 1996.
    II. Erlandsson K, Strand S-E, "A New Approach to Three-Dimensional Image Reconstruction in PET", IEEE Trans Nucl Sci, 39:1438-1443, 1992.
    III. Erlandsson K, Esser PD, Strand S-E, van Heertum RL, "3D Reconstruction for a Multi-Ring PET Scanner by Single-Slice Rebinning and Axial Deconvolution", Phys Med Biol, 39:619-629, 1994.
    IV. Erlandsson K, Strand S-E, "3D Reconstruction for 2D PET", submitted to Nucl Instr Meth A, 1996.
    V. Sandell A, Erlandsson K, Bertenstam L, "A Scanner for Positron Emission Mammography", submitted to Phys Med Biol, 1996.
    ABSTRACT
    In this thesis, the developments of two different low-cost scanners for positron emission tomography (PET), based on 3D acquisition, are presented. 3D reconstruction methods for these systems were also developed. The first scanner consists of two rotating scintillation cameras, and produces quantitative images, which have shown to be clinically useful.
  • Comparison Among Tomographic Reconstruction Algorithms with a Limited Data
    2011 International Nuclear Atlantic Conference - INAC 2011 Belo Horizonte,MG, Brazil, October 24-28, 2011 Associação Brasileira de Energia Nuclear - ABEN ISBN: 978-85-99141-04-5 COMPARISON AMONG TOMOGRAPHIC RECONSTRUCTION ALGORITHMS WITH A LIMITED DATA Eric F. Oliveira1, Sílvio B. Melo2, Carlos C. Dantas3, Daniel A. A. Vasconcelos4 and Luís F. Cadiz5 1Department of Nuclear Energy -DEN Federal University of Pernambuco – UFPE, Av. Prof. Luiz Freire 1000, CDU 50740-540 Recife, PE, Brazil [email protected] 2Informatic Center - CIN Federal University of Pernambuco – UFPE Av. Prof. Luiz Freire 1000 CDU 50740-540 Recife, PE, Brazil [email protected] 3 Department of Nuclear Energy -DEN Federal University of Pernambuco – UFPE, Av. Prof. Luiz Freire 1000, CDU 50740-540 Recife, PE, Brazil [email protected] 4Department of Nuclear Energy -DEN Federal University of Pernambuco – UFPE, Av. Prof. Luiz Freire 1000, CDU 50740-540 Recife, PE, Brazil [email protected] 5Department of Nuclear Energy -DEN Federal University of Pernambuco – UFPE, Av. Prof. Luiz Freire 1000, CDU 50740-540 Recife, PE, Brazil [email protected] ABSTRACT Nowadays there is a continuing interest in applying computed tomography (CT) techniques in non-destructive testing and inspection of many industrial products. These applications of CT usually require a differentiated analysis when there are strong limitations in acquiring a sufficiently large amount of projection data. The use of a low number of tomographic data normally degrades the quality of the reconstructed image, highlighting the formation of artifacts and noise. This work investigates the reconstruction methods most commonly used (FBP, ART, SIRT, MART, SMART) and shows the performance of each one in this limited scenario.
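    To make the iterative family concrete, here is a minimal ART (Kaczmarz) sketch in a limited-data setting with fewer rays than unknowns; the random system matrix, relaxation factor and problem sizes are illustrative assumptions and not the configuration studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Severely limited data: fewer measurements than unknowns.
m, n = 256, 120                      # unknowns (16x16 image), projection rays
A = rng.random((n, m))
x_true = rng.random(m)
y = A @ x_true                       # noiseless projections

x = np.zeros(m)
relax = 0.5                          # relaxation factor
for _ in range(50):                  # sweeps over all rays
    for i in range(n):               # one ART (Kaczmarz) update per ray
        a = A[i]
        x += relax * (y[i] - a @ x) / (a @ a) * a
print(np.linalg.norm(A @ x - y))     # residual shrinks even though x is not unique
```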
  • Deep Learning CT Image Reconstruction in Clinical Practice (CT-Bildrekonstruktion mit Deep Learning in der klinischen Praxis)
    Published online: 2020-12-10. Review.
    Deep Learning CT Image Reconstruction in Clinical Practice (CT-Bildrekonstruktion mit Deep Learning in der klinischen Praxis)
    Authors: Clemens Arndt, Felix Güttler, Andreas Heinrich, Florian Bürckenmeyer, Ioannis Diamantis, Ulf Teichgräber
    Affiliation: Department of Radiology, Jena University Hospital, Jena, Germany
    Key words: computed tomography, image reconstruction, deep neural network, deep learning, artificial intelligence, image denoising
    Received 11.05.2020; accepted 20.08.2020; published online 10.12.2020
    Bibliography: Fortschr Röntgenstr 2021; 193: 252–261. DOI 10.1055/a-1248-2556. ISSN 1438-9029. © 2020 Thieme. Georg Thieme Verlag KG, Rüdigerstraße 14, 70469 Stuttgart, Germany.
    Correspondence: Dr. Clemens Arndt, Institut für Diagnostische und Interventionelle Radiologie, Universitätsklinikum Jena, Am Klinikum 1, 07751 Jena
    Results and conclusion: As a machine learning method, DL generally uses a trained artificial neural network to solve problems. DL reconstruction algorithms from two manufacturers are currently available for clinical routine. Studies so far have shown a reduction in image noise and an improvement in overall quality. One study showed, with higher quality of the DL-reconstructed images, a diagnostic accuracy comparable to IR for the detection of coronary stenoses. Further studies are necessary and should above all aim at clinical superiority, while covering a broad range of pathologies.
    Key points:
    ▪ In addition to the currently widespread iterative reconstruction, CT images in clinical routine can now also be reconstructed using deep learning (DL) as a method of artificial intelligence.
    ▪ DL reconstructions can reduce image noise, improve image quality and possibly reduce radiation dose.
    ▪ Diagnostic superiority in the clinical context should be demonstrated in upcoming studies.
  • Image Reconstruction for PET/CT Scanners: Past Achievements and Future Challenges
    REVIEW
    Image reconstruction for PET/CT scanners: past achievements and future challenges
    Shan Tong¹, Adam M Alessio¹ & Paul E Kinahan†¹
    ¹Department of Radiology, University of Washington, Seattle WA, USA. †Author for correspondence: Tel.: +1 206 543 0236; Fax: +1 206 543 8356; [email protected]
    PET is a medical imaging modality with proven clinical value for disease diagnosis and treatment monitoring. The integration of PET and CT on modern scanners provides a synergy of the two imaging modalities. Through different mathematical algorithms, PET data can be reconstructed into the spatial distribution of the injected radiotracer. With dynamic imaging, kinetic parameters of specific biological processes can also be determined. Numerous efforts have been devoted to the development of PET image reconstruction methods over the last four decades, encompassing analytic and iterative reconstruction methods. This article provides an overview of the commonly used methods. Current challenges in PET image reconstruction include more accurate quantitation, TOF imaging, system modeling, motion correction and dynamic reconstruction. Advances in these aspects could enhance the use of PET/CT imaging in patient care and in clinical research studies of pathophysiology and therapeutic interventions.
    Keywords: analytic reconstruction; fully 3D imaging; iterative reconstruction; maximum-likelihood expectation-maximization method; PET
    PET is a medical imaging modality with proven clinical value for the detection, staging and monitoring of a wide variety of diseases. This technique requires the injection of a radiotracer, which is then monitored externally to generate PET data [1,2].
    PET tomographic data. Problem statement. Data acquisition & representation. PET imaging can measure the spatial distribu…
  • New Algorithms in Computational Microscopy (UCLA Electronic Theses and Dissertations)
    UCLA UCLA Electronic Theses and Dissertations Title New Algorithms in Computational Microscopy Permalink https://escholarship.org/uc/item/1r48b4mg Author Pham, Minh Publication Date 2020 Peer reviewed|Thesis/dissertation eScholarship.org Powered by the California Digital Library University of California UNIVERSITY OF CALIFORNIA Los Angeles New Algorithms in Computational Microscopy A dissertation submitted in partial satisfaction of the requirements for the degree Doctor of Philosophy in Mathematics by Minh Pham 2020 © Copyright by Minh Pham 2020 ABSTRACT OF THE DISSERTATION New Algorithms in Computational Microscopy by Minh Pham Doctor of Philosophy in Mathematics University of California, Los Angeles, 2020 Professor Stanley Osher, Chair Microscopy plays an important role in providing tools to microscopically observe objects and their surrounding areas with much higher resolution ranging from the scale between molecular machineries (angstrom) and individual cells (micrometer). Under microscopes, illumination, such as visible light and electron-magnetic radiation/electron beam, interacts with samples, then they are scattered to a plane and are recorded. Computational mi- croscopy corresponds to image reconstruction from these measurements as well as improving quality of the images. Along with the evolution of microscopy, new studies are discovered and algorithms need development not only to provide high-resolution imaging but also to decipher new and advanced researches. In this dissertation, we focus on algorithm development for inverse
  • Iterative PET Image Reconstruction Using Convolutional Neural Network Representation
    Iterative PET Image Reconstruction Using Convolutional Neural Network Representation
    Kuang Gong, Jiahui Guan, Kyungsang Kim, Xuezhu Zhang, Georges El Fakhri, Jinyi Qi* and Quanzheng Li*
    Abstract—PET image reconstruction is challenging due to the ill-posedness of the inverse problem and the limited number of detected photons. Recently deep neural networks have been widely and successfully used in computer vision tasks and attracted growing interest in medical imaging. In this work, we trained a deep residual convolutional neural network to improve PET image quality by using the existing inter-patient information. An innovative feature of the proposed method is that we embed the neural network in the iterative reconstruction framework for image representation, rather than using it as a post-processing tool. We formulate the objective function as a constrained optimization problem and solve it using the alternating direction method of multipliers (ADMM) algorithm. Both simulation data and hybrid real data are used to evaluate the proposed method.
    … temporal information. Denoising methods, such as the HYPR processing [9], non-local mean denoising [10], [11] and guided image filtering [12] have been developed and show better performance in bias-variance tradeoff or partial volume correction than the conventional Gaussian filtering. In regularized image reconstruction, entropy or mutual information based methods [13]–[15], segmentation based methods [16], [17], and gradient based methods [18], [19] have been developed by penalizing the difference between the reconstructed image and the prior information in specific domains. Bowsher's method [20] adjusts the weight of the penalty based on similarity metrics calculated from prior images.
  • A New Era of Image Reconstruction: TrueFidelity™
    A new era of image reconstruction: TrueFidelity™
    Technical white paper on deep learning image reconstruction
    Jiang Hsieh, Eugene Liu, Brian Nett, Jie Tang, Jean-Baptiste Thibault, Sonia Sahney
    Contents: 02 Introduction; 02 Challenges of Filtered Back-Projection and Iterative Reconstruction; 03 The New Era of Deep Learning-Based Image Reconstruction; 07 Early Evidence: Phantom Studies; 10 Early Evidence: Clinical Cases; 13 Conclusion; 14 Glossary and References
    gehealthcare.com
    Introduction
    GE Healthcare's deep learning image reconstruction (DLIR) is the first Food and Drug Administration (FDA) cleared technology to utilize a deep neural network-based recon engine to generate high quality TrueFidelity computed tomography (CT) images. DLIR opens a new era for CT-image reconstruction by addressing challenges of filtered back-projection (FBP) and iterative reconstruction (IR). DLIR features a deep neural network (DNN), which was trained with high quality FBP data sets to learn how to differentiate noise from signals, … outstanding image quality and preferred noise texture, have the potential to improve reading confidence in a wide range of clinical applications, including imaging the head, whole body, cardiovascular, and for patients of all ages. DLIR is designed with fast reconstruction speed for routine CT use, even in acute care settings. This white paper will: first, take a look at the overall evolution of CT image reconstruction; second, explain the design, supervised training, and deployment of DLIR engine; and third, reveal early phantom and
  • Iterative PET Image Reconstruction Using CNN Representation
    DOI 10.1109/TMI.2018.2869871, IEEE Transactions on Medical Imaging
    Iterative PET Image Reconstruction Using Convolutional Neural Network Representation
    Kuang Gong, Jiahui Guan, Kyungsang Kim, Xuezhu Zhang, Jaewon Yang, Youngho Seo, Georges El Fakhri, Jinyi Qi* and Quanzheng Li*
    Abstract—PET image reconstruction is challenging due to the ill-posedness of the inverse problem and the limited number of detected photons. Recently deep neural networks have been widely and successfully used in computer vision tasks and attracted growing interest in medical imaging. In this work, we trained a deep residual convolutional neural network to improve PET image quality by using the existing inter-patient information. An innovative feature of the proposed method is that we embed the neural network in the iterative reconstruction framework for image representation, rather than using it as a post-processing tool. We formulate the objective function as a constrained optimization problem and solve it using the alternating direction method of multipliers (ADMM) algorithm.
    … can be used to take various degradation factors into consideration [8]. In addition, various post processing approaches and iterative reconstruction methods have been developed by making use of local patch statistics, prior anatomical or temporal information. Denoising methods, such as the HYPR processing [9], non-local mean denoising [10], [11] and guided image filtering [12] have been developed and show better performance in bias-variance tradeoff or partial volume correction than the conventional Gaussian filtering. In regularized image reconstruction, entropy or mutual information based methods …
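    The constrained-optimization-with-ADMM structure described in the abstract can be sketched generically as follows, with a Gaussian filter standing in for the network-based image representation; the quadratic data term, penalty weight and toy sizes are assumptions, so this shows only the shape of such an algorithm, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(4)
side = 16
m, n = side * side, 400
A = rng.random((n, m))
x_true = rng.random(m)
y = A @ x_true + 0.05 * rng.standard_normal(n)

def represent(v):
    """Stand-in for the network-based image representation step."""
    return gaussian_filter(v.reshape(side, side), sigma=1.0).ravel()

rho = 1.0
x = np.zeros(m)
z = np.zeros(m)
u = np.zeros(m)
AtA, Aty = A.T @ A, A.T @ y
for _ in range(30):
    # x-update: quadratic data fit with an augmented Lagrangian term
    x = np.linalg.solve(AtA + rho * np.eye(m), Aty + rho * (z - u))
    # z-update: map the current estimate through the "representation" operator
    z = represent(x + u)
    # dual update
    u += x - z
```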
  • Distributed Computation of Linear Inverse Problems with Application to Computed Tomography Reconstruction
    Distributed Computation of Linear Inverse Problems with Application to Computed Tomography Reconstruction
    Yushan Gao, Thomas Blumensath
    Abstract—The inversion of linear systems is a fundamental step in many inverse problems. Computational challenges exist when trying to invert large linear systems, where limited computing resources mean that only part of the system can be kept in computer memory at any one time. We are here motivated by tomographic inversion problems that often lead to linear inverse problems. In state of the art x-ray systems, even a standard scan can produce 4 million individual measurements and the reconstruction of x-ray attenuation profiles typically requires the estimation of a million attenuation coefficients. To deal with the large data sets encountered in real applications and to utilise modern graphics processing unit (GPU) based computing architectures, combinations of iterative reconstruction algorithms and parallel computing schemes are increasingly applied. Although both row and column action methods have …
    … where y, A, x and e are projection data, system matrix, reconstructed image vector and measurement noise respectively. However, the relatively lower computational efficiency limits their use, especially in many x-ray tomography problems, where y and x can have millions of entries each [9] and where A, even though it is a sparse matrix, can have billions of entries. For realistic dataset sizes, these methods thus require significant computational resources, especially as the matrix A is seldom kept in memory but is instead re-computed on the fly, which can be done relatively efficiently using the latest Graphical Processor Unit (GPU) based parallel computing platforms.
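    The row-block idea can be sketched as follows: split A into blocks of rows, compute each block's gradient contribution independently (as separate workers or GPUs would) and combine them in a Landweber-style update. The dense random matrix, block count and step size are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 512, 2048                      # unknowns, measurements (toy sizes)
A = rng.standard_normal((n, m)) / np.sqrt(n)
x_true = rng.random(m)
y = A @ x_true + 0.01 * rng.standard_normal(n)

row_blocks = np.array_split(np.arange(n), 8)    # 8 "workers", each holding a block of rows
step = 1.0 / np.linalg.norm(A, 2) ** 2          # Landweber step from the spectral norm

x = np.zeros(m)
for _ in range(200):
    # Each block's term A_b^T (y_b - A_b x) could be computed on a separate device;
    # summing the per-block terms reproduces the full gradient step.
    grad = sum(A[b].T @ (y[b] - A[b] @ x) for b in row_blocks)
    x += step * grad
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))   # relative reconstruction error
```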