ITERATIVE RECONSTRUCTION METHODS FOR NON-CARTESIAN MRI

Jeffrey A. Fessler                          Douglas C. Noll*
EECS Dept., The University of Michigan      BME Dept., The University of Michigan
[email protected]                            [email protected]

*Supported in part by NIH grants EB002683 and DA15410.

ABSTRACT

For magnetic resonance imaging (MRI) with Cartesian k-space sampling, a simple inverse FFT usually suffices for image reconstruction. More sophisticated image reconstruction methods are needed for non-Cartesian k-space acquisitions. Regularized least-squares methods for image reconstruction involve minimizing a cost function consisting of a least-squares data-fit term plus a regularizing roughness penalty that controls noise in the image estimate. Iterative algorithms are usually used to minimize such cost functions. This paper summarizes the formulation of iterative methods for image reconstruction from non-Cartesian k-space samples, and describes some of the benefits of iterative methods. The primary disadvantage of iterative methods is the increased computation time, and methods for accelerating convergence are also discussed.

1. INTRODUCTION

For simplicity, in this summary we consider conventional slice-selective 2D MRI. The extension to 3D is straightforward.

The goal in MR image reconstruction is to estimate the transverse magnetization f(\vec{r}) of an object from a finite set of M noisy data samples:

    y_i = F(\vec{\nu}_i) + \varepsilon_i, \quad i = 1, \ldots, M,    (1)

where F(\vec{\nu}) denotes the Fourier transform of f for \vec{\nu} \in \mathbb{R}^2, defined as follows:

    F(\vec{\nu}) = \int_{\mathbb{R}^2} f(\vec{r}) \, e^{-\imath 2\pi \vec{\nu} \cdot \vec{r}} \, d\vec{r}.    (2)

For simplicity, we ignore field inhomogeneity effects; extensions are available, e.g., [1, 2]. This problem is ill-posed because there are a multitude of continuous-space objects f that exactly match the measured data y = (y_1, \ldots, y_M). So in some sense all MRI data is incomplete, and the notion of "partial k-space sampling" is only a matter of degrees of partialness.

For ill-posed problems involving a continuous-space unknown function f but a finite-dimensional (discrete) data vector y, estimation methods can be categorized into three families of solutions. None of these formulations can be called uniquely optimal.

1. Continuous-continuous formulations

   In these methods, one first imagines that one has a continuum of measurements, solves the inverse problem under that hypothetical scenario, and then discretizes the solution so that only the available measurements are used.

   In MR, the natural hypothetical set of measurements is the entire Fourier transform F(\vec{\nu}) of the object f(\vec{r}). If F(\vec{\nu}) were available, the "reconstruction method" would be simply an inverse Fourier transform:

       f(\vec{r}) = \int_{\mathbb{R}^2} F(\vec{\nu}) \, e^{\imath 2\pi \vec{\nu} \cdot \vec{r}} \, d\vec{\nu}.

   For practical implementation one must discretize this "solution" and replace F(\vec{\nu}_i) by the corresponding noisy measurement y_i as follows:

       \hat{f}(\vec{r}) = \sum_{i=1}^{M} w_i y_i \, e^{\imath 2\pi \vec{\nu}_i \cdot \vec{r}},    (3)

   where \{w_i : i = 1, \ldots, M\} are sampling density compensation factors. This is the conjugate phase method for image reconstruction. Numerous methods have been proposed for choosing the w_i factors, including iterative methods, e.g., [3, 4].

2. Continuous-discrete formulations

   These methods attempt to formulate the problem directly using the continuous-discrete model (1). Because there are a multitude of possible solutions, a typical approach is to choose, among those solutions that satisfy (1) exactly, the \hat{f} that has minimum norm, e.g., [5]. However, one may question the appropriateness of insisting on satisfying (1) exactly given the presence of noise. And one may wonder if "minimum norm" is the best criterion for choosing one MR image over other possibilities.

   The minimum norm solution turns out to have the form

       \hat{f}(\vec{r}) = \sum_{i=1}^{M} c_i \, e^{\imath 2\pi \vec{\nu}_i \cdot \vec{r}},    (4)

   where the coefficients \{c_i : i = 1, \ldots, M\} are found by solving an M \times M system of linear equations. In other words, the minimum norm estimate is a linear combination of complex exponentials. The frequencies of those exponentials are those of the k-space sampling. One may wonder if this set of complex exponentials is the most appealing basis for representing f.

3. Discrete-discrete formulations

   In these methods, we discretize f using a finite-series expansion akin to (4) but with different basis functions. We focus on this type of formulation hereafter.

1.1. Finite-series object model

For a finite-series approach, we first select some basis functions \{b_j(\vec{r}) : j = 1, \ldots, N\} and model the object f as follows:

    f(\vec{r}) \approx \sum_{j=1}^{N} x_j \, b_j(\vec{r}).    (5)

After adopting such a model, the reconstruction problem simplifies to determining the vector of unknown coefficients x = (x_1, \ldots, x_N) from the measurement vector y. Under this assumption, the discrete-data, continuous-object model (1) simplifies to the following discrete-discrete model:

    y = Ax + \varepsilon,    (6)

where the M \times N system matrix A has elements

    a_{ij} = \int b_j(\vec{r}) \, e^{-\imath 2\pi \vec{\nu}_i \cdot \vec{r}} \, d\vec{r},    (7)

for i = 1, \ldots, M and j = 1, \ldots, N.

Usually the basis functions in (5) are chosen to be equally-spaced translates of a pulse-like function such as a rectangle or triangle, although more complicated choices such as prolate spheroidal wave functions have also been used [6]. Specifically, usually we have

    b_j(\vec{r}) = b(\vec{r} - \vec{r}_j), \quad j = 1, \ldots, N,    (8)

where b(\vec{r}) denotes the common function that is translated to \vec{r}_j, the center of the jth basis function.

For basis functions of the form (8), by the shift property of the Fourier transform, the elements of A in (7) are simply

    a_{ij} = B(\vec{\nu}_i) \, e^{-\imath 2\pi \vec{\nu}_i \cdot \vec{r}_j},    (9)

where B(\vec{\nu}) is the 2-dimensional Fourier transform of b(\vec{r}). In other words, the system matrix A has the following form:

    A = BE,    (10)

where B is an M \times M diagonal matrix, B = \mathrm{diag}\{B(\vec{\nu}_i)\}, and E \in \mathbb{C}^{M \times N} has elements E_{ij} = e^{-\imath 2\pi \vec{\nu}_i \cdot \vec{r}_j}. In MRI, the matrix E is sometimes called the Fourier encoding matrix.

There are numerous possible choices of basis functions b(\vec{r}) that have been used in various image reconstruction problems. Typically we simply use rect functions for simplicity, corresponding to square pixels.

1.2. Least-squares reconstruction

Because the measurement noise in MRI is well modeled as complex white Gaussian noise, based on the model (6) it may be tempting to apply a least-squares (LS) estimation method:

    \hat{x} = \arg\min_x \|y - Ax\|^2 = (A'A)^{-1} A' y.

Indeed, for appropriate Cartesian sampling the matrix E in (10) is orthogonal and satisfies E^{-1} = \frac{1}{M} E^*, and in this case the LS solution simplifies to \hat{x} = A^{-1} y = \frac{1}{M} E^* B^{-1} y, which is essentially the conventional inverse FFT approach. However, for non-Cartesian sampling, the matrix A is often ill-conditioned or even singular, so the LS solution leads to undesirable noise amplification.

1.3. Regularized least-squares methods

To control the noise of the LS estimator, one can modify the cost function by including a regularization term:

    \hat{x} = \arg\min_x \Psi(x), \quad \Psi(x) = \frac{1}{2} \|y - Ax\|_W^2 + \beta R(x),    (11)

where W is an optional weighting matrix discussed in more detail below. The regularizer R(x) usually penalizes image roughness, and the regularization parameter \beta controls the tradeoff between spatial resolution and noise. For example, in 1D we might use the squared differences between neighboring pixels:

    R(x) = \frac{1}{2} \sum_{j=2}^{N} (x_j - x_{j-1})^2.

Nonquadratic regularization is also used in ill-posed inverse problems to better preserve edges. However, in non-Cartesian MRI, often the sampling is good enough that A is only somewhat poorly conditioned, so relatively small values of \beta can be used, in which case quadratic regularization may be adequate. A general form for a quadratic regularizer is

    R(x) = \frac{1}{2} x' R x,

where R is the Hessian of the regularizer. For quadratic regularization, the minimization problem (11) has the following explicit solution:

    \hat{x} = \arg\min_x \Psi(x) = [A'WA + \beta R]^{-1} A'W y.    (12)

However, the matrix inverse in this expression is very large (N \times N), so in practice one usually computes \hat{x} by using an iteration like the conjugate gradient algorithm to minimize the cost function (11), as described below. And if R(x) is nonquadratic, there is no explicit expression for \hat{x}, so iterative methods are essential for computing \hat{x}.

1.4. Choosing the regularization parameter

A common concern with regularized methods like (11) is choosing the regularization parameter \beta. Based on (12) we can analyze the spatial resolution properties of \hat{x} easily:

    \mathrm{E}[\hat{x}] = [A'WA + \beta R]^{-1} A'WA \, x.

Usually the matrix A'WA and the matrix R are Toeplitz, so we can use FFTs to rapidly evaluate the PSF of this image reconstruction method.

By defining the Toeplitz matrix T = A'WA and the vector b = A'Wy, we can rewrite the gradient expression (13) as:

    \nabla \Psi(x) = Tx - b + \beta \nabla R(x).    (15)

The elements of b \in \mathbb{C}^N are given by

    b_j = [A'Wy]_j = \sum_{i=1}^{M} w_i y_i \, e^{\imath 2\pi \vec{\nu}_i \cdot \vec{r}_j}, \quad j = 1, \ldots, N.

We can precompute b prior to iterating using an (adjoint) NUFFT operation [8]. This calculation is similar to the gridding reconstruction method. Each gradient calculation requires multiplying the N \times N block Toeplitz matrix T by the current guess of x. That operation can be performed efficiently by embedding T into a 2^2 N \times 2^2 N block circulant matrix and applying a 2-dimensional FFT [14].