EE290T: Advanced Reconstruction Methods for Magnetic Resonance Imaging

Martin Uecker

Introduction

Topics:

I Image Reconstruction as Inverse Problem

I Parallel Imaging

I Non-Cartesian MRI

I Subspace Methods

I Model-based Reconstruction

I Compressed Sensing

Tentative Syllabus

I 01: Jan 27 Introduction

I 02: Feb 03 Parallel Imaging as Inverse Problem

I 03: Feb 10 Iterative Reconstruction Algorithms

I –: Feb 17 (holiday)

I 04: Feb 24 Non-Cartesian MRI

I 05: Mar 03 Nonlinear Inverse Reconstruction

I 06: Mar 10 Reconstruction in k-space

I 07: Mar 17 Reconstruction in k-space

I –: Mar 24 (spring recess)

I 08: Mar 31 Subspace methods

I 09: Apr 07 Model-based Reconstruction

I 10: Apr 14 Compressed Sensing

I 11: Apr 21 Compressed Sensing

I 12: Apr 28 TBA

Today

I Review

I Non-Cartesian MRI

I Non-uniform FFT

I Project 2

Direct Image Reconstruction

I Assumption: Signal is the Fourier transform of the image:

s(t) = ∫ dx ρ(x) e^{−i2π x·k(t)}

I Image reconstruction with inverse DFT

[Figure: k-space sampling ⇒ iDFT ⇒ image]

k(t) = γ ∫_0^t dτ G(τ)

Requirements

I Short readout (signal equation holds for short time spans only)

I Sampling on a Cartesian grid ⇒ Line-by-line scanning

[Figure: 2D MRI pulse sequence (RF, slice, read, phase)]

Measurement time: TR × N with TR ≥ 2 ms
2D: N ≈ 256 ⇒ seconds
3D: N ≈ 256 × 256 ⇒ minutes

Poisson Summation Formula

Periodic summation of a signal f from discrete samples of its Fourier transform f̂:

∑_{n=−∞}^{∞} f(t + nP) = (1/P) ∑_{l=−∞}^{∞} f̂(l/P) e^{2πi (l/P) t}

No aliasing ⇒ Nyquist criterion

For MRI: P = FOV ⇒ k-space samples are at k = l/FOV

I Periodic summation from discrete Fourier samples on grid

I Other positions can be recovered by sinc interpolation (Whittaker-Shannon formula)

Nyquist-Shannon Sampling Theorem

Theorem 1: If a function f (t) contains no frequencies higher than W cps, it is completely determined by giving its ordinates at a series of points spaced 1/2W seconds apart.1

I Band-limited function

I Regular sampling

I Linear sinc-interpolation
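A runnable illustration of reconstruction from regular samples. For simplicity this uses the periodic analogue of sinc interpolation (the Dirichlet kernel, which is exactly interpolating for a band-limited periodic signal with an odd number of samples per period); all signal parameters are illustrative.

```python
import numpy as np

N = 17  # samples per period (odd, so the Dirichlet kernel interpolates exactly)
rng = np.random.default_rng(4)

# Band-limited periodic signal: a few low-frequency Fourier modes.
freqs = [0, 1, 2, 3]
coef = rng.standard_normal(len(freqs)) + 1j * rng.standard_normal(len(freqs))
f = lambda t: sum(c * np.exp(2j * np.pi * q * t / N) for c, q in zip(coef, freqs))

def dirichlet_interp(samples, t):
    """Interpolate periodic samples at off-grid t with the Dirichlet kernel
    sin(pi*t) / (N*sin(pi*t/N)) -- the periodized sinc for period-N signals."""
    N = len(samples)
    n = np.arange(N)
    kern = np.sin(np.pi * (t - n)) / (N * np.sin(np.pi * (t - n) / N))
    return np.sum(samples * kern)

# Exact recovery of the band-limited signal at an off-grid position:
t = 3.37
assert np.isclose(dirichlet_interp(f(np.arange(N)), t), f(t))
```

For non-periodic band-limited signals the same role is played by the sinc kernel of the Whittaker-Shannon formula.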

1. CE Shannon. Communication in the presence of noise. Proc Institute of Radio Engineers; 37:10–21 (1949)

Fast Fourier Transform (FFT) Algorithms

Discrete Fourier Transform (DFT):

DFT_k {f_n}_{n=0}^{N−1} := f̂_k = ∑_{n=0}^{N−1} e^{−2πi kn/N} f_n    for k = 0, ···, N − 1

Matrix multiplication: O(N²)

Fast Fourier Transform Algorithms: O(N log N)

Cooley-Tukey Algorithm

Decomposition of a DFT:

DFT_k {f_n}_{n=0}^{N−1} = ∑_{n=0}^{N−1} (ξ_N)^{kn} f_n,    ξ_N = e^{−2πi/N}

For N = N₁N₂, split the indices as n = n₁ + n₂N₁ and k = k₂ + k₁N₂:

DFT_k {f_n}_{n=0}^{N−1} = ∑_{n₁=0}^{N₁−1} (ξ_{N₁})^{k₁n₁} (ξ_N)^{k₂n₁} ∑_{n₂=0}^{N₂−1} (ξ_{N₂})^{k₂n₂} f_{n₁+n₂N₁}

                        = DFT_{k₁}^{N₁} { (ξ_N)^{k₂n₁} · DFT_{k₂}^{N₂} { f_{n₁+n₂N₁} }_{n₂=0}^{N₂−1} }_{n₁=0}^{N₁−1}

I Recursive decomposition to prime-sized DFTs ⇒ Efficient computation for smooth sizes
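The radix-2 special case of this decomposition (N₁ = 2, decimation in time) can be sketched in a few lines; the function name and sizes are illustrative, and the result is checked against NumPy's FFT.

```python
import numpy as np

def fft_ct(f):
    """Recursive radix-2 Cooley-Tukey FFT (educational sketch, N a power of 2).

    Splits a length-N DFT into two length-N/2 DFTs of the even- and
    odd-indexed samples, combined with twiddle factors xi_N^k."""
    f = np.asarray(f, dtype=complex)
    N = len(f)
    if N == 1:
        return f
    even = fft_ct(f[0::2])
    odd = fft_ct(f[1::2])
    twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    return np.concatenate([even + twiddle * odd,
                           even - twiddle * odd])

# Agrees with numpy's FFT for power-of-two sizes:
x = np.random.randn(64) + 1j * np.random.randn(64)
assert np.allclose(fft_ct(x), np.fft.fft(x))
```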

(efficient algorithms available for all other sizes too)

Parallel MRI: Iterative Algorithms

Signal equation:

s_i(k) = ∫_V dx ρ(x) c_i(x) e^{−i2π x·k}    (ρ c_i e^{−i2π x·k}: encoding functions)

Discretization:

A = P_k F C

A has size 256² × (8 × 256²) ⇒ Iterative methods built from matrix-vector products Ax, A^H y

Landweber Iteration

Gradient descent:

φ(x) = ½ ‖Ax − y‖₂²
∇φ(x) = A^H (Ax − y)

Iteration rule:

x_{n+1} = x_n − μ A^H (A x_n − y)    with μ ‖A^H A‖ ≤ 1
        = (I − μ A^H A) x_n + μ A^H y
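This iteration rule can be sketched directly; A is a small dense matrix here only for illustration (in MRI it would be the FFT-based operator), and sizes and iteration count are arbitrary.

```python
import numpy as np

def landweber(A, y, mu, n_iter):
    """Landweber iteration x_{n+1} = x_n + mu * A^H (y - A x_n), x_0 = 0."""
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        x = x + mu * (A.conj().T @ (y - A @ x))
    return x

# With mu * ||A^H A|| <= 1 this converges to the least-squares solution:
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
y = rng.standard_normal(30)
mu = 1.0 / np.linalg.norm(A.T @ A, 2)   # spectral norm of A^H A
x = landweber(A, y, mu, 5000)
x_ls = np.linalg.lstsq(A, y, rcond=None)[0]
assert np.allclose(x, x_ls, atol=1e-6)
```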

Explicit formula for x0 = 0:

x_n = ∑_{j=0}^{n−1} (I − μ A^H A)^j μ A^H y

Projection Onto Convex Sets (POCS)

Iterative projection onto convex sets:

y_{n+1} = P_B P_A y_n

[Figure: alternating projections between convex sets A and B]

Problems: slow, not stable, A ∩ B = ∅

POCSENSE

Iteration:

ŷ_{n+1} = P_C P_y ŷ_n

(multi-coil k-space ŷ)

Projection onto sensitivities:

P_C = F C C^H F^{−1}

(normalized sensitivities)

Projection onto data y:

P_y ŷ = M y + (1 − M) ŷ
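The two projections can be sketched on a toy 1D, two-coil problem (sizes, random sensitivities, and the plain 1D FFT are illustrative only). With normalized sensitivities, P_C = F C C^H F^{−1} is idempotent, which the check verifies.

```python
import numpy as np

n, nc = 8, 2
rng = np.random.default_rng(1)

# Normalized coil sensitivities: sum_j |c_j|^2 = 1 at every position.
C = rng.standard_normal((nc, n)) + 1j * rng.standard_normal((nc, n))
C /= np.sqrt(np.sum(np.abs(C) ** 2, axis=0))

F = lambda img: np.fft.fft(img, axis=-1, norm="ortho")
Finv = lambda ksp: np.fft.ifft(ksp, axis=-1, norm="ortho")

def P_C(y_hat):
    """Projection onto the sensitivities: P_C = F C C^H F^{-1}."""
    img = np.sum(C.conj() * Finv(y_hat), axis=0)   # C^H F^{-1}
    return F(C * img)                              # F C

def P_y(y_hat, y, M):
    """Projection onto the data: keep measured samples, pass the rest."""
    return M * y + (1 - M) * y_hat

# P_C is idempotent, i.e. a projection:
y_hat = rng.standard_normal((nc, n)) + 1j * rng.standard_normal((nc, n))
assert np.allclose(P_C(P_C(y_hat)), P_C(y_hat))
```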

AA Samsonov, EG Kholmovski, DL Parker, and CR Johnson. POCSENSE: POCS-based reconstruction for sensitivity encoded magnetic resonance imaging. Magn Reson Med, 52:1397–1406, 2004.

Krylov Subspace Methods

Krylov subspace:

K_n = span {b, Tb, T²b, …, T^n b}

⇒ Repeated application of T .

Landweber, Arnoldi, Lanczos, Conjugate gradients, ...

Conjugate Gradients

T symmetric (or Hermitian)

Initialization:

r0 = b − Tx0

d_0 = r_0

Iteration:

q_i ⇐ T d_i
α = ‖r_i‖² / (d_i^H q_i)
x_{i+1} ⇐ x_i + α d_i
r_{i+1} ⇐ r_i − α q_i
β = ‖r_{i+1}‖² / ‖r_i‖²
d_{i+1} ⇐ r_{i+1} + β d_i

Conjugate Gradients on Normal Equations

Normal equations:

A^H A x = A^H y  ⇔  x = argmin_x ‖Ax − y‖₂²

Tikhonov:

(A^H A + αI) x = A^H y  ⇔  x = argmin_x ‖Ax − y‖₂² + α ‖x‖₂²
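A sketch of the conjugate gradient iteration applied to these (optionally Tikhonov-regularized) normal equations; `cg_normal` and the dense test matrix are illustrative, not part of any toolbox, and the operator is used only through matrix-vector products.

```python
import numpy as np

def cg_normal(A, y, alpha=0.0, n_iter=100):
    """CG applied to (A^H A + alpha*I) x = A^H y."""
    T = lambda v: A.conj().T @ (A @ v) + alpha * v
    b = A.conj().T @ y
    x = np.zeros_like(b)
    r = b - T(x)
    d = r.copy()
    rr = np.vdot(r, r).real
    for _ in range(n_iter):
        if rr < 1e-30:          # converged; avoid dividing by ~0
            break
        q = T(d)
        a = rr / np.vdot(d, q).real
        x = x + a * d
        r = r - a * q
        rr_new = np.vdot(r, r).real
        d = r + (rr_new / rr) * d
        rr = rr_new
    return x

# With alpha = 0 this reaches the least-squares solution:
rng = np.random.default_rng(2)
A = rng.standard_normal((20, 8))
y = rng.standard_normal(20)
assert np.allclose(cg_normal(A, y), np.linalg.lstsq(A, y, rcond=None)[0])
```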

CGNE:

Tx = b

with:

T = A^H A,  b = A^H y

Implementation

[Figure: data flow for applying A^H A — distribute image, point-wise multiplication with sensitivities, multi-dimensional FFT, sampling mask or projection onto data, inverse FFT, point-wise multiplication with conjugate sensitivities, sum channels; repeat]

Conjugate Gradient Algorithm vs Landweber

[Figure: convergence of conjugate gradients vs. Landweber]

Project 1: Iterative SENSE

Project: Implement and study Cartesian iterative SENSE

I Tools: Matlab, reconstruction toolbox, python, ...

I Deadline: Feb 24

I Hand in: Working code and plots/results with description.

I See website for data and instructions.

Project 1: Iterative SENSE

Step 1: Implement Model

A = P F C        A^H = C^H F^{−1} P^H
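These operators can be sketched directly (toy 2D sizes, random coil maps and sampling mask for illustration only — not the toolbox implementation); the adjoint test ⟨Ax, y⟩ = ⟨x, A^H y⟩ is included.

```python
import numpy as np

ny, nx, nc = 16, 16, 4
rng = np.random.default_rng(3)

C = rng.standard_normal((nc, ny, nx)) + 1j * rng.standard_normal((nc, ny, nx))
mask = rng.random((ny, nx)) < 0.5          # undersampling pattern P

# Unitary, centered FFT (fftshift convention), so ||F x|| = ||x||.
F = lambda img: np.fft.fftshift(
    np.fft.fft2(np.fft.ifftshift(img, axes=(-2, -1)), norm="ortho"),
    axes=(-2, -1))
Finv = lambda k: np.fft.fftshift(
    np.fft.ifft2(np.fft.ifftshift(k, axes=(-2, -1)), norm="ortho"),
    axes=(-2, -1))

def A(img):
    return mask * F(C * img)                             # P F C

def AH(ksp):
    return np.sum(C.conj() * Finv(mask * ksp), axis=0)   # C^H F^{-1} P^H

# Adjoint test with random vectors x, y:
x = rng.standard_normal((ny, nx)) + 1j * rng.standard_normal((ny, nx))
y = rng.standard_normal((nc, ny, nx)) + 1j * rng.standard_normal((nc, ny, nx))
assert np.isclose(np.vdot(A(x), y), np.vdot(x, AH(y)))
```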

Hints:

I Use unitary and centered (fftshift) FFT: ‖Fx‖₂ = ‖x‖₂
I Implement undersampling as a mask, store data with zeros
I Check ⟨x, Ay⟩ = ⟨A^H x, y⟩ for random vectors x, y

Project 1: Iterative SENSE

Step 2: Implement Reconstruction

Landweber (gradient descent)1

x_{n+1} = x_n + α A^H (y − A x_n)

Conjugate gradient algorithm2

Step 3: Experiments

I Noise, errors, and convergence speed

I Different sampling

I Regularization

1. L Landweber. An iteration formula for Fredholm integral equations of the first kind. Amer J Math; 73:615–624 (1951)
2. MR Hestenes and E Stiefel. Methods of conjugate gradients for solving linear systems. J Res Natl Bur Stand; 49:409–436 (1952)

Today

I Review

I Non-Cartesian MRI

I Non-uniform FFT

I Project 2

Parallel Imaging as Inverse Problem

Model: Signal from multiple coils (image ρ, sensitivities c_j):

s_j(t) = ∫_Ω dx ρ(x) c_j(x) e^{−i2π x·k(t)} + n_j(t)

Assumptions:

I Image is a square-integrable function ρ ∈ L²(Ω, C)
I Additive multi-variate Gaussian white noise n

Problem: Find best approximate/regularized solution in L2(Ω, C).

Non-Cartesian Trajectories

Examples:

[Figure: Cartesian, radial, and spiral trajectories]

Non-Cartesian Trajectories

Practical issues:

I Long readout: blurring, artifacts

I Imperfect gradient waveforms (e.g. delays, eddy currents)

I Efficient implementation of the reconstruction algorithm

Spiral

Long readout:

I Efficient!
I T₂*-relaxation ⇒ blurring
I Off-resonance

Spiral

Signal equation:

y(t) = ∫_Ω dx ρ(x) e^{itγΔB₀(x)} e^{−t/T₂*(x)} e^{−2πi k·x}

Off-resonance:

|t − t_TE| ≪ 1/(γΔB₀)  ⇒  e^{itγΔB₀(x)} ≈ const

Relaxation:

|t − t_TE| ≪ T₂*  ⇒  e^{−t/T₂*(x)} ≈ const

Radial

I Not efficient (Nyquist: π/2 × spokes vs. Cartesian lines)
I Motion robustness

I High tolerance to undersampling

I Continuous update possible

Radial - Undersampling Artifacts

Left to right: 96, 48, 24 radial spokes

Forward Problem

Signal equation:

y_l = ∫_Ω dx ρ(x) e^{−2πi k_l·x}

Non-Cartesian sample positions k_l

Discretization: ρ(x) pixel values at grid positions x

y_l = ∑_x ρ(x) e^{−2πi k_l·x}

⇒ non-uniform DFT from grid to non-Cartesian positions.

Inverse Non-Uniform Fourier Transform

Forward model:

y_l = ∫ dx ρ(x) e^{−2πi k_l·x}

Inverse problem:

y = P_k F ρ = A ρ

Best-approximation (least-squares + minimum-norm):

x = lim_{α→0} A^H (A A^H + αI)^{−1} y

Non-uniform FFT

Fourier transform at arbitrary (non-Cartesian) positions k_l:

y_l = ∫ dx f(x) e^{−2πi k_l x}

DFT at arbitrary (non-Cartesian) positions k_l:

y_l = ∑_{n=0}^{N−1} x_n e^{−2πi k_l n / N}
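This non-uniform DFT can be evaluated directly in O(L·N) — too slow for practice, but useful as a reference when testing a nuFFT (function name and sizes are illustrative).

```python
import numpy as np

def ndft(x, k):
    """Direct non-uniform DFT: y_l = sum_n x_n exp(-2*pi*i * k_l * n / N).
    O(L*N) reference implementation for testing only."""
    N = len(x)
    n = np.arange(N)
    return np.exp(-2j * np.pi * np.outer(k, n) / N) @ x

# On integer (Cartesian) positions it reduces to the ordinary DFT:
x = np.random.randn(16) + 1j * np.random.randn(16)
assert np.allclose(ndft(x, np.arange(16)), np.fft.fft(x))
```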

Fast computation: nuFFT

Interpolation to Non-Cartesian Data

I Periodic extension: FFT yields Fourier transform on grid

I Values at k-space positions recovered by sinc interpolation

Whittaker-Shannon formula:

f(t) = ∑_{n=−∞}^{∞} f(nT) sinc((t − nT)/T),    sinc(x) = sin(πx)/(πx)

Approximation of the Interpolation

Computation:

f(t) = ∑_{n=−N/2}^{N/2−1} f(nT) kern((t − nT)/T)

I Sinc interpolation: not practical

I Compactly supported kernels: truncated Gaussian, Kaiser-Bessel

Kernel with compact support:

kern(x) = 0 for |x| ≥ wT

⇒ Sum has only 2w terms ⇒ Efficient computation

Approximation of the Interpolation

Computation (2D):

f(s, t) = ∑_{n=−N/2}^{N/2−1} ∑_{m=−M/2}^{M/2−1} f(nS, mT) kern((s − nS)/S) kern((t − mT)/T)

Approximation of the Interpolation

Computation (2D):

∑_{n=−N/2}^{N/2−1} ∑_{m=−M/2}^{M/2−1} f(nS, mT) kern((s − nS)/S) kern((t − mT)/T)

Isotropic kernel:

∑_{n=−N/2}^{N/2−1} ∑_{m=−M/2}^{M/2−1} f(nS, mT) kern( √( ((s − nS)/S)² + ((t − mT)/T)² ) )

Approximation of the Interpolation

I Sinc interpolation: not practical

I Finite kernel: Kaiser-Bessel interpolation

I Oversampling to reduce approximation errors
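A 1D sketch of interpolation with a compactly supported kernel (a truncated Gaussian with illustrative width and shape parameters; Kaiser-Bessel with oversampling would be used in practice). Because of the compact support, only the 2w grid points nearest t contribute, which the check verifies against the full sum.

```python
import numpy as np

def kern(x, w=3, sigma=0.9):
    """Truncated Gaussian kernel: zero for |x| >= w (grid spacing T = 1)."""
    return np.where(np.abs(x) < w, np.exp(-x**2 / (2 * sigma**2)), 0.0)

def interp_compact(fgrid, t, w=3):
    """f(t) ~= sum_n fgrid[n] * kern(t - n), using only the 2w nearest
    grid points (periodic wrap-around at the edges)."""
    n0 = int(np.ceil(t - w))
    n = np.arange(n0, n0 + 2 * w)
    return np.sum(fgrid[n % len(fgrid)] * kern(t - n))

# The windowed sum over 2w points equals the full sum over all grid points:
rng = np.random.default_rng(5)
fgrid = rng.standard_normal(64)
t = 17.3
full = sum(fgrid[n % 64] * kern(t - n) for n in range(-64, 128))
assert np.isclose(interp_compact(fgrid, t), full)
```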


[Figure: oversampled spectrum with stop / pass / stop bands]

Gridding

Best approximation:

A† = lim_{α→0} A^H (A A^H + αI)^{−1}

Gridding: approximate inverse:

G = A^H D ≈ A†

Approximations:

I Interpolation with a finite kernel (as discussed)
I In addition: diagonal approximation to (A A^H)^{−1}

Gridding - Density Compensation

I Diagonal matrix D in the Fourier domain

I Usually approximated somehow (Voronoi diagrams)

I Intuition: “density compensation”

I Exact inverse for (continuous) Radon transformation

Gridding - Density Compensation

[Figure: reconstruction with Ram-Lak filter vs. no density compensation]

Iterative Gridding

Normal equations: A^H A x = A^H y

Iterative solution:

I Conjugate gradient algorithm
I Repeated application of A^H A to A^H y

Properties:

I No density compensation needed

I More accurate than gridding

I Required for parallel imaging

I Problem: reconstruction speed

Density Compensation as Preconditioner

Can we have both?

Left preconditioning:

Ax = y  ⇒  DAx = Dy

Idea: A^H D ≈ A^{−1} ⇒ A^H D A ≈ I

Apply CG algorithm to:

A^H D A x = A^H D y
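The weighted normal equations can be checked with a plain CG sketch (dense toy matrices and random positive "density" weights, for illustration only): their solution is the D-weighted least-squares solution.

```python
import numpy as np

def cg(T, b, n_iter=100):
    """Plain CG for a Hermitian positive definite operator T (a function)."""
    x = np.zeros_like(b)
    r = b.copy()
    d = r.copy()
    rr = np.vdot(r, r).real
    for _ in range(n_iter):
        if rr < 1e-30:
            break
        q = T(d)
        a = rr / np.vdot(d, q).real
        x = x + a * d
        r = r - a * q
        rr_new = np.vdot(r, r).real
        d = r + (rr_new / rr) * d
        rr = rr_new
    return x

rng = np.random.default_rng(7)
A = rng.standard_normal((24, 10))
y = rng.standard_normal(24)
D = rng.random(24) + 0.1                      # positive weights ("density")

# Solve A^H D A x = A^H D y ...
x = cg(lambda v: A.T @ (D * (A @ v)), A.T @ (D * y))

# ... which is argmin_x ||D^(1/2) (A x - y)||_2:
x_wls = np.linalg.lstsq(np.sqrt(D)[:, None] * A, np.sqrt(D) * y, rcond=None)[0]
assert np.allclose(x, x_wls)
```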

⇒ faster convergence (usually)

Density Compensation as Preconditioner

Problems:

I Usually A^H D not a good approximation (especially with undersampling)

I Weighted least-squares solution (⇒ suboptimal SNR):

A^H D A x = A^H D y  ⇔  x = argmin_x ‖D^{1/2}(Ax − y)‖₂

Preconditioner for normal equations:

L A^H A x = L A^H y    or    A^H A R z = A^H y,  x = Rz

Iterative Reconstruction

Observation: the basic operation is A^H A.

Convolution with PSF:

A^H A = F^{−1} P_k F    (= convolution with the PSF)

Convolution

Definition:

(f ⋆ g)(x) = ∫_{ℝⁿ} dy f(y) g(x − y)

F{a ⋆ b} = F{a} · F{b}  ⇒  a ⋆ b = F^{−1}{ F{a} · F{b} }

Iterative Reconstruction

Main operation:

ρ̂ = A^H A ρ

Definition of adjoint:

⟨σ, ρ̂⟩ = ⟨Aσ, Aρ⟩

Then:

ρ̂(y) = ∑_l e^{+2πi k_l·y} ∫ dx ρ(x) e^{−2πi k_l·x}
     = ∫ dx ρ(x) ∑_l e^{2πi k_l·(y−x)}    (∑_l e^{2πi k_l·(y−x)}: the PSF)

Iterative Reconstruction: Alternative Implementation

Discretization: pixel values on a finite grid x_r, y_s:

∫ dx ρ(x) ∑_l e^{2πi k_l·(y−x)} ≈ ∑_s ρ(x_s) ∑_l e^{2πi k_l·(y_r − x_s)}

Hermitian Toeplitz matrix: T_{xy} = a_{x−y}

⎡ a₀   a₁*  …        ⎤
⎢ a₁   a₀   a₁*      ⎥
⎢  ⋱    ⋱    ⋱       ⎥
⎢      a₁   a₀   a₁* ⎥
⎣  …        a₁   a₀  ⎦

⇒ Two applications of the FFT algorithm

Convolution

Note: Discretization not necessary (at this point)

Poisson summation formula:

∑_{n=−∞}^{∞} f(t + nP) = (1/P) ∑_{l=−∞}^{∞} f̂(l/P) e^{2πi (l/P) t}

Periodic convolution:
1. Computation of Fourier coefficients
2. Multiplication with Fourier coefficients of (periodic) kernel
3. Summation

Aperiodic convolution:
1. Truncated PSF (twice FOV)
2. Periodic convolution
3. Restricted support
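These steps can be sketched in 1D (toy sizes, random kernel): the Toeplitz product T x is computed as a periodic convolution on a grid of twice the size, followed by restriction to the original support.

```python
import numpy as np

N = 8
rng = np.random.default_rng(6)
# Kernel values a_{-(N-1)} .. a_{N-1} (the truncated PSF, twice the FOV):
a = rng.standard_normal(2 * N - 1) + 1j * rng.standard_normal(2 * N - 1)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Dense reference: T[r, s] = a_{r-s} (index shifted by N-1).
T = np.array([[a[r - s + N - 1] for s in range(N)] for r in range(N)])
ref = T @ x

# Embed the kernel circularly on the 2N grid: a_emb[m mod 2N] = a_m.
a_emb = np.zeros(2 * N, dtype=complex)
for m in range(-(N - 1), N):
    a_emb[m % (2 * N)] = a[m + N - 1]

# Periodic convolution via FFT (x zero-padded to 2N), then restriction:
y = np.fft.ifft(np.fft.fft(a_emb) * np.fft.fft(x, 2 * N))[:N]
assert np.allclose(y, ref)
```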

Iterative gridding

N^H N x = N^H y

N: nuFFT

Alternative method

R^H F^{−1} M_TF F R x = N^H y

M_TF: multiplication with the Fourier transform of the truncated PSF
R: restriction to FOV

Non-Cartesian SENSE

Model: Signal from multiple coils (image ρ, sensitivities c_j):

s_j(t) = ∫_Ω dx ρ(x) c_j(x) e^{−i2π x·k(t)} + n_j(t)

Cartesian:

A = PFC

P: sampling mask

Non-Cartesian:

A = NFC

N: non-uniform DFT

Refocusing Reconstruction

Long readout: blurring, artifacts

Signal:

s(t) = ∫_Ω dx ρ(x) e^{itγΔB₀(x)} e^{−2πi k(t)·x}

Refocusing:

I Takes field map ∆B0 into account I Removes blurring/artifacts due to off-resonance Project 2: Non-Cartesian MRI

Project: Implement non-Cartesian reconstruction

I Tools: Matlab, reconstruction toolbox, python, ...

I Deadline: Mar 17

I Hand in: Working code and plots/results with description.

I See website for data and instructions.

Project 2: Non-Cartesian MRI

Step 1: Computation of non-Cartesian samples

I Compute non-uniform DFT matrix

I Implement nuFFT and its adjoint

I 1D and 2D versions

Hints:

I Oversampling ×2

I Gaussian kernel (not optimal)

Project 2: Non-Cartesian MRI

Step 2: Implement iterative gridding

I Reconstruction from non-Cartesian data

I Use Landweber and/or CG

Step 3: Implement non-Cartesian SENSE

I Implement non-Cartesian SENSE

1. JD O'Sullivan. A fast sinc function gridding algorithm for Fourier inversion in computer tomography. IEEE Trans Med Imaging, 4:200-207, 1985.
2. JI Jackson, CH Meyer, DG Nishimura, and A Macovski. Selection of a convolution function for Fourier inversion using gridding. IEEE Trans Med Imaging, 10:473-478, 1991.
3. KP Pruessmann, M Weiger, P Börnert, and P Boesiger. Advances in sensitivity encoding with arbitrary k-space trajectories. Magn Reson Med, 46:638-651, 2001.