
2/2/2012

BI260 - Image Registration
VALERIE CARDENAS NICOLSON, PH.D.
Acknowledgements: COLIN STUDHOLME, PH.D.

Medical Imaging Analysis
- Developing mathematical algorithms to extract and relate information from medical images
- Two related aspects of research:
  - Image registration: finding spatial/temporal correspondences between image data and/or models
  - Image segmentation: extracting/detecting specific features of interest from image data

LUCAS AND KANADE, PROC IMAGE UNDERSTANDING WORKSHOP, 1981

Registration: Overview
- What is registration?
  - Definition
  - Classifications: geometry, transformations, measures
- Motivation for this work
  - Where is image registration used in medicine and biomedical research?
- Measures of image alignment
  - Landmark/feature-based methods
  - Voxel-based methods
    - Image intensity difference and correlation
    - Multi-modality measures

Registration: Definition
"the determination of a one-to-one mapping between the coordinates in one space and those in another, such that points in the two spaces that correspond to the same anatomical point are mapped to each other." (Calvin Maurer '93)


Image Registration
"Establishing correspondence between features in sets of images, and using a transformation model to infer correspondence away from those features." (Bill Crum '05)

Key Applications I: Change Detection
- Look for differences in the same type of images taken at different times, e.g.:
  - Mapping pre- and post-contrast agent
    - Digital subtraction angiography (DSA)
    - MRI with gadolinium tracer
  - Mapping structural changes
    - Different stages in tumor growth (before and after treatment)
    - Neurodegenerative disease -> quantifying tissue loss patterns
  - Detecting changes due to function
    - Functional MRI (fMRI): before and after brain stimulus
    - PET imaging: quantitative tracer uptake measurements
- Problem:
  - Subject is scanned multiple times -> removed from the scanner between scans
  - We cannot easily fix/know patient location and orientation with respect to the imaging system
  - Need to remove differences in patient positioning to detect true differences in the patient images

Key Applications II: Relating Information Across Image Types
- Relate contrasting information from different types of images
- Multi-modality imaging
  - MRI-CT
  - MRI-PET/SPECT
  - Structural MRI-functional MRI
  - Structural MRI to structural MRI
- Problem:
  - Subject is scanned multiple times -> different scanners
  - We cannot easily fix/know patient location and orientation with respect to the different imaging systems
  - Need to remove differences in patient positioning to relate information from the different types of images

Components of Image Registration Algorithms
- Image data geometries
  - 2D-2D, 2D-3D, 3D-3D
- Transformation type
  - Rigid/affine/non-rigid
- Correspondence criteria/measure
  - Feature-based methods
  - Voxel-based/dense field methods
- Optimization method
  - Maximizing/minimizing the criteria with respect to the transform


Examples of Image Geometries and Transformation Models in Medical Applications

2D-2D inter-modality image registration problem
Registration and display of the combined bone scan and radiograph in the diagnosis and management of wrist injuries, Hawkes et al., EJNM 1991

2D-2D image transformations
- Simple case: parallel projection
- 2 translations, t1 and t2 (up-down/left-right), and one rotation (θ)

This is a linear mapping from (x1, x2) to (y1, y2):

  y1 = x1 cos θ − x2 sin θ + t1
  y2 = x1 sin θ + x2 cos θ + t2
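As a quick check of these equations, the 2D rigid mapping can be sketched in NumPy (the function name and test point are illustrative, not from the slides):

```python
import numpy as np

def rigid_2d(points, theta, t1, t2):
    """Apply y1 = x1 cos(theta) - x2 sin(theta) + t1,
             y2 = x1 sin(theta) + x2 cos(theta) + t2
    to an (N, 2) array of points."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return points @ R.T + np.array([t1, t2])

# A 90-degree rotation sends (1, 0) to (0, 1); translating by (2, 3) gives (2, 4).
moved = rigid_2d(np.array([[1.0, 0.0]]), np.pi / 2, 2.0, 3.0)
```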


Rotating around a given location

Image Scaling
- Scaling with respect to a fixed point (x, y)
- Scaling along an arbitrary axis
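Scaling (or rotating) with respect to a fixed point is a composition in homogeneous coordinates: translate the fixed point to the origin, apply the scaling, then translate back. A minimal sketch, with function names of my own:

```python
import numpy as np

def translation(tx, ty):
    return np.array([[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]])

def scaling(sx, sy):
    return np.array([[sx, 0.0, 0.0], [0.0, sy, 0.0], [0.0, 0.0, 1.0]])

def scale_about(sx, sy, cx, cy):
    """Scale about the fixed point (cx, cy): translate it to the origin,
    scale, then translate back."""
    return translation(cx, cy) @ scaling(sx, sy) @ translation(-cx, -cy)

M = scale_about(2.0, 2.0, 1.0, 1.0)
p = M @ np.array([2.0, 2.0, 1.0])       # (2,2) doubles its distance from (1,1)
fixed = M @ np.array([1.0, 1.0, 1.0])   # the fixed point maps to itself
```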

Composing Transformations

Influence of Affine Transformations on Images
- Map lines to lines
- Map parallel lines to parallel lines
- Preserve ratios of distances along a line
- Do NOT preserve absolute distances and angles


2D-3D Registration Geometry
"the determination of a projection mapping, from a 3D to a 2D coordinate system, such that points in each space which correspond to the same anatomical points are mapped to each other."

2D-3D registration problems
- Radiotherapy
  - Treatment verification (portal images)
  - Treatment planning (simulator system)
  - Treatment guidance (CyberKnife system)
- Orthopaedic surgery
  - Spinal surgery (BrainLab, Medtronic)
  - Hip or knee arthroplasty (ROBODOC)
    - Verification of implant position
- Neurointerventions
  - Matching MRA to DSA
- Surgical microscope
  - MRI/CT neurosurgery guidance
- Virtual endoscopy

(A) Imaging/Acquisition parameters (intrinsic) (B) Subject Parameters (extrinsic)


3D-3D image registration
- Many different 3D clinical imaging modalities
  - MRI probably still the least common
- Images used in many different clinical settings
  - Diagnosis
  - Treatment planning
  - Treatment guidance
  - Clinical research: studying disease
- Transformation types:
  - Rigid positioning of subject: still most common
  - Non-rigid deformations to describe
    - Tissue deformation
    - Imaging distortion
    - Differences between subjects

3D Rigid Transformations

In homogeneous coordinates, a general transform is

  [x', y', z', 1]^T = [[t11, t12, t13, t14],
                       [t21, t22, t23, t24],
                       [t31, t32, t33, t34],
                       [t41, t42, t43, t44]] [x, y, z, 1]^T

A rigid transformation is composed from rotations about the x, y, z axes (angles α, β, θ) and a translation:

  R_x = [[1,      0,      0,     0],
         [0,      cos α,  sin α, 0],
         [0,     −sin α,  cos α, 0],
         [0,      0,      0,     1]]

  R_y = [[cos β,  0, −sin β, 0],
         [0,      1,  0,     0],
         [sin β,  0,  cos β, 0],
         [0,      0,  0,     1]]

  R_z = [[ cos θ, sin θ, 0, 0],
         [−sin θ, cos θ, 0, 0],
         [ 0,     0,     1, 0],
         [ 0,     0,     0, 1]]

  T = [[1, 0, 0, x0],
       [0, 1, 0, y0],
       [0, 0, 1, z0],
       [0, 0, 0, 1]]
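These homogeneous matrices can be built and composed directly; a sketch in NumPy using the same sign conventions as the slides (the parameter values are arbitrary):

```python
import numpy as np

def rot_x(a):
    return np.array([[1, 0, 0, 0],
                     [0, np.cos(a),  np.sin(a), 0],
                     [0, -np.sin(a), np.cos(a), 0],
                     [0, 0, 0, 1.0]])

def rot_y(b):
    return np.array([[np.cos(b), 0, -np.sin(b), 0],
                     [0, 1, 0, 0],
                     [np.sin(b), 0, np.cos(b), 0],
                     [0, 0, 0, 1.0]])

def rot_z(th):
    return np.array([[np.cos(th),  np.sin(th), 0, 0],
                     [-np.sin(th), np.cos(th), 0, 0],
                     [0, 0, 1, 0],
                     [0, 0, 0, 1.0]])

def trans(x0, y0, z0):
    T = np.eye(4)
    T[:3, 3] = [x0, y0, z0]
    return T

# A 6-parameter rigid transform composed as T * Rz * Ry * Rx.
M = trans(10, 0, 0) @ rot_z(np.pi / 2) @ rot_y(0) @ rot_x(0)
p = M @ np.array([1.0, 0.0, 0.0, 1.0])
```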

Feature Based Approaches
- Point set -> point set (homologous)
- Point set -> point set (non-homologous, so the correspondence order must be found)
- Point set -> surface
- Surface -> surface (also curve -> surface)


MR-CT Registration
- Manual point landmark identification (around 12 points) in MR and CT
- Relates soft tissue structures, such as enhancing tumor and blood flow, to bone features in CT

Homologous Feature Based Alignment
- General case of two lists of corresponding feature locations
  - [p1, p2, …, pN] and [q1, q2, …, qN], both with N points
- We want to find the transformation T(q) that minimizes the squared distance between corresponding points:

  E = Σ_r ‖ p_r − T(q_r) ‖²

- where one set of points, q, is transformed by T(·)
- Then extrapolate the transformation to all image voxels/pixels

Procrustes Point Alignment (Golub and Van Loan, Matrix Computations, 1989)
- Remove translational differences
  - Calculate the translation that brings each point set to the origin, and subtract it to create centered point sets:

    q'_i = q_i − (1/N) Σ_i q_i
    p'_i = p_i − (1/N) Σ_i p_i

- Scale (Procrustes normally includes an estimate of scaling)
  - But we can assume scanner voxel dimensions are accurate
- Rewrite the centered point lists as N×3 matrices:

    P = [p'_1; p'_2; …; p'_N] = [[x1, y1, z1], …, [xN, yN, zN]]
    Q = [q'_1; q'_2; …; q'_N] = [[x'1, y'1, z'1], …, [x'N, y'N, z'N]]

- Estimate the rotation: find the rotation matrix R such that P^T = R Q^T
  - The system P^T = R Q^T is over-determined and there is noise, so we want the rotation matrix R minimizing ‖ P^T − R Q^T ‖²
- It is possible to show that the rotation solution is

    R = V U^T

  where U and V are obtained from the SVD of Q^T P -> U S V^T, with S a diagonal matrix and U, V containing the left and right singular vectors
- For 2D:

    R = [[cos θ, −sin θ],
         [sin θ,  cos θ]]

- K.S. Arun, T.S. Huang, and S.D. Blostein, Least-squares fitting of two 3-D point sets, IEEE PAMI, 9(5):698–700, 1987.
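A sketch of the full Procrustes rigid fit (centering, SVD, rotation, then translation). The det(R) check guards against a reflection solution with degenerate or noisy data, as discussed by Arun et al.; variable names are mine:

```python
import numpy as np

def procrustes_rigid(p, q):
    """Least-squares rotation R and translation t with p_i ~= R q_i + t.

    p, q: (N, 3) arrays of homologous points; q is the set being transformed.
    """
    p_bar, q_bar = p.mean(axis=0), q.mean(axis=0)
    P, Q = p - p_bar, q - q_bar           # centered point sets
    U, S, Vt = np.linalg.svd(Q.T @ P)     # Q'^T P' = U S V^T
    R = Vt.T @ U.T                        # R = V U^T (Arun et al. 1987)
    if np.linalg.det(R) < 0:              # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = p_bar - R @ q_bar
    return R, t

# Recover a known rotation about z plus a translation from noiseless points.
rng = np.random.default_rng(0)
q = rng.normal(size=(12, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 1.0])
p = q @ R_true.T + t_true
R_est, t_est = procrustes_rigid(p, q)
```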


Alternatives to SVD alignment
- Quaternion methods
  - Horn, Closed-form solution of absolute orientation using unit quaternions, Journal of the Optical Society of America A, 4(4):629–642, 1987.
- Orthonormal matrices
  - Horn et al., Closed-form solution of absolute orientation using orthonormal matrices, Journal of the Optical Society of America A, 5(7):1127–1135, July 1988.
- Dual quaternions
  - Walker et al., Estimating 3-D location parameters using dual number quaternions, CVGIP: Image Understanding, 54:358–367, November 1991.
- These algorithms generally show similar performance and stability with real-world noisy data
  - Lorusso et al., A comparison of four algorithms for estimating 3-D rigid transformations, Proceedings of the British Machine Vision Conference (BMVC '95), pages 237–246, Birmingham, England, September 1995.

Extrapolating Transformations: theoretical point-based registration error
(points on a circle of 50 mm radius, selected with RMS error of 2 mm)
- Important factor: registration error is lowest where the 3D landmarks are found
- The distribution of target registration error in rigid-body point-based registration, Fitzpatrick and West, IEEE TMI, 20(9):917–927, 2001.

Approaches to Landmark/Feature Extraction and Matching
- Markers attached to the subject
  - Need to be visible in different types of images
  - Hawkes et al., Registration and display of the combined bone scan and radiograph in the diagnosis and management of wrist injuries, EJNM 1991.
- Manual landmark identification
  - Time consuming, introduces errors, difficult to find true consistent 3D landmarks, but VERY flexible and can adapt to the data
  - Hill et al., Accurate frameless registration of MR and CT images of the head: applications in surgery and radiotherapy planning, Radiology 1994, 447–454.
- Automated landmark identification: geometric models of local anatomy
  - Need to be true unique 3D points in the intensity map: tip-like, saddle-like, and sphere-like structures
  - Need to be consistent across different subjects and image types
  - Wörz and Rohr, Localization of anatomical point landmarks in 3D medical images by fitting 3D parametric intensity models, Medical Image Analysis, 10(1):41–58, 2006.
- Non-homologous landmarks/3D structures
  - Easier to automatically find general features: for example, points on a surface using edge detection
  - But: which point maps to which point?
  - Need to then find correspondence and alignment: point cloud fitting

Early feature-based brain/head matching: "head-hat" matching of head surfaces
Retrospective geometric correlation of MR, CT and PET images, Levin et al., Radiology 1988.


Alignment of non-homologous feature locations: fuzzy correspondence
- General case of two lists of point locations P = [p1, p2, …, pN] and Q = [q1, q2, …, qM], with N and M points respectively, and a list of weights [k_ij] describing the fuzzy correspondence between every possible pair
- Registration error can then be expressed as

  E(R, t) = Σ_i Σ_j k_ij ‖ p_i − (R q_j + t) ‖²

- But we need to estimate R (rotation), t (translation) AND the correspondence weights [k_ij]

Iterative Closest Point (ICP) Algorithm
- Approximates the correspondence matrix [k_ij] by assigning each point to its current closest point:
  - Apply the current transformation R and t to Q = [q1, q2, …, qM], so that q'_j = R q_j + t
  - Take each point in P = [p1, p2, …, pN] and search the list q' to find its nearest point, creating a new list of N correspondences
  - Apply a least-squares fit to the current nearest-point pairs (e.g., using Procrustes point matching) to estimate R and t
  - Repeat until the change in transformation falls below a threshold
- Besl and McKay, A method for registration of 3-D shapes, IEEE PAMI, 14(2):239–256, 1992.
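A brute-force sketch of the ICP loop: re-assign nearest neighbours, refit with a Procrustes step, repeat until the error stops improving. Since (as the slides note) ICP is highly dependent on the starting estimate, the toy example below starts from a small misalignment; the grid data and tolerances are my own choices:

```python
import numpy as np

def icp(p, q, iters=50, tol=1e-9):
    """Basic ICP: rigidly align moving points q (M x 3) to fixed points p (N x 3)."""
    R, t = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        q_moved = q @ R.T + t
        # Step 1: nearest fixed point for every moved point (brute force O(N*M)).
        d = np.linalg.norm(q_moved[:, None, :] - p[None, :, :], axis=2)
        matches = p[d.argmin(axis=1)]
        err = d.min(axis=1).sum()
        if prev_err - err < tol:
            break
        prev_err = err
        # Step 2: least-squares (Procrustes) fit of q to its current matches.
        m_bar, q_bar = matches.mean(axis=0), q.mean(axis=0)
        U, _, Vt = np.linalg.svd((q - q_bar).T @ (matches - m_bar))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # reflection guard
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = m_bar - R @ q_bar
    return R, t

# Toy example: a 3x3x3 grid of fixed points and a slightly rotated/shifted copy.
g = np.array([-1.0, 0.0, 1.0])
p = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T          # (27, 3) fixed points
a = 0.05
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.1, -0.05, 0.02])
q = (p - t_true) @ R_true                                    # so p = R_true q + t_true
R_est, t_est = icp(p, q)
```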

ICP advantages and disadvantages
- Can be applied to discrete landmarks, lines, surfaces, etc.
- But: highly dependent on the starting estimate!
  - Only finds a local optimum
  - Can use multi-start to improve the search
- Searching for the closest point in large point lists or surfaces can be computationally expensive

Challenges in automating medical image registration
- Finding suitable features
  - True 3D landmarks
- Finding the same feature in different types of images
  - No nice edges/corners as in man-made scenes
- Variable/limited anatomical coverage
  - Truncated fields of view
- Variable/low contrast-to-noise


Pixel/Voxel based registration
- History: detecting an object or feature based on pixel/voxel values
- Avoids the need to automatically extract corresponding landmarks or surfaces
- Similarity measures for image registration can assume:
  - Linear intensity mapping
  - Non-parametric 1-to-1 intensity mapping
  - Non-parametric many-to-1 intensity mapping

Iterative refinement of transformation parameters: small displacements
- Consider the 1D case: with no noise, assume w(x) is some exact translation of f(x):  w(x) = f(x + t)
- The image 'mis-match' or error for a given displacement t can simply be the local difference in intensity
- Simplest: image intensity difference  e(t) = f(x + t) − w(x)
- Lucas and Kanade, Proc Image Understanding Workshop, 1981, 121–130.

Iterative refinement of transformation parameters: small displacements AT A POINT
- At one point x, for small t:

  f'(x) ≈ (f(x + t) − f(x)) / t = (w(x) − f(x)) / t

- So the translation t that aligns image f with w at point x is:

  t ≈ [w(x) − f(x)] / f'(x)

Iterative refinement of transformation parameters: small displacements OVER ALL x
- t(x) ≈ [w(x) − f(x)] / f'(x), so the average over all x is

  t ≈ Σ_x t(x) / n_x

- But: because of the local approximations, the first estimate of t does not reach the optimal solution; it just gets you nearer to it
- Need multiple steps, i.e. iterative registration:

  t_0 = 0,  t_{n+1} = t_n + Σ_x t(x) / n_x
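The iterative scheme above can be sketched in 1D with NumPy, resampling f at the current estimate on each pass. The Gaussian test signal and the threshold on f'(x) (to avoid dividing by near-zero gradients) are my own choices:

```python
import numpy as np

def estimate_shift_1d(f, w, iters=20):
    """Iteratively estimate t with w(x) ~= f(x + t):
    t_{n+1} = t_n + mean_x [w(x) - f(x + t_n)] / f'(x + t_n)."""
    x = np.arange(len(f), dtype=float)
    t = 0.0
    for _ in range(iters):
        f_t = np.interp(x + t, x, f)      # f resampled at x + t_n
        df = np.gradient(f_t)             # finite-difference f'(x + t_n)
        mask = np.abs(df) > 1e-3          # skip near-zero gradients
        t += np.mean((w[mask] - f_t[mask]) / df[mask])
    return t

# Smooth synthetic profile and a copy shifted by 1.5 samples: w(x) = f(x + 1.5).
x = np.arange(200, dtype=float)
f = np.exp(-((x - 90.0) ** 2) / 450.0)    # Gaussian, sigma = 15
w = np.exp(-((x - 88.5) ** 2) / 450.0)
t_hat = estimate_shift_1d(f, w)
```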


Minimizing Sum of Squared Intensity Difference

We can use an alternative form of the intensity 'mis-match' or error: the squared intensity difference

  E(t) = Σ_x [f(x + t) − w(x)]²

To find the optimal translation t, set ∂E/∂t = 0:

  0 = ∂/∂t Σ_x [f(x + t) − w(x)]² ≈ ∂/∂t Σ_x [f(x) + t f'(x) − w(x)]²
  0 = Σ_x 2 f'(x) [f(x) + t f'(x) − w(x)]

giving:

  t ≈ Σ_x f'(x) (w(x) − f(x)) / Σ_x f'(x)²

so with t_0 = 0,

  t_{n+1} = t_n + Σ_x f'(x) (w(x) − f(x)) / Σ_x f'(x)²

Extension to N-dimensional images

The squared intensity difference extends to the case where the location x and the translation t are vectors of N dimensions.
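This gradient-weighted update amounts to a one-parameter Gauss-Newton step; a 1D sketch (the test signal is illustrative, not from the slides):

```python
import numpy as np

def ssd_shift_1d(f, w, iters=20):
    """Estimate t minimizing E(t) = sum_x [f(x+t) - w(x)]^2 via the
    linearized update t_{n+1} = t_n + sum f'(w - f) / sum f'^2."""
    x = np.arange(len(f), dtype=float)
    t = 0.0
    for _ in range(iters):
        f_t = np.interp(x + t, x, f)      # f resampled at x + t_n
        df = np.gradient(f_t)             # finite-difference derivative
        t += np.sum(df * (w - f_t)) / np.sum(df * df)
    return t

# Gaussian profile and a copy shifted by 2.5 samples: w(x) = f(x + 2.5).
x = np.arange(200, dtype=float)
f = np.exp(-((x - 90.0) ** 2) / 450.0)    # sigma = 15
w = np.exp(-((x - 87.5) ** 2) / 450.0)
t_hat = ssd_shift_1d(f, w)
```

Unlike the plain average of per-point ratios, the summed-gradient weighting needs no threshold against small f'(x), since weak-gradient points contribute little to either sum.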

Alternative image similarity measures for image alignment

Global Intensity Variations
- Many medical images have 'uncalibrated' intensities
  - Gain or illumination changes
- Common issue: linear intensity scaling and offset, f' = wk + b
- The absolute difference will not tend to zero at the correct match
  - Or worse, its minimum does not correspond to the correct match


Correlation
- Correlation measures the residual errors between the data and a best-fit line to that data
- Allows the relationship to have a different slope and offset
- Robust to global linear changes in brightness/contrast

Sum of Squared Difference and Correlation

  D(x, y) = Σ_{t=1..K} Σ_{s=1..J} [f(x + s, y + t) − w(s, t)]²

This can be expanded to:

  D = ΣΣ f(x + s, y + t)² + ΣΣ w(s, t)² − 2 ΣΣ f(x + s, y + t) w(s, t)

- D will be small when the last term is large or the first two terms are small
- The first two terms are the summed squared intensities within the template region
- Neglecting the first two terms and the factor −2 gives a simple, commonly used measure of template match to be MAXIMIZED:

  c(x, y) = Σ_{t=1..K} Σ_{s=1..J} f(x + s, y + t) w(s, t)

- This sum of products of the intensity pairs at each pixel is the correlation between target and template pixel intensities.
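A direct (and deliberately naive) implementation of c(x, y): slide the template over the target and sum intensity products at each offset. The test image and template are illustrative:

```python
import numpy as np

def correlation_map(f, w):
    """c(x, y) = sum_s sum_t f(x+s, y+t) * w(s, t) for every valid offset."""
    J, K = w.shape
    H, W = f.shape
    out = np.zeros((H - J + 1, W - K + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(f[y:y + J, x:x + K] * w)
    return out

# Bright square hidden in a dim background: the raw correlation peaks on it.
f = np.full((20, 20), 0.1)
f[5:9, 12:16] = 1.0
w = np.ones((4, 4))
c = correlation_map(f, w)
peak = np.unravel_index(c.argmax(), c.shape)
```

Note that with w all ones this measure simply finds the brightest 4x4 region, which is exactly the weakness the normalized variants address.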

Normalized Correlation: boundary overlap
- At image boundaries the overlap of template and target image may be limited: the match score will be reduced by the number of pixels not overlapping
- Can normalize the measure by the number of overlapping pixels, N_t, at template location (x, y):

  c'(x, y) = (1 / N_t) Σ_{t=1..K} Σ_{s=1..J} f(x + s, y + t) w(s, t)

Normalized Correlation: sensitivity to variance
- Correlation is still a function of the overall energy or brightness of the image and template
- so… can still end up picking the brightest target
- One option: normalize by the summed brightness of the target image:

  c'(x, y) = (1 / E) Σ_{t=1..K} Σ_{s=1..J} f(x + s, y + t) w(s, t),
  where E = Σ_{t=1..K} Σ_{s=1..J} f(x + s, y + t)


Correlation Coefficient
- Variance? The match may still be biased by variance in the target/template
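The correlation coefficient removes both mean and variance, making the score invariant to a global gain and offset (f' = kw + b); a sketch with illustrative patches:

```python
import numpy as np

def correlation_coefficient(a, b):
    """Normalized correlation coefficient between two equal-size patches:
    subtract the means and divide by the standard deviations."""
    a0 = a - a.mean()
    b0 = b - b.mean()
    return np.sum(a0 * b0) / np.sqrt(np.sum(a0 ** 2) * np.sum(b0 ** 2))

patch = np.arange(9.0).reshape(3, 3)
rescaled = 3.0 * patch + 10.0            # global gain and offset
r = correlation_coefficient(patch, rescaled)
```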

What about non-linear intensity mappings?

Non-linear intensity mapping


Non-parametric 1-to-1 mapping
- But the intensity mapping is not usually smooth or easily parameterized (e.g., discrete because of different tissue classes)
- Assume only a 1-to-1 mapping of intensities between template and target

Multi-Modality Similarity Measures
Matching images with different tissue contrast properties

SPECT-MRI registration
- Not easy to find 3D landmarks


The effects of misregistration in intensity feature space

Partitioned Image Uniformity [Woods et al., JCAT 1993]
- Used for MRI-PET/SPECT registration
  - MRI scan scalp-edited -> only intra-cranial tissues considered
  - Gray, white, and CSF values in the cranial region differ
    - Non-monotonic mapping
    - Approximately 1-to-1
- Divide the template image intensities w into bins b ∈ B
- The partitioned image uniformity of the target image f is given by the weighted summation of the normalized standard deviations of f within each bin

Many-to-1 Intensity Mapping?
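A sketch of a partitioned-image-uniformity style criterion, to be minimized; the binning and the sigma/mean normalization here are my simplification of Woods' measure:

```python
import numpy as np

def partitioned_image_uniformity(f, w, n_bins=16):
    """Bin the template intensities w; within each bin take the normalized
    standard deviation (sigma/mean) of the target f, weighted by the bin's
    share of voxels. Low values indicate a near-functional intensity mapping."""
    f = f.ravel().astype(float)
    w = w.ravel().astype(float)
    edges = np.linspace(w.min(), w.max(), n_bins + 1)[1:-1]
    bins = np.digitize(w, edges)
    piu = 0.0
    for b in range(n_bins):
        vals = f[bins == b]
        if vals.size == 0 or vals.mean() == 0.0:
            continue
        piu += (vals.size / f.size) * (vals.std() / vals.mean())
    return piu

# Four-valued "template" and a target related by a 1-to-1 intensity mapping:
w_img = np.repeat(np.arange(4.0), 25).reshape(10, 10)
f_aligned = 10.0 + 2.0 * w_img
f_scrambled = np.random.default_rng(2).permutation(f_aligned.ravel()).reshape(10, 10)
piu_good = partitioned_image_uniformity(f_aligned, w_img)
piu_bad = partitioned_image_uniformity(f_scrambled, w_img)
```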


Changes in the 2D Histogram with Alignment

How to characterize alignment? 'Histogram sharpness'

Information and Entropy [Collignon, CVRMED 95]


Mutual Information [Viola and Wells: ICCV 95; Collignon et al.: IPMI 95]
- Joint entropy, like correlation, is influenced by the image structure in the region of image overlap
- Instead: form a measure of the relative information in the target image with respect to the template using mutual information, the difference between the marginal and joint entropies:

  I(F; W) = H(F) + H(W) − H(F, W)

Normalized Mutual Information [Studholme et al., 1998]
- When mutual information is used to evaluate the alignment of two finite images, the overlap is still a confound
  - Both marginal entropies H(F) and H(W) vary: the changing transformation modifies the information provided by the images within the overlap
  - Alignment can be driven by an overlap region that has large H(F) and H(W), e.g., an overlap with equal areas of background and foreground intensities
- Rather than taking the difference of the joint and marginal entropies, maximize their ratio (like a correlation coefficient):

  Y(F, W) = (H(F) + H(W)) / H(F, W)

But… this does not solve all problems!
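Both measures can be computed from the joint histogram of the overlapping voxels; a sketch (the bin count and test images are arbitrary choices of mine):

```python
import numpy as np

def entropies(f, w, bins=32):
    """Marginal and joint Shannon entropies (nats) from the joint histogram."""
    h, _, _ = np.histogram2d(f.ravel(), w.ravel(), bins=bins)
    p = h / h.sum()
    def H(prob):
        prob = prob[prob > 0]
        return -np.sum(prob * np.log(prob))
    return H(p.sum(axis=1)), H(p.sum(axis=0)), H(p.ravel())

def mutual_information(f, w):
    hf, hw, hfw = entropies(f, w)
    return hf + hw - hfw              # I(F;W) = H(F) + H(W) - H(F,W)

def normalized_mi(f, w):
    hf, hw, hfw = entropies(f, w)
    return (hf + hw) / hfw            # Y(F,W) = (H(F) + H(W)) / H(F,W)

# A deterministic, non-linear, many-to-1 intensity mapping keeps the joint
# histogram sharp (high MI); scrambling one image destroys the relationship.
rng = np.random.default_rng(3)
a = rng.integers(0, 4, size=(64, 64)).astype(float)
b = (a - 1.5) ** 2
mi_mapped = mutual_information(a, b)
mi_scrambled = mutual_information(a, rng.permutation(b.ravel()).reshape(64, 64))
```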

It's still only overlaps of intensities… the biggest overlaps drive the registration

3D Rigid MR-CT Registration in the Skull Base


Image Fusion for Skull Base Surgery Planning (Hawkes et al., 1993)
- CT: bone; MR gadolinium: tumor; MRA: blood vessels

Subtraction SPECT Imaging in Epilepsy

Summary
- A range of medical image alignment measures has been developed over the last 15 years
- These vary in the assumptions they make about the relationship between intensities in the two images being matched
- Many other criteria not covered!
- Many ways of modifying the criteria
  - Evaluation at multiple resolutions/scales
  - Edge/boundary/geometric feature extraction: modify contrast
  - Spatial windowing and encoding can localize the criteria
- The best criterion will depend on the type of data you have
  - How different the information provided is, and what contrast is shared
  - How much the images overlap

Bibliography I
- Barnea and Silverman, IEEE Transactions on Computers, 21(2):179–186, 1972.
- Woods et al., MRI-PET registration with automated algorithm, J. Computer Assisted Tomography, 17(4):536–545, 1993.
- Roche et al., The correlation ratio as a new similarity measure for multimodal image registration, Proc. Medical Image Computing and Computer-Assisted Intervention, 1998, 1115–1124, Springer LNCS.
- Hill et al., Voxel similarity measures for automated image registration, Proc. Visualization in Biomedical Computing, ed. R.A. Robb, SPIE Press, 1994, Bellingham.
- Collignon et al., 3D multi-modality medical image registration using feature space clustering, Proc. Computer Vision, Virtual Reality and Robotics in Medicine, 1995, 195–204, Springer LNCS.
- Collignon et al., Automated multi-modality image registration based on information theory, Proc. Information Processing in Medical Imaging, 1995, 263–274, Kluwer, Dordrecht.


Bibliography II
- Viola and Wells, Alignment by maximization of mutual information, Proc. International Conference on Computer Vision, 1995, 16–23, ed. Grimson, Schafer, Blake, Sugihara.
- Studholme et al., A normalised entropy measure of 3D medical image alignment, Proc. SPIE Medical Imaging 1998, San Diego, SPIE Press, 132–143.
- Studholme et al., An overlap invariant entropy measure of 3D image alignment, Pattern Recognition, 32(1):71–86, 1999.
- Maes et al., Multimodality image registration by maximization of mutual information, IEEE Transactions on Medical Imaging, 16(2):187–198, 1997.
