
Medical Image Analysis

Colin Studholme
Associate Professor in Residence
Biomedical Image Computing Group, http://radiology.ucsf.edu/bicg
[email protected]
Department of Radiology and Biomedical Imaging, University of California San Francisco
UCSF/UCB Joint Graduate Group in Biomedical Engineering

• Large collection of research fields:
– developing mathematical algorithms to extract and relate information from medical images
– for clinical and basic science research
• No "Physics of Medical Image Analysis"
– groups of suitable algorithms and mathematical approaches to specific engineering problems
• Historically two key (and related) aspects of research:
– Image Registration: finding spatial/temporal correspondences between image data and/or models
– Image Segmentation: extracting/detecting specific features of interest from image data
• Many clinical motivations:
– one of the key areas has been brain imaging, but many more!


Medical Image Registration: Overview

• What is Registration?
– Definitions
– Classifications: Geometries, Transformations, Measures
• Motivation for work: medical image mis-registration
– Where is image registration used in clinical and biomedical research?
• Measures of Image Alignment:
– Landmark/Feature Based Methods
– Voxel Based Methods:
• Image Intensity Difference and Correlation
• Multi-Modality Measures

Registration

"the determination of a one-to-one mapping between the coordinates in one space and those in another, such that points in the two spaces that correspond to the same anatomical point are mapped to each other."
Calvin Maurer '93


Image Registration

"Establishing correspondence between features in sets of images, and using a transformation model to infer correspondence away from those features."
Bill Crum '05

Key Applications I: Change detection

• Look for differences in the same type of images, taken at different times, e.g.:
– Mapping pre and post contrast agent
• Digital Subtraction Angiography (DSA)
• MRI with Gadolinium tracer
– Mapping structural changes
• Different stages in tumor growth (before and after treatment)
• Neurodegenerative disease -> quantifying tissue loss patterns
– Detecting changes due to function
• Functional MRI (fMRI) before and after brain stimulus
• PET imaging: quantitative tracer uptake measurements
• Problem:
– Subject scanned multiple times -> removed from scanner
– We cannot easily fix/know patient location and orientation with respect to the imaging system
– Need to remove differences in patient positioning to detect true differences in patient images


Key Applications II

• Relate contrasting information from different types of images
• Multi-Modality Imaging
– MRI-CT
– MRI-PET/SPECT
– structural MRI - functional MRI
– structural MRI to structural MRI
• Problem:
– Subject scanned multiple times -> different scanners
– We cannot easily fix/know patient location and orientation with respect to the different imaging systems
– Need to remove differences in patient positioning to relate information from different types of images

Some imaging Modalities (sagittal slices shown)


Components of Image Registration Algorithms

• Image Data Geometries: 2D-2D, 2D-3D, 3D-3D
• Transformation Type: Rigid/Affine/Non-Rigid
• Correspondence Criteria/Measure: Feature Based Methods; Voxel Based/Dense Field Methods
• Optimization Method: maximizing/minimizing the criteria wrt T()
[Diagram: PET(x) mapped to MRI(y) by y = T(x)]

Examples of Image Geometries and Transformation Models in Medical Applications


2D-2D Inter-Modality Image Registration Problem

• X-ray radiograph and Nuclear Medicine (Technetium 99m) images of the wrist

Registration and display of the combined bone scan and radiograph in the diagnosis and management of wrist injuries, Hawkes et al, EJNM 1991

2D-2D image transformations - Simple case: parallel projection

• 2 translations (up-down / left-right) and one rotation, e.g. hand radiographs
• Rigid 2D transformation controlled by a rotation θ and two translation parameters t1 and t2:

y1 = cos θ · x1 − sin θ · x2 + t1
y2 = sin θ · x1 + cos θ · x2 + t2

2D-2D image transformations

• This is a linear mapping from (x1, x2) to (y1, y2):

y1 = a11·x1 + a12·x2 + a13
y2 = a21·x1 + a22·x2 + a23

• Matrix form y = Ax, extended to a 3×3 matrix (homogeneous coordinate transformation, rows separated by ";"):

[y1; y2; 1] = [a11 a12 a13; a21 a22 a23; 0 0 1] [x1; x2; 1]

• The rigid case corresponds to a11 = cos θ, a12 = −sin θ, a21 = sin θ, a22 = cos θ, with t1 = a13 and t2 = a23 (a numerical sketch follows below).
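As an illustration of the homogeneous form above, here is a minimal NumPy sketch that builds the 3×3 rigid matrix and applies it to a few 2D points; the function name `rigid_2d` and the example values are not from the slides.

```python
import numpy as np

def rigid_2d(theta, t1, t2):
    """3x3 homogeneous matrix: rotation by theta followed by translation (t1, t2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, t1],
                     [s,  c, t2],
                     [0,  0,  1]])

# Apply y = A x to points given as rows (x1, x2).
points = np.array([[10.0, 5.0], [0.0, 0.0], [3.0, -2.0]])
A = rigid_2d(np.deg2rad(30), 2.0, -1.0)
homog = np.hstack([points, np.ones((len(points), 1))])   # (x1, x2, 1)
mapped = (A @ homog.T).T[:, :2]                           # (y1, y2)
print(mapped)
```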

Rotating around a given location

• Rotation about a fixed point (x, y): T(x, y) · R(θ) · T(−x, −y)

[1 0 x; 0 1 y; 0 0 1] · [cos θ −sin θ 0; sin θ cos θ 0; 0 0 1] · [1 0 −x; 0 1 −y; 0 0 1]
= [cos θ  −sin θ  x(1 − cos θ) + y·sin θ;  sin θ  cos θ  y(1 − cos θ) − x·sin θ;  0 0 1]

Image Scaling

• Scaling with respect to a fixed point (x, y): T(x, y) · S(sx, sy) · T(−x, −y)

[1 0 x; 0 1 y; 0 0 1] · [sx 0 0; 0 sy 0; 0 0 1] · [1 0 −x; 0 1 −y; 0 0 1]
= [sx 0 (1 − sx)·x;  0 sy (1 − sy)·y;  0 0 1]

• Scaling along an arbitrary axis: R⁻¹(θ) · S(sx, sy) · R(θ)
(a numerical sketch of the fixed-point composition follows below)
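A quick numerical check of the "translate pivot to origin, rotate, translate back" composition above, verified against the closed form on the slide; the pivot and angle are arbitrary example values.

```python
import numpy as np

def translation(tx, ty):
    return np.array([[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]])

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

x, y, theta = 4.0, -3.0, np.deg2rad(25)

# Rotate about (x, y): move the pivot to the origin, rotate, move back.
M = translation(x, y) @ rotation(theta) @ translation(-x, -y)

# Closed form from the slide.
c, s = np.cos(theta), np.sin(theta)
M_closed = np.array([[c, -s, x * (1 - c) + y * s],
                     [s,  c, y * (1 - c) - x * s],
                     [0,  0, 1.0]])
assert np.allclose(M, M_closed)
```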


Influence of Affine Transformations on Images

• Map lines to lines
• Map parallel lines to parallel lines
• Preserve ratios of distance along a line
• Do NOT preserve absolute distances and angles

Composing Transformations

• To apply, we need to COMPOSE, e.g. R(−90°) · T(x, 3)
• But: matrix multiplication is not commutative: T(x, 3) · R(−90°) ≠ R(−90°) · T(x, 3)
• i.e. rotation followed by translation ≠ translation followed by rotation (see the sketch below)
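A tiny check of the non-commutativity point, with the translation taken as (3, 0) purely for illustration:

```python
import numpy as np

T = np.array([[1.0, 0.0, 3.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])   # translate by (3, 0)
R = np.array([[0.0, 1.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])  # rotate by -90 degrees

print(np.allclose(T @ R, R @ T))   # False: the order of composition matters
```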


2D-3D registration problems

• Radiotherapy:
– Treatment verification (portal images)
– Treatment planning (simulator images)
– Treatment guidance (Cyberknife system)
• Orthopaedic surgery:
– Spinal surgery (Brain Lab, Medtronic)
– Hip or knee arthroplasty (ROBODOC)
– Verification of implant position
• Neurointerventions:
– Matching MRA to DSA
• Surgical microscope:
– MRI/CT neurosurgery guidance
• Virtual endoscopy

2D-3D Registration Geometry

"the determination of a projection mapping, from a 3D to a 2D coordinate system, such that points in each space which correspond to the same anatomical points are mapped to each other."

• 2D Image Data related to 3D Image Data by:
– (A) Imaging/Acquisition Parameters (intrinsic)
– (B) Subject Parameters (extrinsic)


2D-3D Acquisition parameters: Pinhole Camera Model (projection)

• 4 intrinsic parameters: k1, k2, uRPP, vRPP
• Mapping points in 3D space to the imaging plane (detector array):

λ [u; v; 1] = [k1 0 uRPP 0; 0 k2 vRPP 0; 0 0 1 0] [x; y; z; 1]

2D-3D: Subject (extrinsic) parameters

• Imaging system: 6 rigid body extrinsic parameters X, Y, Z, θx, θy, θz (with scaling/resolution from the intrinsic parameters)
• Composed transformations from X-ray source to detector:

λ [u; v; 1] = P · R · T [x; y; z; 1]

(a small numerical sketch follows below)
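A minimal sketch of the projection above, composing a rigid extrinsic transform with the 3×4 intrinsic matrix and dividing out the projective scale λ; the function name and the numerical values are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def project(points_3d, k1, k2, u_rpp, v_rpp, R, t):
    """Project 3D points to the 2D detector: extrinsic (R, t), then intrinsic matrix."""
    K = np.array([[k1, 0.0, u_rpp, 0.0],
                  [0.0, k2, v_rpp, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])
    E = np.eye(4)
    E[:3, :3], E[:3, 3] = R, t              # rigid-body subject (extrinsic) parameters
    homog = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    uvw = (K @ E @ homog.T).T               # rows are (lambda*u, lambda*v, lambda)
    return uvw[:, :2] / uvw[:, 2:3]         # divide out the projective scale lambda

pts = np.array([[0.0, 0.0, 500.0], [10.0, -5.0, 520.0]])
print(project(pts, k1=1000.0, k2=1000.0, u_rpp=256.0, v_rpp=256.0,
              R=np.eye(3), t=np.zeros(3)))
```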

3D to 3D image registration

• Many different 3D clinical imaging modalities
– MRI probably still the least common
• Images used in many different clinical settings
– diagnosis
– treatment planning
– treatment guidance
– clinical research: studying disease
• Transformation types:
– Rigid positioning of subject: still most common (inc. planning)
– Non rigid deformations to describe:
• tissue deformation
• imaging distortion
• differences between subjects

3D-3D Registration

[Diagram: Image Visit 1 and Image Visit 2 (voxels, scaling k1, k2, k3) are each related through the imaging process to Patient Imaging Visit 1 and Visit 2; the differences in patient positioning within the scanner are described by X, Y, Z, θx, θy, θz]

3D Rigid Transformations

• 3 translations x, y, z and 3 rotations, combined as a 4×4 homogeneous matrix:

T(p) = [M3×3 T3×1; 0 1] [p3×1; 1]

• Rotations about z (θ), y (ω) and x (φ) (a numerical sketch follows after this slide pair):

[x'; y'; z'; 1] = [cos θ −sin θ 0 0; sin θ cos θ 0 0; 0 0 1 0; 0 0 0 1] [x; y; z; 1]
[x'; y'; z'; 1] = [cos ω 0 −sin ω 0; 0 1 0 0; sin ω 0 cos ω 0; 0 0 0 1] [x; y; z; 1]
[x'; y'; z'; 1] = [1 0 0 0; 0 cos φ −sin φ 0; 0 sin φ cos φ 0; 0 0 0 1] [x; y; z; 1]

Feature Based Registration

• Image Data Geometries: 2D-2D, 2D-3D, 3D-3D
• Transformation Type: Rigid/Affine/Non-Rigid
• Correspondence Criteria/Measure: Feature Based Methods; Voxel Based/Dense Field Methods
• Optimization Method: maximizing/minimizing the measure wrt T()
[Diagram: PET(x) mapped to MRI(y) by y = T(x)]

1. Extract features from images
2. Evaluate physical distance between features
3. Minimize distance
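A short sketch of building the 4×4 rigid transform from the three rotations and three translations above; the composition order (z, then y, then x) is one possible convention, not specified by the slides, and the function names are illustrative.

```python
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(omega):
    c, s = np.cos(omega), np.sin(omega)
    return np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])

def rot_x(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rigid_3d(theta, omega, phi, tx, ty, tz):
    """4x4 homogeneous matrix from 3 rotation and 3 translation parameters."""
    T = np.eye(4)
    T[:3, :3] = rot_z(theta) @ rot_y(omega) @ rot_x(phi)
    T[:3, 3] = [tx, ty, tz]
    return T

p = np.array([10.0, 0.0, 5.0, 1.0])                      # homogeneous point
print(rigid_3d(np.deg2rad(10), 0.0, np.deg2rad(-5), 1.0, 2.0, 0.0) @ p)
```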


Feature Based Approaches

– Point set -> Point set (homologous)
– Point set -> Point set (non-homologous, so need to find the ordering)
– Point set -> Surface
– Surface -> Surface
– (also [space] Curve -> Surface)

[Diagram: corresponding points 1, 2, 3 identified in CT and MR; in the non-homologous case the ordering of the points is unknown]


MR-CT Registration

• Manual point landmark identification (around 12 points) in MR and CT
• Accuracy of 1mm at the center, and around 2mm at the edge
• Relates soft tissue structures, such as an enhancing tumor and blood flow, to bone features in CT

Homologous Feature Based Alignment

• General case of two lists of corresponding feature locations [p1, p2 … pN] and [q1, q2 … qN], both with N points
• We want to find the transformation T(q) that minimizes the squared distance between corresponding points:

E = Σr || pr − T(qr) ||²

where one set of points, q, is transformed by T()
-> Extrapolate the transformation for all image voxels/pixels


Procrustes Point Alignment [Golub & Van Loan, Matrix Computations, 1989]

• Remove translational differences: calculate the translation that brings each point set to the origin and subtract it from each set, creating centered point sets:
q'i = qi − (1/N) Σi qi
p'i = pi − (1/N) Σi pi
• Re-write the centered point lists as N×3 matrices P and Q, with one row (xi, yi, zi) per point.
• Estimate rotations: the system Pᵀ = R·Qᵀ is over-determined and there is noise, so we want to find the rotation matrix R such that
min over R of || Pᵀ − R·Qᵀ ||²

Procrustes Point Alignment

• Scale (Procrustes normally includes an estimate of scaling)
– but we can assume scanner voxel dimensions are accurate
• Starting from Pᵀ = R·Qᵀ, form the correlation matrix QᵀP of the centered point lists
• Decompose it using the singular value decomposition (SVD): QᵀP -> U S Vᵀ
• Here S is a diagonal matrix and R = V·Uᵀ is the rotation matrix
• For 2D: R = V·Uᵀ = [cos θ −sin θ; sin θ cos θ]
(a numerical sketch follows below)

K. S. Arun, T. S. Huang, and S. D. Blostein. Least square fitting of two 3-d point sets. IEEE Transactions on Pattern Analysis and Machine Intelligence, 9(5):698-700, 1987.
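A minimal sketch of the SVD-based Procrustes alignment in the style of Arun et al. (1987), with a reflection guard on the determinant; the function name and the self-check values are illustrative assumptions.

```python
import numpy as np

def procrustes_rigid(p, q):
    """Least-squares rigid alignment of paired landmark lists (N x 3 arrays).

    Returns R, t such that p_i ~ R @ q_i + t minimizes the summed squared distance."""
    p_bar, q_bar = p.mean(axis=0), q.mean(axis=0)
    pc, qc = p - p_bar, q - q_bar                 # remove translational differences
    H = qc.T @ pc                                 # 3x3 correlation of the centered sets
    U, S, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against a reflection
    R = Vt.T @ D @ U.T
    t = p_bar - R @ q_bar
    return R, t

# Quick self-check with a known rotation/translation and no noise.
rng = np.random.default_rng(0)
q = rng.normal(size=(12, 3))
theta = np.deg2rad(20)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
p = q @ R_true.T + np.array([5.0, -2.0, 1.0])
R, t = procrustes_rigid(p, q)
assert np.allclose(R, R_true) and np.allclose(t, [5.0, -2.0, 1.0])
```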

Alternatives to SVD alignment

Alternative transformation decompositions and parameterizations can be used, e.g.:
• Quaternion methods:
– B. K. P. Horn. Closed-form solution of absolute orientation using unit quaternions. Journal of the Optical Society of America A, 4(4):629-642, April 1987.
• Orthonormal matrices:
– B. K. P. Horn, H. M. Hilden, and Sh. Negahdaripour. Closed-form solution of absolute orientation using orthonormal matrices. Journal of the Optical Society of America A, 5(7):1127-1135, July 1988.
• Dual quaternions:
– M. W. Walker, L. Shao, and R. A. Volz. Estimating 3-d location parameters using dual number quaternions. CVGIP: Image Understanding, 54:358-367, November 1991.
• These algorithms generally show similar performance and stability with real world noisy data:
– A. Lorusso, D. Eggert, and R. Fisher. A Comparison of Four Algorithms for Estimating 3-D Rigid Transformations. In Proceedings of the British Machine Vision Conference (BMVC '95), pages 237-246, Birmingham, England, September 1995.

Manual Landmark Based Registration

D.L.G. Hill, et al, Accurate Frameless Registration of MR and CT Images of the Head: Applications in Surgery and Radiotherapy Planning, Radiology, 191, 1994, pp 447-454.


Extrapolating Transformations: Theoretical Point-Based Registration Error

• Points on a circle of 50mm radius, selected with an RMS error of 2mm
[Plot: RMS registration error (mm) against position (−100mm to +100mm) relative to the landmark configuration, for 4, 8 and 16 points]
• Important factor: registration error is lowest where 3D landmarks can be found

Fitzpatrick, J.M., West, J.B., The distribution of target registration error in rigid-body point-based registration, IEEE Transactions on Medical Imaging, 20(9):917-927, Sep 2001.

Approaches to Landmark/Feature Extraction and Matching

• Markers attached to the subject (rigid bone attachment?)
– Need to be visible in different types of images
[Hawkes et al, Registration and display of the combined bone scan and radiograph in the diagnosis and management of wrist injuries, EJNM 1991]
• Manual landmark identification
– Time consuming, introduces errors, difficult to find true consistent 3D landmarks: but VERY flexible and can adapt to the data
[D.L.G. Hill, et al, Accurate Frameless Registration of MR and CT Images of the Head: Applications in Surgery and Radiotherapy Planning, Radiology, 191, 1994, pp 447-454]
• Automated landmark identification: geometric models of local anatomy
– Need to be true unique 3D points in the intensity map: tip-like, saddle-like, and sphere-like structures
– Need to be consistent in different subjects and image types
[Stefan Wörz, Karl Rohr, Localization of anatomical point landmarks in 3D medical images by fitting 3D parametric intensity models, Medical Image Analysis, 10(1):41-58, Feb 2006]
• Non-homologous landmarks / 3D structures:
– Easier to automatically find general features: for example, points on a surface using edge detection
– But: which point maps to which point?
– Need to then find correspondence and alignment: point cloud fitting

Early Feature Based Brain/Head Matching

• "Head-hat" matching of head surfaces
[Levin, Pelizzari, Chen, Chen, Cooper, Retrospective Geometric Correlation of MR, CT and PET Images, Radiology, 1988]
• Chamfer distance matching: Borgefors 1986, Jiang 1992

Alignment of non-Homologous Feature Locations: fuzzy correspondence

• General case of two lists of point locations P = [p1, p2 … pN] and Q = [q1, q2 … qM], with N and M points respectively, and a list of weights [kij] describing the fuzzy correspondence between every possible point pair
• The registration error can then be expressed as:

E(R, t) = Σi Σj kij || pi − (R·qj + t) ||²

• But… we need to estimate both R, t and the correspondences [kij]

Iterative Closest Point Algorithm

Approximates the correspondence matrix [kij] by assigning each point to the current closest point (a minimal sketch follows below):
• Apply the current transformation R and t to Q = [q1, q2 … qM], so that Q'i = R·qi + t
• Take each point pi and search the list Q' to find its nearest point, creating a new list of N point pairs
• Apply a least squares fit to the current nearest points (e.g. using Procrustes point matching) to estimate R and t
• Repeat until the change in transformation falls below a threshold

P. Besl and N. McKay. A method for Registration of 3-D Shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 14(2):239-256, February 1992.
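A bare-bones sketch of the ICP loop described above, pairing each point with its current closest match and re-fitting a rigid transform by SVD; the function names, iteration count and stopping tolerance are illustrative choices, not prescribed by the slides.

```python
import numpy as np

def fit_rigid(p, q):
    """Least-squares R, t with p ~ R q + t (SVD fit on paired points)."""
    pb, qb = p.mean(axis=0), q.mean(axis=0)
    H = (q - qb).T @ (p - pb)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    return R, pb - R @ qb

def icp(p, q, iters=50, tol=1e-8):
    """Basic ICP: pair each p_i with its closest transformed q_j, then re-fit."""
    R, t = np.eye(3), np.zeros(3)
    prev = np.inf
    for _ in range(iters):
        q_moved = q @ R.T + t                        # apply current transformation to q
        d = np.linalg.norm(p[:, None, :] - q_moved[None, :, :], axis=2)
        err = d.min(axis=1).mean()
        R, t = fit_rigid(p, q[d.argmin(axis=1)])     # Procrustes fit of current pairs
        if abs(prev - err) < tol:                    # stop when the fit stops changing
            break
        prev = err
    return R, t
```

Usage would be along the lines of `R, t = icp(points_from_mr_surface, points_from_ct_surface)` for two unpaired boundary point clouds (variable names hypothetical); as the slides note, the result depends strongly on the starting estimate.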


I.C.P. advantages and disadvantages

• Can be applied to discrete landmarks, lines, surfaces etc.
• But: highly dependent on the starting estimate!
– Only finds a local optimum
– Can use multi-start to improve the search
• Searching for the closest point in large point lists or surfaces can be computationally expensive

Improvements/Adaptations in ICP

• Orientation driven ICP: matches location and local boundary/surface orientation [Godin 2001, Schutz 1998]
• Subsampling: choosing only meaningful points
– e.g. points on curved parts of the surface
• Optimized searching techniques to find the closest point; multi-resolution matching
• Outlier rejection: to handle outliers and also incomplete overlap

[S. Rusinkiewicz, M. Levoy, Efficient Variants of the ICP Algorithm, Proc. 3rd International Conference on 3-D Digital Imaging and Modeling, 2001]

Fuzzy Correspondence and Point Matching

Now a very large field in both computer vision and medical image analysis, with many different approaches proposed:

H. Chui and A. Rangarajan, A new point matching algorithm for non-rigid registration, Computer Vision and Image Understanding, 89(2-3), 2003.
Z. Xue, D. Shen, E.K. Teoh, An efficient fuzzy algorithm for aligning shapes under affine transformations, Pattern Recognition, 34(6), June 2001.

Validation of Rigid Body Registration

• Between-modality validation is a difficult problem:
– Need to introduce corresponding features visible in the different imaging systems
– That can be found accurately in each modality
– That have a fixed relationship with the underlying anatomy

C.R. Maurer, J.M. Fitzpatrick, M.Y. Wang, R.L. Galloway, R.J. Maciunas, G.S. Allen, Registration of Head Volume Images Using Implantable Fiducial Markers (1997), IEEE Transactions on Medical Imaging.

• This was successfully used to evaluate MRI-CT, MRI-MRI and MRI-PET registration using bone implanted markers:

J. West, J.M. Fitzpatrick, M.Y. Wang, B.M. Dawant, C.R. Maurer, et al., Comparison and Evaluation of Retrospective Intermodality Brain Image Registration Techniques, J. Comp. Assist. Tomog., 21(4), 1997, pp 554-566.


Challenges in Automating Medical Image Registration

– Finding suitable features
• e.g. true 3D landmarks
– Finding the same feature in different types of images
– Not computer vision!
• no nice edges/corners and man-made scenes
– Variable/limited anatomical coverage
• no scanner images the whole body
• truncated: part of the head, or cut at the neck
• makes using global methods difficult
– Variable/low contrast to noise

Feature Based Registration

• Image Data Geometries: 2D-2D, 2D-3D, 3D-3D
• Transformation Type: Rigid/Affine/Non-Rigid
• Correspondence Criteria/Measure: Feature Based Methods; Voxel Based/Dense Field Methods
• Optimization Method: maximizing/minimizing the measure wrt T()
[Diagram: PET(x) mapped to MRI(y) by y = T(x)]

1. Define a suitable image similarity measure
2. Optimise the similarity wrt the transform T()


Pixel/Voxel Based Registration

• History:
– Detecting an object or feature based on pixel/voxel values
• Avoid the need to automatically extract corresponding landmarks or surfaces
• Similarity measures for image registration can assume:
– a linear intensity mapping
– a non-parametric 1-to-1 intensity mapping
– a non-parametric many-to-1 intensity mapping
• Simplest: image intensity difference

Iterative Refinement of Transformation Parameters: small displacements

• Consider the 1D case: assume, for no noise, that w(x) is some exact translation of f(x):
w(x) = f(x + t)
• The image 'mis-match' or error for a given displacement t can then simply be the local difference in intensity:
e(t) = f(x + t) − w(x)

[Lucas and Kanade, An Iterative Image Registration Technique with an Application to Stereo Vision, Proc. Image Understanding Workshop, pp 121-130, 1981]

Iterative Refinement of Transformation Parameters: small displacements AT A POINT

• At one point x, for small t:
f'(x) ≈ (f(x + t) − f(x)) / t
which, taking w(x) = f(x + t) from above, is
f'(x) ≈ (w(x) − f(x)) / t
• So, the translation t to align image f with w at point x is then:
t ≈ [w(x) − f(x)] / f'(x)

Iterative Refinement of Transformation Parameters: small displacements OVER ALL x

• t(x) ≈ [w(x) − f(x)] / f'(x), so the average over x is t ≈ Σx t(x) / nx
• But: because of the local approximations, the first estimate of t does not get you to the optimal solution, it just gets you nearer to it
• Need multiple steps: iterative registration:
if t0 = 0 then tn+1 = tn + Σx t(x) / nx

[Lucas and Kanade, An Iterative Image Registration Technique with an Application to Stereo Vision, Proc. Image Understanding Workshop, pp 121-130, 1981]

Iterative Refinement of Transformation Parameters: small displacements OVER ALL x

• t(x) ≈ [w(x) − f(x)] / f'(x), so the average over x is t ≈ Σx t(x) / nx
• Weighted average: use the contributions where the linear approximation to f'(x) is better, i.e. weight towards points where |f''(x)| is closer to zero
• A possible weight k(x), where f''(x) ≈ (w'(x) − f'(x)) / t at point x, is:
k(x) = 1 / |w'(x) − f'(x)|
giving t = Σx k(x)·t(x) / Σx k(x)
so if t0 = 0 then tn+1 = tn + Σx k(x)·t(x) / Σx k(x)

Minimizing Sum of Squared Intensity Difference

• If we use an alternative form for the intensity 'mis-match' or error, the squared intensity difference:
E(t) = Σx [f(x + t) − w(x)]²
• To find the optimal transformation t, set 0 = ∂E/∂t, so:
0 = ∂/∂t Σx [f(x + t) − w(x)]² ≈ ∂/∂t Σx [f(x) + t·f'(x) − w(x)]² = Σx 2·f'(x)·(f(x) + t·f'(x) − w(x))
• Giving:
t ≈ Σx f'(x)·(w(x) − f(x)) / Σx f'(x)²
so if t0 = 0 then tn+1 = tn + Σx f'(x)·(w(x) − f(x)) / Σx f'(x)²
(a numerical sketch follows below)

[Lucas and Kanade, Proc. Image Understanding Workshop, pp 121-130, 1981]
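A minimal 1D sketch of the gradient-weighted iterative update above; the function name, signal and shift are illustrative test values, not from the slides.

```python
import numpy as np

def estimate_shift_1d(f, w, iters=20):
    """Iteratively estimate t with w(x) ~ f(x + t), using
    t_{n+1} = t_n + sum(f'(x)(w(x) - f(x + t_n))) / sum(f'(x)^2)."""
    x = np.arange(len(f), dtype=float)
    t = 0.0
    for _ in range(iters):
        f_shift = np.interp(x + t, x, f)          # resample f at the shifted positions
        grad = np.gradient(f_shift)               # f'(x + t), finite differences
        denom = np.sum(grad ** 2)
        if denom == 0:
            break
        t += np.sum(grad * (w - f_shift)) / denom
    return t

# Smooth 1D test signal shifted by a known sub-pixel amount.
x = np.arange(200, dtype=float)
f = np.exp(-((x - 100.0) ** 2) / 200.0)
w = np.exp(-((x - 97.5) ** 2) / 200.0)            # equals f(x + 2.5)
print(estimate_shift_1d(f, w))                    # ~2.5
```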

Extension to N dimensional Images

• The squared intensity difference can be extended to the case where the location x and the translation t are vectors of N dimensions:
E(t) = Σx∈R [f(x + t) − w(x)]²
• As for 1D, we can then use a linear approximation for small t, so that:
f(x + t) ≈ f(x) + t·∂f(x)/∂x
• Setting 0 = ∂E/∂t gives:
0 = ∂/∂t Σx∈R [f(x) + t·∂f/∂x − w(x)]² = Σx∈R 2·(∂f/∂x)·[f(x) + t·∂f/∂x − w(x)]
and then
t ≈ [ Σx∈R (∂f/∂x)ᵀ·(w(x) − f(x)) ]·[ Σx∈R (∂f/∂x)ᵀ·(∂f/∂x) ]⁻¹
(see the sketch after this slide pair)

Other forms of Linear Spatial Transformations

• The mapping from one space to the other can be described by a linear 3×3 transformation matrix A and a translation vector t. So, we assume at the correct transformation:
w(x) = f(xA + t)
and the intensity error is
E(A, t) = Σx∈R [f(xA + t) − w(x)]²
• To minimize this, one approach is to use a linear approximation to changes in the transformation:
f(x(A + ΔA) + (t + Δt)) ≈ f(xA + t) + (xΔA + Δt)·∂f/∂x

[Lucas and Kanade, Proc. Image Understanding Workshop, pp 121-130, 1981]
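A small 2D sketch of the N-dimensional translation estimate above, solving the 2×2 normal equations each iteration; it uses SciPy's `ndimage.shift` for resampling, and the test image and shift values are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def estimate_shift_2d(f, w, iters=20):
    """2D translation t with w(x) ~ f(x + t): each iteration solves
    sum(grad_f grad_f^T) dt = sum(grad_f (w - f_t))."""
    t = np.zeros(2)
    for _ in range(iters):
        f_t = ndimage.shift(f, -t, order=1, mode='nearest')   # resampled f(x + t)
        gy, gx = np.gradient(f_t)
        g = np.stack([gy.ravel(), gx.ravel()], axis=1)        # N x 2 spatial gradients
        A = g.T @ g                                           # 2 x 2 normal matrix
        b = g.T @ (w - f_t).ravel()
        t += np.linalg.solve(A, b)
    return t

# Synthetic test: a smooth blob translated by a known sub-pixel amount.
yy, xx = np.mgrid[0:128, 0:128].astype(float)
f = np.exp(-((yy - 64) ** 2 + (xx - 64) ** 2) / 200.0)
w = np.exp(-((yy - 61.5) ** 2 + (xx - 66.0) ** 2) / 200.0)    # equals f(x + (2.5, -2.0))
print(estimate_shift_2d(f, w))                                # ~[ 2.5 -2.0]
```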

Global Intensity Variations

• Many medical images have 'uncalibrated' intensities
– gain or illumination changes
• Common issue: linear intensity scaling and offset, f' = w·k + b
[Figure: joint scatter plots of target image intensity against template intensity, with and without an intensity scaling/offset]
• Absolute difference will not tend to zero at the correct match:
– or worse: the minimum does not correspond to the correct match

Alternative Image Similarity Measures for Image Alignment


Sum of Squared Difference and Correlation

[Figure: joint scatter plots of target image intensity against template intensity for an overall brightness increase, brightness decrease, and contrast increase in the target]

Correlation

• Correlation measures the residual errors between the data and a best fit of a line to that data
• Allows the relationship to have a different slope and offset
• So: robust to global linear changes in brightness/contrast
(a numerical illustration follows below)

[Pratt, 1974], [Pratt, Digital Image Processing, 1978], [Rosenfeld, Kak, Digital Picture Processing, 1976]
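A small sketch contrasting the raw measures: sum of squared differences and an unnormalised correlation sum, the latter showing why a global brightness/contrast change in the target still changes the score; function names and the random test image are illustrative.

```python
import numpy as np

def ssd(target, template):
    """Sum of squared intensity differences over the overlap (lower is better)."""
    return np.sum((target - template) ** 2)

def cross_correlation(target, template):
    """Unnormalised correlation sum(target * template) (higher is better); it is
    sensitive to overall brightness/contrast changes in either image."""
    return np.sum(target * template)

t = np.random.default_rng(1).random((64, 64))
print(ssd(t, t))                                                   # 0.0 at exact match
print(cross_correlation(t, 1.5 * t + 10.0) > cross_correlation(t, t))  # True: brighter wins
```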

Normalised Correlation: Boundary Overlap

• Can normalize the measure by the number of overlapping pixels

Normalised Correlation: Sensitivity to Variance

[Images illustrating that the normalised correlation remains sensitive to the variance of the intensities within the overlap]


Variance?

• Problem: the match may still be biased by the variance in the target/template
[Figure: joint scatter plots of target image intensity against template intensity for overlaps with differing variance]

Correlation Coefficient

• Normalise the correlation by the summed residuals around the mean in the template and target overlap
(a short sketch of this measure follows below)
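A minimal sketch of the correlation coefficient described above; normalising by the residuals about each mean makes the measure invariant to a global linear intensity change, as the check at the end illustrates (names and test values are illustrative).

```python
import numpy as np

def correlation_coefficient(target, template):
    """Normalised correlation: residuals about each mean, divided by the product of
    the two standard deviations over the overlap."""
    a = target - target.mean()
    b = template - template.mean()
    return np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))

img = np.random.default_rng(2).random((64, 64))
print(correlation_coefficient(img, 1.5 * img + 10.0))   # 1.0: invariant to linear rescaling
```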


What about non-linear intensity mappings?

[Figure: joint scatter plots of target image intensity against template intensity where the relationship is not a straight line]

Non-Linear Intensity Mapping

[Figure: example of a smooth but non-linear mapping between template and target intensities]


Non-parametric 1-to-1 mapping

• But the intensity mapping is not usually smooth or easily parameterized (e.g. discrete, because of different tissue classes)
• Assume only a 1-to-1 mapping of intensities between template and target
[Figure: joint scatter plots of target intensity f against template intensity w before and after registration]

Multi Modality Similarity Measures

Matching images with different tissue contrast properties
[Figure: target image f and template image w with different tissue contrasts, before and after registration]

SPECT-MRI registration

• Patient orientated differently with respect to the scanner coordinates:
1. Head rest design
2. Gantry angle limitations
3. No easy-to-find 3D landmarks


The Effects of Misregistration

[Images: HMPAO SPECT overlaid on MRI when registered, and when translated by 2mm, 5mm and 8mm]

Partitioned Image Uniformity in Intensity Feature Space [Woods et al JCAT 93]

• Used for MRI-PET/SPECT registration
– MRI scan scalp edited -> only consider intra-cranial tissues
– Grey-white-CSF values in the cranial region differ
• Non-monotonic mapping
• Approximately 1-to-1
[Figure: joint histogram of PET/SPECT intensity against T1 weighted MRI intensity w, with clusters for grey matter, white matter and air/CSF]

Partitioned Image Uniformity [Woods et al JCAT 93]

[Figure: the template (MRI) intensity range partitioned into bins b = 1, 2, 3, 4; within each partition the spread of target intensities is measured]
(a sketch of one common formulation follows after this slide pair)

Functional/Structural Fusion: MRI-SPECT
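A minimal sketch of one common formulation of the Woods-style partitioned intensity uniformity, to be minimised; the bin count and the assumption of positive functional-image intensities are illustrative choices rather than details from the slides.

```python
import numpy as np

def partitioned_intensity_uniformity(mri, pet, nbins=64):
    """Partition the MRI intensities into bins b and sum (n_b / N) * (sigma_b / mu_b),
    the normalised spread of the PET values within each MRI intensity partition."""
    mri, pet = mri.ravel(), pet.ravel()
    edges = np.linspace(mri.min(), mri.max(), nbins + 1)[1:-1]
    bins = np.digitize(mri, edges)
    piu, n = 0.0, mri.size
    for b in np.unique(bins):
        vals = pet[bins == b]
        mu = vals.mean()
        if mu > 0:                                   # assumes positive PET/SPECT values
            piu += (vals.size / n) * (vals.std() / mu)
    return piu
```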


Correlation Ratio [A. Roche et al MICCAI 98]

[Figure: the template intensity range partitioned into bins b = 1, 2, 3, 4; the variance of the target intensities is measured within each partition]
• Need to avoid picking a region with small variance: Correlation Ratio (see the sketch after this slide pair)

MR-CT Registration in the Skull Base

• CT (and MR) image volumes are often targeted with limited axial extent
• Automated segmentation or identification of features is difficult
• Axial resolution is limited
• Need to make the best use of all shared features in the images
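A short sketch of the correlation ratio in the spirit of Roche et al. (1998): the fraction of the target's variance explained by binning on the template intensity; the bin count is an illustrative choice.

```python
import numpy as np

def correlation_ratio(x_img, y_img, nbins=64):
    """eta^2(Y|X) = 1 - sum_b(n_b * var_b) / (N * var(Y)); 1 when Y is a deterministic
    function of X, 0 when there is no functional dependence."""
    x, y = x_img.ravel(), y_img.ravel()
    total_var = y.var()
    if total_var == 0:
        return 0.0
    edges = np.linspace(x.min(), x.max(), nbins + 1)[1:-1]
    bins = np.digitize(x, edges)
    within = sum((bins == b).sum() * y[bins == b].var() for b in np.unique(bins))
    return 1.0 - within / (y.size * total_var)
```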


MR and CT Correlation Coefficient

[Figure: joint histogram h(m, c) of MR intensity m against CT intensity c]

Assumes Linear Relationship between MR and CT intensity.

Improved by using only modified soft tissue or bone intensities from CT. (Van den Elsen, 1994).

Manual Registration Estimate (Using Corresponding Anatomical Landmarks)

Many-to-1 Intensity Mapping?

• Where one image type can detect sub-classes of tissue, while the other sees only one (e.g. T1 weighted MRI vs CT)
– multiple soft tissue intensities in MR, one/few soft tissue intensities in CT
[Figure: joint scatter plot of target intensity f against template intensity w for a many-to-1 mapping]

Changes in 2D Histogram with Alignment

[Figure: joint histograms of MR (target f) against CT (template w) as the alignment changes]


How to Characterize Alignment? -> 'Histogram Sharpness'

[Figure: joint histograms p(w, f) of target intensity f against template intensity w when registered and when mis-registered]

Information and Entropy [Collignon, CVRMED 95]

• If guessing what pair of values (w, f) a pixel will have...
– Registered: a small number of high-probability pairs -> less uncertainty -> overall, the image pair provides less information ('less complex')
– Mis-registered: a larger number of lower-probability pairs -> more uncertainty -> overall, the image pair provides more information ('more complex')

Information and Entropy [Collignon, CVRMED 95] [Viola and Wells: ICCV 95, Collignon et al: IPMI 95]

• Joint entropy, like correlation and the correlation ratio, is influenced by the image structure in the image overlap
– The changing transformation modifies the information provided by the images
• The joint entropy H(W, F) is minimized at registration, where:
H(F) = ∫f −p(f) log p(f)
H(W) = ∫w −p(w) log p(w)
H(F, W) = ∫f ∫w −p(f, w) log p(f, w)
• Instead: form a measure of the relative information in the target image with respect to the template using Mutual Information, the difference between the marginal and joint entropies:
I(F, W) = H(F) + H(W) − H(F, W), to be maximized
(a sketch follows below)
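A minimal discrete sketch of the entropies and mutual information above, estimated from a joint histogram of the overlapping intensities; the function name and bin count are illustrative.

```python
import numpy as np

def mutual_information(f_img, w_img, nbins=64):
    """I(F;W) = H(F) + H(W) - H(F,W) from a joint histogram of overlapping voxels."""
    joint, _, _ = np.histogram2d(f_img.ravel(), w_img.ravel(), bins=nbins)
    p_fw = joint / joint.sum()
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    h_f = entropy(p_fw.sum(axis=1))      # marginal entropy of the target
    h_w = entropy(p_fw.sum(axis=0))      # marginal entropy of the template
    h_fw = entropy(p_fw.ravel())         # joint entropy
    return h_f + h_w - h_fw
```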

MRI-CT for skull base surgery planning


Information Measures and Overlap

• MI behavior varies
• MI behavior depends on the field of view:
– can have a clear maximum away from alignment
[Figure: joint histograms p(w, f) with larger and smaller peaks as the overlap changes; "MI big!" where more of the information in the target is explained by the template]

Normalised Mutual Information [Studholme et al, 1998]

• When mutual information is used to evaluate the alignment of two finite images, the overlap still has a confound:
– both marginal entropies, H(F) and H(W), vary
• Alignment can be driven by choosing an overlap that has large H(F) and H(W)
– e.g. an overlap with equal areas of background and foreground intensities
[Figure: NMI and MI plotted against the transformation]
• Rather than look at the difference between the joint and marginal entropies, use their ratio (like the correlation coefficient):
Y(F, W) = (H(F) + H(W)) / H(F, W), to be maximized
(a sketch follows below)
• But.... does not solve all problems!
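A corresponding sketch of the entropy ratio Y(F, W) above, again estimated from a joint histogram; function name and bin count are illustrative.

```python
import numpy as np

def normalised_mutual_information(f_img, w_img, nbins=64):
    """Y(F,W) = (H(F) + H(W)) / H(F,W), maximised at alignment."""
    joint, _, _ = np.histogram2d(f_img.ravel(), w_img.ravel(), bins=nbins)
    p_fw = joint / joint.sum()
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    h_f = entropy(p_fw.sum(axis=1))
    h_w = entropy(p_fw.sum(axis=0))
    h_fw = entropy(p_fw.ravel())
    return (h_f + h_w) / h_fw
```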


MRI-CT Registration


Image Fusion for Skull Base Surgery Planning (Hawkes et al, 1993)

3D Rigid MR-CT Registration in the Skull Base
• CT: bone
• MR with Gadolinium: tumor
• MRA: blood vessels


Subtraction SPECT Imaging in Epilepsy: inter-ictal, ictal, and change in uptake

Registered MR+PET


Shared Features in MR and PET Images of the Pelvis

• Bone features
– PET: F- uptake (F- tracer added to highlight bone structure)
– MR: marrow appears white
• Soft tissue
– Some boundaries in PET: very low contrast
– Deformed by different bed shapes
• High contrast F.D.G. uptake in regions of interest
• Some soft tissue detail

It's still only overlaps of intensities… the biggest overlaps drive the registration


Summary

• A range of medical image alignment measures have been developed in the last 15 years
• These vary in the assumptions they make about the relationship between intensities in the two images being matched
• Many other criteria not covered!
• Many ways of modifying the criteria:
– evaluation at multiple resolutions/scales
– edge/boundary/geometric feature extraction: modify contrast
– spatial windowing and encoding can localize the criteria
• The best criteria will depend on the type of data you have:
– how different the information provided is, and what contrast is shared
– how much the images overlap

Bibliography I

• [Barnea and Silverman 72], D.I. Barnea, H.F. Silverman, IEEE Transactions on Computers, 21(2), pp 179-186, 1972.
• [Pratt 1974], Pratt, IEEE Trans. Aerospace and Electronic Systems, 10(3), pp 353-358, 1974.
• [Woods et al JCAT 93], R.P. Woods, J.C. Mazziotta, S.R. Cherry, "MRI-PET Registration with Automated Algorithm", J. Computer Assisted Tomography, 17(4), pp 536-546, 1993.
• [Roche et al, 98], A. Roche, G. Malandain, X. Pennec, N. Ayache, "The correlation ratio as a new similarity measure for multimodal image registration", Proc. Medical Image Computing and Computer Assisted Intervention (MICCAI), 1998, pp 1115-1124, Springer LNCS.
• [Hill VBC94], D.L.G. Hill, C. Studholme, D.J. Hawkes, "Voxel similarity measures for automated image registration", Proc. Visualization in Biomedical Computing, ed. R.A. Robb, SPIE Press, 1994, Bellingham.
• [Collignon, CVRMED 95], A. Collignon, D. Vandermeulen, P. Suetens, G. Marchal, "3D multimodality medical image registration using feature space clustering", Proc. Computer Vision, Virtual Reality and Robotics in Medicine, 1995, pp 195-204, Springer LNCS.
• [Collignon, IPMI95], A. Collignon, F. Maes, D. Delaere, D. Vandermeulen, P. Suetens, G. Marchal, "Automated multi-modality image registration based on information theory", Proc. Information Processing in Medical Imaging, 1995, pp 263-274, Kluwer, Dordrecht.


Bibliography II

• [Viola, ICCV 1995], P. Viola, W. Wells, "Alignment by Maximization of mutual information", Proc. International Conference on Computer Vision, 1995, pp 16-23. Ed. Grimson, Schafer, Blake, Sugihara.
• [Studholme et al 1998], C. Studholme, D.L.G. Hill, D.J. Hawkes, "A Normalised Entropy Measure of 3D Medical Image Alignment", Proceedings of SPIE Medical Imaging 1998, San Diego, SPIE Press, pp 132-143.
• [Studholme et al, 1999], C. Studholme, D.L.G. Hill, D.J. Hawkes, "An Overlap Invariant Entropy Measure of 3D Medical Image Alignment", Pattern Recognition, 32(1), Jan 1999, pp 71-86.
• [Maes et al, TMI97], F. Maes, A. Collignon, D. Vandermeulen, G. Marchal, P. Suetens, "Multimodality image registration by maximization of mutual information", IEEE Transactions on Medical Imaging, 16, pp 187-198, Apr 1997.

Review Articles:
• [Pluim, TMI03], J. Pluim, J.B.A. Maintz, M. Viergever, "Mutual Information Based Registration of Medical Images: A Survey", IEEE Transactions on Medical Imaging, 22(8), pp 986-1004, 2003.
• [Hill, PMB01], D.L.G. Hill, P.G. Batchelor, M. Holden, D.J. Hawkes, "Medical image registration", Physics in Medicine and Biology, 46(3), pp 1-45, 2001.
• [Maintz, MIA98], J.B.A. Maintz, M.A. Viergever, "A survey of medical image registration", Medical Image Analysis, 2(1), pp 1-36, 1998.
• [Elsen, EMB93], P.A. van den Elsen, E.J.D. Pol, M.A. Viergever, "Medical Image Matching: A Review with Classification", IEEE Engineering in Medicine and Biology Magazine, 12, pp 26-39, Mar 1993.
