<<

Image Enhancement in Digital X-Ray Angiography

Erik Meijering

Colophon

This book was typeset by the author using LaTeX2ε. The main body of the text was set in a 10-point Computer Modern Roman font. All graphics and images were included in Encapsulated PostScript format (™ Adobe Systems Incorporated). The final PostScript output was converted to Portable Document Format (PDF) and transferred to film for printing.

Cover design by the author using CorelDRAW (™ Corel Corporation) version 8. The background is a colored fragment of a slice taken from a clinical 3DRA dataset. The graphics on the front cover symbolically represent the contents of the different chapters: the vector field (in red) refers to the problem of patient motion registration and correction in DSA, addressed in Chapters 2, 3, and 4; the circles and arrows (in green) represent the tasks of visualization and subsequent quantification of blood vessels and their anomalies in 3DRA images, which constitute the subject of Chapter 5; finally, the plot (in blue) portrays the realization of the sinc-like kernel implicitly used in cubic spline interpolation, which according to the results of the comparative evaluation study described in Chapter 6 is the method of choice for geometrical transformation of medical image data.

Copyright © 2000 by Erik Meijering. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the author.

ISBN 90-393-2500-6

Printed by Ponsen & Looijen, Wageningen.

Image Enhancement in Digital X-Ray Angiography

Beeldverbetering in Digitale Röntgenangiografie (with a summary in Dutch)

Dissertation

to obtain the degree of doctor at Utrecht University, on the authority of the Rector Magnificus, Prof. Dr. H. O. Voorma, pursuant to the decision of the Doctorate Board, to be defended in public on Wednesday 4 October 2000 at 10:30 in the morning

by

Hendrik Willem Meijering

electrical engineer, born on 31 July 1971 in Heemskerk.

Promotor: Prof. Dr. Ir. M. A. Viergever, University Medical Center Utrecht

Co-promotor: Dr. W. J. Niessen, University Medical Center Utrecht

The research described in this thesis was carried out at the Image Sciences Institute, University Medical Center Utrecht (Utrecht, the Netherlands), under the auspices of ImagO, the Utrecht graduate school for Biomedical Image Sciences. The project was financially supported by the Netherlands Ministry of Economic Affairs, within the framework of the Innovation Oriented Research Programme (IOP Beeldverwerking, project number IBV96004).

Financial support for publication of this thesis was kindly provided by Vital Images Inc. (USA), Philips Medical Systems Nederland B.V., SGI Nederland, and Schering Nederland B.V. Financial support by the Netherlands Foundation and the Röntgen Stichting Utrecht is also gratefully acknowledged. Additional financial support was provided by the Image Sciences Institute and Utrecht University.

Preface

This thesis describes the findings of the research I carried out as part of my Ph.D. study at Utrecht University, de facto the University Medical Center Utrecht. The framework within which this research was supported has certainly left its mark on large parts of this thesis, where the focus is on technological solutions to practical problems, and efficient implementation and thorough evaluation of both newly developed and existing techniques. The first research efforts were directed towards the development of techniques for automatic reduction of patient motion artifacts in DSA images. The findings of this work are described in Chapters 2–4. It was while working on efficient image warping techniques that I became interested in the problem of image interpolation. Early results in this area were not included in this thesis because they were considered too far off from the main theme. The results of a subsequently performed evaluation of interpolation techniques for the task of medical image transformation are presented in Chapter 6. Research in the final stages of the project, reported in Chapter 5, was focussed on image enhancement techniques for improved visualization and quantification of vascular anomalies in 3DRA images.

It requires no explanation that this work could not have been completed without the help of others. First of all I'm indebted to my promotor, Prof. Max Viergever, for offering me a Ph.D. position, for the (scientific) freedom, for his constructive criticism in writing papers, and for allowing me to stay four more months to finish unfinished business. I thank my former supervisor, Karel Zuiderveld, for introducing me to the problem of image registration in DSA and for his invaluable help in developing the algorithm described in Chapter 3. I thank Wiro Niessen for his supervision in the latter stages of the project. Without his help and enthusiasm, Chapter 4 would not have been a part of this thesis. I also thank Richard Kemkers and Franklin Schuling (Philips Medical Systems, Best) for their input during discussions. Many thanks go to Jeannette Bakker, Gerard de Kort, Rob Lo, and Aart van der Molen, who sacrificed some of their free time in order to participate in the evaluation study presented in Chapter 4. Also thanks to Prof. Mali for his help in designing the study, to Tineke Kievit for providing me with the necessary patient data, and to Gerard van Hoorn, Koen Vincken, Theo van Walsum, Remko van der Weide, and Onno Wink for their technical support in the startup phase. Remko is furthermore acknowledged for his comments on the visualization routines which I wrote for the experiments described in Chapter 5, and together with Evert-Jan Vonken and Clemens Bos for his help during early attempts to fill the vascular phantoms used in those experiments. The edge-enhancing diffusion scheme evaluated in Chapter 5 was obtained by modifying a nonlinear diffusion algorithm which was kindly provided by Joachim Weickert (University of Mannheim, Germany).

In view of the medical datasets used in this thesis, several acknowledgments are in order. I thank Wilma Pauw for her contributions in obtaining the XRA runs and 3DRA datasets used in Chapters 1 and 3–6. The input of Rolf Suurmond and John op de Beek (Philips Medical Systems, Best) in acquiring the phantom 3DRA images used in Chapter 5 is also gratefully acknowledged. Rik Stokking and Prof. Buitelaar provided me with the SPECT datasets used in Chapter 6. Finally, the CT, MR, and PET datasets used in that same chapter were made available by Vanderbilt University (Nashville, TN, USA; see Page 127 for further acknowledgments).

Considering the amount of time we've managed to spend in a heavily undersized office, Josien Pluim and Bram van Ginneken should also not remain unnoticed. Josien is furthermore acknowledged for sharing her knowledge on medical image registration, and Bram for being the initiator of the programming environment which formed the basis for the software which I wrote for the experiments in Chapter 5. I thank Sandra Boeijink for having made my life easier in financial matters, and Margo Agterberg for the social updates and for helping me out in arranging my last-minute trip to Japan. Special thanks to my colleagues and friends Alejandro Frangi and Bert Haverkamp for agreeing to serve as "paranimf" during my defense.

Although in the preceding I have restricted myself to mentioning those who have actually contributed to my research and have helped me in related matters, the contributions of all other colleagues to the pleasant working environment over the past four years are gratefully acknowledged. I also express here my gratitude to the review committee: Prof. Eikelboom, Prof. Hillen, Prof. Mali (University Medical Center Utrecht), Prof. Unser (Swiss Federal Institute of Technology, Lausanne), and Prof. van Vliet (Delft University of Technology), for their positive judgment. I thank Michael Unser furthermore for his interest in my work and for supporting my visit(s) to Lausanne. I'm looking forward to a fruitful collaboration.

Very special thanks go to my wife, Greetje: thank you for your unconditional love and unfailing patience and support, especially in the final stages of my project, when I must have been a lousy husband. I also thank our children, Marjolein and Daniël, for their love and for forcing me to let go of my work from time to time. I am much obliged to all of my family and friends for their support through the years. In particular my parents, Tiem and Jannie Meijering, who always encouraged me to exploit whatever talents I had and who paid for the preparatory education necessary to write this thesis. I conclude by saying that I feel quite fortunate to have entered this world in an environment in which I could grow up prosperously and write this thesis and, eventually, this preface. Therefore, above all, I thank God!

Erik Meijering
Utrecht, July 2000

Contents

Colophon

Preface

Abbreviations

1 Introduction and Summary

2 Retrospective Motion Correction in Digital Subtraction Angiography — A Review
   2.1 Introduction
   2.2 Motion Artifacts and Possible Solutions
      2.2.1 Examples of Motion Artifacts
      2.2.2 Patient Related Solutions
      2.2.3 Acquisition Related Solutions
      2.2.4 Retrospective Image Processing Solutions
   2.3 Retrospective Motion Correction — Preliminaries
      2.3.1 The Existence of a 2D Geometrical Transformation
      2.3.2 Limitations in Transformation Recovery
   2.4 Retrospective Motion Correction — Techniques
      2.4.1 Complexity of the Transformation
      2.4.2 Similarity Measures
      2.4.3 Subpixel Precision
      2.4.4 Optimization and Acceleration
      2.4.5 Grey-Level Distortion Correction
   2.5 Discussion
      2.5.1 Control Point Selection and Displacement Estimation
      2.5.2 Comparisons of Similarity Measures
      2.5.3 Interpolation Techniques for Subpixel Precision
      2.5.4 Optimization Strategies and Related Issues
      2.5.5 Multiplicative versus Additive Grey-Level Distortions
      2.5.6 Suggestions for Future Research
      2.5.7 Final Remarks
   2.6 Conclusions
   2.A Appendix: Extrema of Histogram-Based Similarity Measures

3 Image Registration for Motion Artifact Reduction in Digital Subtraction Angiography
   3.1 Introduction
   3.2 Registration Approach
      3.2.1 Control Points Selection
      3.2.2 Displacement Computation
      3.2.3 Displacement Interpolation
      3.2.4 Inconsistency Detection and Correction
      3.2.5 Inter-Image Displacement Prediction
   3.3 Implementation Aspects
      3.3.1 Control Points Selection
      3.3.2 Control Points Triangulation
      3.3.3 Similarity Measure Computation
      3.3.4 Inter-Image Displacement Prediction
      3.3.5 Mask Image Warping
   3.4 Algorithm Overview
   3.5 Preliminary Results
   3.6 Discussion
   3.7 Conclusions
   3.A Appendix: Computation of the Jacobian Factor

4 Evaluation of a Fast and Fully Automatic Technique for Motion Artifact Reduction in Digital Subtraction Angiography
   4.1 Introduction
   4.2 Materials and Methods
      4.2.1 Images and Equipment
      4.2.2 Manual and Automatic Registration
      4.2.3 Method of Evaluation
      4.2.4 Statistical Analyses
   4.3 Results
   4.4 Discussion
   4.5 Conclusions

5 Nonlinear Diffusion Filtering for Improved Vessel Visualization and Quantification in Three-Dimensional Rotational Angiography
   5.1 Introduction
   5.2 Noise Reduction Techniques
      5.2.1 Uniform Filtering
      5.2.2 Gaussian Filtering
      5.2.3 Regularized Isotropic Nonlinear Diffusion
      5.2.4 Edge-Enhancing Anisotropic Diffusion
   5.3 Quantification of Vascular Anomalies
      5.3.1 Quantification of Carotid Stenosis
      5.3.2 Quantification of Intracranial Aneurysms
   5.4 In Vitro Experiments
      5.4.1 Phantoms and Image Acquisition

      5.4.2 Method of Evaluation
   5.5 Results
   5.6 Discussion
   5.7 Conclusions

6 Quantitative Evaluation of Convolution-Based Methods for Medical Image Interpolation
   6.1 Introduction
   6.2 Convolution-Based Interpolation
   6.3 Sinc-Approximating Kernels
      6.3.1 Nearest-Neighbor and Linear Interpolation Kernel
      6.3.2 Lagrange Interpolation Kernels
      6.3.3 Generalized Convolution Kernels
      6.3.4 Cardinal Spline Kernels
      6.3.5 Windowed Sinc Kernels
   6.4 Quantitative Evaluation
      6.4.1 Evaluation Strategy
      6.4.2 Results
   6.5 Discussion
      6.5.1 Discussion of Evaluation Strategies
      6.5.2 Discussion of the Results
   6.6 Conclusions
   6.A Appendix: Piecewise Polynomial Interpolators and B-Splines
   6.B Appendix: Implementation of Direct B-Spline Filters

Bibliography

Samenvatting

Publications

Curriculum Vitae

Abbreviations

1D one-dimensional
2D two-dimensional
3D three-dimensional
3DRA 3D rotational angiography
BA basilar artery
CAVP carotid anthropomorphic vascular phantom
CBC coincident bit counting
CC correlation coefficient
CCA common carotid artery
CNR contrast-to-noise ratio
CT computed tomography
CTA computed tomography angiography
DSA digital subtraction angiography
DSR digital subtraction radiography
DSC deterministic sign change
ECA external carotid artery
ECG electrocardiogram
ECST European Carotid Surgery Trial
EED edge-enhancing anisotropic diffusion
ENT entropy of the histogram of differences
EHD energy of the histogram of differences
fMRI functional magnetic resonance imaging
FOM figure of merit
FOV field of view
GF Gaussian filtering
IAVP intracranial anthropomorphic vascular phantom
ICA internal carotid artery
LAE largest absolute error
MAC minimal artifacts criterion
MIP maximum intensity projection
MRA magnetic resonance angiography
MRI magnetic resonance imaging
NASCET North American Symptomatic Carotid Endarterectomy Trial
NCC normalized cross-correlation
PC phase correlation
PCA posterior cerebral artery

PD proton density
PD-w PD-weighted
PET positron emission tomography
pixel picture element
RMSE root-mean-square error
RPM regularized isotropic nonlinear (Perona-Malik) diffusion
SAVD sum of the absolute values of differences
SDT sum of the absolute values of differences above a threshold
SPD sum of positive differences
SPECT single photon emission computed tomography
SSC stochastic sign change
SSD sum of squared differences
SNR signal-to-noise ratio
T1 longitudinal relaxation time
T1-w T1-weighted
T2 transversal relaxation time
T2-w T2-weighted
UF uniform filtering
VOD variance of differences
voxel volume element
XRA X-ray angiography

Anyone who does not look back to the beginning throughout a course of action, does not look forward to the end. Hence it necessarily follows that an intention which looks ahead, depends on a recollection which looks back.

— Aurelius Augustinus, De civitate Dei, VII.7 ( A.D.)

Chapter 1

Introduction and Summary

Despite the development of imaging techniques based on alternative physical phenomena, such as nuclear magnetic resonance, emission of single photons (γ-radiation) by radio-pharmaceuticals and photon pairs by electron-positron annihilations, reflection of ultrasonic waves, and the Doppler effect, X-ray based image acquisition is still daily practice in medicine. Perhaps this can be attributed to the fact that, contrary to many other phenomena, X-rays lend themselves naturally for registration by means of materials and methods widely available at the time of their discovery — a fact that gave X-ray based imaging an at least 50-year head start over possible alternatives. Immediately after the preliminary communication on the discovery of the "new light" by Röntgen [317], late December 1895, the possible applications of X-rays were investigated intensively. In 1896 alone, almost 1,000 articles about the new phenomenon appeared in print (Glasser [119] lists all of them). Although most of the basics of the diagnostic as well as the therapeutic uses of X-rays had been worked out by the end of that year [289], research on improved acquisition and reduction of potential risks for humans continued steadily in the century to follow. The development of improved X-ray tubes, rapid film changers, image intensifiers, the introduction of television cameras into fluoroscopy, and computers in digital radiography and computerized tomography, formed a succession of achievements which increased the diagnostic potential of X-ray based imaging.

One of the areas in medical imaging where X-rays have always played an important role is angiography,† which concerns the visualization of blood vessels in the human body. As already suggested, research on the possibility of visualization of the human vasculature was initiated shortly after the discovery of X-rays. A photograph of a first "angiogram" — obtained by injection of a mixture of chalk, red mercury, and petroleum into an amputated hand, followed by almost an hour of exposure to X-rays — was published as early as January 1896, by Hascheck & Lindenthal [139]. Although studies on cadavers led to greatly improved knowledge of the anatomy of the human vascular system, angiography in living man for the purpose of diagnosis and intervention became feasible only after substantial progress in the development

†A term originating from the Greek words αγγειον (aggeion), meaning "vessel" or "bucket", and γραφειν (graphein), meaning "to write" or "to record".

of relatively safe contrast media and methods of administration, as well as advancements in radiological equipment. Of special interest in the context of this thesis is the improvement brought by photographic subtraction, a technique known since the early 1900s and since then used successfully in, e.g., astronomy, but first introduced in X-ray angiography in 1934, by Ziedses des Plantes [425, 426]. This technique allowed for a considerable enhancement of vessel visibility by cancellation of unwanted background structures. In the 1960s, the time-consuming film subtraction process was replaced by analog video subtraction techniques [156, 275] which, with the introduction of digital computers, gave rise to the development of digital subtraction angiography [194] — a technique still considered by many the "gold standard" for detection and quantification of vascular anomalies. Today, research on improved X-ray based imaging techniques for angiography continues, witness the recent developments in three-dimensional rotational angiography [88,185,186,341,373].

The subject of this thesis is enhancement of digital X-ray angiography images. In contrast with the previously mentioned developments, the emphasis is not on the further improvement of image acquisition techniques, but rather on the development and evaluation of digital image processing techniques for retrospective enhancement of images acquired with existing techniques. In the context of this thesis, the term "enhancement" must be regarded in a rather broad sense. It does not only refer to improvement of image quality by reduction of disturbing artifacts and noise, but also to minimization of possible image quality degradation and loss of quantitative information, inevitably introduced by required image processing operations. These two aspects of image enhancement will be clarified further in a brief summary of each of the chapters of this thesis.

The first three chapters deal with the problem of patient motion artifacts in digital subtraction angiography (DSA). In DSA imaging, a sequence of 2D digital X-ray projection images is acquired, at a rate of, e.g., two per second, following the injection of contrast material into one of the arteries or veins feeding the part of the vasculature to be diagnosed. Acquisition usually starts about one or two seconds prior to arrival of the contrast bolus in the vessels of interest, so that the first few images included in the sequence do not show opacified vessels. In a subsequent post-processing step, one of these "pre-bolus" images is then subtracted automatically from each of the contrast images so as to mask out background structures such as bone and soft-tissue shadows. However, it is clear that in the resulting digital subtraction images, the unwanted background structures will have been removed completely only when the patient lay perfectly still during acquisition of the original images. Since most patients show at least some physical reaction to the passage of a contrast medium, this proviso is generally not met. As a result, DSA images frequently show patient-motion induced artifacts (see e.g. the bottom-left image in Fig. 1.1), which may influence the subsequent analysis and diagnosis carried out by radiologists.

Since the introduction of DSA, in the early 1980s, many solutions to the problem of patient motion artifacts have been put forward. Chapter 2 presents an overview of the possible types of motion artifacts reported in the literature and the techniques that have been proposed to avoid them. The main purpose of that chapter is to review and discuss the techniques proposed over the past two decades to correct for

Figure 1.1. Example of creation and reduction of patient motion artifacts in cerebral DSA imaging. Top left: a "pre-bolus" or mask image acquired just prior to the arrival of the contrast medium. Top right: one of the contrast or live images showing opacified vessels. Bottom left: DSA image obtained after subtraction of the mask from the contrast image, followed by contrast enhancement. Due to patient motion, the background structures in the mask and contrast image were not perfectly aligned, as a result of which the DSA image does not only show blood vessels, but also additional undesired structures (in this example primarily in the bottom-left part of the image). Bottom right: DSA image resulting from subtraction of the mask and contrast image after application of the automatic registration algorithm described in Chapter 3.

patient motion artifacts retrospectively, by means of digital image processing. The chapter addresses fundamental problems, such as whether it is possible to construct a 2D geometrical transformation that exactly describes the projective effects of an originally 3D transformation, as well as practical problems, such as how to retrieve the correspondence between mask and contrast images by using only the grey-level information contained in the images, and how to align the images according to that correspondence in a computationally efficient manner.

The review in Chapter 2 reveals that there exists quite some literature on the topic of (semi-)automatic image alignment, or image registration, for the purpose of motion artifact reduction in DSA images. However, to the best of our knowledge, research in this area has never led to algorithms which are sufficiently fast and robust to be acceptable for routine use in clinical practice. By drawing upon the suggestions put forward in Chapter 2, a new approach to automatic registration of digital X-ray angiography images is presented in Chapter 3. Apart from describing the functionality of the components of the algorithm, special attention is paid to their computationally optimal implementation. The results of preliminary experiments described in that chapter indicate that the algorithm is effective, very fast, and outperforms alternative approaches, in terms of both image quality and required computation time. It is concluded that the algorithm is most effective in cerebral and peripheral DSA imaging. An example of the image quality enhancement obtained after application of the algorithm in the case of a cerebral DSA image is provided in Fig. 1.1.

Chapter 4 reports on a clinical evaluation of the automatic registration technique. The evaluation involved 104 cerebral DSA images, which were corrected for patient motion artifacts by the automatic technique, as well as by pixel shifting — a manual correction technique currently used in clinical practice. The quality of the DSA images resulting from the two techniques was assessed by four observers, who compared the images both mutually and to the corresponding original images. The results of the evaluation presented in Chapter 4 indicate that the difference in performance between the two correction techniques is statistically significant. From the results of the mutual comparisons it is concluded that, on average, the automatic registration technique performs either comparably, better than, or even much better than manual pixel shifting in 95% of all cases.
In the other 5% of the cases, the remaining artifacts are located near the borders of the image, which are generally diagnostically non-relevant. In addition, the results show that the automatic technique implies a considerable reduction of post-processing time compared to manual pixel shifting (on average, one second versus 12 seconds per DSA image).

The last two chapters deal with somewhat different topics. Chapter 5 is concerned with visualization and quantification of vascular anomalies in three-dimensional rotational angiography (3DRA). Similar to DSA imaging, 3DRA involves the acquisition of a sequence of 2D digital X-ray projection images, following a single injection of contrast material. Contrary to DSA, however, this sequence is acquired during a 180° rotation of the C-arch on which the X-ray source and detector are mounted antipodally, with the object of interest positioned in its iso-center. The rotation is completed in about eight seconds and the resulting image sequence typically contains 100 images, which form the input to a filtered back-projection algorithm for 3D reconstruction. In contrast with most other 3D medical imaging techniques, 3DRA is capable of

Figure 1.2. Visualizations of a clinical 3DRA dataset, illustrating the qualitative improvement obtained after noise reduction filtering. Left: volume rendering of the original, raw image. Right: volume rendering of the image after application of edge-enhancing anisotropic diffusion filtering (see Chapter 5 for a description of this technique). The visualizations were obtained by using the exact same settings for the parameters of the volume rendering algorithm.

providing high-resolution isotropic datasets. However, due to the relatively high noise level and the presence of other unwanted background variations caused by surrounding tissue, the use of noise reduction techniques is inevitable in order to obtain smooth visualizations of these datasets (see Fig. 1.2). Chapter 5 presents an inquiry into the effects of several linear and nonlinear noise reduction techniques on the visualization and subsequent quantification of vascular anomalies in 3DRA images. The evaluation is focussed on frequently occurring anomalies such as a narrowing (or stenosis) of the internal carotid artery or a circumscribed dilation (or aneurysm) of intracranial arteries. Experiments on anthropomorphic vascular phantoms indicate that, of the techniques considered, edge-enhancing anisotropic diffusion filtering is most suitable, although the practical use of this technique may currently be limited due to its memory and computation-time requirements.

Finally, Chapter 6 addresses the problem of interpolation of sampled data, which occurs, e.g., when applying geometrical transformations to digital medical images for the purpose of registration or visualization. In most practical situations, interpolation of a sampled image followed by resampling of the resulting continuous image on a geometrically transformed grid, inevitably implies loss of grey-level information, and hence image degradation, the amount of which is dependent on image content, but also on the employed interpolation scheme (see Fig. 1.3). It follows that the choice of a particular interpolation scheme is important, since it influences the results of registrations and visualizations, and the outcome of subsequent quantitative analyses which rely on grey-level information contained in transformed images. Although many interpolation techniques have been developed over the past decades,

Figure 1.3. Illustration of the fact that the loss of information due to interpolation and resampling operations is dependent on the employed interpolation scheme. Left: slice of a 3DRA image after rotation over 5.0°, by using linear interpolation. Middle: the same slice, after rotation by using cubic spline interpolation. Right: the difference between the two rotated images. Although it is not possible with such a comparison to come to conclusions as to which of the two methods yields the smallest loss of grey-level information, this example clearly illustrates the point that different interpolation methods usually yield different results.

thorough quantitative evaluations and comparisons of these techniques for medical image transformation problems are still lacking. Chapter 6 presents such a comparative evaluation. The study is limited to convolution-based interpolation techniques, as these are most frequently used for registration and visualization of medical image data. Because of the ubiquitousness of interpolation in medical image processing and analysis, the study is not restricted to XRA and 3DRA images, but also includes datasets from many other modalities. It is concluded that for all modalities, spline interpolation constitutes the best trade-off between accuracy and computational cost, and therefore is to be preferred over all other methods.
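By way of illustration of the kind of comparison shown in Fig. 1.3, the short Python fragment below rotates a 2D slice once with linear and once with cubic spline interpolation and inspects the difference image. The array slice_2d, the use of SciPy, and the chosen angle are assumptions made for this sketch only; Chapter 6 describes the actual evaluation methodology.

    import numpy as np
    from scipy import ndimage

    # Hypothetical 2D slice standing in for a slice of a 3DRA dataset.
    slice_2d = np.random.rand(256, 256)

    # Rotate over 5.0 degrees with linear (order=1) and cubic spline (order=3)
    # interpolation; reshape=False keeps the original sampling grid.
    rot_linear = ndimage.rotate(slice_2d, angle=5.0, reshape=False, order=1)
    rot_cubic = ndimage.rotate(slice_2d, angle=5.0, reshape=False, order=3)

    # Different interpolation schemes generally yield different resampled images.
    difference = rot_linear - rot_cubic
    print("maximum absolute difference:", np.abs(difference).max())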

In summary, this thesis is concerned with the improvement of image quality and the reduction of image quality degradation and loss of quantitative information. The subsequent chapters describe techniques for reduction of patient motion artifacts in DSA images, noise reduction techniques for improved visualization and quantification of vascular anomalies in 3DRA images, and interpolation techniques for the purpose of accurate geometrical transformation of medical image data. The results and conclusions of the evaluations described in this thesis provide general guidelines for the applicability and practical use of these techniques.

Let no one say that I have said nothing new. The arrangement of the subject is new.

— Blaise Pascal, Pensées ()

Chapter 2

Retrospective Motion Correction in Digital Subtraction Angiography — A Review

Abstract — Digital subtraction angiography (DSA) is a well-established modality for the visualization of blood vessels in the human body. A serious disadvantage of this technique, inherent to the subtraction operation, is its sensitivity to patient motion. The resulting artifacts frequently reduce the diagnostic value of the images. Over the past two decades, many solutions to this problem have been put forward. In this chapter, we give an overview of the possible types of motion artifacts and the techniques that have been proposed to avoid them. The main purpose of this chapter is to provide a detailed review and discussion of retrospective motion correction techniques that have been described in the literature, to summarize the conclusions that can be drawn from these studies, and to provide suggestions for future research.

2.1 Introduction

Over the past two decades, digital subtraction angiography (DSA) has become a well-established modality for the visualization of blood vessels in the human body [29,67,137,169,177,197,214,253,281,291,314,395]. With this technique, a sequence of 2D digital X-ray projection images is acquired to show the passage of a bolus of injected contrast material through the vessels of interest. In the images that show opacified vessels (often referred to as contrast images or live images), background structures are largely removed by subtracting an image acquired prior to injection (usually called the mask image).1

It is obvious that in the resulting subtraction images, background structures will have been completely eliminated only in those situations where these structures are exactly aligned and have equal grey-level distributions. Clinical evaluations of DSA, following its introduction in the early 1980s, revealed that this is not the case for

1 The idea of subtraction of angiographic images for the enhanced visualization of vascular structures was first described in the 1930s, by Ziedses des Plantes [425,426]. For a brief historical review on the development of subtraction techniques in angiography, the reader is referred to Verhoeven [395], or Jeans [169].

a substantial number of examinations. Images taken at different time instances will always differ in some respect, due to fluctuations in the power of the X-ray source, or noise in the image intensifier and the subsequent imaging chain. However, the main cause of differences is patient motion. In the literature on DSA imaging one can find many examples of cases in which the artifacts caused by patient motion reduced the quality of the images to the extent that they became diagnostically useless. In order to cope with this problem one may endeavor to prevent patient motion, by taking special precautions regarding either the patient or the acquisition system, or both. However, in many cases artifacts cannot be entirely avoided and one is forced to resort to retrospective motion correction techniques, which constitute the main subject of this chapter.

Although there exists quite some literature on the subject, it appeared to us that frequently "new" ideas are published, without reference to similar work previously done by other researchers. In view of future research, it is very useful and advantageous to have an overview of the techniques and evaluations published so far, and of the conclusions that can be drawn from them. It is the purpose of this chapter to provide such an overview.

This chapter is organized as follows. In Section 2.2, we summarize the types of motion artifacts most frequently reported in the literature, as well as the solutions that have been proposed to prevent them. In the subsequent sections we focus on retrospective motion correction. In Section 2.3, full account is given of the validity of such an approach in the particular case of angiographic X-ray projection imaging, as used in DSA. In Section 2.4 we elaborate on the various aspects of retrospective motion correction by image registration and grey-level distortion correction. The advantages and disadvantages of the various techniques are discussed in Section 2.5. Concluding remarks are made in Section 2.6.
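For concreteness, the following minimal Python sketch carries out the subtraction operation described above on a logarithmically processed mask and live frame; the array names, sizes, and the simple display windowing are assumptions made for illustration only.

    import numpy as np

    # Hypothetical, already logarithmically processed mask and live frames.
    mask = np.random.rand(512, 512)
    live = np.random.rand(512, 512)

    # DSA: subtract the mask from the contrast (live) image, so that structures
    # present in both frames cancel and only the opacified vessels remain.
    dsa = live - mask

    # Simple display windowing (contrast enhancement), for visualization only.
    lo, hi = np.percentile(dsa, [1, 99])
    dsa_display = np.clip((dsa - lo) / (hi - lo), 0.0, 1.0)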

2.2 Motion Artifacts and Possible Solutions

Before going into details about retrospective motion correction, we first give an impression of the types of motion artifacts that may be encountered. We also summarize the techniques that have been proposed to avoid motion artifacts.2 A brief introduction into retrospective motion correction concludes this section.

2.2.1 Examples of Motion Artifacts

Although gross movement during the acquisition of X-ray image sequences can usually be avoided with a cooperative patient, involuntary local motion of particular organs is practically inevitable. For example, most patients cannot resist an urge to swallow or cough, the resulting artifacts of which may cause difficulties in the interpretation of DSA images of the carotid arteries [31,52,155,239,281,343,375]. The pulsatile motion of arteries in combination with the presence of calcifications may cause problems in studies of the carotid bifurcation [31,64,81,375].

2 We note that the literature overview presented in this section is by no means complete. In nearly every paper on examinations involving DSA one can find some discussion on problems concerning motion artifacts.

Artifacts caused by bowel gas and the peristaltic motion of intestines may cause difficulties in studies of the splenic and portal veins [112], or of renal vascular abnormalities [152, 153]. Respiratory and cardiac motion may cause misregistration artifacts in images of the thoracic and abdominal regions [26], in particular the pulmonary [225,309] and cardiovascular systems [239,413]. Sudden motion of arms and legs degrades the visualization of peripheral arteries [132,361]. When using automated stepping, the time span between the acquisitions of mask and contrast images is relatively large, and therefore patient motion frequently occurs [94]. Especially in examinations of the lower peripheral vasculature, artifacts can be very misleading, since the lateral displacement of a leg may produce artifacts along bone-tissue transitions which very much resemble vessels [395].

2.2.2 Patient Related Solutions

Early attempts to reduce motion artifacts in DSA images focussed on techniques to avoid patient motion during exposure. In many cases, patient motion is initiated by the sudden sensation of heat caused by the contrast material [347, 395]. To reduce these reactional reflexes, it has been suggested to use non-ionic instead of ionic contrast media [64,184], although some other studies showed no improvement [347,375]. Immobilization of the head prevents motion artifacts in DSA images of the neck and head [82,344]. This technique can also be applied to peripheral DSA [94]. To some extent, artifacts caused by respiratory motion can be avoided by applying generous amounts of oxygen before injection, thereby allowing patients to hold their breath for a longer period of time [184]. Deep inspiration pulls the diaphragm down and eliminates the inhomogeneity of the dense abdomen, e.g. when imaging the thoracic aorta [26]. In other cases, e.g. hepatic or carotid DSA, it has been shown that motion artifacts are better reduced by an expiration holding method [176,419].

Several methods have been proposed to avoid or reduce artifacts caused by peristalsis. For example, administration of glucagon prior to contrast material injection temporarily diminishes peristaltic activity [33, 136, 153, 252, 306]. Alternative solutions are the use of compression to displace overlying stomach and bowel [26,152,153], or to turn patients to a prone position to displace bowel gas [136,346].

2.2.3 Acquisition Related Solutions

Another line of research has focussed on modifications of the acquisition system, by exploiting a priori knowledge about the nature of patient motion or the properties of the contrast material (iodinated solutions) and the tissues to be imaged. An example of this is the use of sophisticated filtering techniques. Temporal band-pass or band-reject filters may offer a higher degree of immunity to some types of patient motion than mask-mode subtraction [190]. Especially with rapid periodic motion, such as that caused by, e.g., cardiac pulsation, grey-level variations often contribute to specific parts of the temporal frequency spectrum, which can subsequently be filtered out by using a band-pass filter.3 This has been shown to be successful in a number of cases [192,263].

3 Notice that low-pass filters are not adequate for this purpose, since they do not remove stationary background anatomy [192].

Given the contrast dilution curve, even better results may be obtained by using a matched filter, which maximizes the signal relative to the background noise, and therefore dose efficiency [191,221,311–313].

Another means of exploiting a priori knowledge is subtraction of images acquired at different energy levels. The energy dependent linear X-ray attenuation coefficient of iodine shows a discontinuity at 33 keV, whereas the attenuation coefficients of bone and soft tissue vary only gradually as a function of energy. This implies that the iodine contrast in images obtained by using X-rays above this edge is larger than in images obtained with X-rays below this edge. Subtraction will result in a reduction of background structures, while the iodine contrast is greatly enhanced [160,193].4 Since the time span between the acquisitions can be made very small (a few milliseconds), patient motion will be limited. However, this technique is only successful if the X-rays are nearly monoenergetic, which puts high demands on the X-ray generator.

Alternative energy subtraction techniques make use of X-ray spectra with average energies above the absorption edge of iodine. Since the contributions to X-ray attenuation of photoelectric absorption and Compton scatter are different for bone and soft tissue, a linear combination of images obtained at different average energy levels allows for selective cancellation of either bone or soft tissue [30, 210], an approach often referred to as dual energy subtraction. Such techniques may also be incorporated into a so-called hybrid subtraction approach [28], which comprises both energy and temporal subtraction. With cooperative patients, artifacts are mainly due to involuntary motion of soft tissue, e.g. in examinations of the carotid arteries or the abdominal areas. Dual energy subtraction can be used to remove soft tissue structures from both mask and contrast images, while temporal subtraction eliminates residual bone structures. Although hybrid subtraction may be successful in patient motion reduction, the improvements are obtained at the cost of increased patient exposure and a decrease in signal-to-noise ratio (SNR) [28,34,111,131,226,362].5

Depth information may be useful in regions which show independently moving superimposed structures, such as in abdominal images, or in regions with superimposed iodinated structures, such as the and the chambers of the heart [29]. Therefore, it has been proposed to use tomographic DSA to isolate arteries within a single anatomic plane. Kruger et al. [8, 196] described an approach in which the image intensifier and the plane of projection are moved in such a way that only a single plane of the exposed 3D scene is in focus. The contributions of out-of-focus planes can be diminished by applying the aforementioned temporal band-pass filtering techniques. By extending this approach, it is also possible to reconstruct multiple in-focus planes from a single set of projection images, without additional X-ray exposure [69,133,198,220,231,232].6

In some cases, motion artifacts may be reduced by choosing a different mask image. A system for automatic remasking during acquisition was mentioned by Oung & Smith [279]. By using a real-time motion detector, based on the variance of the histogram of grey values in successive subtraction images, which can be considered

4 Since the discontinuity at 33 keV corresponds to the K-shell absorption energy level of iodine, this technique is often referred to as K-edge energy subtraction.
5 To some extent, this may be compensated for by applying matched filtering [310].
6 This is known as tomosynthesis.

a measure of similarity between mask and contrast images (see Section 2.5.2), a new mask image was selected as soon as the measure exceeded a predefined threshold.

Finally, we mention the possibility of motion-synchronized gating of X-ray exposure. For example, artifacts due to the pulsatile motion of vascular structures, as caused by cardiac pulsation, may partially be avoided by using images acquired during the same cardiac phase.7 The gating of ECG-equipment is related to the QRS-complex in the ECG-curve, and may be exploited to trigger X-ray exposure. ECG-gating has been successfully applied in studies of the aortic arch and carotid arteries and bifurcations [22,64,114,120,178,294]. Alternatively, gating may be triggered by internal densitometric measurements of cardiac and respiratory motion [345].
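As a rough illustration of the matched-filtering idea mentioned in this section (weights derived from the contrast dilution curve and summing to zero, so that stationary background cancels), the Python sketch below forms a weighted temporal combination of an image sequence. The sequence, the assumed dilution curve, and the normalization are illustrative choices and do not reproduce the cited algorithms.

    import numpy as np

    # Hypothetical sequence of K logarithmically processed frames (K x M x N).
    K, M, N = 20, 256, 256
    frames = np.random.rand(K, M, N)

    # Hypothetical (expected) contrast dilution curve, one value per frame.
    t = np.arange(K)
    dilution = np.exp(-0.5 * ((t - 12.0) / 3.0) ** 2)

    # Matched-filter style weights: proportional to the mean-subtracted dilution
    # curve, so that the weights sum to zero and stationary background cancels.
    weights = dilution - dilution.mean()
    weights /= np.sqrt(np.sum(weights ** 2))

    # Weighted temporal combination of the sequence.
    matched_output = np.tensordot(weights, frames, axes=(0, 0))
    print(matched_output.shape)  # (256, 256)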

2.2.4 Retrospective Image Processing Solutions

Although the techniques mentioned in Sections 2.2.2 and 2.2.3 may provide a remedy in specific cases, patient motion always occurs to some extent, which causes the subtraction images to show artifacts that may hamper the interpretation of the images, and consequently proper diagnosis. In such situations, motion artifacts may be corrected for retrospectively, by means of image registration and grey-level distortion correction techniques. With these techniques, the images in a sequence are analyzed so as to retrieve a geometrical transformation that accounts for the changes caused by patient motion and to bring the mask image in optimal correspondence with the contrast image prior to subtraction.

The simplest approach in this respect is probably the manually controlled translation of the mask image with respect to the contrast image, a technique often referred to as pixel shifting. Since, in DSA systems, images are acquired, stored, and processed digitally, this technique is quite easy to implement and it has been applied since the early 1980s [137,215]. In fact, it is still the only available motion correction technique on current clinical DSA systems. Obviously, pixel shifting only provides a solution in those situations where artifacts have been caused by gross translational motion. In most cases, patient motion is more complex and cannot be modeled by such a basic transformation. Although pixel shifting may reduce artifacts in some parts of the image, in the remainder of the image artifacts will inevitably be reinforced or even newly created, as already pointed out by Levin et al. [215].

In order to be able to correct for more complex patient motion, registration techniques should be designed so as to have more local control. An example of this is the approach described by Pickens et al. [298], in which second order polynomials were used to define the geometrical transformation of the mask. The 12 parameters of the transformation were determined by manually selecting six points in the mask image, as well as the six corresponding points in the contrast image, and by solving the system of equations that resulted after substitution of these points into the transformation. Higher order polynomials can be used to define the transformation, simply by incorporating more control points.

However, as also pointed out by Pickens et al., manual selection of corresponding points introduces the possibility of operator error. In order to avoid operator-induced

7 Since vessels spend most of their time at or near positions corresponding to the end-diastolic phase, images are preferably acquired during this phase [395].

problems, the registration operation should be automated to the highest possible degree. Many techniques have been developed for this purpose. In the subsequent sections, we analyze the validity of retrospective motion correction by image registration and grey-level distortion correction, and review the techniques that have been proposed to perform these tasks in DSA.
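To illustrate the polynomial approach attributed to Pickens et al. above, the following Python sketch determines the 12 parameters of a second-order polynomial transformation from six corresponding point pairs by solving the resulting system of equations, and then warps a mask image accordingly. The point coordinates, the least-squares solver, and the nearest-neighbour resampling are assumptions made for this sketch and are not part of the published method.

    import numpy as np

    def poly2_basis(x, y):
        # Second-order polynomial basis: 6 terms per coordinate, 12 parameters in total.
        return np.array([1.0, x, y, x * x, x * y, y * y])

    def fit_poly2(points_from, points_to):
        # Least-squares solution of the system obtained by substituting the point
        # pairs into the polynomial transformation (exact for six pairs).
        A = np.array([poly2_basis(x, y) for x, y in points_from])
        cx, _, _, _ = np.linalg.lstsq(A, points_to[:, 0], rcond=None)
        cy, _, _, _ = np.linalg.lstsq(A, points_to[:, 1], rcond=None)
        return cx, cy

    # Hypothetical corresponding points; in the approach described above these
    # would be selected manually in the contrast and mask images.
    pts_contrast = np.array([[10, 10], [200, 15], [20, 220],
                             [240, 230], [120, 120], [60, 180]], dtype=float)
    pts_mask = pts_contrast + np.array([2.0, -1.5])   # stand-in patient motion

    # Fit the mapping from contrast-image to mask-image coordinates, so that the
    # mask can be resampled (backward warping) onto the grid of the contrast image.
    cx, cy = fit_poly2(pts_contrast, pts_mask)

    mask = np.random.rand(256, 256)
    ys, xs = np.mgrid[0:256, 0:256].astype(float)
    basis = [np.ones_like(xs), xs, ys, xs * xs, xs * ys, ys * ys]
    u = sum(c * b for c, b in zip(cx, basis))   # x-coordinates to sample in the mask
    v = sum(c * b for c, b in zip(cy, basis))   # y-coordinates to sample in the mask

    # Nearest-neighbour resampling, for brevity only.
    ui = np.clip(np.rint(u), 0, 255).astype(int)
    vi = np.clip(np.rint(v), 0, 255).astype(int)
    warped_mask = mask[vi, ui]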

2.3 Retrospective Motion Correction — Preliminaries

It is important to note that the individual images in digital X-ray angiography are in fact 2D projections of 3D anatomical structures. This implies that in the case of patient motion, differences between the 2D mask and contrast images are the result of a 3D transformation of these structures. In an attempt to correct for the artifacts in the resulting subtraction images retrospectively, the use of 2D registration techniques is justified only when it can be shown, at least theoretically, that it is possible to construct a 2D geometrical transformation that completely accounts for the projective effects of a 3D transformation. In this section it will be argued that, although this is indeed the case, in practice the possibilities to extract such a transformation from the projection images are limited.

2.3.1 The Existence of a 2D Geometrical Transformation

In X-ray projection imaging, the grey-value of an arbitrary pixel in an image is determined by the energy flux, or intensity, Φ, of the X-rays incident on the corresponding detector element. In principle, Φ(x), x ∈ ℝ², is constituted by the contributions of all particles in the 3D scene, according to the relationship:

\[ \Phi(\mathbf{x}) = \int_0^{\infty} S(\mathbf{x},E)\, e^{-L(\mathbf{x},E)}\, \mathrm{d}E, \tag{2.1} \]

where E denotes energy, S(x,E) is the energy spectral density, at the source, of the X-rays incident on the detector at position x, and L(x,E) is the line integral given by

\[ L(\mathbf{x},E) = \int_0^{1} \mu(\lambda_{\mathbf{x}}(\xi),E)\, \mathrm{d}\xi. \tag{2.2} \]

In Eq. (2.2), µ denotes the linear X-ray attenuation coefficient, which is dependent on the type of material (accounted for by a position dependency), as well as on the energy E of the rays. Integration of µ is carried out along the linear path as traversed by the ray, i.e., from the source to the element at position x on the detector matrix, of which λ_x : [0,1] → ℝ³ is a parametric representation.

Although, in practice, X-rays will be polyenergetic, i.e., S is non-zero for a certain range of energies, it is common use to assume the rays to be monoenergetic, i.e., S(x,E) = Φ_∅(x) δ(E − E_q), where E_q is the energy level of the X-ray quanta and Φ_∅(x) is the energy flux that is measured when the traversed path λ_x is completely in vacuum (no material encountered by the rays). In this case, Φ is given by Lambert-Beer's law:8

\[ \Phi(\mathbf{x}) = \Phi_\varnothing(\mathbf{x})\, e^{-L(\mathbf{x},E_q)}. \tag{2.3} \]

After logarithmic post-processing and calibration with respect to Φ_∅, the grey-value I at position x in the resulting image is given by9

\[ I(\mathbf{x}) \propto L(\mathbf{x}). \tag{2.4} \]

Starting from relation (2.4), the problem of the existence of a 2D geometrical transformation that accounts for the projective effects of a 3D transformation was studied by Fitzpatrick [95]. He argued that, since the total amount of attenuation of X-rays, as caused by the material in a confined volume, can only be changed by transport of particles across the boundaries of that volume, the attenuation coefficient µ behaves as the density of a conserved quantity, for which the continuity equation from continuum mechanics and fluid dynamics holds.10 Using the continuity equation, he proved that given two X-ray projection images, I_0(x) = I(x, t_0) and I(x) = I(x, t), taken at times t_0 and t ≠ t_0 respectively, there always exists a one-to-one 2D mapping Ψ: ℝ² → ℝ² that transforms points x_0 ∈ ℝ² in I_0 into points x = Ψ(x_0) in I, such that11 the grey-values in I and I_0 can be described by the following equation:

\[ I(\mathbf{x}) = J_\Psi^{-1}(\mathbf{x}_0)\, I_0(\mathbf{x}_0), \qquad \forall\, \mathbf{x}_0, \tag{2.5} \]

where J_Ψ^{-1} is the inverse Jacobian of the mapping Ψ. As argued by Fitzpatrick [97], the factor J_Ψ^{-1} will be finite and larger than zero for all transformations describing physical motion. In regions where Ψ describes local expansions of tissues, J_Ψ^{-1} will be less than one, and in regions where Ψ describes local contractions, J_Ψ^{-1} will be larger than one. For in-plane rigid motion of tissues in the 3D scene, this factor will be equal to one.12
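The step from Eq. (2.3) to relation (2.4) can be made explicit: taking the negative logarithm of Eq. (2.3) and calibrating with respect to Φ_∅ isolates the line integral,

\[ -\ln\!\left(\frac{\Phi(\mathbf{x})}{\Phi_\varnothing(\mathbf{x})}\right) = L(\mathbf{x},E_q), \]

so that, with a fixed proportionality constant a (cf. footnote 12), I(x) = aL(x), which is precisely relation (2.4).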

2.3.2 Limitations in Transformation Recovery

As emphasized by Fitzpatrick [95], his proof only shows that at least one such image transformation exists.13 It does not yield a recipe for retrieving a transformation

8 After the Swiss-German mathematician and physicist J. H. Lambert and the German physicist A. Beer. It is a combination of Lambert's law (also known as Bouguer's law) from optics, which relates the amount of light absorbed and the distance it travels through an absorbing medium, and Beer's law, which relates the amount of absorption and the concentration of the absorbing substance.
9 Since E_q is fixed, it is left out for convenience hereafter.
10 See e.g. Condon & Odishaw [61].
11 The proof was subject to a few constraints which are easily met in X-ray projection imaging. Although it was based on the assumption of orthogonal projections, it was argued that a similar proof, though more complex, can be constructed in the case of perspective projections.
12 Notice that relation (2.4) implies that I(x) = aL(x), where a is a constant. If a > 0, local expansions will result in a decrease of local grey values in resulting projection images, whereas local contractions will result in an increase. Conversely, if a < 0, local expansions and contractions will result in an increase or decrease of grey values, respectively. In either case, Eq. (2.5) holds.
13 In fact, given the two images I_0 and I that differ as a result of a single 3D transformation, there are infinitely many mappings Ψ for which Eq. (2.5) holds.

from the two images. In fact, in most practical cases it will be impossible to retrieve a transformation that exactly satisfies Eq. (2.5). This is primarily due to the following three reasons:

i) Since we are dealing with discrete images, measurement of the displacement at a certain pixel in the image inevitably involves incorporation of neighboring pixels into the computations; in all practical situations, comparison of individual pixels is useless and some form of regularization is required. In cases where the changes in a neighborhood have been caused by the uncorrelated motion of several superimposed objects, the result of the measurement can be expected to be entangled.14

ii) Due to the limited field of view (FOV) of the image intensifier, the images I_0 and I are defined only on a confined domain, D ⊂ ℝ², and do not contain any information about the displacement of particles that entered or left the FOV. In the particular case of angiography, the presence of additional contrast in one of the images, as caused by the introduction of contrast material into the scene, may pose a serious problem.

iii) At pixels that lie on isophotes in the image, it is impossible to retrieve the tangential component of the displacement vector, since motion in the tangential direction does not cause a change in the local appearance of the image.15 In the field of (computer) vision this ambiguity problem is generally known as the aperture problem [151,233].

In addition to these fundamental problems there are other factors that may complicate the process of finding the optimal correspondence between successive images. These are due to imperfections of the acquisition system, such as limited spatial resolution, grey-level quantization, noise, or the effects of time-varying scatter, X-rays being non-monoenergetic, or beam hardening, which causes the assumption of proportionality between the line integral of the attenuation coefficient and the actual grey values (relation (2.4)) to be only approximately valid. However, these effects have been shown to be negligible [97,100,102,297].

2.4 Retrospective Motion Correction — Techniques

The problem of finding the correspondence between images appears in many situations. Surveys of registration techniques have been given by Aggerwal & Nandhakumar [3] and Brown [32], and in the field of medical imaging by Van den Elsen et al. [384] and Maintz & Viergever [229]. However, as these authors aimed at providing general overviews, they were rather superficial with respect to specific applications. In this section we present a detailed overview of the techniques that have been proposed to perform the registration task in the particular field of DSA.

14 This problem was also alluded to by Kruger et al. [192].
15 Notice that this holds true only for certain types of rigid motion, viz., those for which J_Ψ^{-1} equals one in Eq. (2.5).

In principle, techniques for the automatic computation of local motion or displacement of certain objects or structures can be divided into two categories: (i) gradient-based optic-flow techniques, and (ii) template-matching based techniques. In order to allow for the application of any of these techniques to the problem of registration of digital X-ray projection images, account must be given of the validity of their basic assumptions for this particular type of images. In an earlier paper [250], we have discussed the issue of optic flow versus template matching (see also Chapter 3 of this thesis). It was concluded that the basic assumptions of optic-flow techniques do not apply to digital X-ray projection imaging, except in the case of parallel projection and in-plane rigid body motion in the original 3D scene. In addition, these techniques suffer from all of the three problems mentioned in Section 2.3.2. Although optic-flow techniques have been applied to X-ray angiography images for motion analysis of the heart [251], and for the determination of blood flow [7], they have, to our knowledge, never been used to solve the registration problem for this type of images. Therefore, they are not considered further in this chapter.

Template-matching techniques are based on the assumption that a local displacement, d = (d_x, d_y), of a structure in one image, I, can be estimated by defining a certain window W (say K × L pixels in size) containing this structure, and by finding the corresponding window in a reference image, I_0, in the sequence by means of correlation.16 Although these techniques also suffer from the aperture problem and superimposed and independently moving structures, they can be made much more robust against the presence of additional contrast in some parts of the live images by applying a similarity measure that is relatively insensitive to local grey-level changes.17 Since, in angiography images, the contrasted blood vessels are the objects of interest, this is an important property. In the following subsections we elaborate on the various aspects of template-matching based motion correction in DSA.
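As a simple illustration of this template-matching principle, the Python sketch below estimates the displacement of a single window by exhaustively shifting it over a search region in the reference image and maximizing the normalized cross-correlation (one similarity measure among the many discussed in Section 2.4.2). The window size, search range, and choice of measure are illustrative assumptions rather than settings from any particular published method.

    import numpy as np

    def ncc(a, b):
        # Normalized cross-correlation between two equally sized patches.
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return (a * b).sum() / denom if denom > 0 else 0.0

    def estimate_displacement(ref, img, cx, cy, half=16, search=8):
        # Template: a (2*half+1)^2 window around (cx, cy) in the current image.
        tpl = img[cy - half:cy + half + 1, cx - half:cx + half + 1]
        best, best_d = -np.inf, (0, 0)
        # Exhaustive search over integer displacements in the reference image.
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y0, x0 = cy + dy - half, cx + dx - half
                cand = ref[y0:y0 + 2 * half + 1, x0:x0 + 2 * half + 1]
                score = ncc(tpl, cand)
                if score > best:
                    best, best_d = score, (dx, dy)
        return best_d  # (d_x, d_y) maximizing the similarity measure

    # Hypothetical mask (reference) and live images; np.roll stands in for motion.
    ref = np.random.rand(256, 256)
    img = np.roll(ref, shift=(3, -2), axis=(0, 1))
    print(estimate_displacement(ref, img, cx=128, cy=128))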

2.4.1 Complexity of the Transformation

Since template-matching algorithms can be computationally very expensive, it is usually not a viable approach to use them to explicitly compute the displacement for every pixel in the image. Therefore, most template-matching based motion correction techniques only compute the optimal correspondence for a limited number of windows, or regions of interest. This imposes limitations on the complexity of the geometrical transformation that can be constructed and therefore on the complexity of the patient motion that can be corrected for.

In the simplest case, the entire mask image is used as a template, which amounts to taking a single window of size $M \times N$, where $M$ and $N$ are, respectively, the $x$- and $y$-dimension of the image.18 This has been used, e.g., by Potel & Gustafson [301]. Since, with this approach, it is not possible to correct for patient motion other than gross translation and rotation, it boils down to automated pixel shifting. In order to be able to correct for more local translational and rotational motion, the window should be reduced to a confined part of the image. Examples of such an approach have been described by several authors [161,369,389,392,393,421], where the windows were user-defined regions of interest. Registration in a larger part of the image can then be obtained simply by defining more regions of interest, which may differ in size. Pickens et al. [297] and Fitzpatrick et al. [96,97,99–102] described a more flexible approach, where the transformation of a quadrilateral region of interest is not determined by translation and rotation of the entire window, but rather by the independent displacement of the four constituent corner points. This allows for the construction of more complex geometrical transformations, in particular the one-to-one polynomial mappings from the class proposed by Fitzpatrick & Leuze [98]. This approach has also been used by Mandava et al. [230].

More sophisticated algorithms are those in which the displacement vectors at a larger number of so-called control points are considered samples of the original displacement vector field, and from which a global geometrical transformation is constructed by means of interpolation. The displacement vectors at the control points are, again, computed by applying template matching to small windows around these points. With this approach, the size of the windows is not determined by the dimensions of the region of interest as indicated by the user, but rather by the minimum amount of information required to obtain reliable estimates for the displacements of the corresponding control points. This, in turn, highly depends on the criterion that is employed to determine similarity (see Section 2.4.2), and may also depend on image content.

Control points may be chosen on a regular grid, from which a global displacement vector field can easily be computed, e.g., by defining a quadrilateral mesh, and by using bilinear interpolation within every individual quadrilateral, as has been proposed by many authors [63,182,259,318–320,363,364,387,388,417,427,428]. Hayashi et al. [140] used cubic B-splines for this purpose. By using a regular grid, control points are chosen without taking into account image content within the corresponding windows. However, it is well known that, regardless of the employed similarity measure, template-matching techniques tend to yield unreliable results in homogeneous regions. To correct for this, Zuiderveld et al. [427,428] proposed to track down unreliable displacement vectors a posteriori and to bring them into agreement with the displacement vectors of neighboring control points by means of iterative relaxation.19 Buzug et al. [35,36,38,40,44,45] proposed to use an exclusion technique, by which grid points in regions with insufficient contrast variation are excluded a priori from the set of control points.20 Although, with this approach, the remaining control points are still on the regular grid, it is no longer possible to define a quadrilateral mesh. In order to obtain a complete displacement vector field, they used the displacement vectors at the control points to compute an affine transformation [36,38–40,42,44,45].21 They also experimented with elastic transformations [35,41,45], using thin-plate splines [24].

16 Not necessarily in the mathematical sense of the word.
17 We will return to this issue in Sections 2.4.2 and 2.5.2.
18 In digital angiography images, M is equal to N, where M is usually 512 or 1024 pixels.
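As an illustration of the regular-grid approach described above, the following sketch (a minimal example of our own; it assumes the control-point displacements have already been estimated by template matching) constructs a dense displacement vector field by bilinear interpolation within every grid cell.

import numpy as np

def dense_field_from_grid(grid_x, grid_y, disp, shape):
    # grid_x, grid_y: 1D arrays with the x- and y-coordinates of the regular
    #                 control-point grid (strictly ascending).
    # disp:           array of shape (len(grid_y), len(grid_x), 2) holding the
    #                 displacement vectors (dx, dy) estimated at the grid points.
    # shape:          (height, width) of the image.
    # Returns an array of shape (height, width, 2) with the interpolated field.
    h, w = shape
    xs, ys = np.arange(w), np.arange(h)
    # Index of the grid cell each pixel falls into.
    ix = np.clip(np.searchsorted(grid_x, xs, side='right') - 1, 0, len(grid_x) - 2)
    iy = np.clip(np.searchsorted(grid_y, ys, side='right') - 1, 0, len(grid_y) - 2)
    # Normalized position of each pixel within its cell.
    tx = (xs - grid_x[ix]) / (grid_x[ix + 1] - grid_x[ix])
    ty = (ys - grid_y[iy]) / (grid_y[iy + 1] - grid_y[iy])
    tx, ty = np.meshgrid(tx, ty)     # (h, w) fractional coordinates
    IX, IY = np.meshgrid(ix, iy)     # (h, w) cell indices
    # Bilinear combination of the four surrounding control-point vectors.
    f00, f10 = disp[IY, IX], disp[IY, IX + 1]
    f01, f11 = disp[IY + 1, IX], disp[IY + 1, IX + 1]
    wx, wy = tx[..., None], ty[..., None]
    return (1 - wy) * ((1 - wx) * f00 + wx * f10) + wy * ((1 - wx) * f01 + wx * f11)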

19 Displacement vectors were considered unreliable if either the curvature of the match surface around the optimum was too low, or the deviation from neighboring vectors was too large.
20 Determined by computing the entropy of the grey-value distributions in the windows in the mask image around these points, and by applying a threshold.
21 Since this is an over-constrained problem, they used singular value decomposition to obtain the best result in a least squares sense. In a later publication [43], they also experimented with isotropically and anisotropically weighted least squares procedures to further improve the reliability of the resulting transformation.

The exclusion concept can be taken one step further by dismissing the regular grid paradigm and by extracting regions with sufficient contrast variation prior to control point selection. The first steps in this direction can be attributed to Yanagisawa et al. [423]. They proposed to use the displacement vectors at a limited number of control points (being the centers of user-defined regions containing interesting structures) to determine the global translation and rotation in a least squares sense, and to interpolate the residual local displacement vectors onto the remainder of the image by means of radial basis functions.

In recent publications [247,248,250] we have argued that, since in the subtraction images artifacts will only appear in those regions where strong object edges are present in the unsubtracted images, the selection of control points should be based on an edge detection scheme. We used Canny's operator [47] to detect edges in the mask image and to extract control points by means of a three-parameter algorithm based on assumptions about the coherence of image structures. The overall displacement vector field was computed by using linear interpolation of the local displacement vectors, for which we used a Delaunay tessellation of the set of irregularly distributed control points.
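For irregularly distributed control points, a dense displacement field can be obtained by piecewise linear interpolation over a Delaunay tessellation. A minimal sketch using SciPy, which performs the triangulation internally (the function and variable names are illustrative only; filling pixels outside the convex hull by nearest-neighbor extrapolation is one of several possible choices):

import numpy as np
from scipy.interpolate import LinearNDInterpolator, NearestNDInterpolator

def dense_field_from_control_points(points, vectors, shape):
    # points:  (n, 2) array of control-point coordinates (x, y).
    # vectors: (n, 2) array of displacement vectors estimated at those points.
    # shape:   (height, width) of the image.
    h, w = shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    grid = np.column_stack((xs.ravel(), ys.ravel()))
    # Piecewise linear interpolation over the Delaunay tessellation.
    field = LinearNDInterpolator(points, vectors)(grid)
    # Pixels outside the convex hull of the control points come back as NaN;
    # fill them with the vector of the nearest control point.
    outside = np.isnan(field[:, 0])
    if np.any(outside):
        field[outside] = NearestNDInterpolator(points, vectors)(grid[outside])
    return field.reshape(h, w, 2)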

2.4.2 Similarity Measures

Even if the chosen control-point based transformation is sufficiently complex so as to be able to accurately model the actual geometrical transformation induced by patient motion, the resulting registration algorithm will be useless if the template-matching operation fails to yield correct displacement vectors. Therefore, the most important aspect of template matching is the similarity measure that is used to determine the amount of correspondence between windows in successive frames. Many measures have been proposed for the registration of X-ray angiography images. Here we briefly describe each of them.

Correlation-Based Measures

In digital image processing, the use of correlation for the purpose of image registration has been proposed since the seventies [121,303,304,321]. The normalized cross-correlation (NCC) similarity measure is computed as
\[
\mathcal{M}_{\mathrm{NCC}}(\mathbf{d}) = \frac{\sum_{\mathbf{x}\in W} I(\mathbf{x})\, I_0(\mathbf{x}+\mathbf{d})}
{\sqrt{\sum_{\mathbf{x}\in W} I^2(\mathbf{x}) \sum_{\mathbf{x}\in W} I_0^2(\mathbf{x}+\mathbf{d})}},
\qquad (2.6)
\]
of which the zero-mean version is the correlation coefficient (CC) measure:
\[
\mathcal{M}_{\mathrm{CC}}(\mathbf{d}) = \frac{\sum_{\mathbf{x}\in W} \bigl(I(\mathbf{x}) - \langle I\rangle_W\bigr)\bigl(I_0(\mathbf{x}+\mathbf{d}) - \langle I_0\rangle_{W,\mathbf{d}}\bigr)}
{\sqrt{\sum_{\mathbf{x}\in W} \bigl(I(\mathbf{x}) - \langle I\rangle_W\bigr)^2 \sum_{\mathbf{x}\in W} \bigl(I_0(\mathbf{x}+\mathbf{d}) - \langle I_0\rangle_{W,\mathbf{d}}\bigr)^2}},
\qquad (2.7)
\]
where $\mathbf{d} = (d_x, d_y)$ denotes the local displacement vector and
\[
\langle I\rangle_W = \frac{1}{KL}\sum_{\mathbf{x}\in W} I(\mathbf{x})
\quad\text{and}\quad
\langle I_0\rangle_{W,\mathbf{d}} = \frac{1}{KL}\sum_{\mathbf{x}\in W} I_0(\mathbf{x}+\mathbf{d})
\qquad (2.8)
\]
denote the mean values of the image intensities in the respective windows.22 Both measures are to be maximized. Since in digital angiography images the grey values are positive integers, in the range $[0, G]$ say,23 it follows that $\mathcal{M}_{\mathrm{NCC}}$ is in the range $[0, 1] \subset \mathbb{R}$ and $\mathcal{M}_{\mathrm{CC}}$ is in $[-1, 1] \subset \mathbb{R}$. Notice that $\mathcal{M}_{\mathrm{NCC}}$ assumes its maximum value only when $I(\mathbf{x}) = \kappa I_0(\mathbf{x}+\mathbf{d})$, while $\mathcal{M}_{\mathrm{CC}}$ assumes its maximum when $I(\mathbf{x}) = \kappa_1 I_0(\mathbf{x}+\mathbf{d}) + \kappa_2$, where $\kappa$, $\kappa_1$ and $\kappa_2$ are constants.24 These measures have been applied to the registration problem in DSA by Potel & Gustafson [301], Yanagisawa et al. [423], and Takahashi et al. [363], and have later been mentioned by many others in discussions concerning comparative evaluations of similarity measures (see further Section 2.5.2).

Correlation-based measures can also be constructed in the frequency domain. If two images, or windows within the images, $I$ and $I_0$, are assumed to differ only as a result of pure translational motion, i.e., $I(\mathbf{x}) = I_0(\mathbf{x}+\mathbf{d})$, it can easily be derived that their Fourier transforms, $\tilde{I}$ and $\tilde{I}_0$, respectively, are related by

\[
\tilde{I}(\mathbf{f}) = e^{i 2\pi \mathbf{f}\cdot\mathbf{d}}\, \tilde{I}_0(\mathbf{f}),
\qquad (2.9)
\]
where $\mathbf{f} \in \mathbb{R}^2$ denotes 2D frequency and $i = \sqrt{-1}$ is the imaginary unit. That is, the images, or windows, have identical Fourier spectra up to a phase difference that corresponds to the relative displacement. Using Eq. (2.9), one readily derives the cross-power spectrum, $S$, of $I$ and $I_0$:

\[
S(\mathbf{f}) \triangleq \frac{\tilde{I}(\mathbf{f})\, \tilde{I}_0^{*}(\mathbf{f})}{\bigl|\tilde{I}(\mathbf{f})\, \tilde{I}_0^{*}(\mathbf{f})\bigr|} = e^{i 2\pi \mathbf{f}\cdot\mathbf{d}},
\qquad (2.10)
\]
where $\tilde{I}_0^{*}$ is the complex conjugate of $\tilde{I}_0$. The inverse Fourier transform of the complex conjugate of $S(\mathbf{f})$ will yield a Dirac delta pulse, $\delta(\mathbf{x}-\mathbf{d})$, with the position of the pulse corresponding to the displacement $\mathbf{d}$. This observation has led to the introduction of the phase correlation (PC) measure, defined as

\[
\mathcal{M}_{\mathrm{PC}}(\mathbf{d}) = \mathcal{F}^{-1}\!\left[S^{*}(\mathbf{f})\right],
\qquad (2.11)
\]

where $\mathcal{F}^{-1}[\cdot]$ denotes the inverse Fourier transform. Notice that $\mathcal{M}_{\mathrm{PC}}$ is insensitive to grey-level scaling. That is, if $I(\mathbf{x}) = \kappa I_0(\mathbf{x}+\mathbf{d})$, where $\kappa$ is any constant, the extreme of $\mathcal{M}_{\mathrm{PC}}$ will have the same value for all $\kappa$. Also notice that, in practice, due to the use of discrete Fourier transforms, this extreme value will be finite.

22 It must be pointed out that, although Eqs. (2.6) and (2.7) are the formal definitions of these measures, the normalized cross-correlation and the correlation coefficient are usually not exactly computed this way. Several components of these measures, such as $\langle I\rangle_W$, $\sum_{W} I^2(\mathbf{x})$, and $\sum_{W} (I(\mathbf{x}) - \langle I\rangle_W)^2$, are independent of $\mathbf{d}$ and can therefore be precomputed. Normalization with respect to $I$ can even be omitted, without altering the final result (i.e., the displacement vector $\mathbf{d}$ that yields the largest correlation value). Also, in order to be able to apply these measures (and the ones described hereafter), one will have to decide on how to treat windows that extend beyond the image borders. In the sequel to this section, we only present the basic principles and definitions, without going into implementation related details.
23 Notice that with current clinical DSA devices, the quantization precision is usually 10 bits, which implies that G = 1023.
24 Unless explicitly stated otherwise, the other similarity measures described in this section assume their extreme value only when $I(\mathbf{x}) = I_0(\mathbf{x}+\mathbf{d})$.

Phase correlation for registration was originally proposed by Kuglin & Hines [199] and has been applied to X-ray angiography sequences by Leclerc & Benchimol [206], Wu et al. [421], and Close & Whiting [56,58]. A related approach, based on the power cepstrum25 representations of the images, was used by Englmeier et al. [80].
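As a sketch of how the phase-correlation measure of Eq. (2.11) can be evaluated in practice, consider the following minimal NumPy example (our own naming; it assumes two windows of equal size that differ, approximately, by a circular translation):

import numpy as np

def phase_correlation_shift(I, I0):
    # Estimate the integer displacement d such that I(x) ~ I0(x + d),
    # by locating the peak of the phase-correlation surface (Eq. (2.11)).
    F = np.fft.fft2(I.astype(float))
    F0 = np.fft.fft2(I0.astype(float))
    S = F * np.conj(F0)
    S /= np.maximum(np.abs(S), 1e-12)          # keep only the phase term, Eq. (2.10)
    m_pc = np.real(np.fft.ifft2(np.conj(S)))   # ideally a delta pulse at x = d
    dy, dx = np.unravel_index(np.argmax(m_pc), m_pc.shape)
    # Map the circular peak position to signed displacements.
    if dy > I.shape[0] // 2:
        dy -= I.shape[0]
    if dx > I.shape[1] // 2:
        dx -= I.shape[1]
    return dx, dy

In practice, a window function (e.g. a Hann window) would first be applied to both windows, to reduce the leakage effects discussed in Section 2.5.2.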

Sum of the Absolute Values of Differences

In contrast with correlation-based measures, most similarity measures are defined in terms of the grey values in the difference image $I_d(\mathbf{x}) = I(\mathbf{x}) - I_0(\mathbf{x}+\mathbf{d})$. The most well-known measure is the sum of the absolute values of differences (SAVD):
\[
\mathcal{M}_{\mathrm{SAVD}}(\mathbf{d}) = \sum_{\mathbf{x}\in W} \bigl|I_d(\mathbf{x})\bigr|,
\qquad (2.12)
\]
which is to be minimized, and assumes values in the range $[0, GKL] \subset \mathbb{Z}$. This measure was first used by Svedlow et al. [359] for alignment of Landsat images, and has later been applied to the registration problem in DSA by several authors, viz., Wilson et al. [417], Van Tran & Sklansky [387,388] and Ko et al. [182]. A similar measure, the mean of the absolute values of the differences, was used by Pickens et al. [296,297], Fitzpatrick et al. [96,97,99–102] and Mandava et al. [230].

Sum of Squared Differences

Another well-known measure based on absolute differences is the sum of squared differences (SSD), given by
\[
\mathcal{M}_{\mathrm{SSD}}(\mathbf{d}) = \sum_{\mathbf{x}\in W} \bigl|I_d(\mathbf{x})\bigr|^2.
\qquad (2.13)
\]

The displacement vector that minimizes $\mathcal{M}_{\mathrm{SSD}}$ relates the window in the contrast image to the corresponding window in the mask image that is most similar in a least squares sense. Notice that $\mathcal{M}_{\mathrm{SSD}}$ is in the range $[0, G^2 KL] \subset \mathbb{Z}$. Hayashi et al. [140] applied this measure to Laplacian filtered versions of the mask and contrast images. Although the SSD measure has been mentioned by many others as a potential criterion for image registration in DSA [166,301,369,387,388,417], in most cases alternative measures were used.
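For completeness, the two difference-based measures above translate directly into code; a minimal sketch (our own naming), assuming the mask window has already been extracted at the candidate displacement:

import numpy as np

def savd(contrast_win, mask_win):
    # Sum of the absolute values of differences, Eq. (2.12); to be minimized.
    return np.sum(np.abs(contrast_win.astype(float) - mask_win.astype(float)))

def ssd(contrast_win, mask_win):
    # Sum of squared differences, Eq. (2.13); to be minimized.
    diff = contrast_win.astype(float) - mask_win.astype(float)
    return np.sum(diff * diff)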

Variance of Differences

The standard deviation of differences has been used by Van der Stelt et al. [385] as a similarity measure for determining the optimal projections for subtraction, and by Dunn et al. [75] as a quality measure for comparing different registration techniques in dental DSR. A related measure, based on the variance of differences (VOD), was proposed by Cox & De Jager [63] for registration in DSA:
\[
\mathcal{M}_{\mathrm{VOD}}(\mathbf{d}) = \frac{1}{KL}\sum_{\mathbf{x}\in W} \bigl(I_d(\mathbf{x}) - \langle I_d\rangle_W\bigr)^2,
\qquad (2.14)
\]

25 An anagram of the word “spectrum”. See e.g. Lee et al. [208] for details on the use of power cepstrum and spectrum techniques for image registration.

with the operator $\langle\cdot\rangle_W$ as defined in Eq. (2.8).26 This measure assumes values in the range $[0, G^2] \subset \mathbb{R}$, and is to be minimized. Notice that $\mathcal{M}_{\mathrm{VOD}}$ assumes its minimum value only when $I(\mathbf{x}) = I_0(\mathbf{x}+\mathbf{d}) + \kappa$, where $\kappa$ is any constant.

Sign-Change Measures

If two images, $I$ and $I_0$, are assumed to differ only as a result of noise, the difference image, $I_d$, will exhibit random fluctuations according to the noise properties in case $\mathbf{d} = \mathbf{0}$, and will contain additional distortions in case $\mathbf{d} \neq \mathbf{0}$ (provided that the original images are not entirely homogeneous). This implies that if the noise is additive, with zero mean and a symmetrical probability density function, the difference image will have many sign changes when scanned row-wise or column-wise, the number of sign changes being maximum when $\mathbf{d} = \mathbf{0}$. This observation by Venot et al. has led to the construction of the stochastic sign-change (SSC) measure [389,392].

Although the SSC measure had been applied successfully to normalization and registration of scintigraphic images [390,391], it did not appear to be an adequate measure for registration of X-ray images because of the relatively low noise level as compared to the quantization precision [389,392].27 In order to cope with this “problem”, Venot et al. adjusted the SSC measure to form the deterministic sign-change (DSC) measure:
\[
\mathcal{M}_{\mathrm{DSC}}(\mathbf{d}) = \sum_{\mathbf{x}\in W_0} \mathrm{sgn}\bigl( I_d^P(x,y)\, I_d^P(x-1,y) \bigr),
\qquad (2.15)
\]
where $W_0 = \{\mathbf{x} \mid (x-1,y) \in W\}$ (the window $W$ is assumed to be scanned row-wise here), $I_d^P(\mathbf{x}) = I(\mathbf{x}) - I_0(\mathbf{x}+\mathbf{d}) + P(\mathbf{x})$, and the functions $\mathrm{sgn}(x)$ and $P(\mathbf{x})$ are respectively defined as
\[
\mathrm{sgn}(x) \triangleq
\begin{cases}
0 & \text{if } x > 0,\\
1 & \text{if } x < 0,
\end{cases}
\qquad (2.16)
\]
and
\[
P(\mathbf{x}) \triangleq
\begin{cases}
+q & \text{if } x + y \text{ even},\\
-q & \text{if } x + y \text{ odd},
\end{cases}
\qquad (2.17)
\]
where $q$ is a small real or integer value. With the DSC measure, the sign changes in the difference image are caused by the deterministic properties of the pattern $P(\mathbf{x})$ rather than the stochastic properties of the original images. Notice that $\mathcal{M}_{\mathrm{DSC}}$ assumes values in the range $[0, (K-1)L] \subset \mathbb{Z}$, and is to be maximized.

Apart from Venot et al. [389,392,393], this measure has been used for registration by Zuiderveld et al. [427,428] and Talukdar et al. [364], and by Roos & Viergever [318–320] for the purpose of reversible interframe compression of medical images, in particular coronary and ventricular X-ray angiograms. Hua & Fram [161] applied the DSC measure to first derivative versions of the mask and contrast images.

26 The definition presented in Eq. (2.14) deviates from the original definition by Cox & De Jager [63] in the sense that the latter was constructed so as to yield values in the range (0, 1]. This was accomplished by subjecting the values $\mathcal{M}_{\mathrm{VOD}}$ as computed using Eq. (2.14) to a nonlinear transformation. However, since this transformation is a bijection for the possible range of $\mathcal{M}_{\mathrm{VOD}}$ values, the measure presented here will yield equivalent results.
27 Notice that in the early 1980s, digital angiography images were usually acquired with a grey-level resolution of 8 bits. As pointed out in Footnote 23, the quantization precision of current clinical DSA devices is 10 bits.
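A direct transcription of Eqs. (2.15)–(2.17) into NumPy might look as follows (a sketch with illustrative naming; the default value of q and the treatment of the undefined case x = 0 in Eq. (2.16) are our own choices):

import numpy as np

def dsc(contrast_win, mask_win, q=2):
    # Deterministic sign-change measure, Eqs. (2.15)-(2.17); to be maximized.
    L, K = contrast_win.shape
    diff = contrast_win.astype(float) - mask_win.astype(float)
    # Checkerboard pattern P(x): +q where x + y is even, -q where it is odd.
    y, x = np.mgrid[0:L, 0:K]
    pattern = np.where((x + y) % 2 == 0, q, -q)
    idp = diff + pattern
    # Count sign changes between horizontally adjacent pixels (row-wise scan).
    products = idp[:, 1:] * idp[:, :-1]
    return int(np.sum(products < 0))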

Coincident Bit Counting

Another measure that was designed to be independent of actual grey values is coincident bit counting (CBC). With this measure, the degree of similarity of successive windows is determined by counting the total number of coincident bits in the binary representations of the grey values of corresponding pixels. This measure was proposed by Chiang & Sullivan [51], and can be defined as
\[
\mathcal{M}_{\mathrm{CBC}}(\mathbf{d}) = \sum_{\mathbf{x}\in W} \sum_{i=0}^{n-1} b_i\bigl(I(\mathbf{x})\bigr) \otimes b_i\bigl(I_0(\mathbf{x}+\mathbf{d})\bigr),
\qquad (2.18)
\]
where $b_i(\cdot)$ is a function that returns the $i$th bit of the binary representation of its argument, “$\otimes$” denotes the exclusive-NOR operator, and $n$ is the number of bits in which the grey values are represented.28 The CBC measure as defined in Eq. (2.18) assumes values in the range $[0, nKL] \subset \mathbb{Z}$, and is to be maximized.
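Expressed with bitwise operations, the measure reads as follows (a minimal sketch of our own, assuming the grey values are stored as n-bit unsigned integers):

import numpy as np

def cbc(contrast_win, mask_win, n_bits=10):
    # Coincident bit counting, Eq. (2.18); to be maximized.
    # XOR marks differing bits, so coincident bits are n_bits minus the popcount.
    xor = contrast_win.astype(np.uint16) ^ mask_win.astype(np.uint16)
    differing = np.zeros(xor.shape, dtype=np.int32)
    for i in range(n_bits):
        differing += (xor >> i) & 1
    return int(np.sum(n_bits - differing))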

Histogram-of-Differences Based Measures

Fundamentally different measures are those based on the normalized histogram of differences, $H(g)$, defined as
\[
H(g) = \frac{1}{KL}\sum_{\mathbf{x}\in W} \delta\bigl(I_d(\mathbf{x}), g\bigr),
\qquad (2.19)
\]
where $g \in [-G, G] \subset \mathbb{Z}$ is any grey-value difference, and $\delta(x, y)$ is the Kronecker delta function:
\[
\delta(x, y) \triangleq
\begin{cases}
1 & \text{if } x = y,\\
0 & \text{if } x \neq y.
\end{cases}
\qquad (2.20)
\]
By using histogram-based measures, the degree of similarity of windows in successive images is not determined by the actual differences of grey values but rather by their relative frequency. An example of such a measure is the entropy of the histogram of differences (ENT):
\[
\mathcal{M}_{\mathrm{ENT}}(\mathbf{d}) = -\sum_{g=-G}^{G} H(g)\log H(g),
\qquad (2.21)
\]
which assumes values in the range $[0, \log(2G+1)] \subset \mathbb{R}$, and is to be minimized.29

28 In accordance with the earlier remark (Footnote 22), this definition of the CBC measure does not provide a means to deal with border problems, which is the reason why it deviates somewhat, though not fundamentally, from the original definition by Chiang & Sullivan [51].
29 It must be pointed out that if $KL$ is not an integer multiple of $2G + 1$ (as is mostly the case), the explicit form of the actual upper limit of $\mathcal{M}_{\mathrm{ENT}}$ is more complex (see Appendix 2.A).

This measure was used by Lehmann et al. [211] as a quality measure for the alignment of X-ray images in dental DSR. Independently, it was soon after proposed by Buzug et al. [36,40] and used for registration of angiographic X-ray images. Later, Buzug et al. generalized the concept of histogram-based similarity measures and proved that any measure:

\[
\mathcal{M}(\mathbf{d}) = \sum_{g=-G}^{G} f\bigl(H(g)\bigr),
\qquad (2.22)
\]
is a suitable similarity measure for registration purposes, provided that $f$ is a strictly convex, or strictly concave, differentiable function [42,44,45], the function $f(x) = -x\log x$ being just an example. They proposed several other weighting functions and argued that the function $f(x) = x^2$, leading to the energy of the histogram of differences (EHD):

\[
\mathcal{M}_{\mathrm{EHD}}(\mathbf{d}) = \sum_{g=-G}^{G} H^2(g),
\qquad (2.23)
\]
is computationally cheap and yields accurate results [35,38,39,41,42,44,45]. It has also been used by us [247,248,250]. In contrast with $\mathcal{M}_{\mathrm{ENT}}$, the energy measure $\mathcal{M}_{\mathrm{EHD}}$ is to be maximized, and assumes values in the range $[1/(2G+1), 1] \subset \mathbb{R}$.30 Notice that both measures take on their respective extreme value when $I_0(\mathbf{x}) = \kappa_1 I(\mathbf{x}+\mathbf{d}) + \kappa_2$, where $\kappa_1$ and $\kappa_2$ are constants.
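Both histogram-based measures can be computed from a single histogram of the difference window; a minimal NumPy sketch (our own naming, assuming grey values in [0, G] with G = 1023 as in Footnote 23):

import numpy as np

def histogram_of_differences(contrast_win, mask_win, G=1023):
    # Normalized histogram H(g) of the difference window, Eq. (2.19).
    diff = contrast_win.astype(int) - mask_win.astype(int)
    hist = np.bincount((diff + G).ravel(), minlength=2 * G + 1)
    return hist / diff.size

def ent(contrast_win, mask_win, G=1023):
    # Entropy of the histogram of differences, Eq. (2.21); to be minimized.
    H = histogram_of_differences(contrast_win, mask_win, G)
    H = H[H > 0]                      # 0 log 0 is taken to be 0
    return float(-np.sum(H * np.log(H)))

def ehd(contrast_win, mask_win, G=1023):
    # Energy of the histogram of differences, Eq. (2.23); to be maximized.
    H = histogram_of_differences(contrast_win, mask_win, G)
    return float(np.sum(H * H))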

Various Alternative Measures

Hitherto, we have only presented the similarity measures that are most frequently encountered in the literature. Some measures that have been suggested in the context of DSA have remained largely unnoticed, yet are worth mentioning. For example, Potel & Gustafson [301] proposed a variant of the SAVD measure, viz., the sum of the absolute values of differences above a threshold (SDT), in which only the absolute difference values that exceed a specified threshold, $T$, are incorporated, thereby reducing the influence of noise in rather homogeneous regions:31
\[
\mathcal{M}_{\mathrm{SDT}}(\mathbf{d}) = \sum_{\mathbf{x}\in W} \max\bigl(T, |I_d(\mathbf{x})|\bigr),
\qquad (2.24)
\]

where $\mathcal{M}_{\mathrm{SDT}} \in [TKL, GKL] \subset \mathbb{Z}$ is to be minimized.

In contrast to this, Tianxu et al. [369] argued that if the noise is not Gaussian, relatively large noise peaks may frequently occur, which significantly spoil the performance of measures such as SAVD and SSD. They proposed to reduce the influence of difference values with a large magnitude, by using a measure which they coined the minimal artifacts criterion (MAC):32
\[
\mathcal{M}_{\mathrm{MAC}}(\mathbf{d}) = \sum_{\mathbf{x}\in W} \frac{A_1}{A_0 + |I_d(\mathbf{x})|},
\qquad (2.25)
\]
which assumes values in the range $[A_1 KL/(A_0 + G), A_1 KL/A_0] \subset \mathbb{R}$, and is to be maximized, assuming that $A_1$ and $A_0$ are positive constants.

Another variant of the SAVD measure is obtained if only positive pixel differences are taken into account, thereby attempting to ignore pixels whose difference values are due to the injected contrast medium. This sum of positive differences (SPD) measure is defined as33
\[
\mathcal{M}_{\mathrm{SPD}}(\mathbf{d}) = \sum_{\mathbf{x}\in W} \max\bigl(0, I_d(\mathbf{x})\bigr).
\qquad (2.26)
\]

30 As with the entropy measure, the explicit form of the actual lower limit of $\mathcal{M}_{\mathrm{EHD}}$ is more complex if $KL$ is not an integer multiple of $2G + 1$ (see Appendix 2.A).
31 The name and definition of this measure were adopted literally from Potel & Gustafson [301]. Notice, however, that they are not consistent. When using the definition of Eq. (2.24), also absolute differences below the threshold $T$ contribute to the value of $\mathcal{M}_{\mathrm{SDT}}$.

Similar to the SAVD measure, this measure assumes values in the range $[0, GKL] \subset \mathbb{Z}$, and is to be minimized.

A related approach to reduce the negative influence of contrasted vessels is the use of so-called exclusion templates, which has been put forward by several authors.34 This involves a presegmentation of the image or windows so as to identify regions that may contain contrasted vessels. When computing the measure of match, only those pixels are included in the summation that do not belong to these regions. Since the number of excluded pixels may vary, this requires normalization of the similarity measure, which is done simply by dividing by the number of incorporated pixels. An exclusion template may be obtained in several ways. Ko et al. [182] simply applied a threshold to the original subtraction image, where the threshold was taken to be the sum of the mean and one times the standard deviation of the difference values. Earlier, Van Tran & Sklansky [387,388] had used a similar technique, with the addition that isolated pixels in the resulting binary exclusion template were merged with their surroundings. Finally, Cox & De Jager [63] proposed to subdivide the windows into smaller subwindows and to compute the similarity measure for every individual subwindow. Only the $k$ best matching subwindows are then used to compute the measure of match of the entire window.

2.4.3 Subpixel Precision

As indicated by Brody et al. [31], even subpixel misalignments may produce significant artifacts in subtraction images. Therefore, it is necessary to provide means to obtain subpixel precision in the displacement computations. Since the pixel positions and corresponding grey values constitute the only available information about the represented scene, this inevitably requires some form of interpolation.

An obvious approach to obtain subpixel precision is to compute the measure of match not only for integer but also for non-integer displacements of the images or windows. This requires resampling of the original pixel data, e.g. by means of bilinear interpolation, as has been done by several authors [97,99–102,140,182,230,297,301,387,388,423]. With this approach, the final precision of the displacement vectors is either determined implicitly by the optimization procedure, or has to be specified explicitly by the user, in terms of an incremental step size.

A frequently applied alternative approach is to use the match values at integer displacements, $\mathcal{M}(\mathbf{d})$, $\mathbf{d} \in \mathbb{Z}^2$, in an interpolation scheme in order to construct a continuous bivariate function, the match surface, $\check{\mathcal{M}}(\mathbf{d})$, $\mathbf{d} \in \mathbb{R}^2$ and $\check{\mathcal{M}}(\mathbf{d}) = \mathcal{M}(\mathbf{d})$, $\forall \mathbf{d} \in \mathbb{Z}^2$, and to calculate the extreme of this function analytically by solving $\nabla\check{\mathcal{M}}(\mathbf{d}) = \mathbf{0}$. Obviously, bilinear interpolation cannot be used for this purpose since a bilinearly interpolated surface still has its extreme at the integer extreme position. Buzug et al. [41,45] proposed to fit a six-parameter quadratic function to the match values of the $3 \times 3$ neighborhood around the optimal match value of integer displacements,35 from which the subpixel component of the displacement vector can be computed directly. For higher-order bivariate interpolation it can easily be deduced that this approach will lead to the problem of solving an algebraic equation of degree $n \geq 5$ (in either $x$ or $y$ and with coefficients that are functions of the match values at integer displacements), which can only be done numerically.36 In order to cope with this problem, several authors have proposed to simplify it by constructing two separate monovariate functions $\check{\mathcal{M}}(d_i)$, $i \in \{x, y\}$, and to solve $\partial_i\check{\mathcal{M}}(d_i) = 0$ independently for $i \in \{x, y\}$ in order to obtain an estimate for the $x$ and $y$ coordinates of the extreme. In most cases, quadratic polynomials were used for this purpose [63,161,364,427,428]. Alternatively, Venot & Leclerc [393] reported the use of cubic splines.37 Wilson et al. [417] applied Brent's algorithm for function optimization [27], in which parabolic fitting is used in an iterative fashion.

32 Although one may question the validity of the claim that the application of this measure will result in minimal artifacts, we have adopted the original appellation. It must be added that the original definition by Tianxu et al. [369] also incorporated a mechanism to correct for local grey-level variations, thereby probably making it a more powerful measure than the one presented here.
33 This measure is the complementary version of the sum of negative pixel differences measure (obtained by replacing "max" by "min" in Eq. (2.26)) of Potel & Gustafson [301]. The definition of such a measure is directly related to the definition of the difference image $I_d$.
34 Not to be confused with the template-exclusion technique as proposed by Buzug et al. [35,36,38,40,41,44,45] and described in this chapter in Section 2.4.1.
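The separable variant of the match-interpolation approach admits a closed-form solution: a parabola is fitted through three adjacent match values and its vertex is taken as the subpixel estimate. A minimal sketch (our own naming; it assumes the integer optimum has already been located, does not lie on the border of the search range, and corresponds to a measure that is to be maximized):

def subpixel_offset_1d(m_minus, m_center, m_plus):
    # Vertex of the parabola through the match values at displacements -1, 0, +1.
    denom = m_minus - 2.0 * m_center + m_plus
    if denom == 0.0:
        return 0.0
    return 0.5 * (m_minus - m_plus) / denom

def subpixel_displacement(match_surface, ix, iy):
    # match_surface: 2D array of match values at integer displacements.
    # (ix, iy): indices of the integer optimum. Returns the refined (dx, dy)
    # obtained by two independent 1D quadratic fits (x and y separately).
    dx = ix + subpixel_offset_1d(match_surface[iy, ix - 1],
                                 match_surface[iy, ix],
                                 match_surface[iy, ix + 1])
    dy = iy + subpixel_offset_1d(match_surface[iy - 1, ix],
                                 match_surface[iy, ix],
                                 match_surface[iy + 1, ix])
    return dx, dy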

2.4.4 Optimization and Acceleration

Given the similarity measure, $\mathcal{M}$, the optimal displacement vector according to this measure is the vector $\mathbf{d}$ for which $\mathcal{M}$ assumes a global extreme value.38 Finding this extreme is tantamount to the problem of function optimization, for which a large number of algorithms exist. In this section we only consider the approaches that have actually been applied to registration of digital X-ray angiography images. Most of these are well-known optimization techniques, for which introductory descriptions and numerical recipes have been given by Press et al. [305]. We also describe several acceleration techniques that have been proposed in order to reduce the time required for optimization and computation of the measure of match.

From an implementational point of view, the simplest yet computationally most expensive approach is to impose constraints on the maximum admissible displacement in both the $x$- and $y$-direction and to perform an exhaustive search, i.e., to evaluate the similarity measure for every possible displacement, subject to the constraints $|d_x| \leq d_{x_{\max}}$ and $|d_y| \leq d_{y_{\max}}$, and the required subpixel precision. In this case, the computation time can be reduced considerably by applying a multiresolution or pyramidal approach, in which a full search is iteratively carried out on subsampled and, eventually, supersampled versions of the original images. Several variants to this approach have been described [140,182,301,364,369,387,388,417]. Cox & De Jager [63] described a different multiresolution approach, in which exhaustive searching was replaced by a three-step optimization procedure. A related technique, known as logarithmic search [167], was mentioned by Roos & Viergever [318–320].

Another well-known technique for function optimization is Powell's direction set method [302], in which multidimensional optimization is achieved by successive 1D optimizations, in conjugate directions. Although the original algorithm prescribes a mechanism for optimally updating the search direction after each iteration, frequently the directions are kept fixed to the $x$- and $y$-direction, a simplification often referred to as hill climbing. This has been used by several authors [35,37,42,45,247,248,250,364,392,393,427,428]. As an acceleration technique, Zuiderveld et al. [427,428] proposed to use the already computed displacement vectors of neighboring control points as initial estimates for the displacements of points yet to be considered. When registering complete image sequences, the displacement vectors of previously registered frames can also be used as estimates [250].

An alternative is to use real multidimensional optimization strategies.39 For example, Yanagisawa et al. [423] used the Nelder-Mead downhill simplex method [262], with which the optimal 2D displacement vector is found by defining three candidate vectors, the end-points of which constitute a triangle, and by proceeding downhill by means of successive reflections and contractions of this triangle. Venot et al. [389] proposed to use a stochastic search approach in which, for each iteration, the displacement vector to be evaluated is computed from the vector of the previous iteration by adding a random vector according to some suitably chosen probability density function. Other algorithms for the simultaneous optimization of a larger number of parameters are genetic search [68], derived from principles of natural population genetics, and simulated annealing [181], an analogue of the physical process of annealing. These types of strategies were used, respectively, by Fitzpatrick et al. [96,97,99–102,297] and Mandava et al. [230], who also proposed an approach for increasing the accuracy and reducing the computational cost of such algorithms.40

35 Since this is an over-constrained problem, they used singular value decomposition.
36 According to the fundamental theorem of algebra (several proofs of which have been given by Gauss, see Netto [264] for a compilation), every polynomial equation of degree $n$, with either real or complex coefficients, has $n$ roots in the complex plane (counting multiplicity). However, the impossibility theorem by Abel [2] states that for $n > 4$, the roots cannot be written as finite formulae involving only the four arithmetic operations and radicals. Strictly speaking, the roots of higher-order polynomials can be expressed in terms of hypergeometric functions in several variables, or in terms of Siegel modular functions.
37 Although it is not clear from their explanation whether they computed $d_x$ and $d_y$ separately.
38 Either a minimum or a maximum value, depending on the chosen similarity measure.
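As an illustration of the fixed-direction (hill-climbing) strategy described in this subsection, consider the following minimal sketch (a simplified variant of our own: greedy unit-step ascent along the x- and y-directions on the integer match surface, assuming a similarity measure that is to be maximized):

def hill_climb(similarity_at, d0=(0, 0), max_iter=50):
    # similarity_at: function mapping an integer displacement (dx, dy) to a
    #                match value (to be maximized), e.g. the EHD measure
    #                computed between the contrast window and the shifted mask.
    # Starting from d0, repeatedly move one pixel in the x- or y-direction as
    # long as this improves the match value (fixed search directions).
    d = tuple(d0)
    best = similarity_at(d)
    for _ in range(max_iter):
        improved = False
        for step in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            cand = (d[0] + step[0], d[1] + step[1])
            value = similarity_at(cand)
            if value > best:
                best, d, improved = value, cand, True
        if not improved:
            break   # a (local) optimum of the match surface has been reached
    return d, best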

39 That is, entirely self-contained strategies in which 1D optimizations do not figure [305].
40 Recall from Section 2.4.1 that they applied one-to-one polynomial mappings, from the class proposed by Fitzpatrick & Leuze [98], to user-defined regions of interest. In that case, the objective is to simultaneously find the optimal displacement vectors of the four corner points that constitute a region of interest, which boils down to an eight-dimensional optimization problem.

Apart from using an adequate optimization technique, acceleration may in some cases also be achieved by reducing the computational cost of the similarity measure. Potel & Gustafson [301] proposed to use the acceleration technique described by Barnea & Silverman [13], in which summation is terminated as soon as the accumulated measure of match exceeds a certain threshold. A similar technique was used by Ko et al. [182]. An alternative approach to reduce the computational cost is to use only a limited number of randomly selected pixels when evaluating the similarity measure, as was done by Pickens et al. [297], Fitzpatrick et al. [97,100–102] and Mandava et al. [230]. The accuracy of the resulting estimated measures of match was increased by employing a stratified sampling procedure [168].

2.4.5 Grey-Level Distortion Correction

Even if the correct correspondence between the grey-level structures in successive images in a sequence has been recovered (in terms of a displacement vector field), and the mask image has been geometrically deformed accordingly, the background in the resulting subtraction images will not necessarily be entirely homogeneous and their mean intensity may be different for different contrast images. Apart from noise, to which all the components in the imaging chain contribute [29,59,137,169,197,215,281,395], these artifacts are primarily caused by changes in local densities due to contractions and expansions of tissues. Other causes of grey-level distortion artifacts include fluctuations in the intensity of X-rays and, as time elapses, the non-uniform diffusion of the contrast medium into the capillaries. Therefore, a final aspect of motion correction is retrospective correction for remaining grey-level distortion artifacts.

As shown by Fitzpatrick [95], and recalled in this chapter in Section 2.3.1, it is possible to construct a 2D geometrical transformation that completely accounts for the projective effects of 3D patient motion. Local contractions and expansions of tissues are manifest in the contrast images as negative or positive changes in local grey levels.41 As expressed in Eq. (2.5), these changes may be compensated for by incorporating the Jacobian of the geometrical transformation when applying this transformation to the mask image, prior to subtraction. This was done by Pickens et al. [296,297], Fitzpatrick et al. [97,99–102] and Mandava et al. [230].

A frequently encountered alternative approach to grey-level distortion correction is to make a distinction between multiplicative and additive changes and to correct for them separately, prior to subtraction. Venot et al. [389,393] proposed to incorporate multiplicative and additive grey-value correction parameters into the optimization strategy. Van Tran & Sklansky [387,388] proposed to correct for additive variations by adding a correction image, $I_D$, to the transformed mask image. The grey values in $I_D$ were computed by bilinear interpolation of the grey values at a number of reference points on a regular grid. The grey values at these reference points were determined by defining a window around corresponding points in the contrast and transformed mask image and by averaging the grey-value differences. Multiplicative variations were corrected for by computing the ratio of the averages of grey values in the contrast and transformed mask image, $r = \langle I\rangle / \langle I_0\rangle$, and by multiplying the grey values in the mask image with this factor.42 The exact same approach was later used by Ko et al. [182]. Similar approaches for multiplicative and additive distortion correction were reported by Wu et al. [421] and Cox & De Jager [63], respectively.

41 See Section 2.3.1 for details, in particular Footnote 12.
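As a sketch of the separate multiplicative and additive corrections just described (a simplified construction of our own, not a literal transcription of the cited methods: a global multiplicative factor from the ratio of image averages, and an additive correction image interpolated from local mean differences at reference points on a coarse grid; an optional boolean mask may be used to exclude likely vessel pixels from the averages):

import numpy as np
from scipy.interpolate import RegularGridInterpolator

def correct_grey_level(mask_img, contrast_img, grid_step=64, win=15, exclude=None):
    # Correct the (already geometrically transformed) mask image for
    # multiplicative and additive grey-level distortions before subtraction.
    I = contrast_img.astype(float)
    I0 = mask_img.astype(float)
    keep = ~exclude if exclude is not None else np.ones(I.shape, bool)
    # Multiplicative correction: ratio of the average grey values, computed
    # while excluding pixels marked as (possibly) contrasted vessels.
    I0 = (I[keep].mean() / I0[keep].mean()) * I0
    # Additive correction image: mean grey-value difference in a window around
    # each reference point on a coarse regular grid, interpolated bilinearly.
    h, w = I.shape
    ys = np.arange(win, h - win, grid_step)
    xs = np.arange(win, w - win, grid_step)
    diffs = np.zeros((len(ys), len(xs)))
    for j, y in enumerate(ys):
        for i, x in enumerate(xs):
            region = np.s_[y - win:y + win + 1, x - win:x + win + 1]
            d = (I[region] - I0[region])[keep[region]]
            diffs[j, i] = d.mean() if d.size else 0.0
    interp = RegularGridInterpolator((ys, xs), diffs,
                                     bounds_error=False, fill_value=None)
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    correction = interp(np.stack([yy.ravel(), xx.ravel()], axis=-1)).reshape(h, w)
    return I0 + correction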

2.5 Discussion

In the previous section we described the different aspects of retrospective patient motion correction in DSA and presented an overview of the techniques that have been proposed in order to solve the various parts of the correction problem. In this section we further discuss the benefits and limitations of these techniques. Conclusions are drawn in Section 2.6.

2.5.1 Control Point Selection and Displacement Estimation

In Section 2.4.1 we pointed out that, because of the high computational cost, it is usually not admissible to determine the displacement explicitly for every pixel. This has been recognized by many researchers and has led to the concept of control-point based registration. However, this introduces two new problems: (i) how to select a suitable set of control points, and (ii) how to construct the complete displacement vector field from the displacement vectors of these points?

Most control-point based registration algorithms published so far employ regular grids. That is, given the Cartesian coordinate system in which the image is defined, a set of equidistant lines is taken parallel to the $x$ and $y$ axes, and the line crossings are selected as control points. In the papers referred to in Section 2.4.1, either the inter-line distance was chosen arbitrarily, or an “optimal” distance was established empirically, where optimality was defined in terms of image quality and/or computational cost. Since this is a highly subjective procedure, inter-line distances have been reported in the range from five pixels [63] to well over 100 pixels [363].43 Even when image content is taken into account, by applying some exclusion mechanism, it remains a rather arbitrary approach.

In recent publications [247,248,250] we have argued that control point selection should be image-feature based, the reason for this being threefold: (i) control points can be selected at those positions where the artifacts can be expected to be largest; (ii) since the neighborhoods of those points are known to contain large contrast variations, the reliability of the displacement estimates will be higher; (iii) with this approach, usually a smaller number of control points suffices, resulting in a reduction of computation time. We have demonstrated the advantage of algorithms with edge-based control point selection over algorithms based on regular grids. Although it has not yet been confirmed experimentally, the use of other (additional) features, such as corners and ridges, may be beneficial in some cases.

42 In order to prevent pixels corresponding to contrasted blood vessels from spoiling these processes, they were left out of the computation of average values. This was accomplished by using an exclusion template, as described at the end of Section 2.4.2.
43 Notice that the inter-line distance parameter is, in principle, independent of the window size parameters $K$ and $L$ (see the introduction of Section 2.4).

Feature-based control-point selection and subsequent local displacement computations will in general result in irregularly distributed displacement vectors. For the computation of the overall displacement vector field there are two approaches, which correspond to the two possible a priori assumptions regarding the reliability of the local displacement vectors: the individual vectors are assumed to be either sufficiently reliable, or prone to too much error. In the former case, the vectors may be interpolated directly. The computationally cheapest approach in this respect is to use linear interpolation, which requires a triangulation of the set of control points. The resulting transformation may be considered a “finite-element” approximation of the original transformation. However, it can be expected that in a number of cases, more complex interpolation schemes, such as thin-plate splines, will yield more accurate approximations, albeit at higher computational cost. Alternatively, if the reliability of the individual displacement vectors is assumed to be insufficient, some form of regularization should be applied. In that case, the final vectors of the control points also depend on those of neighboring control points.

2.5.2 Comparisons of Similarity Measures

In Section 2.4.2 we stressed the importance of the measure employed to quantify the similarity of windows in successive frames. We summarized the different measures that can be found in the literature on registration in DSA. Although these measures have never all been compared in a single evaluation, the partial evaluations reported by several authors enable us to draw some important conclusions.44

First, it should be noted that ordinary correlation, i.e., the numerator of the right-hand side of Eq. (2.6), is dependent on actual grey values. This may lead to serious problems, which can be appreciated from the observation that, given two windows in the contrast image, the first being almost identical to the window in the mask image and the second being more or less homogeneous and showing relatively large grey values, ordinary correlation will designate the latter window as being most similar to the window in the mask image. This runs counter to intuition, and explains why normalization is mandatory.45 Similar remarks can be made for frequency-based correlation: the numerator of Eq. (2.10) is the Fourier transform of the ordinary correlation function. Here, normalization with respect to both images is accomplished by eliminating the magnitudes of both spectra, leaving only the phase term. There are some specific problems connected to Fourier-based correlation. Apart from noise, which also influences spatial correlation measures, the effects of both spectral and spatial leakage further deteriorate the performance of the PC measure. Spectral leakage in the Fourier transforms of the windows in the mask and contrast images results from the non-periodicity of the spatial information in these windows. Spatial leakage in the inverse Fourier transform (Eq. (2.11)) is caused by the non-periodicity of the cross-power spectrum (Eq. (2.10)). These effects can be reduced by applying windowing techniques [138]. By using these techniques, Wu et al. [421] found that the PC measure yields a more sharply peaked match function than spatial cross-correlation measures.

Compared to correlation measures, the SAVD measure is more consistent with the ultimate goal of registration in DSA: minimizing the absolute difference values in subtraction images. However, as pointed out by Fitzpatrick et al. [97], algorithms based on this measure cannot be expected to proceed consistently beyond the point where the differences due to motion artifacts are reduced to the level of the inherent differences due to the contrast medium. This implies that there may remain artifacts which are as pronounced as the contrasted vessels. The extent to which the presence of contrasted vessels has a negative influence on the performance of this measure depends on the relative area of the vessels. It is clear that in order to get rid of this dependency, the similarity measure should be made insensitive to inherent local dissimilarities between the images.

As pointed out by several authors [32,304,321], the SSD measure is directly related to ordinary correlation. By expanding the right-hand side of Eq. (2.13) we obtain:
\[
\mathcal{M}_{\mathrm{SSD}}(\mathbf{d}) = \sum_{\mathbf{x}\in W} I^2(\mathbf{x}) + \sum_{\mathbf{x}\in W} I_0^2(\mathbf{x}+\mathbf{d}) - 2\sum_{\mathbf{x}\in W} I(\mathbf{x})\, I_0(\mathbf{x}+\mathbf{d}).
\qquad (2.27)
\]
In Eq. (2.27), the first term on the right-hand side is a constant. If the second term varies only gradually as a function of $\mathbf{d}$, the optimal displacement found by using this measure is mainly determined by the third term, which is equivalent to ordinary correlation. Notice that in this particular case, the NCC measure also approaches ordinary correlation, which implies that the SSD and NCC measures should yield equivalent results. In other cases, the second term will cause the SSD measure to perform better than ordinary correlation. Potel & Gustafson [301] carried out a comparison between the CC measure and the SSD, SAVD, SDT, and SPD measures, and concluded that the discrepancies between the displacement vectors obtained by using the latter measures and the vectors obtained by CC were always less than 0.1 pixel. This can be explained from the fact that they used relatively large windows of 128 × 128 pixels in images of size 256 × 256 pixels, thereby diminishing the negative effects of local dissimilarities caused by noise and contrasted vessels. Van Tran & Sklansky [387,388] also compared the SAVD and SSD measures, using windows of 31 × 31 pixels in images of size 512 × 512 pixels, and concluded that the SAVD measure produced the best results.

In contrast with the aforementioned measures, the VOD measure is independent of additive mean grey-level offsets. This may be profitable in cases where the images contain a grey-level gradient, as in the examples shown by Cox & De Jager [63]. They compared the SAVD and VOD measures and found that in those cases the latter indeed performs better. However, they also found that in the absence of any grey-level gradient, the SAVD measure yields more accurate results. This can be explained from the expansion of Eq. (2.14):
\[
\mathcal{M}_{\mathrm{VOD}}(\mathbf{d}) = \left(\frac{1}{KL}\sum_{\mathbf{x}\in W} I_d^2(\mathbf{x})\right) - \langle I_d\rangle_W^2.
\qquad (2.28)
\]

44 In this section we frequently use the abbreviations introduced in Section 2.4.2. See the “Abbreviations” at the beginning of this thesis for an overview.
45 It must be remarked that normalization with respect to $I_0$ is not strictly necessary for the NCC and CC measures, since the corresponding normalization factor is a constant, i.e., independent of $\mathbf{d}$. See also Footnote 22.

In those cases where the variation, as a function of $\mathbf{d}$, of the second term on the right-hand side of Eq. (2.28) is negligible, the VOD measure becomes equivalent to the SSD measure and, in accordance with the results of Van Tran & Sklansky [387,388], should perform worse than the SAVD measure.

The first measure that was explicitly designed to be relatively insensitive to local dissimilarities, as caused by e.g. contrasted vessels, is the DSC measure. As pointed out by Fitzpatrick et al. [97], the value of $\mathcal{M}_{\mathrm{DSC}}$, computed by using Eq. (2.15), may be considered an estimate of the area (in pixels) within which the absolute differences are less than $q$.46 That is,
\[
\mathcal{M}_{\mathrm{DSC}}(\mathbf{d}) \approx \sum_{\mathbf{x}\in W} \eta\bigl(q - |I_d(\mathbf{x})|\bigr),
\qquad (2.29)
\]
where $\eta(x)$ is the step function defined as:
\[
\eta(x) \triangleq
\begin{cases}
1 & \text{if } x > 0,\\
0 & \text{if } x \leq 0.
\end{cases}
\qquad (2.30)
\]

The robustness of the DSC measure against the inflow of contrast may then be explained from the fact that the decrease of the sum on the right-hand side of Eq. (2.29), as caused by a total of $n$ affected pixels, is only $n$, regardless of the magnitude of the actual difference values.47 Venot et al. carried out a comparison of the CC, SAVD and DSC measures and confirmed the superiority of the latter [392]. A disadvantage of the DSC measure is the associated parameter, $q$, which needs to be tuned. Venot et al. [392] demonstrated that if the variance, $\sigma^2$, of the noise in the subtraction images is known, there is no advantage in selecting values of $q$ larger than $2\sigma$. This upper bound, however, is dependent on image content. In their evaluations, they used $q = 1$ [389,392] and $q = 2$ [393]. Zuiderveld et al. [427,428] used $q$ values of about 10. A value of 8 was reported by Hua & Fram [161], who also claimed that even more accurate registration results are obtained by applying the DSC measure to the first derivatives of the original images. In the presence of a grey-level gradient, this may indeed be true, since the low-frequency variations are reduced by first-derivative filtering, i.e., edge enhancement.

The competence of CBC as a measure of similarity is highly questionable. Analogous to sign-change measures, the CBC measure was designed so as to ascribe equal weight to every pixel within the windows to be compared, irrespective of their grey values [51]. As explained by Chiang & Sullivan [51], the optimal displacement according to this measure is the displacement for which the number of matching bits is maximal. However, the idea behind this stems directly from correlation. In fact, when taking a closer look at Eq. (2.18), it must be concluded that CBC is nothing but a rather unfortunate implementation of ordinary correlation. Although it is true that this measure assigns equal weight to all pixels, it suffers from peculiar inconsistencies. For example, consider a 10-bit pixel with value 511, binarily represented as 01 1111 1111. A small amount of noise may turn this pixel value into 510 or 512, represented as 01 1111 1110 and 10 0000 0000, respectively. According to the CBC measure, 511 matches very well with 510, but extremely badly with 512, which runs counter to intuition.48 This also shows that it is not true that lower-order bits tend to be more contaminated with noise, while higher-order bits are more locally uniform among neighboring pixels, as was asserted by Chiang & Sullivan [51]. In fact, it can quite easily be shown that for any grey value, the effect of noise on the CBC measure is asymmetric with respect to the sign of the change. Therefore, contrary to what was claimed, the distribution of the noise does affect the performance of CBC. In their paper, Chiang & Sullivan [51] evaluated the performance of the CBC measure only by comparison with the SSC measure, and concluded that CBC was superior. However, as pointed out several times by Venot et al. [389,392,393], the SSC is not an adequate measure for DSA because of the relatively low noise level. Therefore, we cannot assign much value to this evaluation, and we are quite confident that the DSC measure, or even CC or NCC, would have outperformed the CBC measure.

Similarity measures based on the histogram of differences take advantage of the fact that in the case of optimal alignment, only a small number of difference values will have a high relative frequency, while the majority will have a low relative frequency. This results in a sharply peaked histogram, whether or not a window contains opacified vessels, the former case resulting in two peaks and the latter in only one peak. In the case of misalignment, the histogram will have a larger dispersion in both cases. This dispersion could be measured on the abscissa by computing e.g. the standard deviation of the histogram, as was done by Wenzel [408] in dental DSR. However, the dispersion is more adequately computed on the ordinate axis, by means of convex or concave weighting functions, as proposed by Buzug et al., since these functions are more sensitive to small changes in the histogram. Several comparisons [37,39,40,45] between ordinary correlation and the SSD, DSC, and ENT measures have demonstrated the superiority of the latter.49 It has also been shown that the EHD measure performs comparably, at a reduced computational cost [39,42,45].

In summary, in contrast with all other similarity measures used in DSA, histogram-based measures consider relative frequencies of difference values. As a consequence, these measures are sensitive to neither mean grey-level offsets, nor local dissimilarities caused by contrasted vessels (regardless of their relative areas), and therefore do not require exclusion templates. Furthermore, they are computationally cheap [212], do not require the tuning of parameters and yet lead to very smooth match surfaces [37–40,42,45], which allows for efficient optimization. In conclusion, of all similarity measures developed so far, the EHD measure has been shown to be the most adequate for registration in DSA.

46 Similarly, the expected value of the SSC measure is approximately equal to the area within which the absolute differences are insignificant compared to the noise in the images [97].
47 For the DSC measure itself, the decrease caused by a run of $m$ contiguous affected pixels is either $m - 1$, $m$, or $m + 1$ (depending on whether the run contains an even or an odd number of pixels, whether or not the run starts or ends at the border of the window $W$, and the position of the run relative to the pattern $P$ defined in Eq. (2.17)). For a total of $n$ affected pixels (partitioned into one or more runs), the total decrease will be approximately $n$.

48 Similar remarks were made by Venot et al. [394].
49 It is worth noting that, in the context of dental DSR, Lehmann et al. [212] carried out a comparative evaluation of CC, SAVD, VOD, ENT, and some additional measures that have hitherto not been used in DSA. They concluded that ENT was the most adequate similarity measure, although it must be pointed out that, due to the different nature of dental DSR images, their evaluation only involved regions that lacked contrasted vessels.

2.5.3 Interpolation Techniques for Subpixel Precision

In Section 2.4.3 we argued that, since even subpixel misalignments may produce significant artifacts, the displacement computations should be carried out with subpixel precision, which requires interpolation. As can be observed from the overview in Section 2.4.3, the techniques that have been proposed for this purpose may be divided into two categories. With the first type of techniques, the mask or contrast image is interpolated and resampled so as to allow for an explicit evaluation of the chosen similarity measure at non-integer displacements. With techniques from the second category, the match values of integer displacements are interpolated so as to obtain a continuous bivariate match surface, from which an optimum displacement vector may be determined analytically.

It must be pointed out that there is no theoretical basis to support the use of match-interpolation techniques. In the papers referred to in Section 2.4.3, choices for a particular interpolation scheme were rather arbitrary, and were based on constraints regarding computational cost, rather than on theoretical foundations. In contrast, for interpolation of the original images one can appeal to well-known sampling and reconstruction theorems, e.g. Shannon's [348]. Interpolation of medical image data can be performed quite accurately by means of e.g. spline interpolation (see Chapter 6 of this thesis). In that case, the final precision of the displacement vectors must be specified explicitly by the user. Several authors have reported that a precision of 1/10 pixel is sufficient for registration in DSA [387,388,417,423].50

In summary, it can be expected that image-interpolation techniques for subpixel precision will, in general, yield better results than match-interpolation techniques. Although, to our knowledge, no thorough quantitative analyses have been carried out, our initial experiments support this hypothesis [250]. It must be remarked that image interpolation is computationally more expensive than match interpolation. However, we have recently demonstrated that the computational cost can be reduced considerably by efficient implementation [247,248,250].

2.5.4 Optimization Strategies and Related Issues

Despite its robustness, the use of an exhaustive search procedure for optimization of the chosen similarity measure is usually not a feasible approach. Although computer hardware is rapidly becoming faster, this approach is currently still computationally too expensive. Therefore, a more efficient strategy is demanded.

Sophisticated multidimensional optimization techniques, such as the downhill simplex method, stochastic or genetic search, and simulated annealing, are adequate for the simultaneous optimization of a large number of parameters. When using a control-point based registration approach, as recommended in Section 2.5.1, one may choose to compute the local displacement vectors of all control points simultaneously, such as in the approach described by Fitzpatrick et al. [96,97,99–102,230,297]. However, it is computationally cheaper to compute the displacements of the individual control points separately, in a successive fashion.

50 Notice, for comparison, that the manual pixel-shifting tool available on most modern clinical DSA devices has a precision of 1/8 pixel.

The applicability of simple optimization techniques for the computation of individual displacement vectors is determined by the behavior of the employed similarity measure. For example, if the resulting match surface has a pronounced global optimum, but in addition shows many local optima, hill climbing is very likely to fail in finding the global optimum. The same argument holds for some of the multiresolution techniques, and probably explains why the three-step search procedure of Cox & De Jager, in combination with the VOD measure, did not perform adequately [63]. As hinted at in Section 2.4.1, the behavior of a similarity measure is strongly related to the size of the windows in which it is computed. For example, while Venot et al. [393] reported the successful use of hill climbing for optimization of the DSC measure for windows of size 70 × 70 pixels, Roos [318,320] reported that, in at least 30% of all cases, this technique, as well as logarithmic search, did not properly optimize the DSC measure for windows of 31 × 31 pixels in size. In Section 2.5.2 we recalled the experiments of Buzug et al. [37,39,40,42,45] and Lehmann et al. [212], from which it was concluded that the EHD measure is the most adequate measure for registration in DSA. Buzug et al. [38] reported that with this measure, as opposed to others, a window size of about 50 × 50 pixels already leads to very smooth match surfaces, which allows for optimization by means of hill climbing [35,37,42].

2.5.5 Multiplicative versus Additive Grey-Level Distortions

As can be concluded from Section 2.4.5, the techniques for retrospective correction of remnant grey-level distortion artifacts in subtraction images — that is, after application of the computed geometrical transformation to the mask image — can be divided into multiplicative and additive approaches. Before being able to judge the validity of these approaches, we must go into more detail about the physics and signal processing behind the acquisition of digital angiography images.

According to the Lambert-Beer attenuation paradigm, Eq. (2.3), X-rays incident on the detector matrix have been attenuated exponentially by the encountered matter. In the subsequent signal processing chain, the detected signal is further amplified and processed. It has been argued by Kruger et al. [195] that, prior to subtraction, images should be processed logarithmically for the following reasons:

i) Uniformity of the contrasted vessels in the resulting subtraction images. It can easily be derived that with linear processing, the grey values in the contrasted vessels are modulated by the background structures in the mask image. This type of distortion is removed by logarithmic processing.

ii) With logarithmic processing, the grey values in contrasted regions of the subtraction images are directly proportional to the thickness of the underlying vasculature. This is an important property if the images are to be used for subsequent quantitative analyses.

iii) Logarithmic subtraction imaging reduces the bias introduced by the possible spatially non-uniform detection properties of the image intensifier.

Therefore, logarithmic post-processing is the standard in modern clinical DSA imaging devices. From Eq. (2.3) it can be derived that in this case the grey-value I at position x in the resulting images becomes:

    I(x) = ln Φ∅(x) − L(x).    (2.31)
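The practical consequence of points i) and ii) can be illustrated with a small synthetic sketch (made-up 1-D numbers, not patient data): with logarithmic processing the background path length cancels exactly in the subtraction and only the iodine contribution remains, whereas with linear processing the vessel signal stays modulated by the background (cf. Eq. (2.31) and Eq. (2.33) below).

import numpy as np

# Synthetic 1-D illustration with made-up numbers (cf. Eqs. (2.31)-(2.33)).
x = np.linspace(0.0, 1.0, 256)
phi0 = 1000.0                                      # incident X-ray intensity
L_bg = 1.0 + 0.5 * np.sin(2 * np.pi * x)           # background attenuation (tissue, bone)
L_io = np.where((x > 0.4) & (x < 0.6), 0.3, 0.0)   # iodine contribution in the vessel only

mask_lin = phi0 * np.exp(-L_bg)                    # linearly processed detector signals
contrast_lin = phi0 * np.exp(-(L_bg + L_io))

sub_lin = contrast_lin - mask_lin                  # linear subtraction: vessel modulated by background
sub_log = np.log(contrast_lin) - np.log(mask_lin)  # logarithmic subtraction, Eq. (2.33)

vessel = (x > 0.4) & (x < 0.6)
print(np.allclose(sub_log, -L_io))                 # True: background cancels, signal is -L_io
print(np.ptp(sub_lin[vessel]) > 0.0)               # True: background modulation remains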

From the analysis by Fitzpatrick (Section 2.3.1), it can be concluded that contractions and expansions of tissues will result in spatially varying multiplicative grey-level distortion artifacts. These artifacts may be corrected for by incorporating the Jacobian of the obtained geometrical transformation. However, as mentioned by Fitzpatrick [95], such an approach is only valid subject to the proportionality restriction expressed in relation (2.4). From Eq. (2.31) it can be appreciated that (2.4) only holds when the acquisition system is properly calibrated so as to correct for ln Φ∅(x). This must be accounted for when applying this technique [97,99–102,297].

Also notice that the complexity of the computed transformation directly determines the complexity of the Jacobian factor to be computed. For instance, if the displacement vector field is constructed by interpolation from the displacements of the control points by using thin-plate splines, the Jacobian will be a nonlinear function of x and y. The use of piecewise bilinear interpolation will cause the Jacobian factor to be piecewise linearly dependent on x and y. When using linear interpolation between control points, the Jacobian can easily be shown to become piecewise constant [250]. Furthermore, it must be pointed out that because of the reasons mentioned in Section 2.3.2, the displacement vector field d : D → R² as found by whatever registration approach is very likely not to be an element of the class of possible mappings Ψ : D → R² (see Section 2.3.1). This implies that, regardless of the complexity of d, the effects of the correction factor J_d⁻¹ will be limited.

The effects of X-ray intensity fluctuations, i.e., temporal changes in Φ∅, and non-uniform diffusion of contrast material into the capillaries, may be assessed as follows. Assume that a mask image, Im, and a contrast image, Ic, are produced by X-rays with intensities Φm and Φc, respectively:

    Im(x) = ln Φm(x) − Lm(x),    (2.32a)
    Ic(x) = ln Φc(x) − Lc(x) − LI(x),    (2.32b)

where LI denotes the contribution of the contrast medium (iodine).⁵¹ Only in the case of complete absence of motion artifacts, or complete motion correction, we have Lc(x) = Lm(x), ∀x ∈ D, in which case the subtraction image becomes:

    Ic(x) − Im(x) = ln Φc(x) − ln Φm(x) − LI(x).    (2.33)

From Eq. (2.33) it can be concluded that in the case of logarithmic processing, fluctuations in X-ray intensity and the non-uniform diffusion of contrast medium both result in additive grey-level distortions in the subtraction images, as was correctly mentioned by Fitzpatrick et al. [97] and Cox & De Jager [63]. In the case of linear post-processing of the acquired images it can be shown that, subject to some restrictions [195], both phenomena result in multiplicative distortions. The situation where X-ray intensity fluctuation has a multiplicative effect, while at the same time the diffusion of contrast medium has an additive effect (or vice versa), as mentioned by Van Tran & Sklansky [387,388], does not occur. Nor does the contrast medium yield both multiplicative and additive distortions, as was asserted by Ko et al. [182]. The idea of multiplicative correction to eliminate the effects of changes in X-ray intensity, as originally proposed by Venot et al. [389,393] and also used by e.g. Wu et al. [421], was based on the assumption of linear post-processing.

⁵¹ The minus sign in front of LI(x) indicates that contrasted vessels appear dark on a bright background. This is the standard setting in modern DSA imaging. Obviously, reverse subtraction, as was done in the early 1980s, will yield the opposite effect.

2.5.6 Suggestions for Future Research

Although there exist 2D geometrical transformations that account for the projective effects of a 3D transformation, we have argued that successful application of image registration techniques to recover any such transformation is limited. To some extent, the problem of independently moving superimposed structures may be solved by using additional information, either from models or from the combination of several projections, possibly at different angles. Ro et al. [316] described an approach for the simultaneous correction of artifacts caused by both cardiac and respiratory motion, by a specific combination of mask images taken at different cardiac and respiratory phases. Although preliminary evaluations failed to prove the significance of the improvements resulting from their algorithm [316], such approaches are potentially useful and deserve further investigation. However, it must be emphasized that the combined subtraction of several mask images reduces the SNR.

At several points in this chapter we have broached the subject of computational cost. We note that several concepts, such as control-point selection, match interpolation for subpixel precision, or multiresolution optimization, were developed from the necessity to prevent excessive computation times. Although the available time for image post-processing may differ from case to case, minimization of the computational cost of the individual steps of a motion correction algorithm is important from another point of view: the trade-off with complexity. For example, a speed-up of similarity evaluations allows for the selection of a larger number of control points, or a higher-order displacement interpolation scheme, which will improve the accuracy of the registration. Therefore, an interesting topic for future research is efficient implementation. Most algorithms published so far were implemented entirely in software. It is obvious that hardware implementations will considerably reduce computational cost. We have already been able to virtually eliminate the computation time required for the deformation of images, by using a hardware-accelerated OpenGL architecture. Since the bulk of the computation time is taken up by similarity evaluations, the use of dedicated hardware for this purpose is an interesting option.

2.5.7 Final Remarks

We note that the discussion in this chapter was mainly focussed on retrospective motion estimation for the purpose of image enhancement. There also exist several related algorithms for cardiac or vascular motion analysis [370], and vessel tracking and segmentation for improved densitometric estimation of the diameter and cross-sectional area of the projected vessels [55–58]. However, since these algorithms were not primarily developed for the purpose of overall motion artifact reduction, they were considered to be outside the scope of this chapter.

Finally, it must be emphasized that the purpose of this chapter was not to promote techniques for retrospective motion correction to the detriment of techniques for prospective avoidance of artifacts. Of course, prevention is better than cure: if the latter techniques yield satisfactory results, they are greatly preferred. In general, however, artifacts cannot be entirely avoided and in such cases retrospective motion correction will prove useful.

2.6 Conclusions

In this chapter, we reviewed the techniques described in the literature for reduction of motion artifacts in DSA images. We summarized the different types of artifacts that have been reported, as well as the techniques that have been proposed to prevent motion artifacts. The main purpose of this chapter was to present a detailed overview of techniques for retrospective motion correction by image registration and grey-level distortion correction. To this end, we described the different problems connected with patient motion in angiographic X-ray projection images, as well as the techniques that have been developed to solve these problems. From the evaluations and experiences reported by many authors, we draw the following conclusions:

• In ordinary X-ray projection imaging, it is possible to construct a 2D geometrical transformation that completely accounts for the effects of 3D motion of the original objects. However, it is practically impossible to exactly retrieve such a transformation from the projection images, mainly due to the aperture problem and the use of neighborhood operations. In angiography, the presence of additional local contrast in the live images imposes further limitations.

• The computation of local motion or displacements of structures in images can be carried out either by optic-flow or by template-matching based techniques. In practice, the basic assumptions of optic-flow techniques do not apply to X-ray projection images. Also, these techniques are sensitive to the inflow of contrast. Template-matching techniques, however, can be made relatively robust against this phenomenon, by applying an adequate similarity measure (see further). Template-matching techniques also use more information to assess local displacements. Therefore, template matching is preferred over optic flow.

• Because of the high computational cost (even with the current status quo of computer technology), it is usually inadmissible to determine the correspondence between images explicitly for every pixel. To reduce computation times to a clinically acceptable level, only a limited number of control points should be considered. Since the computation of displacement vectors will only be accurate in regions containing sufficient image structure, the selection of control points should be based on image content, rather than on regular grids. Appropriate features to extract structured regions are edges, corners, ridges, etc.

• Feature-based control-point selection will usually result in irregular grids. In order to obtain a complete displacement vector field, the computationally cheapest approach is to use linear interpolation, which requires triangulation of the set of control points and can be carried out very fast using graphics hardware. More complex interpolation schemes, such as thin-plate splines, are likely to yield better results. However, this will drastically increase computational cost. In cases where the individual displacement vectors are not sufficiently reliable, some form of regularization should be applied.

• The use of template matching for the computation of local displacements of grey- level structures requires a measure for the quantification of “similarity”. Of all similarity measures developed so far, the energy of the histogram of differences (EHD) has been shown to be the most adequate measure for registration in DSA. It is insensitive to mean grey-level offsets and local dissimilarities caused by contrasted vessels. Furthermore, it is computationally cheap, it does not require exclusion templates, nor tuning of parameters and yet leads to very smooth match surfaces.

• Since even subpixel misalignments may produce significant artifacts in subtraction images, displacement computations should be carried out with subpixel precision. This may be achieved either by interpolation of the image data, followed by evaluations of the chosen similarity measure at non-integer displacements, or by interpolation of the match values at integer displacements and analysis of the resulting continuous match surface. In general, the former technique can be expected to yield better results. A precision of 1/10 pixel is usually sufficient.

• To further reduce computation time, the number of similarity evaluations should be as small as possible. This requires the use of an efficient optimization strategy. The applicability of simple optimization techniques for the computation of individual displacement vectors is determined by the behavior of the employed similarity measure. As pointed out before, the EHD measure has been shown to yield very smooth match surfaces, which allows for a computationally cheap hill-climbing procedure.

• After application of the transformation as obtained by the registration procedure to the mask image, there may be remaining grey-level distortion artifacts. These artifacts may be the result of contractions and expansions of tissues, fluctuations in the intensity of the X-ray source, or the non-uniform diffusion of the contrast medium. In the case of logarithmic amplification of the acquired images, artifacts caused by tissue deformations lead to spatially varying multiplicative grey-level distortions. In theory, this distortion is described by the Jacobian of the transformation, subject to the restriction of proper calibration of the acquisition system. In practice, the effectiveness of the Jacobian correction factor is limited by the accuracy and complexity of the computed transformation. Artifacts caused by X-ray intensity fluctuations or contrast diffusion both result in spatially varying additive grey-level distortions.

To further improve the performance of registration algorithms, future research should focus on the use of additional knowledge, either from models or from the combined information of multiple projections, possibly from different angles. Another interesting topic is efficient implementation. Hardware implementations will considerably reduce the computational cost of the algorithms, which may also be exploited to increase their complexity. Since, at this point, the bulk of the computation time is taken up by similarity evaluations, the use of dedicated hardware for this purpose is an interesting option.

2.A Appendix: Extrema of Histogram-Based Similarity Measures

In Section 2.4.2, extrema were given for the histogram-of-differences based similarity measures. However, as pointed out in Footnotes 29 and 30, the explicit forms of the actual extrema are more complex in those cases where the number of pixels in the window under consideration (KL) is not an integer multiple of the number of bins in the histogram of differences (2G + 1). In this appendix, the actual upper limit of M_ENT and the actual lower limit of M_EHD will be derived.

The Upper Limit of M_ENT

Given the normalized histogram of differences, H, as defined in Eq. (2.19), and the related entropy measure, M_ENT, as defined in Eq. (2.21). In general, M_ENT will assume its maximum value when the histogram is as disperse as possible. In case KL = n(2G + 1), n ∈ N, maximum dispersion implies that every bin in H has the same value, viz., n/KL, and the corresponding value of M_ENT is log(2G + 1).

In all other cases, the bins in the histogram can be divided into two groups. If we define the following variables:

    r = ⌊KL / (2G + 1)⌋,    (2.34a)
    s = KL − r(2G + 1),    (2.34b)

maximum dispersion implies that there are s bins with value (r + 1)/KL, and 2G + 1 − s bins with value r/KL. The actual upper limit of M_ENT can now be written as

    [s(r + 1)/KL] log(KL/(r + 1)) + (2G + 1 − s)(r/KL) log(KL/r).    (2.35)

It can be shown that the quantity (2.35) is always less than or equal to log(2G + 1), where equality holds in case KL = n(2G + 1), n ∈ N. Therefore, in Section 2.4.2, log(2G + 1) was taken as an indication of the largest possible value of M_ENT.

The Lower Limit of M_EHD

Given the normalized histogram of differences, H, as defined in Eq. (2.19), and the energy measure, M_EHD, as defined in Eq. (2.23). In general, M_EHD will assume its minimum value when the histogram shows maximum dispersion. As just explained, if KL = n(2G + 1), n ∈ N, maximum dispersion implies that every bin in H has the value n/KL. The corresponding value of M_EHD is 1/(2G + 1).

Again, in all other cases the bins in the histogram can be divided into two groups. When using the variables defined in Eqs. (2.34a) and (2.34b), the actual lower limit of M_EHD can be written as

    s((r + 1)/KL)² + (2G + 1 − s)(r/KL)².    (2.36)

It can be shown that the quantity (2.36) is always larger than or equal to 1/(2G + 1), where equality holds in case KL = n(2G + 1), n ∈ N. Therefore, in Section 2.4.2, 1/(2G + 1) was taken as an indication of the smallest possible value of M_EHD.
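The two limits are easily evaluated numerically. The following sketch (our own helper, assuming NumPy) computes the exact bounds of Eqs. (2.35) and (2.36) for a given window size KL and grey-value range G, and compares them with the indicative values log(2G + 1) and 1/(2G + 1) used in Section 2.4.2.

import numpy as np

def hist_extrema(KL, G):
    # Exact extrema for a maximally dispersed histogram of differences:
    # r and s as in Eqs. (2.34a)-(2.34b), upper limit of M_ENT as in Eq. (2.35),
    # lower limit of M_EHD as in Eq. (2.36).
    bins = 2 * G + 1
    r, s = KL // bins, KL % bins
    p_hi, p_lo = (r + 1) / KL, r / KL
    ent = s * p_hi * np.log(KL / (r + 1))
    if r > 0:
        ent += (bins - s) * p_lo * np.log(KL / r)
    ehd = s * p_hi ** 2 + (bins - s) * p_lo ** 2
    return ent, ehd

G = 1023                                    # 10-bit grey values
print(hist_extrema(51 * 51, G))             # exact bounds for a 51 x 51 window
print(np.log(2 * G + 1), 1 / (2 * G + 1))   # the indicative values log(2G+1) and 1/(2G+1)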

In order to get any picture at all of a thing, it is sometimes necessary to begin with a false picture and then correct it.

— C. S. Lewis, Miracles, App. B ()

Chapter 3

Image Registration for Motion Artifact Reduction in Digital Subtraction Angiography

Abstract — Although digital subtraction angiography (DSA) is a powerful technique for the visualization of blood vessels in the human body, the diagnostic relevance of DSA images is often reduced by artifacts which arise from the misalignment of successive images in the sequence, due to patient motion. Over the past two decades, several registration techniques have been proposed to correct for such artifacts retrospectively. However, this has never led to algorithms which are fast enough to be acceptable for integration in a clinical setting. In this chapter, a new approach to the registration of digital X-ray angiography images is proposed. It involves an edge-based selection of control points for which the displacement is computed by means of template matching, and from which the complete displacement vector field is constructed by means of interpolation. The final warping of the images according to the computed displacement vector field is performed in real time by graphics hardware. Preliminary experiments with several clinical datasets show that the proposed algorithm is both effective and very fast.

3.1 Introduction

In clinical practice, digital subtraction angiography (DSA) is a powerful technique for the visualization of blood vessels in the human body [29,395]. In ordinary X-ray projection images, blood vessels are hardly visible due to the very low contrast between vessels and surrounding tissue. This contrast is enhanced by injection of a radiopaque contrast medium into the vessels to be diagnosed. However, without any further processing, the contrast between vessels and surrounding tissue is still significantly smaller than that between e.g. bone and surrounding tissue. This may introduce severe distortions and, hence, a reduction in the amount of diagnostic information that can be extracted from the images.

In DSA imaging, a sequence of 2D images is taken to show the passage of a bolus of injected contrast material through the vessels of interest. The contrast distortions in these live images, or contrast images, are largely removed by subtracting an image taken prior to the arrival of the contrast medium (referred to as the mask image). The relatively high image resolution, the low patient load as compared to e.g. computed tomography angiography (CTA), and the possibility to extract both spatial and temporal information (dynamic behavior of organs) will likely ensure DSA to remain an important technique for the visualization of blood vessels.

The subtraction technique is based on the assumption that the tissues surrounding the vessels do not change in position or density during acquisition. However, as already mentioned in the introduction of the previous chapter, this assumption is not valid for a substantial number of examinations, the primary reason for this being patient motion. Since the early 1980s, several techniques have been developed and applied in order to avoid or reduce the artifacts and to improve the diagnostic value of DSA images. These include tomographic [29,196] and tomosynthetic [198] imaging techniques, as well as dual (K-edge) energy subtraction [193,254], hybrid subtraction [28] and automatic remasking [279]. These methods have never been introduced on a large scale in clinical practice, mainly because they require materials and devices which are either very expensive, or difficult to produce, or both.

Alternatively, the artifacts in the subtraction images could be corrected for retrospectively by means of image processing techniques. This is done by computing the correspondence between pixels in the successive images in the sequence and by warping the images with respect to each other according to this correspondence. This is often referred to as image registration, or motion registration [3,32,229,384]. Although many studies have been carried out on this subject over the past two decades, they have not resulted in algorithms which are sufficiently fast so as to be acceptable for integration in a clinical setting.

In this chapter, a new approach to the registration of digital angiography image sequences is proposed, which is both effective and very fast. The approach and some implementation aspects are described in detail in Sections 3.2 and 3.3, respectively, followed by an overview of the complete algorithm in Section 3.4. Results of preliminary experiments on real angiography image sequences are presented in Section 3.5 and discussed in Section 3.6. Concluding remarks are made in Section 3.7.

3.2 Registration Approach

Given a 2D digital image sequence, I(x, y; t), of size M² × N pixels, the registration of one of the images, I(x, y; t1), t1 ∈ [0, N − 1] ⊂ N, with respect to a second image, I(x, y; t2), t2 ≠ t1, involves two operations: (i) the computation of the correspondence between the pixels in the two images, and (ii) performing the correction according to this correspondence, by warping one of the images with respect to the other. In this section, the proposed approach to perform these operations is presented.

3.2.1 Control Points Selection

In the attempt to obtain the correspondence between two images, a possible approach would be to explicitly compute the correspondence for every pixel. However, this is computationally very expensive and will not lead to sufficiently fast algorithms; even with modern workstations, it will take hours for images of size 512 × 512 or 1024 × 1024 pixels (the usual sizes for angiography images), not to mention the time required for a complete sequence, which usually consists of 15 to 20 of such images. In order to be able to reduce the computation time to a clinically acceptable level (several seconds), assumptions have to be made concerning the nature of the underlying motion.

One possibility would be to assume rigidity of the parts that were imaged and to compute the correspondence of the two images only in terms of global translation and rotation. Current DSA imaging systems are equipped with a so-called pixel-shifting mode, by which it is possible to correct for gross translational motion only. Although this may locally improve the subtraction in some cases, in general it will not yield an overall registration. The alternative is to assume a certain amount of coherence between neighboring pixels and to compute the correspondence only for a selected set of so-called control points, pi = (xi, yi), in the image. The overall correspondence can then be obtained by interpolation. Control points can be chosen manually by selection of a region of interest [230,393,423] or can be taken to lie on a regular grid [5,63,360,388,428]. More sophisticated algorithms use image features such as lines [17] or line intersections [356], centers of gravity of closed-boundary regions [124], high-curvature points (corners) [66], zero-crossings of the Laplacian [150], or highly structured regions [349,371].

Since, in DSA images, artifacts appear in the case of a misalignment between mask and contrast image only in those regions where strong object edges are present in the individual images, and because edges can be matched better than more homogeneous regions, the selection of control points should be based on edge information. By selecting control points on important edges in the image, the implicit assumption is made that the displacement of points in between edges can be described by the displacements of points on these edges. Physically, this means that the displacement of tissue in between objects with a very different density (usually bone/soft-tissue transitions), is governed by the displacement at these transitions, which indeed seems a valid assumption. Compared to the approach of Zuiderveld et al. [428], in which control points were taken on a regular grid, this approach has three major advantages: (i) control points are chosen on positions where the artifacts can be expected to be largest, (ii) since the neighborhoods of those points are known to be structured, the reliability of displacement estimates will be higher, and (iii) the number of control points is reduced, thereby reducing computation time.

The location of edges can easily be computed by detecting local maxima of the grey-level gradient magnitude. It should be noted that edges are scale dependent image features and therefore can only be detected properly by using derivatives computed at the proper scale. The commonly used scale-dependent detector for this purpose is the derivative of the Gaussian [47,149,234]. By using this approach, the 2D (regularized) gradient magnitude is computed as

    ‖∇L(x, y)‖ = ( (∂x L(x, y))² + (∂y L(x, y))² )^(1/2),    (3.1)

with ∂i L(x, y) = I(x, y) ∗ ∂i G(x, y), where ∂i denotes the first derivative in the i-th direction, i ∈ {x, y}, and G is the bivariate Gaussian.
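A possible implementation of Eq. (3.1), assuming NumPy and SciPy (scipy.ndimage provides Gaussian derivative filtering; the function name and parameter values below are illustrative only):

import numpy as np
from scipy import ndimage

def gradient_magnitude(image, sigma):
    # Regularized gradient magnitude of Eq. (3.1): convolution with first-order
    # derivatives of a Gaussian at scale sigma in the x and y directions.
    Lx = ndimage.gaussian_filter(image, sigma, order=(0, 1))  # d/dx (along columns)
    Ly = ndimage.gaussian_filter(image, sigma, order=(1, 0))  # d/dy (along rows)
    return np.sqrt(Lx ** 2 + Ly ** 2)

# Hypothetical usage on a mask image stored as a float array:
# edges = gradient_magnitude(mask.astype(float), sigma=2.0)
# candidates = edges > theta_e     # thresholding at the value Θe discussed below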

Notice that the edge-detection operation should not be applied to the contrast images, since in that case the boundaries of vessels would also be detected. The same argument holds for the application of edge detection to the subtraction images. In addition, in the latter case, regions that are already registered but in which there are very strong edges in the underlying mask and contrast images, will not be identified as potential problem regions. This means that in these regions the interpolation operation may even introduce motion artifacts. Therefore, the gradient-magnitude computations should be applied to the mask image. This has the additional advantage that the detection of regions that may give rise to motion artifacts in the subtraction image needs to be carried out only once (as a preprocessing step), instead of repeatedly for every new contrast image.

In principle, the severeness of a motion artifact is directly related to the strength of the underlying edge, i.e., the gradient magnitude. In order to be able to extract adequate control points from the gradient-magnitude image, it is required to indicate which edges are important enough to be considered further. This is done by thresholding the gradient magnitude at a value Θe, resulting in a binary image in which there are connected regions that are of interest. It is obvious that it is neither manageable (because of the unacceptably large number of points involved) nor necessary (because of the assumed coherence between neighboring pixels) to take all of these edge points as control points for the construction of the warp transformation. Therefore, an additional selection mechanism is required, which is described hereafter. In order to preserve a sufficient amount of detail prior to this final selection, the scale σ at which the gradient magnitude is computed should be taken small.¹

Under the assumption of coherence between neighboring pixels in an image, the control points can be constrained to have a minimum distance with respect to each other, say Dmin. This minimum distance is related to the image size according to Dmin = φmin M, in which the minimum distance factor φmin is a parameter of the algorithm.² In order to avoid complete absence of control points in very large regions in which no important edges are detected, the control points should also be constrained to have a maximum possible distance, Dmax = φmax M.

In angiography images, certain regions in the image (at the borders) are usually of constant grey-value. This is caused by the physical properties of the acquisition system: the X-rays are detected by a circularly shaped image intensifier. The part of the image in which the grey values are due to exposure of X-rays will be called the exposure region, denoted RE, in the sequel:

    RE = {x | (x − xC)² + (y − yC)² ≤ R² ∧ x ≥ xmin ∧ x ≤ xmax ∧ y ≥ ymin ∧ y ≤ ymax},    (3.2)

¹ It should be noted that taking σ = σ0 for M = 512 implies that σ must be 2σ0 for M = 1024 in order to have the same regularization. However, the actual grey values in the resulting gradient-magnitude image will still be twice as small with respect to those of a 512 × 512 image of the exact same scene. This should be accounted for in Θe.
² The rationale for relating Dmin and Dmax to the image size is that the distance (expressed in pixels) between coherent structures in a 1024 × 1024 sized image is indeed twice as large as the corresponding distance in a 512 × 512 sized image of the same scene.

where (xC, yC) is the center of the image intensifier of the DSA system which, in the case of proper calibration, is also the center of the image, R is the radius of the image intensifier, and xmin, xmax, ymin, and ymax are the left, right, upper and lower borders, respectively.³ Since the parts of the image outside RE do not contain information that could be used for registration, the control points should be positioned inside this exposure region, at a minimum distance Dexp = φexp M from the border ∂RE of the region. An example of control point selection based on this approach is shown in Fig. 3.1. More details will be given in Section 3.3.
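A direct translation of Eq. (3.2) into a boolean mask might look as follows (a sketch assuming NumPy; the parameter values in the usage comment are made up):

import numpy as np

def exposure_region(shape, center, radius, xlim, ylim):
    # Exposure region R_E of Eq. (3.2): intersection of the circular field of
    # view of the image intensifier and the rectangular collimation borders.
    rows, cols = shape
    y, x = np.mgrid[0:rows, 0:cols]
    xc, yc = center
    in_circle = (x - xc) ** 2 + (y - yc) ** 2 <= radius ** 2
    in_rect = (x >= xlim[0]) & (x <= xlim[1]) & (y >= ylim[0]) & (y <= ylim[1])
    return in_circle & in_rect

# Hypothetical parameters for a 1024 x 1024 acquisition:
# R_E = exposure_region((1024, 1024), center=(511.5, 511.5), radius=500,
#                       xlim=(20, 1003), ylim=(20, 1003))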

3.2.2 Displacement Computation

Techniques for computing the displacement due to motion of certain structures in one image with respect to another, can be divided into two categories: (i) gradient-based optic flow techniques, and (ii) template-matching based techniques. In this section, both techniques are discussed with respect to their applicability to estimate displacements in digital X-ray angiography images.

Optic-Flow Based Approach

Many techniques for the determination of motion in image sequences are based on the concept of optic flow, originally introduced by Gibson [118], for which computational approaches have been described by e.g. Horn & Schunck [158], Nagel [260], Uras et al. [383] and Florack et al. [105]. Optic-flow techniques are explicitly based on the assumption that motion of objects in images causes only a change in position of the corresponding grey-level patterns, while the patterns themselves remain unchanged. For a 2D image sequence I(x, y; t), this implies that I(x, y; t) = I(x + δx, y + δy; t + δt), where δx, δy and δt denote small changes in position and time, respectively. By expanding the right-hand side of this equation, dividing by δt, and taking the limit δt → 0, we arrive at

    dI/dt = ∂I/∂t + ∇I · u = 0,    (3.3)

where ∇I denotes the 2D gradient of the grey-level image I, and u = (u, v) denotes the velocity in the image plane. Equation (3.3) is generally known as the optic flow constraint equation [158].

It is well known that the optic-flow problem defined in this manner is ill-posed by the fact that the solution to this problem is not unique (one equation in two unknown variables, u and v). Only the component of u in the gradient direction ∇I can be computed since a displacement in the tangential direction will never lead to a change

³ The grey values outside RE are of a constant value which is often much larger than those inside the region. This makes it very easy to extract the five parameters, by evaluating scan lines from the border to the center (xC, yC) of the image. For example, in order to extract the radius R, scanning starts at pixel (0, 0). During scanning, the grey-level at the next position along the scan-line is compared to the value at the previous position. As soon as a difference is detected, say at position (xi, yi), scanning is terminated and the radius is simply computed as R = ((xC − xi)² + (yC − yi)²)^(1/2). The other four parameters, xmin, xmax, ymin, and ymax, are found in a similar way, using (0, M/2), (M, M/2), (M/2, 0), and (M/2, M) as start points, respectively.

Figure 3.1. Example of control point selection. Top left: the mask image of a 1024 × 1024 × 19 digital angiography image sequence. Top right: contrast-enhanced subtraction of the mask from one of the contrast images, showing motion artifacts. Bottom left: thresholded gradient-magnitude image of the mask, predicting the location of motion artifacts in the subtraction image (grey regions correspond to pixels whose grey-value is above the threshold Θe in the original grey-value gradient-magnitude image). Bottom right: selected control points (white dots).

in the grey-level distribution. This is often referred to as the aperture problem. In order to be able to obtain a unique solution for u and v, additional constraints need to be imposed, e.g. by requiring the displacements (or velocities) u(x, y) and v(x, y) to vary smoothly within the image, for which several approaches have been described in the literature [109,150,158].

However, the optic-flow technique just described cannot simply be applied to X-ray projection images. As mentioned in Section 2.3 of the previous chapter, these images are 2D projections of originally 3D scenes. A thorough analysis by Fitzpatrick [95] has indicated that for these images, the total time derivative is

    dI/dt = −I ∇ · ū,    (3.4)

where "∇ ·" denotes the divergence operator and ū is the weighted average along the rays, of the velocities perpendicular to the rays. This contradicts the assumption made in Eq. (3.3). In addition, it has to be noted that in the particular case of angiography image sequences, the optic-flow approach suffers from all of the three problems mentioned in Section 2.3.2 of the previous chapter. Taking into account these considerations, it can be expected that the optic-flow technique will not yield accurate displacement estimates when applied to this type of images. This was confirmed by pilot experiments.

Template-Matching Based Approach

Template matching techniques are based on the assumption that the displacement d of every pixel in an image I(x, y; t1), can be approximated by taking a small window W of K × L pixels around the pixel and searching for the corresponding window in a successive image I(x, y; t2), t2 > t1, in the sequence by optimization of a certain similarity measure under translation. In principle, this approach also suffers from two of the problems mentioned in Section 2.3.2, viz., independently moving structures and the aperture problem. However, it can be made more robust against the inflow of contrast, by a proper choice for the similarity measure.

Similarity Measure. A crucial aspect of template matching is the similarity measure that is used to determine the amount of correspondence between regions in successive frames. Many measures have been devised for this purpose, and an overview was given in Section 2.4.2 of the previous chapter. Most of these measures are based on the same assumption as optic-flow techniques, viz., that the structures in the image change only in position and not in intensity (i.e., ∇ · ū = 0). As mentioned, this is not a valid assumption when dealing with angiography image sequences. In addition, most similarity measures are quite sensitive to the inflow of contrast. Measures that are very robust against this phenomenon are those based on the histogram of differences [36,42].

With histogram-based similarity measures, advantage is taken of the fact that in the case of optimal alignment, only a small number of grey-level differences have a high relative frequency, while the majority of differences have a low relative frequency, which results in a sharply peaked histogram. This is true whether the window W contains opacified vessels or not, the former case resulting in two peaks and the latter in only one peak. In the case of misalignment, the histogram will have a larger dispersion in both cases (see the examples in Fig. 3.2).

For every pixel in an image and for every displacement d of a window W around that pixel, the amount of dispersion of the histogram H of differences g in W can be computed by a weighted summation over the bins, which are assumed to be of size

Figure 3.2. Normalized histograms for two different regions of interest. Top row: part of a subtraction image showing a region of interest (indicated by the white box) without (left) and with (right) contrasted vessels. Middle row: the corresponding normalized histograms in the case of alignment of the original mask and contrast images. Bottom row: the corresponding normalized histograms in the case of misalignment of the original mask and contrast images.

one here, i.e., g ∈ [−G, G] ⊂ Z and H : [−G, G] → R₀⁺, with G being the largest possible grey-value in both images. This is expressed in the following measure:

    M(d) = Σ_{g=−G}^{G} f(H(g)).    (3.5)

Here, H(g) is the fraction (or relative frequency) of pixels in the window W that have a difference value of g, with Σ_g H(g) = 1, because the histogram must be normalized in order to have a fair comparison between the values of this measure for different displacements d, and f : R → R is the weight function.

In order for M to be an adequate similarity measure for the purpose of registration, the following two constraints must be imposed on the weight function f: (i) it must be chosen in such a way that the measure M assumes its minimum value if and only if H is maximally flat, where maximally flat implies that if KL = r(2G + 1) + s, with r and s as defined in Eqs. (2.34a) and (2.34b) in Appendix 2.A of the previous chapter (see Page 38), then there are s bins with value (r + 1)/KL, and 2G + 1 − s bins with value r/KL, and (ii) it must be generalized super-additive, which means that ∀ai ∈ R, i = 1, 2, 3, with 0 < a1 ≤ a2 ≤ a3:

    f(a2 − a1) + f(a3 + a1) > f(a2) + f(a3).    (3.6)

In the case that the difference values in the window W behave like white noise (i.e., the grey values in the successive images in the sequence are completely uncorrelated in W), the similarity measure should assume its smallest value. This is expressed in the first constraint. The second constraint is necessary in order to let the measure increase when the histogram becomes more clustered. As proven by Buzug et al. [42], these requirements are met by the class of differentiable, strictly convex functions, which satisfy the following relation ∀x, y ∈ R, x ≠ y, and ∀α ∈ (0, 1) ⊂ R:⁴

    f(αx + (1 − α)y) < αf(x) + (1 − α)f(y).    (3.7)
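As a concrete example, the energy of the histogram of differences (the EHD measure, i.e. Eq. (3.5) with the convex weight f(H) = H²) can be computed as follows; this is a plain sketch assuming NumPy and 10-bit grey values, without the implementation refinements discussed later in Section 3.3.3.

import numpy as np

def ehd(mask_win, contrast_win, G=1023):
    # Energy of the normalized histogram of differences: Eq. (3.5) with the
    # weight function f(H) = H^2. Large values correspond to a sharply peaked
    # histogram, i.e. to well-aligned windows.
    diff = mask_win.astype(int) - contrast_win.astype(int)          # values in [-G, G]
    hist = np.bincount(diff.ravel() + G, minlength=2 * G + 1)       # one bin per difference value
    H = hist / diff.size                                            # normalization: sum(H) = 1
    return float(np.sum(H ** 2))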

Subpixel Precision. The matching approach just described only determines the displacements up to integer precision. However, as indicated by evaluations of DSA carried out in the early eighties [31], subpixel misalignments often produce significant artifacts in subtraction images. Therefore, it is necessary to provide means to obtain subpixel precision in the displacement computations.

As already mentioned in Section 2.4.3 of the previous chapter, a frequently applied approach to obtain subpixel precision is to use the match values at integer displacements in an interpolation scheme in order to construct a continuous bivariate function M(d), d ∈ R², and to compute the global maximum of this function analytically. Although analytical methods will yield an estimation of the displacement with subpixel precision, they differ fundamentally from approaches in which the similarity measure is evaluated explicitly for subpixel displacements, such as in the algorithms described by e.g. Yanagisawa et al. [423] and Van Tran & Sklansky [388]. In that case, one of the two images needs to be reconstructed and the computations are carried out by

⁴ It should be mentioned that the requirement that M must assume its minimum value in the case of a maximally flat histogram, may be replaced by the requirement that M must assume its maximum value in that case. This can only be done if at the same time, the requirement of super-additivity for the weight function f is replaced by sub-additivity, which is achieved by replacing the ">" sign by "<" in Eq. (3.6). These new requirements are met by the class of differentiable, strictly concave functions, which are obtained by replacing "<" by ">" in Eq. (3.7). In this case, M should be referred to as a dissimilarity measure rather than a similarity measure, since it assumes its maximum value when the windows are most dissimilar.

Figure 3.3. Two examples of registration with subpixel precision, using both the image-interpolation and match-interpolation methods. Left column: the original subtractions of unregistered digital X-ray angiograms. Middle column: the corresponding subtractions after registration using the match-interpolation method (quadratic interpolation). Right column: the corresponding subtractions after registration by using the image-interpolation method (bilinear interpolation) with a precision of 1/10th of a pixel.

using a resampled version of this image, where resampling is done on a corresponding subpixel-displaced grid. Yanagisawa et al. [423] and Van Tran & Sklansky [388] reported that a precision of 1/10th of a pixel is sufficient for angiography images. From our experiments it was concluded that in all cases, the image-interpolation approach (even with a precision of no more than 1/10th of a pixel) leads to better registrations compared to the analytical match-interpolation methods (see Fig. 3.3).

The left image of Fig. 3.4 shows the displacement vectors of the control points shown in the bottom-right image of Fig. 3.1, which result from the proposed subpixel-precision template-matching approach, followed by the detection and correction of inconsistent vectors, as described in Section 3.2.4 hereafter.

3.2.3 Displacement Interpolation

In order to be able to carry out the warping of the mask image with respect to the contrast image, it is required to have a complete description of the displacement vector field d : D → R², where D ⊂ R² is the image domain. That is, the displacement d

Figure 3.4. Left: the displacement vectors (12 times magnified) of the auto- matically selected control points shown in the bottom-right image of Fig. 3.1, as computed by the template matching approach described in Section 3.2.2, followed by the detection and correction of inconsistent vectors along the lines described in Section 3.2.4. Right: the Delaunay triangulation, D(P ), of the set of control points, used for displacement interpolation as described in Section 3.2.3.

must be known for every point p in the image. So far, we have only described the method for computing the displacements of a selected set of control points, pi, under the assumption that the remainder of the field could be obtained by interpolation. In order to minimize the required computation time for this operation, the parameters φmin, φmax, and φexp, as introduced in Section 3.2.1, should be chosen in such a way that linear interpolation is sufficient for this purpose.

Given the set P = {pi} of control points as extracted by the feature detection and selection algorithm (Section 3.2.1), a suitable tessellation is required in order to be able to carry out the linear interpolation of displacements in between these control points. In the case of a regular grid of data points, quadrilaterals are the commonly used polygons [63,107,388,428]. However, in the case of irregularly distributed (or scattered) control points, such a mesh is not guaranteed to exist. The only possible polygons that can be used for this purpose are triangles, so that the control points become the vertices of an irregular triangular mesh. Although triangular meshes are well known geometric constructions and have been studied and applied by several authors in image registration [122,123], warping and morphing [325,326,420], computer graphics and scientific visualization [110], they have, to our knowledge, never been applied to digital X-ray angiography image sequences.

It should be noted that the solution to the problem of triangulating any set of data points is not unique, as can easily be concluded from the fact that any quadruple of vertices can always be triangulated in two ways; there are two possible diagonals in the quadrilateral constituted by these vertices. In order to obtain a unique triangulation that is consistent with the choice for a certain interpolation scheme, additional constraints need to be imposed. As stated by Watson & Philip [401], triangles with one or two highly acute interior vertex angles should be avoided in the tessellation, since the vertices of these elongated triangles are not capable of reflecting the local variation of the interior points in the dependent variables. A suitable tessellation for the purpose described in this chapter is the so-called Delaunay triangulation, D(P). It guarantees the smallest of the three angles of a triangle to be as large as possible [205,209] and is unique, except in the degenerate case where four or more points are co-circular [400]. In that case, the Delaunay triangulation is locally non-unique. In the implementation, the incremental algorithm described by Watson [400] was used. The right image of Fig. 3.4 shows the Delaunay triangulation of the set of control points shown in the bottom-right image of Fig. 3.1. Additional remarks are made in Section 3.3.2.
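For illustration, the construction of a dense displacement field from scattered control points can be sketched with SciPy, whose LinearNDInterpolator performs exactly this piecewise-linear (barycentric) interpolation on a Delaunay triangulation; the corner-point handling of Section 3.3.2 is reduced here to a zero fill value outside the convex hull, and the function name is ours.

import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

def dense_displacement_field(points, displacements, shape):
    # points: (n, 2) control-point coordinates (x, y); displacements: (n, 2).
    # Returns a (rows, cols, 2) field obtained by piecewise-linear interpolation
    # over the Delaunay triangulation D(P) of the control points.
    tri = Delaunay(points)
    interp = LinearNDInterpolator(tri, displacements, fill_value=0.0)
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    d = interp(np.column_stack([xs.ravel(), ys.ravel()]))
    return d.reshape(shape[0], shape[1], 2)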

3.2.4 Inconsistency Detection and Correction

The window size, K × L, and the weight function, f, described in Section 3.2.2, should be chosen such that the template-matching operations will yield reliable estimates for the displacement of local image structures. However, occasional inaccuracies in the displacement estimates cannot be entirely avoided, even if these variables would be adapted to local image content. Therefore, the registration algorithm should be able to detect and correct inconsistent vectors retrospectively. In the registration approach described here, inconsistent displacement vectors are detected simply by computing the relative length of and the angle between the vector di of every control point pi and the vectors dj of the natural neighbors pj, j = 1, 2, . . . , Ni, of that control point in the Delaunay triangulation, provided that both di and dj are not equal to the null vector:

    ρij = ‖di‖ / ‖dj‖,    (3.8a)
    ϕij = arccos( (di · dj) / (‖di‖ ‖dj‖) ),    (3.8b)

and by testing the following criteria:

    ρij > ρmax ∨ ρij < ρmax⁻¹,    (3.9a)
    |ϕij| > ϕmax.    (3.9b)

Displacement vectors for which more than half of the natural-neighbor vectors cause at least one of these criteria to be satisfied, are labelled as inconsistent. In cases where the vector di or the natural-neighbor vector dj is close to the null vector, the former vector is labelled as inconsistent if the difference in length between the vectors is more than ρmax times the precision with which the vectors are computed.

In order to correct for inconsistencies, several approaches have been proposed, such as e.g. iterative relaxation [427,428]. In the algorithm described in this chapter, a more pragmatic and computationally much cheaper approach was chosen: inconsistent

Figure 3.5. Left column: part of a contrast image (top) and the image (bottom) resulting from subtraction of the mask image, taken from a cerebral X-ray angiography sequence. Middle column: the same contrast image (top), overlaid with the displacement vectors (eight times magnified) as computed by the approach described in Section 3.2.2, and the image (bottom) resulting from subtraction of the warped mask according to these vectors. Two of the displacement vectors are highly inconsistent with their neighboring vectors, as a result of which there are some remaining artifacts. Right column: displacement vectors (top) and the improved subtraction image (bottom) resulting after detection and correction of inconsistent vectors, using the technique described in Section 3.2.4.

vectors are substituted by a Gaussian-weighted average of the vectors of consistent natural-neighbors, according to

    di′ = w⁻¹ Σ_k wk dk,  with  wk = exp(−‖pi − pk‖² / (2 dmin²))  and  w = Σ_k wk,    (3.10)

where dmin denotes the distance between control point pi and the closest of its natural neighbors having consistent displacement vectors. A rather extreme and rare example of inconsistent vectors, their detrimental effect on the resulting subtraction image, and the improvement brought by the correction technique described in this subsection, is presented in Fig. 3.5.
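The detection and correction steps of Eqs. (3.8)-(3.10) can be sketched as follows (assuming NumPy/SciPy; the natural neighbours are taken from a Delaunay triangulation, the threshold values are illustrative only, and the special case of near-null vectors described above is omitted):

import numpy as np
from scipy.spatial import Delaunay

def correct_inconsistent(points, d, rho_max=2.0, phi_max=np.pi / 3):
    # points: (n, 2) control points; d: (n, 2) displacement vectors.
    tri = Delaunay(points)
    indptr, indices = tri.vertex_neighbor_vertices
    norms = np.linalg.norm(d, axis=1)
    bad = np.zeros(len(points), dtype=bool)
    for i in range(len(points)):
        nbrs = indices[indptr[i]:indptr[i + 1]]
        votes = 0
        for j in nbrs:
            if norms[i] == 0 or norms[j] == 0:
                continue                                   # near-null case omitted in this sketch
            rho = norms[i] / norms[j]                      # Eq. (3.8a)
            cosphi = np.clip(d[i] @ d[j] / (norms[i] * norms[j]), -1.0, 1.0)
            phi = np.arccos(cosphi)                        # Eq. (3.8b)
            votes += (rho > rho_max or rho < 1 / rho_max or abs(phi) > phi_max)
        bad[i] = votes > len(nbrs) / 2                     # majority of neighbours disagrees
    d_new = d.astype(float).copy()
    for i in np.flatnonzero(bad):
        good = [j for j in indices[indptr[i]:indptr[i + 1]] if not bad[j]]
        if not good:
            continue
        dist = np.linalg.norm(points[good] - points[i], axis=1)
        w = np.exp(-dist ** 2 / (2 * dist.min() ** 2))     # Gaussian weights, Eq. (3.10)
        d_new[i] = (w[:, None] * d[good]).sum(axis=0) / w.sum()
    return d_new, bad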

3.2.5 Inter-Image Displacement Prediction

In the previous subsections, the discussion was mainly focused on the registration of only two images, I(x, y; t1) and I(x, y; t2), t2 ≠ t1, with respect to each other. However, as indicated in Section 3.1 and at the beginning of Section 3.2, in digital X-ray angiography we are usually dealing with image sequences of size M² × N, where N is usually 10 to 20. The successive images in the sequences are highly correlated since they are projections of the same scene.

Under the assumption that the velocity of patient motion is small with respect to the rate at which the images are acquired (i.e., the differences between one image I(x, y; n), n ∈ [0, N − 2] ⊂ N, and a successive image I(x, y; n + 1) are small with respect to the total change from n = 0 to n = N − 1), it is reasonable to assume that the displacement vector fields d(p; n) and d(p; n + 1) are highly correlated as well. This can be taken advantage of when computing the correspondence of all images in the sequence with respect to a single image (the mask), or the correspondence of all images with respect to their predecessor in the sequence. In the implementation, the displacements d(pi; n) are used as estimates for the computation of d(pi; n + 1). More implementation related details are provided in Section 3.3.4.

3.3 Implementation Aspects

In the previous section, the components of the registration algorithm were presented. In this section, several aspects concerning the implementation of these components are discussed in more detail.

3.3.1 Control Points Selection

In Section 3.2.1, it was proposed to use the gradient magnitude of the mask image as a prediction of the regions in the subtraction images where artifacts can be expected to appear in the case of a misalignment. When dealing with 1024 × 1024 images, the scale at which the gradient magnitude is computed should be twice as large as for 512 × 512 images, in order to have the same amount of regularization. This means that for 1024 × 1024 images, the required computation time is at least a factor 8 larger (a factor 4 from image size and another factor 2 from kernel size, not to mention the possibility of worse caching behavior at these image sizes). In order to reduce computation time, the gradient magnitude can be computed by using a subsampled version of the mask image, at the cost of loss of precision in the positioning of the control points in the final selection. In the implementation, edge detection is always carried out on an Medge × Medge sized version of the mask image, regardless of its original size. This has two advantages: (i) the computation time is fixed, and (ii) the threshold Θe can be the same for both image sizes.

The final selection of control points from the gradient magnitude of the mask image is carried out as follows. First, the image is divided into square regions of size Dmax × Dmax pixels. In turn, these regions are sub-divided into smaller regions of size Dmin × Dmin pixels. For every large region (from the upper-left to the lower-right side of the image), every small region (from the upper-left to lower-right side within the large region) is scanned for pixels with a gradient magnitude value above the specified threshold Θe. From these pixels, the one with the largest value is taken as a candidate control point. (If no edge pixels are encountered, no candidate is selected.) The candidate becomes a control point if it is positioned inside the exposure region as defined in Eq. (3.2) and at a distance of at least Dexp from the border ∂RE of that region. In order to enforce a minimum distance between control points, the gradient magnitude values in a region of size (2Dmin + 1) × (2Dmin + 1) pixels around the selected point (that point being the center) are then suppressed. If no point is selected after a large region has been scanned, the point with the largest gradient magnitude value in a small region around the center of the large region is taken as a control point so as to limit the maximum distance between the points.
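A simplified version of this selection procedure might look as follows (a sketch in NumPy; the exposure-region test, the fallback point at the centre of an empty large region, and the exact scanning order are omitted, and the parameter names are ours):

import numpy as np

def select_control_points(grad_mag, theta_e, d_min, d_max):
    # Greedy selection of gradient-magnitude maxima above theta_e, one block of
    # d_max x d_max pixels at a time, while suppressing a (2*d_min+1)^2
    # neighbourhood around every selected point to enforce the minimum distance.
    g = grad_mag.astype(float).copy()
    rows, cols = g.shape
    points = []
    for by in range(0, rows, d_max):
        for bx in range(0, cols, d_max):
            block = g[by:by + d_max, bx:bx + d_max]        # view into g
            while True:
                iy, ix = np.unravel_index(np.argmax(block), block.shape)
                if block[iy, ix] <= theta_e:
                    break
                y, x = by + iy, bx + ix
                points.append((x, y))
                g[max(0, y - d_min):y + d_min + 1,
                  max(0, x - d_min):x + d_min + 1] = 0.0    # suppression also updates 'block'
    return np.array(points)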

3.3.2 Control Points Triangulation

For the eventual warping of the mask image, the displacements di of the selected control points pi are linearly interpolated to construct the complete displacement vector field d(p). To this end, it is required that the triangulation D(P) completely covers the image. However, this is not the case if the set P contains only control points that are positioned inside the exposure region. In the implementation, this problem is solved by selecting four additional corner points that lie outside the image. The displacement of each of these points is explicitly set to zero. As will be explained in Section 3.3.5, special attention needs to be paid to the warping of triangles of which at least one of the vertices is a corner vertex (hereafter referred to as corner triangles). In order to be able to remove serious border artifacts that may result from the warping, a set of equally spaced points that are positioned exactly on the border, ∂RE, of the exposure region is taken as additional control points for which the displacement will be computed.

3.3.3 Similarity Measure Computation As proposed in Section 3.2.2, the displacement of every control point is computed by means of template matching, by using a histogram-of-differences based similarity measure as proposed by Buzug et al. [42]. Since the larger part of the computation time is due to the evaluation of this measure, a few notes should be made concerning its implementation. First, the time required to compute the measure M (Eq. (3.5)) should be as small as possible, which implies that the weight function, f, should not only satisfy the requirements of differentiability and convexness, but should also be computationally cheap. To this end, the energy function f : H(g) →H2(g)was chosen, since it involves only a single multiplication [35,42]. A very important parameter is the size of the window, W, in the which difference values constitute the histogram. In order to minimize computation time, this window should be as small as possible. However, in principle, the size of the window deter- mines the amount of statistical information available and, therefore, the smoothness of M. In order to be allowed to use computationally cheap optimization techniques, such as hill climbing (a variant of the direction set method of Powell [302]), it is a prerequisite for the function M to be sufficiently smooth so as to have only one ex- 56 3 Image Registration for Motion Artifact Reduction in DSA


Figure 3.6. The match values, M, for the energy measure, as a function of the displacement d = (dx, dy) in the x and y directions, using different window sizes. Top left: A window size of 11 × 11 pixels (51 local extrema). Top right: A window size of 31 × 31 pixels (although hardly visible, there are still seven local extrema). Bottom: A window size of 51 × 51 pixels (one local extreme, corresponding to the correct displacement). As can be seen from this figure, the window size determines the smoothness of the function M, and thereby also the number of local extrema in which an optimization algorithm may become trapped.

Small windows will yield unreliable match values, which causes the resulting function M to be rather coarse (many local extrema). This is illustrated in Fig. 3.6. Experiments for several thousands of points have indicated that a window size of 51 × 51 pixels yields a good compromise between computational cost and statistical reliability, which is in agreement with the findings of Buzug et al. [38].

Furthermore, digital angiography images usually have a grey-value resolution of 10 bits, i.e., G = 1023 in Eq. (3.5). This implies that the difference values are integers in the range [−1023, 1023]. In order to avoid time-consuming checks when filling the histogram, the histogram should have a bin size of one (no clustering of bins). The computational load can be reduced somewhat further by reducing the summation range. As can be seen from Fig. 3.2, the majority of difference values is concentrated in the range [−200, 100]. Restricting the summation to this range reduces computation time without affecting the accuracy of the computation. Finally, in order to let the displacement computations be based on actual image information only (i.e., grey values inside the exposure region), difference values outside the exposure region are not incorporated in the computations.
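To make the cost of a single evaluation of the measure concrete, the C++ fragment below sketches the energy measure for one integer displacement: the grey-value differences inside a W × W window are accumulated into a histogram with a bin size of one, and the energy is obtained by summing squared bin counts over a restricted range. It is only a sketch under the stated assumptions; boundary and exposure-region handling are omitted, the histogram is not normalized (which does not affect the location of the maximum), and the names are illustrative.

```cpp
#include <vector>
#include <cstdint>

// Hypothetical 10-bit grey-value image, row-major.
struct Image {
    int width, height;
    std::vector<uint16_t> pixel;
    int at(int x, int y) const { return pixel[y * width + x]; }
};

// Energy of the histogram of grey-value differences within a W x W window centered
// at (cx, cy), for an integer displacement (dx, dy) of the mask. Illustrative sketch.
double energyMeasure(const Image& mask, const Image& contrast,
                     int cx, int cy, int dx, int dy, int W = 51,
                     int sumMin = -200, int sumMax = 100)   // restricted summation range
{
    const int G = 1023;                      // 10-bit grey-value range
    std::vector<int> hist(2 * G + 1, 0);     // bins for differences in [-G, G], bin size one
    const int h = W / 2;
    for (int y = cy - h; y <= cy + h; ++y)
        for (int x = cx - h; x <= cx + h; ++x)
            hist[contrast.at(x, y) - mask.at(x + dx, y + dy) + G]++;
    double energy = 0.0;
    for (int d = sumMin; d <= sumMax; ++d)   // f(h) = h^2: a single multiplication per bin
        energy += double(hist[d + G]) * hist[d + G];
    return energy;                           // to be maximized over (dx, dy)
}
```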

3.3.4 Inter-Image Displacement Prediction

In Section 3.2.5 it was stated that when successively computing the displacement vector fields d(x, y; n) for all images I(x, y; n), n ∈ [1, N−1] ⊂ N, with respect to a single image I(x, y; 0), the displacements obtained in the previous iteration, d(x, y; n−1), can be used as estimates for the displacements in the next image. If the assumptions concerning patient motion, as discussed in that section, hold true, the use of these estimates may reduce the number of iterations of the hill-climbing algorithm in finding the optimal correspondence. As proposed in Section 3.2.2, the computation of displacements is carried out hierarchically, i.e., first up to integer precision and then with subpixel (fractional) precision. It should be noted that the computation of the fractional part of the displacements requires interpolation of one of the images (bilinear interpolation was used in the implementation), which is a relatively expensive operation. In order to avoid having to interpolate the image when computing the integer part of the displacement, only the integer part of the previous displacement is used in the predictions.
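The role of the prediction is easy to see in code. The fragment below sketches a simple integer-precision greedy search started from the (integer part of the) displacement found for the previous image; it is a deliberately simplified stand-in for the hill-climbing variant of Powell's method used in the thesis, it reuses the energyMeasure() sketch given above, and the names are illustrative. In the actual algorithm the search is furthermore confined to the K × L neighborhood of step 5 in Section 3.4.

```cpp
#include <utility>

// Integer-precision greedy hill climbing over the match surface M(dx, dy),
// started from the prediction taken from the previous image in the sequence.
std::pair<int, int> hillClimb(const Image& mask, const Image& contrast,
                              int cx, int cy,
                              std::pair<int, int> prediction)   // integer part of d(p_i; n-1)
{
    int dx = prediction.first, dy = prediction.second;
    double best = energyMeasure(mask, contrast, cx, cy, dx, dy);
    bool improved = true;
    while (improved) {
        improved = false;
        // Examine the four axis neighbors of the current displacement and move to the
        // best one as long as the match value keeps increasing.
        const int step[4][2] = { {1, 0}, {-1, 0}, {0, 1}, {0, -1} };
        int bestDx = dx, bestDy = dy;
        for (auto& s : step) {
            double m = energyMeasure(mask, contrast, cx, cy, dx + s[0], dy + s[1]);
            if (m > best) { best = m; bestDx = dx + s[0]; bestDy = dy + s[1]; improved = true; }
        }
        dx = bestDx; dy = bestDy;
    }
    return {dx, dy};   // integer displacement; the fractional part is refined afterwards
}
```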

3.3.5 Mask Image Warping

Because of the coherence in the behavior of neighboring pixels, it was proposed in Section 3.2.1 to compute the local displacements between two images of a sequence only for a selected set of control points pi. In Section 3.2.3 it was advocated that, in order to minimize computation time, control point selection should be done in such a way that it is sufficient to use linear interpolation to obtain the complete displacement vector field d : D → R², and an approach to realize this was presented.

The actual warping of the mask image, I, is carried out as follows. For every triangle ∆ijk in the mesh, the constituent vertices, pi, pj, and pk, are virtually translated over the computed displacement vectors di, dj, and dk, respectively. By using a rasterization algorithm (see e.g. Foley et al. [110]), it is determined which of the pixels p in the transformed (warped) image Î belong to the warped triangle. For every one of those pixels, the inverse displacement, d⁻¹(p), is computed by linear interpolation of the inverse displacements at the (now displaced) vertices: d⁻¹(pl + dl) = −dl, l ∈ {i, j, k}. By using this inverse displacement, the grey value of the warped image, Î, at pixel p is computed according to the theory described in Section 2.3.1 of the previous chapter, in particular Eq. (2.5):

\[
  \hat{I}(p) = J_r(p)\, I\bigl(p + d^{-1}(p)\bigr). \tag{3.11}
\]

It can easily be shown that, because the inverse displacement of an arbitrary point, p, in the image is computed by linear interpolation from the inverse displacements of the three control points, pi, pj, and pk, of the enclosing triangle, ∆ijk, the Jacobian factor, Jr, in Eq. (3.11) has the form

\[
  J_r(p) = 1 + c(\Delta_{ijk}), \tag{3.12}
\]

where c is a piecewise constant function that depends only on the enclosing triangle, ∆ijk, and not on p itself (see Appendix 3.A for details). This implies that Jr is nothing but a piecewise constant grey-level scaling factor that needs to be computed only once for every triangle. In order to avoid artifacts at the borders of triangles, we imposed the additional constraint that the grey-level distribution in the resulting subtraction images must vary in a continuous fashion. Since Jr is constant within every triangle, this requirement can only be satisfied by taking c(∆ijk) = C, ∀∆ijk. In the implementation, we chose C = 0 (hence Jr = 1), since any other value would cause the entire subtraction image to be grey-level scaled, which makes no sense.

In almost all cases, the position p + d⁻¹(p) in the original image will not be on the grid. The grey value at that position is then obtained by simple bilinear interpolation. In the implementation, the described process of polygon rasterization [110] and texture mapping [142] is carried out in real time by graphics hardware.

Because of the transition from the exposure region to the constant grey value at the borders of an image, warping of the corner triangles (as described in Section 3.3.2) may give rise to very disturbing border artifacts. Since subtraction in those regions does not yield any relevant information, the difference values in the corner triangles are explicitly set to zero.
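For readers who prefer code over prose, the following fragment sketches the per-pixel work inside one warped triangle: barycentric (linear) interpolation of the inverse displacements at the three displaced vertices, followed by bilinear sampling of the mask image, with Jr fixed to one. In the actual implementation this is delegated to OpenGL texture-mapping hardware; the software version below, which reuses the Image type from the earlier sketch and whose names are illustrative, only serves to make the interpolation explicit.

```cpp
#include <array>

struct Vec2 { double x, y; };

// Bilinear grey-value interpolation in the mask image (edge handling omitted);
// a software placeholder for the texture mapping performed by graphics hardware.
double sampleBilinear(const Image& img, double x, double y)
{
    int x0 = int(x), y0 = int(y);
    double fx = x - x0, fy = y - y0;
    return (1 - fx) * (1 - fy) * img.at(x0,     y0)
         +      fx  * (1 - fy) * img.at(x0 + 1, y0)
         + (1 - fx) *      fy  * img.at(x0,     y0 + 1)
         +      fx  *      fy  * img.at(x0 + 1, y0 + 1);
}

// Grey value of the warped mask at pixel p inside a warped triangle with displaced
// vertices q_l = p_l + d_l and vertex displacements d_l, l = 0,1,2. The inverse
// displacement at p is the barycentric interpolation of -d_l, and J_r is taken to be 1.
double warpedPixel(const Image& mask, Vec2 p,
                   std::array<Vec2, 3> q, std::array<Vec2, 3> d)
{
    // Barycentric coordinates of p with respect to the warped triangle (q0, q1, q2).
    double det = (q[1].x - q[0].x) * (q[2].y - q[0].y)
               - (q[2].x - q[0].x) * (q[1].y - q[0].y);
    double w1 = ((p.x - q[0].x) * (q[2].y - q[0].y) - (q[2].x - q[0].x) * (p.y - q[0].y)) / det;
    double w2 = ((q[1].x - q[0].x) * (p.y - q[0].y) - (p.x - q[0].x) * (q[1].y - q[0].y)) / det;
    double w0 = 1.0 - w1 - w2;
    // Inverse displacement d^{-1}(p), interpolated from -d_l at the displaced vertices.
    double ix = -(w0 * d[0].x + w1 * d[1].x + w2 * d[2].x);
    double iy = -(w0 * d[0].y + w1 * d[1].y + w2 * d[2].y);
    return sampleBilinear(mask, p.x + ix, p.y + iy);   // Eq. (3.11) with J_r = 1
}
```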

3.4 Algorithm Overview

In the previous sections, the several operations involved in the registration of digital X-ray angiography images have been presented and discussed. For clarity, the individual operations are summarized here, together with their accompanying parameters.

Given an image sequence with N images of size M × M pixels, the registration of the mask image, I(x, y; 0), with respect to the contrast images, I(x, y; n), n ∈ [1, N−1] ⊂ N, is accomplished by carrying out the following steps (a schematic code sketch of the complete pipeline is given after the list):

1) Compute the gradient magnitude, ‖∇L‖, at scale σ, of an Medge × Medge sized version of the mask image, I(x, y; 0), of the sequence, and extract potential artifact regions by means of thresholding at level Θe.

Parameters: Medge, σ, Θe.

2) Extract the border, ∂RE, of the exposure region (expressed in terms of R, xmin, xmax, ymin, and ymax, see Eq. (3.2)) from the original mask image, I(x, y; 0), by analyzing scan lines from the border to the center of the image, and select a set of border control points.

3) Extract control points from the exposure region, RE, by using the thresholded gradient magnitude, ‖∇L‖, of the mask image, I(x, y; 0), and by using the minimum and maximum distance constraints, Dmin, Dmax, and Dexp, based on assumptions about the coherence between neighboring pixels.

Parameters: φmin, φmax, φexp.

4) Given the set of control points, P = {pi} (including the border points and the four corner points that are positioned outside the image), construct a triangular mesh, D(P) (completely covering the image), by using the incremental Delaunay triangulation algorithm proposed by Watson [400].

5) For every image, I(x, y; n), n ∈ [1, N−1] ⊂ N, in the sequence, compute the displacements, d(pi; n), of the selected control points, pi ∈ P (except for the corner points), by maximizing the energy of the histogram-of-differences similarity measure, M (see Eq. (3.5)), in a K × L neighborhood of these points, by using hill-climbing optimization and by using the displacements of the previous image, d(pi; n−1), as estimates.

Parameters: K, L.

6) For every image, I(x, y; n), n ∈ [1, N−1] ⊂ N, detect inconsistent displacement vectors by computing the length of and the angle between all vectors d(pi) and their corresponding natural-neighbor vectors d(pj), j = 1, 2, ..., Ni, and by testing the criteria described in Section 3.2.4. Correct inconsistent vectors by substituting them with a Gaussian-weighted average of the consistent natural-neighbor vectors according to Eq. (3.10).

Parameters: ρmax, ϕmax.

7) For the computation of the motion-corrected subtraction image corresponding to any contrast image I(x, y; n), n ∈ [1, N−1] ⊂ N, warp the mask image by deforming every triangle ∆ijk in the mesh D(P), by using the displacements d(pi; n), d(pj; n), and d(pk; n) of the constituent control points pi, pj, and pk, and the linearly interpolated displacements at the remaining points, p ≠ pi, pj, pk, and by using bilinear interpolation of grey values.
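Put together, the steps above amount to the following driver loop. The sketch is only a schematic rendering of this overview: the types and step functions are hypothetical names standing in for the operations of steps 1–7, not the interface of the actual implementation.

```cpp
// Hypothetical placeholder types for the data involved in steps 1-7.
struct Sequence {};            // N images of size M x M (mask at index 0)
struct ControlPointSet {};     // P = {p_i}, including border and corner points
struct Triangulation {};       // Delaunay mesh D(P)
struct DisplacementField {};   // displacements d(p_i; n) at the control points

// Hypothetical step functions; each corresponds to one or more items of Section 3.4.
ControlPointSet   extractControlPoints(const Sequence& seq);                   // steps 1-3
Triangulation     triangulateControlPoints(const ControlPointSet& P);          // step 4
DisplacementField matchControlPoints(const Sequence& seq, int n,
                                     const ControlPointSet& P,
                                     const DisplacementField& prediction);     // step 5
void              correctInconsistentVectors(DisplacementField& d,
                                             const Triangulation& D);          // step 6
void              warpAndSubtract(const Sequence& seq, int n,
                                  const Triangulation& D,
                                  const DisplacementField& d);                 // step 7

// Schematic registration pipeline for a sequence whose mask image is image n = 0.
void registerSequence(const Sequence& seq, int N)
{
    ControlPointSet P = extractControlPoints(seq);
    Triangulation   D = triangulateControlPoints(P);
    DisplacementField d{};                       // zero displacements as initial prediction
    for (int n = 1; n < N; ++n) {
        d = matchControlPoints(seq, n, P, d);    // previous field serves as the prediction
        correctInconsistentVectors(d, D);
        warpAndSubtract(seq, n, D, d);
    }
}
```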

3.5 Preliminary Results

The algorithm presented in the previous sections was implemented in the C++ programming language [358], utilizing the Open Graphics Library [261]. User interaction was provided by means of an interface, implemented by using Tcl/Tk [280]. All experiments were carried out on an O2 workstation (Silicon Graphics, De Meern, the Netherlands), with one 180MHz R5000 IP32 processor, 64MB main memory and 512kB secondary unified instruction/data cache memory, providing special graphics hardware for support of OpenGL instructions.

The specifications (type and size) of the three datasets (Cer, Per, and Abd) used to evaluate the performance of the algorithm are presented in Table 3.1. All datasets were clinical digital X-ray angiography image sequences, acquired on an Integris V3000 C-arm imaging system (Philips Medical Systems, Best, the Netherlands). During the experiments, the parameters of the algorithm (as summarized in the previous section) were kept fixed to the values shown in Table 3.2. For all three image sequences, the first image was taken as the mask image.

The results of applying the proposed registration technique to the three datasets of Table 3.1 are presented in Figs. 3.7, 3.8, and 3.9, respectively. In these figures, the top-left image is the original subtraction of the mask image and one of the contrast images. The bottom-right image shows the subtraction after correction for patient motion artifacts by using the proposed approach. In order to relate the performance of the algorithm to that of the manual pixel-shifting technique and automatic methods

Sequence   Type                   Size
Cer        Cerebral               1024 × 1024 × 18
Per        Peripheral (Femoral)    512 ×  512 × 10
Abd        Abdominal (Kidney)     1024 × 1024 ×  6

Table 3.1. Specifications of the digital X-ray angiography image sequences used in the experiments described in Section 3.5.

Parameter   Value     Parameter   Value
Medge       512       φmin        0.04
σ           1.0       φmax        0.20
Θe          15.0      φexp        0.01
K           51        ρmax        3.0
L           51        ϕmax        30°

Table 3.2. Values of the parameters of the algorithm during the experiments described in Section 3.5. See Section 3.4 for an overview of these parameters.

based on regular quadrilateral meshes, the results of these methods are shown in the top-right and bottom-left images, respectively. In order to give an impression of the computational cost of the algorithm, the total computation times required to register the image sequences are presented in Table 3.3.

3.6 Discussion

From Figs. 3.7, 3.8, and 3.9, it can be observed that the artifacts could not be removed by global translation of the mask image, i.e., by applying the pixel-shifting technique provided on the viewing console of clinical DSA systems. In all examples presented here, artifacts were due to more complex patient motion, as a result of which registration by means of pixel shifting in one part of the image may result in a deterioration of artifacts in other parts. Furthermore it can be seen that, in general, the proposed approach, based on a triangular mesh of irregularly spaced (edge-based) control points, yielded somewhat better registrations and hence better subtractions compared to algorithms based on a regular quadrilateral mesh.

There is a major difference between the registration results of datasets Cer and Per (Figs. 3.7 and 3.8, respectively) and the results of dataset Abd (Fig. 3.9), when using the proposed method. In the first two datasets, artifacts were removed almost completely, i.e., the algorithm resulted in near perfect registrations. In the last dataset, however, although some artifacts were removed, the result still showed major artifacts. It should be mentioned that these artifacts could not be removed by adjusting one or more of the parameters of the algorithm (Table 3.2) so as to obtain

Figure 3.7. Registration of dataset Cer. Top left: original subtraction of one of the contrast images from the mask image, showing major motion artifacts. Top right: resulting subtraction after registration by manual pixel shifting. Only local correction is obtained, in this case at the bottom right of the image. Bottom left: result after registration by using a quadrilateral mesh. Still there are some minor artifacts. Bottom right: result after application of the proposed approach.

Figure 3.8. Registration of dataset Per. Top left: original subtraction of one of the contrast images from the mask image, showing subpixel motion artifacts. Top right: resulting subtraction after registration by manual pixel shifting. Only local correction is obtained, in this case on the bottom-left side of the image. Bottom left: result after registration by using a quadrilateral mesh. Bottom right: result after application of the proposed approach.

Figure 3.9. Registration of dataset Abd. Top left: original subtraction of one of the contrast images from the mask image, showing major motion artifacts. Top right: resulting subtraction after registration by manual pixel shifting. Only local correction is obtained, in this case on the right side of the image. Bottom left: result after registration by using a quadrilateral mesh. Most of the artifacts are still present. Bottom right: result after application of the proposed approach. Still there are major artifacts (see Section 3.6 for a discussion).

            Algorithm
Sequence   Q-FS   Q-HC   Q-HC-P   T-FS   T-HC   T-HC-P
Cer         950     32       26    392     20       17
Per         515     13       12    317      9        9
Abd         342      9        9    132      5        5

Table 3.3. Total computation times (in seconds) required by the several versions of the algorithm to completely register the datasets listed in Table 3.1. "Q" indicates the use of a regular quadrilateral mesh; "T" the use of an irregular triangular mesh. The different optimization techniques used are full search ("FS"), in a range of [−10, 10] pixels in both the x- and y-direction, and hill climbing ("HC"). "P" indicates the use of inter-image displacement prediction.

a larger density of control points. In fact, the artifacts could not even be removed by replacing hill-climbing optimization by a full-search approach. We will discuss these phenomena in more detail.

The resulting subtraction image after application of the pixel-shifting method to dataset Abd (see the top-left image in Fig. 3.9) reveals that there are parts of the image in which several important structures are superimposed (e.g., the spine, the catheter, and the bowels in the left-middle part of the image), as opposed to datasets Cer and Per, where the important structures are due to the projection of bones only. In an attempt to remove the artifacts caused by the displacement of the catheter (the black/white curve in the left-middle part of the images in Fig. 3.9), it is inevitable that other artifacts will be introduced, due to the spine edges in that same region. This phenomenon was mentioned in Section 2.3.2 of the previous chapter as the first limitation of any registration algorithm for projection images, and explains why even a full search did not retrieve the correct correspondence. In the center of the image, however, the artifacts were caused only by peristaltic motion. As can be seen from the bottom-right image in Fig. 3.9, the algorithm was able to remove the artifacts near the small vessels. The larger artifacts on the right side of the image could not be removed. The main reason for this was the increase of noise in the subtraction image under translation, which caused the match surface to have a local maximum at d = (0, 0). In this region, the correct correspondence would have been obtained by means of full-search optimization rather than hill climbing. However, this case appeared to be very exceptional.

The computation times presented in Table 3.3 indicate that, in spite of the additional time required for preprocessing (edge detection, control point selection, triangulation), the proposed approach was faster than commonly used algorithms based on regular quadrilateral meshes. This is mainly due to the edge-based control point selection procedure, which resulted in a reduction of the number of points for which the displacements were to be computed explicitly. One might argue that algorithms based on regular meshes could have been made faster too, by reducing the density of the control points in the mesh. However, this would almost certainly have resulted in a deterioration of the registrations, which were already worse than those of the proposed approach. With the current parameter settings (Table 3.2), the average number of points in relatively dense regions was about the same with both approaches.

3.7 Conclusions

In this chapter, a new approach to the registration of digital X-ray angiography image sequences was presented. The method involves the extraction of regions in the image where artifacts can be expected to appear in the case of patient motion. These regions are obtained by thresholding the gradient magnitude of the mask image. Based on assumptions about the coherence of neighboring pixels, a set of control points is extracted for which the displacement is computed explicitly by means of maximization of the energy of the histogram-of-differences similarity measure. A hill-climbing approach is used for optimization. The complete displacement vector field is constructed from the displacements at the control points by using a Delaunay triangulation and linear interpolation. The final warping of the images is done in real time by graphics hardware.

The overall conclusion from the preliminary results is that, in general, the proposed method is effective, very fast, and outperforms algorithms based on regular grids. The best results are obtained in those situations where the important structures in the original 3D scene are exposed in such a way that the grey-level distributions of these structures in the resulting projection images do not overlap. This is mostly the case in, e.g., cerebral and peripheral images. In abdominal images, however, there are often several important structures that can move independently, as a result of which accurate registration with the current approach becomes impossible.

3.A Appendix: Computation of the Jacobian Factor

The mapping for which the Jacobian factor, Jr, must be computed in Eq. (3.11) is not just the inverse displacement vector field, d⁻¹, but the total reverse mapping dr, defined by

\[
  d_r : \mathbb{R}^2 \to \mathbb{R}^2 : p \mapsto p + d^{-1}(p), \tag{3.13}
\]

which we could write as

\[
  d_r(x, y) = \begin{pmatrix} r_x(x, y) \\ r_y(x, y) \end{pmatrix}. \tag{3.14}
\]

The Jacobian of this reverse mapping is then computed as

\[
  J_r = J_{d_r} =
  \begin{vmatrix}
    \dfrac{\partial r_x}{\partial x} & \dfrac{\partial r_x}{\partial y} \\[1.5ex]
    \dfrac{\partial r_y}{\partial x} & \dfrac{\partial r_y}{\partial y}
  \end{vmatrix}, \tag{3.15}
\]

where "|·|" denotes taking the determinant of the matrix.

In general, Jr will have to be computed explicitly for every point in the image or region of interest. For example, when the region of interest is a rectangle, and the inverse displacement vector field within the region is computed by using bilinear interpolation from the inverse displacements of the four corner points (such as e.g. in the algorithm of Mandava et al. [230]):

\[
  r_x(x, y) = x + (a_{10}xy + a_{11}x + a_{12}y + a_{13}), \tag{3.16a}
\]

\[
  r_y(x, y) = y + (a_{20}xy + a_{21}x + a_{22}y + a_{23}), \tag{3.16b}
\]

then the Jacobian becomes

\[
  J_r =
  \begin{vmatrix}
    1 + a_{11} + a_{10}y & a_{12} + a_{10}x \\
    a_{21} + a_{20}y & 1 + a_{22} + a_{20}x
  \end{vmatrix}
  = (a_{20} + a_{11}a_{20} - a_{10}a_{21})\,x + (a_{10} + a_{22}a_{10} - a_{20}a_{12})\,y
    + 1 + a_{11} + a_{22} + a_{11}a_{22} - a_{12}a_{21}, \tag{3.17}
\]

which is linearly dependent on x and y. (The eight constant coefficients aij are easily computed from the eight equations that result when successively substituting the coordinates and displacements of the four corner points into Eqs. (3.16a) and (3.16b).) This result also applies when the image is completely divided into quadrilaterals, which are the suitable polygons in the case of a regular grid of control points. However, in the algorithm proposed in this chapter, the control points are on an irregular grid, and they are tessellated into a Delaunay triangulation. The inverse displacement of an arbitrary point in the image is then computed from the inverse displacements of the three control points constituting the enclosing triangle, by means of linear interpolation, according to

\[
  r_x(x, y) = x + (a_{11}x + a_{12}y + a_{13}), \tag{3.18a}
\]

\[
  r_y(x, y) = y + (a_{21}x + a_{22}y + a_{23}). \tag{3.18b}
\]

It can easily be derived that in this case the Jacobian becomes

\[
  J_r =
  \begin{vmatrix}
    1 + a_{11} & a_{12} \\
    a_{21} & 1 + a_{22}
  \end{vmatrix}
  = 1 + a_{11} + a_{22} + a_{11}a_{22} - a_{12}a_{21}, \tag{3.19}
\]

which is a constant within every triangle. (Again, the coefficients are computed by substituting the coordinates and displacements of the three corner points into Eqs. (3.18a) and (3.18b), and by solving the resulting system of equations.)

At this point, there are two possibilities: either (i) the coefficients aij are negligible, and hence it is no problem to take Jr = 1, or (ii) the coefficients do have significant values and Jr needs to be computed explicitly for every triangle. It is important to note that in the latter case, Jr needs to be computed only once for every triangle, which implies a great advantage in terms of computational speed as compared to the approach described by, e.g., Mandava et al. [230], where Jr must be computed separately for every point within the region of interest.

It is even more important to consider the consequences of including the Jacobian factor in the computations. Since, in our implementation, Jr is constant within every triangle, it is nothing but a constant grey-level scaling factor. Here we have another two possibilities: either (i) the corresponding coefficients aij in neighboring triangles are almost equal, as a consequence of which it is no problem to take Jr = 1, since it would only imply that we do not incorporate a constant grey-level scaling of the entire image, or (ii) the corresponding coefficients aij do change substantially from triangle to triangle, and should be recomputed. However, in the latter case, inclusion of the Jacobian factor in the computations will result in substantial grey-level discontinuities at the borders of the triangles, the resulting artifacts of which might even be worse than the ones that we were initially trying to correct for by including this factor. Notice that these artifacts do not only occur in the case of linear interpolation of displacement vectors by Eqs. (3.18a) and (3.18b). They will also occur in the case of bilinear interpolation by Eqs. (3.16a) and (3.16b), as can clearly be seen from the several examples shown by Mandava et al. [230]. In order to avoid this type of artifact, we imposed an additional constraint on the warping algorithm, viz., that the grey-level distribution in the resulting subtraction images should vary in a "continuous" fashion. As already pointed out in Section 3.3.5, this requirement can only be satisfied by explicitly taking Jr to be constant in the entire image. A natural choice is Jr = 1, since any other value for Jr will cause the entire subtraction image to be grey-level scaled, which makes no sense.
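To make the appendix concrete, the following sketch computes the affine coefficients of Eqs. (3.18a) and (3.18b) for one triangle, by solving the 3 × 3 system obtained from the three vertex correspondences, and then evaluates the constant Jacobian of Eq. (3.19). It reuses the Vec2 type from the warping sketch in Section 3.3.5; the solver (plain Cramer's rule) and the names are illustrative choices made here, not the actual implementation.

```cpp
#include <array>

// Affine coefficients a11..a13, a21..a23 of Eqs. (3.18a)-(3.18b) for one triangle,
// determined from its displaced vertices q_l = p_l + d_l, at which the inverse
// displacement equals -d_l; J_r then follows from Eq. (3.19).
struct TriangleMapping { double a11, a12, a13, a21, a22, a23, Jr; };

TriangleMapping computeTriangleMapping(std::array<Vec2, 3> q, std::array<Vec2, 3> d)
{
    // Solve  a*q_l.x + b*q_l.y + c = rhs_l  (l = 0,1,2) by Cramer's rule.
    auto solve = [&](std::array<double, 3> rhs, double& a, double& b, double& c) {
        double det = q[0].x * (q[1].y - q[2].y)
                   - q[0].y * (q[1].x - q[2].x)
                   + (q[1].x * q[2].y - q[2].x * q[1].y);
        a = (rhs[0] * (q[1].y - q[2].y) - q[0].y * (rhs[1] - rhs[2])
             + (rhs[1] * q[2].y - rhs[2] * q[1].y)) / det;
        b = (q[0].x * (rhs[1] - rhs[2]) - rhs[0] * (q[1].x - q[2].x)
             + (q[1].x * rhs[2] - q[2].x * rhs[1])) / det;
        c = (q[0].x * (q[1].y * rhs[2] - q[2].y * rhs[1])
             - q[0].y * (q[1].x * rhs[2] - q[2].x * rhs[1])
             + rhs[0] * (q[1].x * q[2].y - q[2].x * q[1].y)) / det;
    };
    TriangleMapping m{};
    solve({-d[0].x, -d[1].x, -d[2].x}, m.a11, m.a12, m.a13);   // r_x(q_l) - q_l.x = -d_l.x
    solve({-d[0].y, -d[1].y, -d[2].y}, m.a21, m.a22, m.a23);   // r_y(q_l) - q_l.y = -d_l.y
    m.Jr = 1.0 + m.a11 + m.a22 + m.a11 * m.a22 - m.a12 * m.a21;  // Eq. (3.19): one value per triangle
    return m;
}
```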

And there is no other way of doing anything with certainty than by drawing conclusions from experiments (...) Whatever is certain in [natural] philosophy is owing to this method and nothing can be done without it.

— Isaac Newton, in an intended preface for the  edition of the Opticks

Chapter 4

Evaluation of a Fast and Fully Automatic Technique for Motion Artifact Reduction in Digital Subtraction Angiography

Abstract — The purpose of the study reported in this chapter was to evaluate the performance of the automatic registration technique for motion artifact reduction in digital subtraction angiography (DSA) images described in the previous chapter. One hundred and four cerebral DSA images were processed both manually, by means of pixel shifting, and automatically, by using the automatic technique. Four observers assessed the quality of the resulting corrected images, by comparing them both mutually and to the corresponding original (uncorrected) images. The results of the evaluation indicated that the automatic technique is not only considerably faster, but also statistically significantly better than manual pixel shifting.

4.1 Introduction

Patient motion artifacts are a major cause of image quality degradation in digital subtraction angiography (DSA). Although several techniques have been proposed over the past two decades to improve the acquisition of DSA images in relation to this problem [28,30,82,131,133,178,191–193,198,221,263,345,419], motion artifacts cannot be entirely avoided. Currently, the only post-processing techniques available on clinical DSA devices are manual remasking and pixel shifting, which allow for reduction of artifacts caused by uniform translational motion only [215,244]. Generally, however, patient movements have a more complex nature, which limits the effectiveness of these reduction techniques. This problem has been recognized by researchers in the field of image processing and has been the incentive for the development of a number of semi- or even fully automatic, nonlinear retrospective registration techniques [38,63,97,100,140,182,206,297,298,301,363,364,369,388,393,423,428].

However, apart from two exceptions [140,363], clinical evaluations of these techniques have never been reported in the literature. Another major problem with these techniques is that they are too time consuming for routine use in clinical practice.

In the previous chapter, a new, fully automatic registration technique was described, which is capable of nonlinearly aligning pairs of images in less than a second [247,248,250]. The preliminary experiments described in that chapter indicated the potential of the technique. The purpose of the study reported in this chapter was to perform a clinical evaluation of the effectiveness of the automatic technique in reducing patient motion artifacts, by comparing it to that of manual pixel shifting. The study was carried out on cerebral DSA images.

4.2 Materials and Methods

4.2.1 Images and Equipment

During a five-month period, 104 cerebral X-ray angiography runs from 21 patients (13 men and 8 women, age range 28–82 years) were archived digitally. The clinical information of the patients and the DSA images that had been printed on film by the radiologists were retrieved afterwards from the archive of our hospital. From each run, we randomly selected one mask-contrast image pair, of which the corresponding DSA image had been printed on film. All images had been acquired on an Integris V3000 C-arm imaging system (Philips Medical Systems, Best, the Netherlands), with a 20cm or 25cm image intensifier, a matrix size of either 512 × 512 pixels (31 images) or 1024 × 1024 pixels (73 images), and a grey-level resolution of 10 bits per pixel.

Post-processing operations as well as the image quality assessments were carried out on an Octane workstation (Silicon Graphics, De Meern, the Netherlands) with one 195MHz MIPS R10000 processor, 256MB main memory (instruction and data cache size both 32KB), and an "IMPACTSR" graphics board with 4MB texture memory. All images were displayed in a window of 700 × 700 pixels on a 19 inch monitor (Silicon Graphics, De Meern, the Netherlands), which had a resolution of 1280 × 1024 pixels (refresh rate 75Hz). By using this window, images were displayed with the same effective diameter (approximately 11½ inch) as they are usually displayed on the 15 inch progressive display monitor of the Integris V3000. The contrast and brightness settings of the window were fixed during the evaluation.

4.2.2 Manual and Automatic Registration

The 104 mask-contrast image pairs were registered both manually and automatically. Manual registrations were obtained by using the pixel-shifting technique. For this purpose, a special pixel-shifting tool was developed to be executed on the Octane, by means of which images could be registered manually in the same fashion (using a mouse) and with the same precision (1/8th of a pixel) as with the pixel-shifting facility on the viewing console of the Integris V3000 used in daily practice.

Automatic registrations were obtained by using the algorithm described in the previous chapter [247,248,250]. For completeness, its operation is briefly summarized here. First, the algorithm applies edge detection to the mask image in order to extract regions that have a high potential for showing artifacts. Next, control points for the eventual nonlinear warping operation are automatically selected at local maxima of the gradient magnitude, while constraining the minimum and maximum distance between these points. Subsequently, the local displacements of image structures at the control points are computed by means of a template matching procedure based on the energy similarity measure recently proposed by Buzug et al. [38], which has been shown to be the most accurate measure for this purpose [37,244]. Inconsistent displacement vectors are then detected and corrected by comparison with neighboring vectors. Finally, the mask image is warped according to the displacement vector field resulting from linear interpolation of the local displacements at the control points. This is done very efficiently by using a triangulation of the set of control points in combination with hardware-accelerated texture-mapping operations. For the present evaluation, the parameters of the algorithm were fixed to the values shown in Table 3.2 of the previous chapter (see Page 60).

4.2.3 Method of Evaluation

Four observers (three radiologists and a resident) participated in the evaluation, which consisted of two parts. In the first part, manual registrations of the 104 mask-contrast image pairs were carried out separately and independently by the four observers, using the aforementioned pixel-shifting tool. Since the optimal manual registration of any image pair is task dependent, the observers were provided with the clinical indication for acquisition of the images, which was either a cerebral aneurysm (41 images; 7 patients), a stenosis in the carotid arteries (34 images; 8 patients), a tumor (20 images; 4 patients), or (9 images; 2 patients). However, the images were presented in random order. During this first part, the final horizontal and vertical mask shift parameters for all mask-contrast pairs as indicated by each of the observers, as well as the time it took for each observer to carry out the manual registration of each pair, were stored automatically by the computer. The resulting manually corrected DSA images were also stored. The DSA images resulting from automatic registration of all mask-contrast pairs were computed and stored separately.

The second part of the study concerned the comparison of the quality of the automatically and manually corrected DSA images. To this end, the following three DSA image pairs were formed for each of the 104 original DSA images: (i) automatically corrected DSA image and original (uncorrected) DSA image, (ii) manually corrected DSA image and original DSA image, and (iii) automatically corrected DSA image and manually corrected DSA image. This resulted in a total of 312 DSA image pairs, which were presented to the observers. Although the original and automatically corrected DSA images were the same for all four observers, each of the observers was confronted with his or her own manual corrections resulting from the first part. For each of the pairs, the differences between the two images (denoted "Image A" and "Image B") could be assessed by alternating the image that was displayed. The observers were given the clinical information of all images and were then asked to rate the relative quality of the two images by choosing one of the following:

(AB) Image A and Image B are similar (i.e., the amount of artifacts and the magnitude of the artifacts is the same in the diagnostically relevant parts or in the entire images),

(A+) Image A is better than, or (A++) much better than Image B (i.e., the amount of artifacts or the magnitude of the artifacts in Image A is smaller, or much smaller, than in Image B, in the diagnostically relevant parts or in the entire image),

(B+) Image B is better than, or (B++) much better than Image A (i.e., the amount of artifacts or the magnitude of the artifacts in Image B is smaller, or much smaller, than in Image A, in the diagnostically relevant parts or in the entire image).

Similar to the first session, the second session was carried out separately and independently by the four observers. However, prior to this session, there was a meeting between the observers in order to obtain consensus regarding the rating of relative image quality. For this consensus meeting, 10 sample cerebral DSA image pairs were used, which were not included in the actual evaluation. To avoid any bias in the ratings, the images were presented to the observers in a completely randomized and blinded fashion; not only were the 312 DSA image pairs randomized, but also the order of the images within each pair was randomized, and the observers were ignorant of the type of correction (no correction, manual correction, or automatic correction) that had been applied to the images. Furthermore, to reduce the possibility of observers recognizing their own manual corrections, the time period between the end of the first and the start of the second session was at least three weeks for each of the observers.

4.2.4 Statistical Analyses

Inter-observer agreement for the image quality ratings resulting from the second part of the study was assessed by using a kappa (κ) test. In order to take account of the degree of disagreement, we used the weighted kappa (κw) test proposed by Cohen [6,60,104]. The six individual κw values computed for the four observers were summarized by computing a compound κw value along the lines described by Fleiss [103], with a 95% confidence interval (95%CI) based on the standard deviation between the κw values. A κw value of 1.0 indicates that the agreement is perfect and a value of 0.0 that the agreement is not different from chance agreement. For the interpretation of κw values in between these extremes, we used the guidelines described by Landis and Koch [203]: 0.00–0.20 indicates slight agreement, 0.21–0.40 fair agreement, 0.41–0.60 moderate agreement, 0.61–0.80 substantial agreement, and 0.81–1.00 almost perfect agreement.

The ratings resulting from the second part allowed us to make both implicit and explicit comparisons of the effectiveness of the automatic and the manual registration technique in reducing motion artifacts. For this purpose, the ratings of the 312 DSA image pairs were divided into three groups: (i) ratings expressing the quality of automatically corrected DSA images relative to corresponding original (uncorrected) DSA images (or vice versa), (ii) ratings expressing the quality of manually corrected DSA images relative to corresponding original (uncorrected) DSA images (or vice versa), and (iii) ratings expressing the quality of automatically corrected DSA images relative to corresponding manually corrected DSA images (or vice versa). Since the images of each pair were presented in random order during the evaluation, the original ratings were converted by using the rules presented in Table 4.1 in order to be able to express the quality of any of the images in a given pair in terms of the other.

                               Original rating
Relative image quality         A++   A+   AB   B+   B++
Image A compared to Image B     ++    +    0    −    −−
Image B compared to Image A     −−    −    0    +    ++

Table 4.1. Rules for conversion of the original ratings resulting from the second part of the study, necessary in order to be able to express the quality of any one of the DSA images in a given pair in terms of the other.

Implicit comparison of the performance of the automatic and manual registration technique was then obtained by constructing a frequency table of the converted ratings from groups (i) and (ii). Explicit comparison was obtained by analyzing the ratings from group (iii). These comparisons were carried out separately for the results of each observer. A comparison based on the average frequencies was also carried out.

The statistical significance of the possible improvement of the automatic registration technique over manual pixel shifting was assessed by using a Chi-squared (χ²) test [6] applied to the frequency tables containing the ratings from groups (i) and (ii). Since one of the variables in these tables (viz., relative image quality) represents ordered categories (viz., "−−", "−", "0", "+", "++"), we did not use the ordinary χ² test, but the more powerful χ² test for linear trend [6], also known as the χ² test for slope [104]. For this test, we used uniform spacing of the categories. The null hypothesis for this test was that the automatic and manual registration technique would be equally effective in reducing motion artifacts. A probability of p < 0.05 for this hypothesis was chosen to indicate a statistically significant difference between the two techniques.
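As an illustration of the agreement analysis, the fragment below computes Cohen's weighted kappa for one pair of observers from a k × k confusion matrix of their ratings. The choice of linear disagreement weights |i − j| is an assumption made only for this sketch (quadratic weights are an equally common convention, and the thesis does not state which was used), and the Fleiss compound value and its confidence interval are not included.

```cpp
#include <vector>
#include <cstdlib>

// Cohen's weighted kappa for two observers rating the same items on k ordered
// categories. 'counts[i][j]' is the number of items rated i by observer 1 and j by
// observer 2. Linear disagreement weights |i - j| are assumed here.
double weightedKappa(const std::vector<std::vector<int>>& counts)
{
    const int k = static_cast<int>(counts.size());
    double total = 0.0;
    std::vector<double> row(k, 0.0), col(k, 0.0);
    for (int i = 0; i < k; ++i)
        for (int j = 0; j < k; ++j) {
            total  += counts[i][j];
            row[i] += counts[i][j];
            col[j] += counts[i][j];
        }
    double observed = 0.0, expected = 0.0;
    for (int i = 0; i < k; ++i)
        for (int j = 0; j < k; ++j) {
            double w = std::abs(i - j);                           // disagreement weight
            observed += w * counts[i][j] / total;                 // weighted observed disagreement
            expected += w * (row[i] / total) * (col[j] / total);  // weighted chance disagreement
        }
    return 1.0 - observed / expected;   // 1 = perfect agreement, 0 = chance agreement
}
```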

4.3 Results

In the first part of the study, the observers found that most of the 104 original cerebral DSA images could be improved to some extent by manual correction, since on average 88% of the mask shift parameters differed from zero. The maximum shift recorded in either direction was 8.0 pixels, while the average length of the shift vectors of all four observers was 1.2 pixels. From this it may be concluded that, although in some cases patient motion was quite severe, in most cases the motion artifacts were due to relatively small displacements only. The timing information stored along with the shift parameters revealed that, on average, manual correction required about 12 seconds per mask-contrast image pair. In contrast, the automatic registration algorithm required on average only about one second per pair.

The agreement between the observers in the second part, expressed in terms of compound κw ± 95%CI, was 0.65 ± 0.06. According to the Landis-Koch guidelines, this indicates substantial agreement. Therefore, we restrict ourselves to presenting averages. The average frequencies of the ratings from groups (i) and (ii), as described in Section 4.2.4, are presented in Table 4.2.

                                        Relative image quality
Comparison                              −−    −     0     +    ++
Manually corrected vs. original         0%    4%   25%   62%    9%
Automatically corrected vs. original    0%    0%   15%   69%   16%

Table 4.2. Frequencies of the ratings resulting from the comparison of corrected DSA images and corresponding original (uncorrected) DSA images. The results presented here are averages of the frequencies of the ratings resulting from the four observers in the second part of the evaluation. The two types of correction are manual correction, by means of pixel shifting, and automatic correction, by means of the technique described in the previous chapter.

                                        Relative image quality
Comparison                              −−    −     0     +    ++
Automatically vs. manually corrected    0%    5%   44%   48%    3%

Table 4.3. Frequencies of the ratings resulting from the explicit comparison of automatically corrected DSA images and corresponding manually corrected DSA images. The results presented here are averages of the frequencies of the ratings resulting from the four observers in the second part of the evaluation. Automatic corrections were obtained by means of the algorithm described in the previous chapter. Manual corrections were obtained by means of pixel shifting.

From the implicit comparison that can be based on the results presented in Table 4.2, it is clear that overall, the automatic registration technique resulted in better image quality than manual pixel shifting. In order to be able to apply the χ² test for trend, the frequencies in the columns "−−" and "−" had to be combined, since this test does not allow rows or columns to be entirely filled with zeroes. The χ² test for trend applied to the modified frequency table showed that the probability for the null hypothesis of equal effectiveness to be true is p < 0.05, from which it can be concluded that the automatic correction technique is statistically significantly better than manual pixel shifting in reducing motion artifacts. We note that the same conclusion was found when this test was applied to the results of the observers separately.

The average frequencies of the ratings from group (iii), representing the results of the explicit comparison of the quality of automatically and manually corrected images, are presented in Table 4.3. These results support the conclusion drawn from the implicit comparison. Examples of cases in which the automatic registration technique was superior compared to manual pixel shifting are given in Figs. 4.1 and 4.2. An example where automatic and manual registration performed equally well is shown in Fig. 4.3. Finally, an example of a case in which pixel shifting resulted in a better image than the automatic registration technique is provided in Fig. 4.4.

Figure 4.1. Example of a case in which the automatic registration technique was found to be superior compared to manual pixel shifting. Top left: original lateral cerebral DSA image of a patient suffering from a hypervascular tumor. Top right and bottom left: resulting DSA image after manual registration by means of pixel shifting, for two of the four observers. Notice that due to the rotational nature of the patient's movement, it was not possible to obtain an overall optimal correction of motion artifacts by means of this technique. From these two images it is clear that a reduction of artifacts in one part of the image may result in a deterioration of the artifacts in another part. Bottom right: DSA image resulting from application of the automatic registration technique.

Figure 4.2. Second example of a case in which the automatic registration technique was found to be superior to pixel shifting. Top left: original oblique cerebral DSA image of a patient suffering from multiple ischemic lesions (suspected vasculitis). Top right and bottom left: resulting DSA image after manual registration by means of pixel shifting, for two of the four observers. Similar to the previous examples, the patient's movement as projected in the imaging plane was more complex than uniform translation, and again it was not possible to obtain an overall optimal correction of motion artifacts by means of this technique. Bottom right: DSA image resulting from application of the automatic registration technique.

Figure 4.3. Example of a case in which the automatic registration technique was found to perform comparably to manual pixel shifting. Top left: original posterior/anterior cerebral DSA image of a patient suffering from a tumor. Top right and bottom left: resulting DSA image after manual registration by means of pixel shifting, for two of the four observers. Notice that, apparently, the patient's movement as projected in the imaging plane could be modeled accurately by uniform translation, thereby allowing for overall optimal correction of motion artifacts by means of this technique. Bottom right: DSA image resulting from application of the automatic registration technique. Although all corrected images were found to be better than the original image, the observers judged the differences between the automatically and manually corrected images to be negligible.

Figure 4.4. Example of a case in which manual pixel shifting was found to yield a somewhat better overall reduction of motion artifacts than the automatic registration technique. Top left: original lateral cerebral DSA image of a patient suffering from an occlusion of the right carotid artery and a stenosis in the left carotid artery. Top right and bottom left: resulting DSA image after manual registration by means of pixel shifting, for two of the four observers. As in the previous example (Fig. 4.3), the patient's movement as projected in the imaging plane could be modeled very well by uniform translation, thereby allowing for overall optimal correction of motion artifacts by means of this technique. Bottom right: DSA image resulting from application of the automatic registration technique. Although most of the artifacts were removed by this technique, there are some remaining artifacts in the top-left and bottom-right regions of the image.

4.4 Discussion

Of all DSA images included in this study, 37% had been manually corrected by means of pixel shifting before being printed on film and stored in the archive of our hospital. From the fact that, in the first part of the study, the observers found that no less than 88% of the images could be improved to some extent by this technique, we may conclude that in practice more images contain motion artifacts than are usually corrected. Manual registration of all images by means of pixel shifting is a labor intensive operation, which has to be carried out at the console. The results of our evaluation indicated that, on average, 12 seconds per DSA image are required to optimally apply this technique. The results also showed that, apart from being statistically significantly better than manual pixel shifting, the automatic registration technique [247,248,250] is considerably faster: on average, it requires only one second per DSA image. Moreover, the algorithm is fully automatic and hence does not require any effort from the radiologist.

We note that although the weighted kappa test indicated substantial agreement between the observers in the second part of the study, there was some spread in the individual κw values, as indicated by the 95%CI. This may be caused by the fact that in this second part, the observers were confronted with their own manual corrections, which were sometimes quite different for the different observers. Furthermore, although the χ² test for trend, applied to either the average results or the results of the individual observers, indicated a statistically significant difference (p < 0.05) in the effectiveness of the automatic registration technique and manual pixel shifting, the outcome of an ordinary χ² test would have been somewhat less persuasive: for two of the observers, this test would have given p < 0.1 for the null hypothesis to be true. However, the χ² test for trend is more appropriate in this case, since we are dealing with ordered categories and it is indeed to be expected that the departure of the observed frequencies from the frequencies that would have been obtained were the null hypothesis true, is due to a linear trend in proportions across the categories: the "higher" the category, the larger the ratio between the frequencies of the superior and inferior registration technique.

As clearly illustrated by the examples presented in Figs. 4.1–4.4, manual pixel shifting often results in improved image quality in and near the diagnostically relevant parts of an image, but may sometimes result in a deterioration of artifacts in the remainder of the image. This is a direct consequence of the fact that with this technique, patient motion as projected in the imaging plane is assumed to be a uniform translation. One may argue that this is not really a problem in practice as long as artifacts can be reduced in the diagnostically relevant parts of the image, and that it is therefore sufficient to use manual pixel shifting rather than a more sophisticated automatic registration technique. However, even if there were no difference in performance from the point of view of image quality, it would still be advantageous to use the automatic technique evaluated in this chapter, since it is considerably less time consuming than manual pixel shifting.

The fact that the automatic registration technique performed statistically significantly better than manual pixel shifting does not imply that in practice the former technique will always be better than the latter.
The average results from the explicit comparison of the two techniques (Table 4.3) indicated that in 5% of all cases, the corrected DSA image resulting from manual pixel shifting was found to be better than the corresponding automatically corrected DSA image. A representative example of such a case was shown in Fig. 4.4, from which it is clear that the automatic technique does not introduce new artifacts, but that it is sometimes unable to reduce some of the artifacts at the borders of the image. This may be caused by the lack of image content in those regions, which reduces the possibilities for any template matching procedure to find the correct local displacement vectors. In practice, however, this is not a problem as long as the automatic technique is made available as an optional tool and the radiologist can choose between automatic and manual registration.

As mentioned in the introduction (Section 4.1), many automatic techniques have been developed over the past two decades for the purpose of reducing motion artifacts in DSA images. In most cases, the evaluation of these techniques involved only one or at most a few clinical DSA images, or phantoms, and the quality of the resulting corrected images was assessed by the same persons that developed the algorithm. To the best of our knowledge, the only more elaborate and objective evaluation studies reported in this area are the ones by Takahashi et al. [363] and Hayashi et al. [140]. In the former study, three techniques for motion artifact reduction were evaluated: manual remasking, manual pixel shifting, and an automatic registration technique [363]. Three observers (of unknown expertise) assessed the resulting quality of a total of 205 DSA images of the head and neck, and concluded that remasking was most effective. They also found that, after having applied remasking, remaining artifacts were reduced equally well by manual pixel shifting and their automatic registration technique. It is unknown whether the images involved in this study were presented to the observers in a randomized and blinded fashion. Furthermore, no details as to the inter-observer agreement were given. In the study of Hayashi et al. [140], the authors carried out an explicit comparison of the performance of two techniques: manual pixel shifting and an automatic registration technique developed by some of the co-authors of that study. Five radiologists compared the quality of a total of 16 cerebral DSA image series after application of the two techniques. The image pairs were randomized and blinded. In 14 cases, the images resulting from the automatic registration technique were rated by at least three of the observers as having better quality. It is not clear what the ratings of the other radiologists were. In the other two cases, the techniques were found to perform comparably. In neither of the two studies discussed here were the findings supported by statistical analyses.

Due to the lack of detailed information provided by the authors of these papers, it is difficult to explicitly compare their findings to ours. We note, however, that the automatic registration techniques used by Takahashi et al. [363] and Hayashi et al. [140] are quite different from the one evaluated in our study. First, their algorithms are based on a regular grid of control points, while the algorithm evaluated in the present study uses an irregular grid, which has been shown to yield faster and more accurate registrations [244].
Hayashi et al. [140] reported that their algorithm requires about eight minutes of computation time. In contrast, the algorithm evaluated in our study [247,248,250] requires on average only about one second per DSA image, which certainly makes it a more suitable technique for use in clinical practice. Second, their algorithms make use of sub-optimal similarity measures in the template matching procedure, such as cross-correlation [363] and the sum of least differences [140], while the algorithm used here is based on the energy of the histogram of grey-value differences. Contrary to most other similarity measures, the energy measure is insensitive to mean grey-level offsets and inherent dissimilarities caused by contrasted vessels [244]. As a result, we found that in 95% of all cases, the algorithm performs either comparably to, better than, or even much better than manual pixel shifting.

Finally, we mention the fact that our study involved only images that were already considered clinically useful. Frequently it occurs that, during acquisition, the patient's movements are too severe to result in diagnostically useful DSA images, even when using pixel shifting afterwards, and in such cases the run is repeated. In some cases, the automatic registration technique might help avoid a second DSA run. On-line availability of the automatically corrected DSA images would offer the radiologist the possibility to check directly whether a new run must be acquired, thereby avoiding the need to go back to the console to check it manually by means of pixel shifting. We also note that in our study, we were only interested in overall image quality enhancement, without relation to specific diagnostic tasks, such as the grading of stenoses or the detection of small aneurysms. It may be that the automatic registration technique also implies an improvement in that respect compared to manual pixel shifting. Confirmation of these claims is the goal of future studies.

4.5 Conclusions

In this chapter, a clinical evaluation of the automatic registration technique described in the previous chapter was presented. The study involved 104 cerebral DSA images, which were corrected by both the automatic technique and manual pixel shifting. The quality of the DSA images resulting from the two techniques was assessed by four observers, who compared the images both mutually and to their corresponding original (uncorrected) images. The results from the latter comparison indicated a statistically significant difference (p < 0.05 for the null hypothesis of equality) between the two techniques. The results from the mutual comparisons indicated that, on average, the automatic registration technique performed either comparably to (44%), better than (48%), or sometimes even much better than (3%) manual pixel shifting. In the cases (5%) where manual pixel shifting resulted in somewhat better image quality compared to the automatic technique, the remaining artifacts were in the diagnostically non-relevant regions of the image. In addition, we found that the automatic technique implies a considerable reduction in post-processing time (on average, one second vs. 12 seconds per DSA image) compared to manual pixel shifting.

The opinion seems to have got abroad, that in a few years all the great physical constants will have been approximately estimated, and that the only occupation which will then be left to men of science will be to carry these measurements to another place of decimals.

— James Clerk Maxwell, Introductory Lecture on Experimental Physics (October 1871)

Chapter 5

Nonlinear Diffusion Filtering for Improved Vessel Visualization and Quantification in Three-Dimensional Rotational Angiography

Abstract — Three-dimensional rotational angiography (3DRA) is a new and promising technique for obtaining high-resolution isotropic 3D images of vascular structures. However, due to the relatively high noise level and the presence of other background structures in clinical 3DRA images, application of noise reduction techniques is inevitable. In this chapter, we analyze the effects of several linear and nonlinear noise reduction techniques on threshold-based visualization and quantification of vascular anomalies in 3DRA images. The results of in vitro experiments show that edge-enhancing anisotropic diffusion filtering is most suitable: the increase in the user-dependency of visualizations and quantifications is considerably less with this technique compared to linear filtering techniques, and it is better at reducing noise near edges than isotropic nonlinear diffusion. However, in view of the memory and computation-time requirements of this technique, the latter scheme may be considered a useful alternative.

5.1 Introduction

Three-dimensional rotational angiography (3DRA) is a relatively new technique for imaging blood vessels in the human body, which has the potential to overcome some of the limitations and drawbacks of conventional 2D projective X-ray angiography. With the latter type of imaging, projections from different angles are often required in order to substantiate the accuracy of diagnostic findings, such as the precise location, size, and morphology of arterial stenoses and aneurysms, a fact that has been known for over half a century [188,223,293]. This does not only result in prolonged examination times, and hence prolonged exposure to X-rays, but also requires multiple injections of contrast material, which altogether significantly increase the discomfort for the patient.

One of the early attempts to avoid multiple injections while retaining multiple projections was described by Campeau & Saltiel [46] in the case of angiocardiography. They proposed rapid manual rotation of the patient table during acquisition, from the lateral to the frontal position, and found that this technique was superior to single-plane angiography in the study of tetralogy of Fallot. In neuroradiology, the first efforts towards rotational imaging were made by Cornelis et al. [62]. For improved diagnosis of intracranial aneurysms, they proposed rotation of the X-ray tube and coupled image intensifier over 90 to 180 degrees in five to six seconds while acquiring about four images per second, following a single injection of contrast material. The technique was further investigated by Voigt et al. [368,397,398].

Recent evaluation studies in cerebral [154], carotid [25,78], and coronary [372] angiography have indicated that rotational imaging allows for improved detection and visualization of cerebral aneurysms as well as carotid and coronary stenoses compared to conventional digital (subtraction) angiography. The use of subtraction in combination with rotational angiography was first evaluated by Schumacher et al. [342], who acquired an additional rotational run prior to injection of contrast material so as to obtain a mask image for each of the contrast images. They found that the subtraction technique was of additional benefit in a substantial number of cases, although the possibility of patient motion in between the runs posed an additional problem. Similar conclusions were drawn by others [374].

Apart from the possibility to study vascular structures from different angles after a single acquisition, rotational angiography also allows for "mental 3D reconstruction" of these structures by means of stereoscopy or by displaying the individual images of a rotational run in rapid succession, as has been pointed out by several authors [342,365,368,372,374,397,398]. However, since the images in a rotational run are always acquired in a single plane in 3D, it is not possible to view the structures from any angle retrospectively. Therefore, accurate determination of optimal projection angles for treatment [386,416] or quantification [287] of e.g. aneurysms is limited with this technique and calls for real 3D reconstruction.

Early experiments with image-intensifier based rotational imaging and 3D reconstruction for angiography were reported by Ning et al. [269]. In a phantom study, they achieved an at least three-fold increase of through-plane resolution with their prototype system compared to conventional CT scanning [270].
Initial in vivo results with rotational subtraction angiography and 3D reconstruction were first described by Saint-Félix et al. [327,328], who used a modified CT scanner gantry. The use of standard C-arm imaging systems for 3DRA was investigated by several authors [88,185,186,341,373]. The problems associated with this type of system, such as mechanical nonidealities and distortions in the image intensifier due to the curved input screen and the deflection of electrons in the earth's magnetic field, have been studied thoroughly over the past years and have resulted in several correction techniques [87–89,186,265,324,340]. At present, the clinical use of C-arm based 3DRA imaging is increasing [10,179,258,373]. Its potential success lies in the fact that for the first time, it is now possible to use a single system for obtaining both high-resolution isotropic 3D volume reconstructions and real-time 2D fluoroscopy sequences.

Figure 5.1. Different visualizations of a 3DRA dataset of a 57-year old patient with a giant aneurysm at the splitting of the middle cerebral artery. Left column: maximum intensity projections. Middle-left column: simulated X-ray angiograms. Middle-right column: alpha-blending based volume renderings. Right column: surface renderings. The volume and surface renderings were obtained after uniform filtering of the original dataset. Top row: visualizations showing the largest diameter of the neck of the aneurysm. Bottom row: visualizations showing the smallest neck diameter. The projection angles used to generate these visualizations were computed by using the technique of Van der Weide et al. [386].

For the visualization of 3DRA datasets, several techniques are available and have been reported in the literature: maximum intensity projection (MIP) [10,88,141,327,328,373], simulated (or pseudo) X-ray projection [185,186], volume rendering [179,185,258,341,373], and surface rendering [10,141,186,373]. Examples of each are shown in Fig. 5.1, from which it is clear that volume and surface rendering allow for a much better perception of 3D morphology than do MIP and simulated X-ray projection. This is even stronger when the renderings are generated in an interactive fashion, which is possible with the use of modern (3D) texture-mapping hardware.

It is important to note that both volume and surface visualization algorithms require information concerning the grey-levels or image structures that must be considered fully transparent and should not appear in the renderings. In the case of volume rendering, this information is usually provided in the form of a mapping that linearly transforms grey-levels above a certain user-defined lower threshold into colors and opacities. Surface rendering, on the other hand, requires an explicit segmentation of a dataset into relevant and non-relevant structures. Since the morphology of the structures of interest (vessels and their anomalies) can be quite complex and may vary

Figure 5.2. Volume and surface renderings of the 3DRA dataset used in Figure 5.1, for different values of the grey-level threshold parameter. Prior to rendering, the dataset was normalized so as to make the average background intensity about 0.0 and the average intensity within the aneurysm about 1.0. Left and middle-right column: volume and surface renderings, respectively, of the original raw dataset with the threshold parameter set to 0.3 (top), 0.4 (middle), and 0.5 (bottom). Middle-left and right column: volume and surface renderings, respectively, of the original dataset after post-processing by uniform filtering (filter U5; see Section 5.2.1), using the same thresholds and other parameter settings.

topologically from case to case, it is by no means trivial to develop accurate automatic segmentation techniques for this purpose. Therefore, the segmentation problem is, as yet, solved by means of grey-level thresholding. This makes the appearance of both volume and surface renderings user dependent. In a recent study by Anxionnat et al. [10], the authors found that surface rendering is usually superior to MIP in revealing the spatial relationship between e.g. an aneurysm and its surrounding vascular structures, but that the user-controlled segmentation involved in the former visualization technique may result in an incorrect rendering of the vascular dimensions, making small vessels disappear (see Fig. 5.2) or introducing relationships that do not exist in MIPs of the same dataset. Therefore, they recommended the simultaneous use of MIP and surface rendering, since they yield complementary information.

It must also be pointed out that visualization of raw clinical 3DRA datasets usually does not yield smooth results, due to the relatively high noise level and the presence of other background structures resulting from inhomogeneous surrounding tissue. In order to improve the quality of volume and surface renderings, some form of noise reduction must be applied to the original data prior to visualization. In the 3DRA visualization software used at our department, this is currently implemented by simple uniform filtering. Others have mentioned the use of median filtering [341]. Although application of noise reduction techniques usually results in qualitatively better renderings, the effects of such techniques on the quantification of vessels and their anomalies based on those renderings have not yet been reported in the literature. Analysis of these effects is important, as particular techniques may influence the just-mentioned user-dependency of volume and surface renderings, and thus the reliability of quantitative measurements obtained from those renderings. Figure 5.2 illustrates this point in the case of uniform filtering.

In this chapter, we analyze the effects of several noise reduction techniques on the accuracy of quantification and the quality of visualization of vascular anomalies in 3DRA images. A brief description of the noise reduction techniques included in this study is given in Section 5.2, followed by a discussion of the vascular anomalies and the measurements involved in their quantification in Section 5.3. The materials and methods used in the in vitro experiments are described in Section 5.4. The results of these experiments are presented in Section 5.5, and discussed in Section 5.6. Concluding remarks are made in Section 5.7.
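To make the role of the grey-level threshold concrete, the following minimal sketch shows how a user-chosen threshold could drive an explicit surface extraction prior to rendering. It is only an illustration, assuming the scikit-image marching-cubes routine and a dataset normalized as in Figure 5.2; it is not the visualization software referred to above.

```python
# Minimal sketch: extract the iso-surface at the user-chosen threshold theta
# from a normalized 3DRA volume (background ~ 0.0, vessel interior ~ 1.0).
# The resulting mesh could then be handed to any surface renderer.
import numpy as np
from skimage import measure

def extract_vessel_surface(volume: np.ndarray, theta: float,
                           voxel_size=(0.3, 0.3, 0.3)):
    """Return vertices (in mm) and faces of the iso-surface at level theta."""
    verts, faces, _normals, _values = measure.marching_cubes(
        volume, level=theta, spacing=voxel_size)
    return verts, faces
```

Lowering theta makes structures appear thicker, while raising it makes small vessels vanish, which is precisely the user-dependency illustrated in Fig. 5.2.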

5.2 Noise Reduction Techniques

Noise reduction techniques can be divided into linear and (adaptive) nonlinear techniques. Concerning the former, we limited ourselves to uniform and Gaussian filtering. The nonlinear filtering techniques included in this study were regularized isotropic nonlinear diffusion and edge-enhancing anisotropic diffusion.

5.2.1 Uniform Filtering

The simplest and computationally cheapest approach to reduce noise in images is to average the grey-values of voxels in a cubic neighborhood around each voxel. This can be implemented by means of separable uniform filtering (UF), also known as neighborhood averaging [121] or box filtering [165]:

$$I(\mathbf{x}) = I_0(\mathbf{x}) * U_m(\mathbf{x}), \qquad \mathbf{x} = (x, y, z) \in X, \tag{5.1}$$

where I0 denotes the original 3D image, X ⊂ R³ is the image domain, and Um denotes the 3D normalized uniform filter given by

$$U_m(\mathbf{x}) = u_m(x)\, u_m(y)\, u_m(z), \tag{5.2}$$

with um : R → R defined as
$$u_m(\xi) \triangleq \begin{cases} m^{-1} & \text{if } |\xi| \leq \tfrac{1}{2} m, \\ 0 & \text{otherwise}. \end{cases} \tag{5.3}$$
In these equations, the parameter m ∈ N, m odd, determines the size of the uniform filter, i.e., the number of voxels in each dimension involved in the averaging. The noise reduction capability of UF is explained from the fact that Um is a low-pass filter; its Fourier transform can easily be derived to be

$$\tilde{U}_m(\mathbf{f}) = \operatorname{sinc}(m f_x)\, \operatorname{sinc}(m f_y)\, \operatorname{sinc}(m f_z), \tag{5.4}$$
where f = (fx, fy, fz) ∈ R³ denotes spatial frequency.
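The separable averaging of Eqs. (5.1)–(5.3) corresponds directly to a standard box filter. A minimal sketch is given below; it assumes SciPy as an illustration and is not the software used in the experiments.

```python
# Minimal sketch of separable 3D uniform (box) filtering, Eqs. (5.1)-(5.3):
# each voxel is replaced by the average over an m x m x m neighborhood.
import numpy as np
from scipy.ndimage import uniform_filter

def uniform_filter_3d(image: np.ndarray, m: int) -> np.ndarray:
    """Apply the normalized uniform filter U_m of Eq. (5.2); m must be odd."""
    assert m % 2 == 1, "the filter size m is assumed to be odd"
    return uniform_filter(image.astype(np.float64), size=m, mode='nearest')
```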

5.2.2 Gaussian Filtering

Another frequently used approach to image smoothing is Gaussian filtering (GF). Similar to UF, it can be implemented by separable convolution:

$$I(\mathbf{x}) = I_0(\mathbf{x}) * G_\sigma(\mathbf{x}), \qquad \mathbf{x} = (x, y, z) \in X, \tag{5.5}$$
where Gσ denotes the 3D Gaussian filter with standard deviation σ, given by

$$G_\sigma(\mathbf{x}) = g_\sigma(x)\, g_\sigma(y)\, g_\sigma(z), \tag{5.6}$$
with gσ : R → R defined as
$$g_\sigma(\xi) \triangleq \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left( \frac{-\xi^2}{2\sigma^2} \right). \tag{5.7}$$
The ubiquitousness of the Gaussian convolution kernel in digital image processing applications is explained by the fact that it possesses some important properties: application of this kernel does not create spurious details and the result is independent of the location and orientation of image structures [183]. These are essential requirements for the purpose of segmentation and analysis of image structures. For more details on these and other properties of the Gaussian kernel, we refer to several books on scale-space theory [106,217,354].

The noise reduction capability of GF can be explained in the frequency domain by the fact that Gσ is a low-pass filter; its Fourier transform is again a Gaussian, of the form
$$\tilde{G}_\sigma(\mathbf{f}) = \exp\!\left( -2\pi^2\sigma^2 \left( f_x^2 + f_y^2 + f_z^2 \right) \right). \tag{5.8}$$
In the spatial domain, the smoothing effect of GF follows from the observation that Eq. (5.5) constitutes the solution to the diffusion equation [183], also known as the heat conductance equation:

$$\partial_t I(\mathbf{x}; t) = \nabla \cdot \nabla I(\mathbf{x}; t), \tag{5.9a}$$

$$I(\mathbf{x}; 0) = I_0(\mathbf{x}), \tag{5.9b}$$
provided that σ = √(2t). The linear diffusion process governed by Eqs. (5.9a,b) is known to gradually destroy all image structure and eventually result in a homogeneous image with intensity equal to the mean of the original image.
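The equivalence between Eq. (5.5) and the diffusion equation (5.9) means that Gaussian smoothing up to a diffusion time t can be realized with a single convolution at σ = √(2t). A minimal sketch, again assuming SciPy purely as an illustration, is:

```python
# Minimal sketch of Eq. (5.5): linear diffusion up to time t implemented as a
# single separable Gaussian convolution with sigma = sqrt(2 t), cf. Eq. (5.9).
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_filter_3d(image: np.ndarray, t: float) -> np.ndarray:
    """Smooth a 3D image by linear diffusion up to evolution time t."""
    sigma = np.sqrt(2.0 * t)
    return gaussian_filter(image.astype(np.float64), sigma=sigma, mode='nearest')
```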

5.2.3 Regularized Isotropic Nonlinear Diffusion

In order to preserve edges while reducing noise, a smoothing algorithm should take into account local image contrast. For this purpose, several nonlinear diffusion schemes have been proposed [366,402,403]. In the present study, we included the scheme originally due to Perona & Malik [292] and improved by Catté et al. [49]. This so-called regularized isotropic nonlinear diffusion scheme, in the sequel referred to as regularized Perona-Malik diffusion (RPM), is obtained by modifying Eq. (5.9a) so as to include a gradient-dependent diffusivity:
$$\partial_t I(\mathbf{x}; t) = \nabla \cdot \left( D\!\left( \|\nabla I(\mathbf{x}; \tau)\|^2 \right) \nabla I(\mathbf{x}; t) \right), \tag{5.10}$$
where the gradient is computed at scale σn = √(2τ), τ > 0. This noise-scale parameter makes the filter insensitive to noise at scales smaller than σn, and also serves as a regularization parameter which guarantees well-posedness of the process [49,403].

In order to achieve intra-regional smoothing while avoiding smoothing across object boundaries, the diffusivity D must be chosen such that D → 1 when the gradient magnitude is small and D → 0 when it is large. In our implementation, we used the following diffusivity [403,405,406]:
$$D(\xi^2) = 1 - \exp\!\left( \frac{-C}{(\xi/\zeta)^8} \right), \tag{5.11}$$
where ζ > 0 acts as a "contrast" parameter: structures with ‖∇I(x; τ)‖ > ζ are regarded as edges, for which D → 0 and hence diffusion is inhibited, while structures with ‖∇I(x; τ)‖ < ζ are assumed to belong to the interior of a region, for which D → 1 and hence (5.10) approaches the linear diffusion equation (5.9a). The constant C must be chosen such that the flux function ξD(ξ²), the derivative of which determines whether Eq. (5.10) describes forward or backward diffusion, is increasing for ξ ∈ [0, ζ] and decreasing for ξ ∈ (ζ, ∞). This implies that C = 3.31488 [403,405].
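For concreteness, a minimal explicit (Euler-forward) sketch of the RPM scheme of Eqs. (5.10) and (5.11) is given below. It is an illustration only, with assumed default parameter values; the implementation actually used in this study is based on a more efficient additive-operator-splitting scheme (see Section 5.6).

```python
# Minimal explicit sketch of regularized Perona-Malik diffusion, Eqs. (5.10)-(5.11).
# Parameter names follow the text: zeta (contrast), sigma_n (noise scale), C = 3.31488.
import numpy as np
from scipy.ndimage import gaussian_filter

C = 3.31488

def diffusivity(grad_mag_sq, zeta):
    """Eq. (5.11): D -> 1 in flat regions, D -> 0 at edges."""
    s = np.sqrt(grad_mag_sq) / zeta
    return 1.0 - np.exp(-C / np.maximum(s, 1e-12) ** 8)

def rpm_diffusion(image, t_end, dt=1.0 / 8.0, zeta=0.05, sigma_n=0.5):
    """Diffuse a 3D image up to evolution time t_end with explicit Euler steps."""
    I = image.astype(np.float64).copy()
    for _ in range(int(round(t_end / dt))):
        # gradient at the regularization (noise) scale sigma_n, cf. Eq. (5.10)
        grads_s = np.gradient(gaussian_filter(I, sigma_n))
        D = diffusivity(sum(g * g for g in grads_s), zeta)
        # divergence of D * grad(I), approximated with central differences
        flux = [D * g for g in np.gradient(I)]
        I += dt * sum(np.gradient(f, axis=k) for k, f in enumerate(flux))
    return I
```

The default step-size dt = 1/8 satisfies the explicit stability bound dt < 1/(2N) = 1/6 for three-dimensional data discussed in Section 5.6.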

5.2.4 Edge-Enhancing Anisotropic Diffusion

The second nonlinear diffusion scheme included in this study is edge-enhancing anisotropic diffusion (EED), which does not only take into account the contrast of an edge, but also its orientation. This is achieved by replacing the scalar-valued diffusivity in Eq. (5.10) by a diffusion tensor:
$$\partial_t I(\mathbf{x}; t) = \nabla \cdot \left( \mathbf{D}\!\left( \nabla I(\mathbf{x}; \tau) \right) \nabla I(\mathbf{x}; t) \right), \tag{5.12}$$
where D is constructed from the system of orthonormal eigenvectors
$$\mathbf{v}_1 \parallel \nabla I(\mathbf{x}; \tau), \tag{5.13a}$$
$$\mathbf{v}_2 \perp \nabla I(\mathbf{x}; \tau), \tag{5.13b}$$
$$\mathbf{v}_3 \perp \nabla I(\mathbf{x}; \tau) \quad \text{and} \quad \mathbf{v}_3 \perp \mathbf{v}_2, \tag{5.13c}$$
and corresponding eigenvalues
$$\lambda_1 = D\!\left( \|\nabla I(\mathbf{x}; \tau)\|^2 \right), \tag{5.14a}$$
$$\lambda_2 = D(0) = 1, \tag{5.14b}$$
$$\lambda_3 = D(0) = 1, \tag{5.14c}$$
with D as given in Eq. (5.11). This is equivalent to saying $\mathbf{D} = \mathbf{D}\!\left( \nabla I(\mathbf{x}; \tau)\, \nabla I(\mathbf{x}; \tau)^{\mathrm{T}} \right)$, where the argument of D is known as the structure tensor [18,271,307,405]. With this choice of D, smoothing along edges is preferred over smoothing across them.
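Since λ2 = λ3 = 1 and the eigenvectors are orthonormal, the tensor of Eqs. (5.13) and (5.14) can also be written as D = Id + (λ1 − 1) n nᵀ with n the unit gradient direction, so the two eigenvectors orthogonal to the gradient never need to be constructed explicitly. The sketch below illustrates this per-voxel construction only; it is not the implementation evaluated in the experiments, and the subsequent diffusion update would then proceed as in Eq. (5.12).

```python
# Minimal sketch of the per-voxel EED diffusion tensor of Eqs. (5.12)-(5.14),
# using the identity D = Id + (lambda_1 - 1) * n n^T with n the unit gradient.
import numpy as np
from scipy.ndimage import gaussian_filter

C = 3.31488

def diffusivity(grad_mag_sq, zeta):
    """Eq. (5.11): D -> 1 in flat regions, D -> 0 at edges."""
    return 1.0 - np.exp(-C / np.maximum(np.sqrt(grad_mag_sq) / zeta, 1e-12) ** 8)

def eed_tensor(image, zeta=0.05, sigma_n=0.5, eps=1e-12):
    """Return the 3x3 diffusion tensor at every voxel, shape (3, 3) + image.shape."""
    g = np.stack(np.gradient(gaussian_filter(image.astype(np.float64), sigma_n)))
    mag = np.sqrt(np.sum(g * g, axis=0))
    n = g / np.maximum(mag, eps)          # unit gradient direction (v_1)
    lam1 = diffusivity(mag ** 2, zeta)    # Eq. (5.14a)
    D = np.zeros((3, 3) + image.shape)
    for i in range(3):
        for j in range(3):
            D[i, j] = float(i == j) + (lam1 - 1.0) * n[i] * n[j]
    return D
```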

5.3 Quantification of Vascular Anomalies

Three-dimensional rotational angiography is currently used primarily for visualization and subsequent quantification of carotid stenosis and intracranial aneurysms [10,141, 258,373]. In this section, we briefly discuss the measures involved in the quantification of these particular vascular anomalies.

5.3.1 Quantification of Carotid Stenosis

For the quantification of the degree of stenosis of the internal carotid artery (ICA), many measures have been proposed and applied in the past [113]. Currently, the most frequently used measures are the ones used in the North American Symptomatic Carotid Endarterectomy Trial (NASCET) [272,273] and the European Carotid Surgery Trial (ECST) [84–86], and the common carotid measure (CC) [415]. These measures are defined as follows:
$$D_{\mathrm{NASCET}} = \left( 1 - \frac{d_{\mathrm{S}}}{d_{\mathrm{ICA}}} \right), \tag{5.15}$$
$$D_{\mathrm{ECST}} = \left( 1 - \frac{d_{\mathrm{S}}}{d_{\mathrm{O}}} \right), \tag{5.16}$$
$$D_{\mathrm{CC}} = \left( 1 - \frac{d_{\mathrm{S}}}{d_{\mathrm{CCA}}} \right), \tag{5.17}$$

with the diameters dS, dICA, dO, and dCCA as indicated in Fig. 5.3. All three measures involve measuring the luminal diameter at the point of maximum stenosis (dS), but the denominators used to compute the degree of stenosis differ. The NASCET measure involves the diameter (dICA) of a visible portion of disease-free ICA distal to the stenosis, whereas the ECST measure uses the estimated normal luminal diameter (dO) at the site of the lesion, based on a visual impression of where the normal arterial wall was prior to the development of stenosis. The CC measure involves the diameter

(dCCA) of the visible disease-free distal common carotid artery (CCA). It has been shown that, given fixed percentage ranges to categorize stenosis severity, the differences between results based on the NASCET and ECST measures are considerable and of major clinical importance [323]. Results based on the ECST and

Figure 5.3. Diameters involved in the different measures for quantification of the degree of internal carotid stenosis (left), and the size and shape of intracranial saccular aneurysms (right). The aneurysm depicted here is located at the tip of the basilar artery (BA). See Sections 5.3.1 and 5.3.2, respectively, for details.

CC measures are comparable though, which is explained from the fact that the estimated normal luminal diameter at the site of the lesion is usually approximately equal to the luminal diameter of the CCA [323]. When using the NASCET measure, stenosis may be classified as mild (0%–29%), moderate (30%–69%), or severe (70%–99%). (Corresponding percentage ranges for the ECST and CC measures can be found by making use of the approximately linear or parabolic relationships which have been shown to exist between experimental results of the three measures [79,323].) It has been demonstrated that surgery is beneficial in symptomatic patients with severe stenosis [272,282], while the immediate risks of surgery outweigh any potential long-term benefit in patients with mild stenosis. However, there are no definitive conclusions regarding the treatment of patients with moderate stenosis.
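As a small worked example of Eqs. (5.15) and (5.17), the snippet below evaluates the two measures that are actually determined in the experiments of this chapter, using the CAVP ground-truth diameters listed in Table 5.1 (Section 5.4); the ECST measure is omitted because dO is not available for the phantom. This is an illustration only, not part of the evaluation software.

```python
# Worked example of the stenosis measures of Eqs. (5.15) and (5.17),
# using the CAVP diameters of Table 5.1 (in mm).
def degree_of_stenosis(d_s: float, d_ref: float) -> float:
    """Generic (1 - dS / d_ref) measure, expressed as a percentage."""
    return 100.0 * (1.0 - d_s / d_ref)

d_nascet = degree_of_stenosis(1.68, 5.60)   # dS over dICA -> 70.0 %
d_cc     = degree_of_stenosis(1.68, 8.00)   # dS over dCCA -> 79.0 %
```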

5.3.2 Quantification of Intracranial Aneurysms

For the quantification of intracranial aneurysms, several measures are important. In an attempt to assess the risk of rupture of an aneurysm, early studies have focussed solely on the dome diameter (dD, see Fig. 5.3). It has been stated that unruptured saccular aneurysms less than 10mm in diameter have a very low probability of subsequent rupture [172,414]. Later studies have indicated that smaller aneurysms are also associated with a risk of rupture [163,173,256,322,333,376,418,424]. Some authors have recommended treatment for aneurysms larger than approximately 5mm in diameter [256], while others were unable to find a critical size for unruptured aneurysms below which there is a benign prognosis [173,322]. Although increased size has been found to relate significantly to risk of rupture [163,173,424], the critical size (in terms of dome diameter) for rupture is still controversial.

Knowledge of the diameter of the aneurysmal neck (dN) is important in selecting an appropriate clip in the case of surgical intervention [93]. Neck size has also been shown to be an important factor in predicting successful obliteration of the aneurysmal lumen in the case of endovascular treatment [92,157]. In the literature, the neck of intracranial saccular aneurysms has been classified into small (≤ 4mm) and large (> 4mm) [92,287,396]. It has been shown that the probability of achieving complete occlusion is considerably larger for small-necked aneurysms, which is explained from the fact that the smaller the neck, the higher the probability that the mesh of coils bridges across the neck area [92,129,147,396].

Other studies have indicated the possible importance of ratios. For example, the ratio between the neck diameter (dN) and the dome diameter (dD) of the aneurysm may be used as a guideline in deciding between surgical or endovascular treatment [15,287]. It has also been reported that the outcome of surgery for prolate spheroidal aneurysms

(having a small value for the ratio between dome diameter (dD) and dome height (dH)) is generally worse than for more spherical lesions [76,287]. A recent study on the effects of size and shape on the hemodynamics of saccular aneurysms has revealed that the ratio between the height (or depth) of the aneurysm (dH) and the neck diameter (dN) is an important parameter in determining the dynamics of the flow [377]. It was found that aneurysms with depth/neck ratios of more than 1.6 require special care, regardless of actual sizes, because the associated localized low-flow conditions are suspected to induce degeneration of the chemical structure of the aneurysmal wall, leading to increased risk for rupture [377].

5.4 In Vitro Experiments

In order to investigate the capabilities of the filtering techniques described in Section 5.2 to reduce noise and result in improved visualization and quantification of vascular anomalies in 3DRA images, in vitro experiments were carried out, involving phantoms for which ground truth was available. In this section we briefly describe the phantoms, the image acquisition, and the method of evaluation.

5.4.1 Phantoms and Image Acquisition

For the experiments concerning the quantification of the degree of carotid stenosis, we used a carotid anthropomorphic vascular phantom (CAVP) with an asymmetrical stenosis in the ICA. The experiments concerning the quantification of intracranial aneurysms were carried out on an intracranial anthropomorphic vascular phantom (IAVP), which contains a berry aneurysm located at the tip of the basilar artery (BA). Both phantoms (R. G. Shelley Ltd., North York, Ontario, Canada) represent average dimensions of the corresponding vascular structures in the human body [90,353]. The relevant diameters in the phantoms are listed in Tables 5.1 and 5.2, respectively.

Three-dimensional images of each of the phantoms were obtained as follows. First, the phantom was filled with contrast material (Ultravist-300 (Schering, Weesp, the Netherlands), diluted to 50% with sodium chloride 0.9% (Fresenius, 's-Hertogenbosch, the Netherlands)). Next, the rotational angiography facility of an Integris V3000 C-arm imaging system (Philips Medical Systems, Best, the Netherlands) was used to acquire a sequence of 100 X-ray angiography images (see Fig. 5.4 for examples) at

Diameter          Description
dS   = 1.68mm     Luminal diameter at the point of maximum stenosis
dICA = 5.60mm     Luminal diameter of the internal carotid artery
dCCA = 8.00mm     Luminal diameter of the common carotid artery

Table 5.1. Relevant diameters in the CAVP. These diameters are equal to the ones mentioned by Smith et al. [353].

Diameter          Description
dN = 2.6mm        Luminal diameter of the neck of the aneurysm
dD = 12.9mm       Luminal diameter of the dome of the aneurysm

Table 5.2. Relevant diameters in the IAVP. We note that these diameters were obtained from the manufacturer and may differ from the ones originally described by Fahrig et al. [90].

different views by automatic rotation of the C-arm over 180 degrees in about eight seconds. All projection images were acquired with a 20cm image intensifier, having a matrix size of 512 × 512 pixels and a grey-level resolution of 10 bits per pixel. The X-ray settings were 60kV tube voltage and 15ms exposure per image. Finally, a filtered back-projection algorithm [125] (a modification of Feldkamp's cone-beam algorithm [91]) was applied to generate 3D reconstructions at two different resolutions: 128 × 128 × 128 voxels of 0.6 × 0.6 × 0.6mm3 (hereafter referred to as the low-resolution reconstruction), and 256 × 256 × 256 voxels of 0.3 × 0.3 × 0.3mm3 (referred to as the high-resolution reconstruction). In both cases, the grey-level resolution was 16 bits per voxel.

5.4.2 Method of Evaluation

We first investigated the capabilities of the filtering techniques to reduce background noise while retaining vessel contrast as much as possible. In order to quantify this, we used the contrast-to-noise ratio (CNR), which is defined as the squared difference between the mean grey-value within a vessel segment of interest and the mean grey-value in a neighboring background region, divided by the variance of the grey-values in that background region [1,73]:

$$\mathrm{CNR} = \frac{\left( \langle I \rangle_V - \langle I \rangle_B \right)^2}{\sigma_B^2}, \tag{5.18}$$
where V and B denote vessel and background regions, respectively.

Figure 5.4. Sample X-ray projection images taken from the rotational angiography runs of the CAVP (left) and the IAVP (right). These images are meant to give an impression of the morphology and complexity of the modeled vasculature.

Since a given filtering technique can be expected to behave similarly in all parts of the background in the phantom images, we selected only a single background region in each of the images. However, the effects of any technique on the local contrast may be dependent on the size and shape of the vessel segment of interest. Therefore, the contrast was measured separately for the CCA, the ICA, and the point of maximum stenosis in the images of the CAVP, and for the dome and neck in the images of the IAVP.

The CNR was measured as a function of "evolution time". This variable, t, is explicitly present in the RPM and EED schemes (see Eqs. (5.10) and (5.12), respectively) and, together with the temporal step-size ∆t, determines the number of iterations of the discretized version of the differential equation involved. In the GF scheme (when implemented by Gaussian convolution), this variable is related to the standard deviation of the Gaussian kernel by t = σ²/2, as explained in Section 5.2.2. In order to obtain an "evolution time" for the UF scheme, we used that same expression, with σ the standard deviation of the kernel defined in Eq. (5.3). This implies that t = m²/24, where m is a discrete variable for which we took values of 3, 5, 7, 9, and 11. In order to allow for a direct comparison of the results of the techniques, the CNR measurements for the GF, RPM, and EED schemes were carried out at the corresponding evolution times t = 0.375, 1.042, 2.042, 3.375, and 5.042. The measurements were also performed in the original 3DRA images (corresponding to t = 0.0).

Next, the effects of the different noise reduction techniques on the quantification of the vascular anomalies discussed in Section 5.3 were investigated. Concerning the quantification of the degree of internal carotid stenosis, the experiments were limited to determining DNASCET and DCC, which implied measuring dS, dICA, and dCCA (see Eqs. (5.15) and (5.17), and Fig. 5.3). According to the specifications of the CAVP

(Table 5.1), these measurements should yield DNASCET = 70% and DCC = 79%. The ECST measure was not determined, since it requires the normal luminal diameter at the point of maximum stenosis, dO, which cannot be measured in the phantom images used in this study. As for the quantification of intracranial aneurysms, the experiments

were limited to measuring dN and dD (see Fig. 5.3 and Table 5.2). The dome height, dH, was not determined, since it would require user interaction to indicate the transition between the dome and the neck of the aneurysm.

The vessel diameters were measured as a function of both evolution time, t, and the user-controlled threshold parameter, θ. Concerning the former, we used the same values as in the CNR measurements. As explained in the introduction (Section 5.1), the threshold parameter is currently used in practice to separate relevant (vascular) structures from non-relevant (noise and other background) structures in the volume or surface renderings, on the basis of which quantification takes place. In order to be able to use acquisition-independent values for this parameter, the phantom images were "normalized" in such a way that the average background intensity was 0.0, and the average intensity within the vessels of interest 1.0. The measurements were carried out for thresholds ranging from 0.1 to 0.9, with a step size of 0.02. Together with the ground-truth values, the results of these experiments allowed for the assessment of both accuracy and robustness to threshold selection of quantitative measurements, and their dependency on the filter strength.

The actual determination of luminal diameters was done as follows. For each of the vessel segments involved, a perpendicular cut-plane was determined interactively. In this plane, grey-level profiles passing through the center of the vessel in question were analyzed. Given a profile, the luminal diameter was defined as the distance between the points on either side of the center of the vessel along that profile at which the grey-level passed through the user-defined threshold level θ. The location of these points was determined with a precision of 1/100th of a voxel by using trilinear interpolation. In order to increase the robustness of the measurements, we used 10 profiles, equally divided over 360 degrees within the cut-plane, and the average of the resulting diameters was taken as the final diameter. This was done for all segments, except for the neck of the aneurysm, which was the only segment in the phantoms that did not have a circular cross section. Therefore, we used only a single profile for the neck diameter measurements. Since in our IAVP only the (smallest) neck diameter in the anterior/posterior direction was specified (dN in Table 5.2), the single profile was taken in that direction.

Finally, we looked at the visual (qualitative) effects of the different noise reduction techniques. These concerned the apparent (not measured) dimensions of the vascular anomalies in 3D visualizations of the filtered 3DRA datasets and their dependency on the user-controlled threshold parameter, as well as the apparent smoothness of the vascular structures in these visualizations. These effects may be important when 3D visualizations are used for navigational purposes, such as in (future) endovascular interventional applications. In order to give an impression of these effects, both exo- and endovascular surface renderings were generated. All surface renderings were obtained by using Schlick's modification [334] of the Phong light model [295], which separates reflected light into an ambient component (factor ka), a diffuse component (factor kd), and a specular component (factor ks). In all renderings, the following settings were used for these parameters: ka = 0.2, kd = 0.4, and ks = 0.4.
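The two kinds of measurements described above can be summarized in a short sketch: the CNR of Eq. (5.18), expressed in dB as plotted in Figs. 5.5 and 5.6, and a threshold-crossing diameter estimate along a single grey-level profile through the vessel center. The snippet below is an illustration only; it uses 1D linear interpolation between profile samples for the sub-voxel crossing, in the spirit of, but not identical to, the trilinear interpolation used in the actual experiments.

```python
# Illustrative sketch of the CNR measure (Eq. (5.18), in dB) and of a
# threshold-crossing diameter estimate along a grey-level profile.
import numpy as np

def cnr_db(vessel: np.ndarray, background: np.ndarray) -> float:
    """Squared mean difference over background variance, expressed in dB."""
    cnr = (vessel.mean() - background.mean()) ** 2 / background.var()
    return 10.0 * np.log10(cnr)

def profile_diameter(profile: np.ndarray, theta: float, spacing: float) -> float:
    """Distance (in mm) between the two theta-crossings on either side of the
    center of a 1D grey-level profile centered on the vessel."""
    center = len(profile) // 2

    def crossing(indices):
        prev = indices[0]
        for i in indices[1:]:
            if (profile[i] - theta) * (profile[prev] - theta) <= 0.0:
                # linear interpolation between samples prev and i
                denom = profile[i] - profile[prev]
                frac = 0.0 if denom == 0.0 else (theta - profile[prev]) / denom
                return prev + frac * (i - prev)
            prev = i
        raise ValueError("profile does not cross the threshold")

    left = crossing(list(range(center, -1, -1)))          # towards the left edge
    right = crossing(list(range(center, len(profile))))   # towards the right edge
    return (right - left) * spacing
```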
We note that, apart from the time parameter, the RPM and EED schemes have two additional parameters: the noise scale σn and the contrast parameter ζ. The former causes the gradient-magnitude computations to be relatively insensitive to

variations at scales smaller than σn. Since the aim was to preserve the entire vasculature as much as possible, and particular (segments of) vessels were quite small (e.g. the point of maximum stenosis!), we chose to use a small value for this parameter, viz., σn = 0.5. The contrast parameter ζ acts as a threshold against which local gradient magnitudes are compared in deciding between destruction or preservation of the underlying image structure. Using the same arguments, we concluded that the value of this parameter should be chosen as small as possible. After initial experimentation with RPM and EED applied to the normalized phantom images, we found that ζ = 0.05 yields satisfactory results for both schemes; much larger values resulted in additional blurring of the vessel walls, while too much noise was preserved with much smaller values. The just-mentioned values for the noise and contrast parameters were kept fixed in all experiments.

5.5 Results

The results of the CNR measurements carried out in the 3DRA images of the CAVP and IAVP are presented in Figs. 5.5 and 5.6, respectively. From the plots it follows that, for the range of evolution times considered in these experiments, the four schemes UF, GF, RPM, and EED reduced noise equally well in vessel segments with a large luminal diameter, where "large" has to be taken relative to the voxel size of the image. This applies to the CCA and the dome of the aneurysm in both the high- and low-resolution reconstructions of the CAVP and IAVP, respectively, and the ICA in the high-resolution reconstruction of the CAVP. For the segments with smaller diameters, viz., the point of maximum stenosis and the neck of the aneurysm in both the high- and low-resolution reconstructions of, respectively, the CAVP and IAVP, as well as the ICA in the low-resolution reconstruction of the CAVP, the nonlinear filtering techniques (RPM and EED) outperformed the linear techniques (UF and GF) for larger evolution times.

The results of the experiments concerning the effects of the different noise reduction techniques on the quantification of the carotid stenosis and of the intracranial aneurysm are presented in Figs. 5.7 and 5.8, and Figs. 5.9 and 5.10, respectively. The plots show that for the linear techniques (UF and GF), the dependency of the measurements on the user-controlled threshold parameter (θ) increased dramatically (in both the high- and low-resolution reconstructions of the CAVP and IAVP) as the filtering was made stronger (larger t). The RPM scheme, on the other hand, had a negligible influence on this dependency, irrespective of resolution or evolution time. The effects of the EED scheme on the user-dependency were found to be negligible only in the high-resolution reconstructions. Concerning the low-resolution reconstructions, the effects of EED were most noticeable in the quantification of the degree of stenosis.

Finally, examples of exo- and endovascular surface renderings generated from the high-resolution 3DRA images of the CAVP and IAVP after application of the different noise reduction techniques are presented in Figs. 5.11 and 5.12, and Figs. 5.13 and 5.14, respectively. The renderings show close-up 3D visualizations of the vascular anomalies and give a visual impression of the effects of the techniques on the smoothness of the vessel walls and the changes in the apparent dimensions of the


Figure 5.5. Contrast-to-noise ratio (CNR) as a function of evolution time (t) for the four noise reduction techniques described in Section 5.2, measured in the stenosis (top row), ICA (middle row), and CCA (bottom row) in the low- (left column) and high-resolution (right column) 3DRA reconstruction of the CAVP.

anomalies when varying the user-controlled threshold parameter. The renderings support the findings of the quantification experiments: the linear techniques (UF and GF) increased the user-dependency of the (measured or observed) dimensions of the anomalies. In contrast, the negative effects of the nonlinear techniques (RPM and EED) in the high-resolution 3DRA reconstructions were negligible. Notice, however,


Figure 5.6. Contrast-to-noise ratio (CNR) as a function of evolution time (t) for the four noise reduction techniques described in Section 5.2, measured in the neck (top row) and dome (bottom row) of the aneurysm in the low- (left column) and high-resolution (right column) 3DRA reconstruction of the IAVP.

that the smoothness of the vessel walls was considerably improved by EED, while most of the noise in these edge regions was retained by RPM.

5.6 Discussion

Techniques for the reduction of noise in digital images have been developed and reported since the 1970s. Concerning the preservation of edges, early evaluation studies [53,236,422] already indicated the superiority of nonlinear techniques such as median filtering or adaptive K-nearest neighbor averaging over linear techniques. However, these nonlinear techniques may easily result in a loss of resolution due to their tendency to suppress fine details, as has been pointed out in the field of medical imaging by e.g. Gerig et al. [117]. Developments in the past decade have resulted in new approaches to noise reduction [49,271,292,403–406], based on nonlinear diffusion filtering. These techniques were explicitly designed to preserve edges and fine details, and to overcome the major drawbacks of conventional filtering techniques, such as the inevitable trade-off between localization accuracy and detectability, which occurs


Figure 5.7. For explanation, see Page 107.


Figure 5.8. For explanation, see Page 107.


Figure 5.9. For explanation, see Page 107.


Figure 5.10. For explanation, see Page 107.


Figure 5.11. Exovascular surface renderings illustrating the effects of the different noise reduction techniques on the smoothness of the vessel walls and the apparent degree of stenosis when varying the user-controlled threshold parameter. The renderings show a close-up of the stenosis and its related vessels (ICA, ECA, and CCA; see the left diagram in Fig. 5.3), and were generated from the high-resolution 3DRA image of the CAVP after application of, respectively, UF (left column), GF (middle-left column), RPM (middle-right column), and EED (right column), at evolution time t = 2.042. The thresholds used were, respectively, θ = 0.2 (top row), θ = 0.3 (middle row), and θ = 0.4 (bottom row).


Figure 5.12. Exovascular surface renderings illustrating the effects of the different noise reduction techniques on the smoothness of the vessel walls and the apparent size of especially the neck of the aneurysm, when varying the user-controlled threshold parameter. The renderings show a close-up of the neck and the dome of the aneurysm and its related vessels (the BA and both PCAs; see the right diagram in Fig. 5.3), and were generated from the high-resolution 3DRA image of the IAVP after application of, respectively, UF (left column), GF (middle-left column), RPM (middle-right column), and EED (right column), at evolution time t = 2.042. The thresholds used were, respectively, θ = 0.3 (top row), θ = 0.4 (middle row), and θ = 0.5 (bottom row).


Figure 5.13. Endovascular surface renderings illustrating the effects of the different noise reduction techniques on the smoothness of the vessel walls and the apparent degree of stenosis when varying the user-controlled threshold parameter. The renderings show the ECA (left passage) and the stenosis in the ICA (right passage), viewed from within the CCA, and were generated from the high-resolution 3DRA image of the CAVP after application of, respectively, UF (left column), GF (middle-left column), RPM (middle-right column), and EED (right column), at evolution time t = 2.042. The thresholds used were, respectively, θ = 0.2 (top row), θ = 0.3 (middle row), and θ = 0.4 (bottom row).


Figure 5.14. Endovascular surface renderings illustrating the effects of the different noise reduction techniques on the smoothness of the vessel walls and the apparent size of the neck of the aneurysm, when varying the user-controlled threshold parameter. The renderings show the neck and the BA (straight-through passage) and PCAs (left and right passages) behind it, viewed from within the dome of the aneurysm, and were generated from the high-resolution 3DRA image of the IAVP after application of, respectively, UF (left column), GF (middle-left column), RPM (middle-right column), and EED (right column), at evolution time t = 2.042. The thresholds used were, respectively, θ = 0.3 (top row), θ = 0.4 (middle row), and θ = 0.5 (bottom row).

Figure 5.7 (Page 99). The degree of internal carotid stenosis (DNASCET) as a function of the user-controlled threshold parameter (θ) and evolution time (t) for the four noise reduction techniques: UF (first row), GF (second row), RPM (third row), and EED (last row), as measured in the low- (left column) and high-resolution (right column) 3DRA reconstruction of the CAVP. The horizontal line at DNASCET = 70% indicates the true value.

Figure 5.8 (Page 100). The degree of internal carotid stenosis (DCC) as a function of the user-controlled threshold parameter (θ) and evolution time (t) for the four noise reduction techniques: UF (first row), GF (second row), RPM (third row), and EED (last row), as measured in the low- (left column) and high-resolution (right column) 3DRA reconstruction of the CAVP. The horizontal line at DCC = 79% indicates the true value.

Figure 5.9 (Page 101). The diameter of the neck of the aneurysm (dN) as a function of the user-controlled threshold parameter (θ) and evolution time (t) for the four noise reduction techniques: UF (first row), GF (second row), RPM (third row), and EED (last row), as measured in the low- (left column) and high-resolution (right column) 3DRA reconstruction of the IAVP. The horizontal line at dN = 2.6mm indicates the true value.

Figure 5.10 (Page 102). The diameter of the dome of the aneurysm (dD) as a function of the user-controlled threshold parameter (θ) and evolution time (t) for the four noise reduction techniques: UF (first row), GF (second row), RPM (third row), and EED (last row), as measured in the low- (left column) and high-resolution (right column) 3DRA reconstruction of the IAVP. The horizontal line at dD = 12.9mm indicates the true value.

e.g. in Canny's approach [47] to edge detection based on linear operators, or the difficulty of scale selection or multi-scale integration, which occurs in recently reported approaches to vessel-enhancement filtering [50,71,72,116,224,277,300,329]. Several nonlinear diffusion schemes have already been applied successfully in biological and medical imaging [16,115,117,222,290,350,351,355]. Evaluations of such techniques for the present application have not been reported previously, however.

The experimental results presented in the previous section show that the four noise reduction techniques considered in this study (UF, GF, RPM, and EED) were equally capable of increasing the CNR in vessel segments with a relatively large luminal diameter. This can be explained from the fact that in the experiments, the vessel regions V (see Section 5.4.2) were taken rather small (typically a few voxels in all three dimensions) and close to the center of the lumen. As a consequence, for the range of evolution times considered, linear filtering (UF or GF) did not result in a blurring of the vessel walls to the extent that it reduced the mean grey-level within regions V in segments with a luminal diameter larger than about 10 voxels. Furthermore, the nonlinear techniques (RPM and EED) approached GF in these non-edge regions: Eqs. (5.10) and (5.12) both converge to (5.9a) for ‖∇I(x; τ)‖ ≪ ζ.

In the vessel segments with smaller diameters, the contrast-reducing effects of linear filtering were noticeable at much earlier evolution times. This explains the lagging CNR(t) curves of the linear techniques compared to those of the nonlinear techniques in these cases (see again the plots in Figs. 5.5 and 5.6), where the time of parting is determined not only by the local luminal diameter, but also by the morphology of the surrounding vasculature. The CNR measurement results also show that, of the nonlinear techniques, RPM was superior to EED regarding the preservation of local contrast in vessel segments with very small diameters (in these experiments only the point of maximum stenosis in the low-resolution reconstruction of the CAVP, where the local luminal diameter was less than three voxels). This is due to the fact that near edges, blurring is completely inhibited with RPM, while the anisotropic behavior of EED still allows for some blurring in the plane orthogonal to the local gradient.

Whereas the CNR measurements concerned the behavior of the noise reduction techniques in the background and the interior of vessels, the diameter measurements were intended to study their performance at the transitions from background to vessel interior. The plots in Figs. 5.7–5.10 reveal that the differences between UF and GF were negligible in that respect: the increase in the dependency of the measurements on the user-controlled threshold θ was comparable with the two techniques. However, as expected, this increase was considerably less in the high-resolution reconstructions compared to the low-resolution reconstructions. For example, in the low-resolution reconstruction of the CAVP resulting from UF at t = 1.042, changing the threshold from θ = 0.2 to θ = 0.4 implied an increase in DNASCET from 60% to well over 90%. In the high-resolution reconstruction, on the other hand, the increase was only from about 66% to about 77%. Since DNASCET = 70% is usually considered an important threshold in deciding between intervention or no intervention [113,272,282,323], we may conclude that UF and GF put high demands on the resolution at which user-controlled measurements are to be carried out. In contrast, RPM did not increase the user-dependency of the measurements, and the plots show that this dependency was somewhat less in the high-resolution reconstructions. The small amount of anisotropic blurring allowed by the EED scheme near edges did not have appreciable effects on the user-dependency of the measurements in the high-resolution reconstructions, in which all diameters were larger than about five voxels. The relatively large effects on this dependency in the low-resolution reconstruction of the CAVP can be ascribed primarily to the blurring effects at the point of maximum stenosis, where the local diameter was considerably less than five voxels.

Notice that in these quantification experiments, we measured only diameters. This is justified by the fact that, except for the neck of the aneurysm, all vessel segments in the 3DRA phantom images were known to have circular cross sections. Moreover, determining diameters fits in with the currently used measures for quantification of vascular anomalies (see Section 5.3). The reason that these diameter-based measures have become so established is that, for many decades, quantification has been based on 2D projective X-ray angiography, notably DSA. In fact, DSA is still considered by many the gold standard for this purpose.
In principle, 3DRA allows us to express important measures such as the degree of carotid stenosis in terms of cross-sectional areas rather than diameters. This would indeed be more realistic, since in practice vessels do not necessarily have circular cross sections and, in principle, the blood volume passing through a vessel per unit time is dependent on its cross-sectional area. It is important to note, however, that the observed effects of filtering on quantification will be much more severe when using areas instead of diameters, due to the quadratic relation that exists between these two: for a circular cross section, A = πd²/4, so a given relative error in the measured diameter roughly doubles in the corresponding area.

The sample exo- and endovascular surface renderings in Figs. 5.11–5.14 clearly illustrate that noise was heavily reduced with UF and GF, but the increased user-dependency easily resulted in a misleading rendering of the dimensions of vessel segments with relatively small diameters (mainly the point of maximum stenosis in Figs. 5.11 and 5.13, and the PCAs and connected neck of the aneurysm in Figs. 5.12 and 5.14). The conceptual differences between the two nonlinear techniques, mentioned previously in the discussion of the quantification results, are also manifest in these figures. Although the user-dependency of the apparent vascular dimensions was considerably less with both techniques, the anisotropic behavior of EED resulted in smoother vessel walls, while most of the noise remained after application of RPM. Notice that the renderings in these figures were generated from the high-resolution reconstructions, and at time t = 2.042. Clearly, the observed effects were much more pronounced in the low-resolution reconstructions and/or at larger t. One might argue that the negative effects of linear filtering could be confined by keeping t low. However, this would also limit the improvement in CNR (see again Figs. 5.5 and 5.6). It is to be expected that clinical 3DRA images require even larger t, since these images do not only contain reconstruction noise, but also unwanted variations due to surrounding tissue. The effects of the different techniques applied to the clinical dataset used at the beginning of this chapter are shown in Fig. 5.15.

Overall, the results of the experiments suggest that for sufficiently high-resolution reconstructions, EED is most suitable: the increase in the user-dependency of quantifications and visualizations is considerably less than with UF or GF, and EED is better at reducing noise at the vessel walls than RPM. The sub-optimal performance of EED in vessel segments with very small luminal diameters (occurring at lower resolutions) is most probably due to the fact that the amount of blurring in the plane orthogonal to a local gradient is equal in all directions: the eigenvalues λ2 and λ3 of the diffusion tensor are equal (see Section 5.2.4). We suspect that in order for EED to work adequately also in these cases, it is necessary to make a distinction between the directions corresponding to minimal and maximal curvature; especially in vessel segments with small diameters, the behavior of EED in these directions can be quite different. However, this would require the use of second-order information (Hessian), which is not incorporated in the present scheme. Early experiments with curvature-based anisotropic diffusion schemes [189] have shown promising results, but more elaborate evaluations are required to determine the clinical implications.
Other disadvantages of the current implementation of EED are its memory requirements and its relatively high computational cost. Concerning the former, EED requires an amount of memory equal to nine times the size of the original image. With RPM and UF/GF, respectively, only four and two times the size of the original image is required. If computations are carried out with floating-point precision, this implies that in order to process an image of size 256 × 256 × 256 voxels, the amount of memory required by EED, RPM, and UF/GF would be about 605MB, 270MB, and 135MB, respectively. For an image of size 128 × 128 × 128 voxels, this would be


Figure 5.15. Surface renderings illustrating the effects of the different noise reduction techniques applied to the clinical 3DRA dataset used in Figs. 5.1 and 5.2. The renderings were generated after application of, respectively, UF (left column), GF (middle-left column), RPM (middle-right column), and EED (right column), at evolution time t = 2.042. The thresholds used were, respectively, θ = 0.3 (top row), θ = 0.4 (middle row), and θ = 0.5 (bottom row).

Considering the fact that the amount of memory available in current workstations is usually 256MB or 512MB, we conclude that application of EED is as yet limited to small-sized reconstructions. Regarding computational cost, the major difference between the linear and the nonlinear schemes is that the former require only a single application of the corresponding convolution equation in order to arrive at any given time t (UF allows only particular discrete times though), while the latter usually require repeated application of the discretized version of the differential equation involved. The number of iterations is then determined by the temporal step-size, ∆t. In order to guarantee stability with explicit or Euler-forward implementations, it is required that ∆t < 1/(2N), with N the dimensionality of the dataset to which the schemes are applied [403, 405, 406]. In the case of RPM, the use of additive operator splitting results in a much more efficient implementation [406]. However, such an approach is less beneficial in the case of anisotropic diffusion. The use of a diffusion tensor (EED) instead of a scalar-valued diffusivity (RPM) further increases the number of operations to be carried out. We observed that with a step-size of ∆t = 1/8, EED required about eight minutes in order to arrive at t = 2.0 with a dataset of size 128 × 128 × 128 voxels, while the other schemes required only a fraction of that time. This was measured on an Octane workstation (Silicon Graphics, De Meern, the Netherlands) with one 195MHz MIPS R10000 processor and 256MB main memory (instruction and data cache size both 32KB). To compare: the filtered back-projection algorithm running on the same machine requires less than five minutes to reconstruct a volume of that size.
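As a rough illustration of the stability bound and the resulting iteration count, the sketch below performs explicit Euler steps of a simple scalar diffusion with a Perona-Malik-type diffusivity; it is not the regularized RPM or tensor-driven EED scheme used in this chapter, and the function and parameter names (explicit_diffusion, kappa) are chosen here only for illustration.

    import numpy as np

    def explicit_diffusion(img, t_end, dt, kappa=5.0):
        # Explicit (Euler-forward) scalar diffusion with a Perona-Malik-type
        # diffusivity; illustration only, with periodic boundary handling.
        N = img.ndim
        assert dt < 1.0 / (2 * N), "explicit scheme requires dt < 1/(2N)"
        u = img.astype(np.float64)
        for _ in range(int(round(t_end / dt))):        # roughly t_end/dt iterations
            update = np.zeros_like(u)
            for ax in range(N):
                fwd = np.roll(u, -1, axis=ax) - u      # forward difference
                bwd = u - np.roll(u, 1, axis=ax)       # backward difference
                g_f = 1.0 / (1.0 + (fwd / kappa) ** 2) # edge-stopping diffusivity
                g_b = 1.0 / (1.0 + (bwd / kappa) ** 2)
                update += g_f * fwd - g_b * bwd        # div(g * grad u) along this axis
            u += dt * update
        return u

    # For a 128 x 128 x 128 volume (N = 3), dt = 1/8 satisfies dt < 1/6,
    # so reaching t = 2.0 takes 16 iterations:
    # u = explicit_diffusion(volume, t_end=2.0, dt=1.0/8)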

5.7 Conclusions

In this chapter, we investigated the effects of linear (UF, GF) and nonlinear (RPM, EED) noise reduction techniques on the visualization and quantification of vascular anomalies (carotid stenosis and intracranial aneurysms) in 3DRA images. Several experiments were carried out on low-resolution (0.6 × 0.6 × 0.6 mm³ voxels) and high-resolution (0.3 × 0.3 × 0.3 mm³ voxels) 3DRA reconstructions of a CAVP and an IAVP, modeling an asymmetrical stenosis in the ICA and a berry aneurysm located at the tip of the BA, respectively. The results of CNR measurements indicated that RPM and EED are better capable of reducing background noise while preserving local contrast than UF or GF. In addition, the increase in the dependency of diameter measurements on the user-controlled threshold was shown to be considerably less with RPM and EED compared to UF or GF. In both types of experiments, we observed that in vessel segments with very small luminal diameters (a few voxels), RPM performs somewhat better than EED. However, for the range of diameters considered in this study, the differences between the two techniques were found to be negligible in high-resolution reconstructions. Finally, exo- and endovascular surface renderings of the phantom images after processing with the different techniques revealed that RPM does not improve the quality of visualizations near vessel walls. Therefore we conclude that, as far as the trade-off between accuracy of quantification and quality of visualization is concerned, EED is to be preferred for high-resolution reconstructions. However, considering the relatively high demands of this scheme in terms of memory and computation time, RPM may be considered a useful alternative in case these are decisive factors.

To describe a geometrical curve which shall pass through any given points.... Although the problem may seem to be intractable at first sight, it is quite the contrary. Perhaps indeed it is one of the prettiest problems I can ever hope to solve.

— Isaac Newton, in a letter to Henry Oldenburg ( October )

Chapter 6

Quantitative Evaluation of Convolution-Based Methods for Medical Image Interpolation

Abstract — Interpolation is required in a variety of medical image processing applications. Although many interpolation techniques are known from the literature, evaluations of these techniques for the specific task of applying geometrical transformations to medical images are still lacking. In this chapter we present such an evaluation. We consider convolution-based interpolation methods and rigid transformations. A large number of sinc-approximating kernels are evaluated, including piecewise polynomial kernels and windowed sinc kernels, with spatial supports ranging from 2 to 10 grid intervals. In the evaluation we use images from a wide variety of medical image modalities. The results of the evaluation show that for all modalities, spline interpolation constitutes the best trade-off between accuracy and computational cost, and therefore is to be preferred over all other methods.

6.1 Introduction

Interpolation of sampled data is required in many digital image processing operations, such as subpixel translation, rotation, elastic deformation or warping, magnification, or minification, which need to be carried out for the purpose of image registration or volume visualization. In most applications, it is of paramount importance to limit as much as possible the grey-value errors introduced by interpolation. For example, in multimodality registration of computed tomography (CT), magnetic resonance imaging (MRI), or positron emission tomography (PET) data, it has been observed that interpolation errors influence the value of the optimization cost function, which may lead to registration errors [299]. Similar problems had been reported earlier in monomodality registration of MRI images [134]. It has been pointed out that in digital subtraction angiography (DSA), improved registration and resampling methods result in improved image quality [286], which allows for reduction of contrast material or X-ray dose. It has also been pointed out that in functional magnetic resonance imaging (fMRI), interpolation errors induced by registration operations may influence the interpretation of longitudinal studies [278].

The common denominator in all of these applications is geometrical transformation of medical image data. Although many interpolation techniques have been put forward over the years, evaluations of these techniques for this particular task are still lacking. One of the earlier studies in this area was reported by Parker et al. [286], who compared the performance of nearest-neighbor and linear interpolation, as well as cubic convolution, by analyzing the effects of these techniques on the rotation of images. No quantitative measures were computed, however, and the only medical image included in that study was a single coronary angiogram. A quantitative evaluation of the performance of convolution-based interpolation techniques in combination with specific fast image rotation algorithms was presented by Unser et al. [382]. Apart from the interpolation techniques analyzed by Parker et al. [286], their evaluation also included spline and sinc interpolation. However, no medical images were used. A more recent study was presented by Ostuni et al. [278], who compared the performance of linear, cubic spline, and truncated and Hann-windowed sinc interpolation for the geometrical transformation of fMRI images. However, that study did not include images from other modalities. More elaborate evaluation studies were recently published by Lehmann et al. [213] and Grevera & Udupa [128]. The former study concerned geometrical transformation of medical images. However, only MRI and dental X-ray images were considered. The latter study involved a number of both convolution- and shape-based interpolation methods for the purpose of slice doubling in MRI and CT images. However, the effects of these techniques on the geometrical transformation of images from these and other medical image modalities were not investigated. The same holds for the studies of Schreiner et al. [339] and Chuang [54], in which interpolation techniques were compared for the purpose of generating maximum intensity projections (MIPs) from MRA data, and surface rendering, respectively. Finally, we mention our recent study [249], in which we analyzed the effects of several piecewise polynomial interpolation kernels on the geometrical transformation of images.
However, no medical images were included in that study. The purpose of this chapter is to present the results of an elaborate evaluation, in which we quantitatively studied the performance of a large number of interpolation methods when using them to apply geometrical transformations to images from a wide variety of medical imaging modalities. The results of this evaluation are important for the tasks of e.g. mono- and multimodality medical image registration: the use of optimal interpolation methods minimizes the loss of information caused by the trans- formation of images. In order to limit the size of this work, we considered only rigid transformations, in particular rotations and translations. We also restricted ourselves to convolution-based interpolation techniques. Although recent developments have resulted in new, fundamentally different interpolation techniques, such as shape- or morphology-based methods [127, 130, 144–146, 308], or Fourier-based methods, such as voxel-shift interpolation or, equivalently, zero-filled interpolation [74, 77, 162,187], the vast majority of interpolation techniques used in medical image registration are convolution-based techniques. The reason for this is probably that these techniques are less complex than shape-based techniques. That is to say, they are easier to imple- ment and require no or considerably less preprocessing time. Furthermore, compared 6.2 Convolution-Based Interpolation 115 to Fourier-based techniques, they are better suited for local interpolation problems, such as those occurring in registration based on control points. This chapter is organized as follows. First, in Section 6.2, we provide the nec- essary theoretical background information and conclude that convolution-based in- terpolation requires the use of what we call sinc-approximating kernels. Next, the sinc-approximating kernels incorporated in this study are presented and discussed briefly in Section 6.3. The evaluation strategy and the results are described in Sec- tion 6.4. Both are discussed in detail in Section 6.5. Finally, concluding remarks are made in Section 6.6.

6.2 Convolution-Based Interpolation

In general, a digital N-dimensional (ND) real-valued image Is is the result of a number of local measurements (observations) of a physical source field, or a number of evaluations of a mathematical function describing some synthetic object or scene. Continuous measurements or evaluations would have resulted in an image I(x), x = (x1, ..., xN) ∈ R^N. In digital image processing, the only available information about I is the set of samples Is(p), p = (p1, ..., pN) ∈ P, where P is usually a Cartesian grid Z(∆1) × ··· × Z(∆N), with ∆i, i = 1, ..., N, denoting the inter-sample distances in each dimension. However, it is frequently desired to know the image value I at a position x ∉ P in a certain region of interest X ⊂ R^N, while resampling of the original field or function is not possible since it is no longer available. Under these circumstances, it is required to reconstruct the image I(x), x ∈ X, from its samples Is(p) in that region by means of interpolation. From the Whittaker-Shannon sampling theorem [171, 274, 348, 410, 412] it follows that exact reconstruction of a continuous ND image, I, is possible in those cases¹ where the sampling frequencies F_si satisfy the Nyquist criterion: F_si > 2F_mi, ∀i = 1, 2, ..., N, where F_mi is the highest frequency in the ith dimension of the original image I.

To this end, the sampled image Is must be convolved with a filter having the following Fourier spectrum:

  H̃(f) = κ  if |f_i| ≤ ½ F_si, ∀i = 1, ..., N,  and  H̃(f) = 0  otherwise,   (6.1)

¹It must be pointed out that in those cases where, apart from the original signal I(x), also the derivatives ∂^k I(x)/(∂x_i)^k, ∀k = 1, 2, ..., K and ∀i = 1, 2, ..., N, are sampled, it is sufficient for the sampling frequencies F_si to satisfy F_si > 2F_mi/(K + 1). (In the limiting case K → ∞, the requirement becomes F_si > 0, which implies that in order to be able to reconstruct the original signal it is sufficient to sample the function and its derivatives at a single position only, since in that case the samples provide a complete Taylor series representation.) This was first remarked by Shannon [348] and has later been stated and proved by Fogel [108], without reference to Shannon's remark. The form of the sampling theorem involving the original signal and its first derivative was subsequently presented by Jagerman & Fogel [164]. As an explicit response to Shannon's remark, the generalized sampling theorem was presented by Linden & Abramson [218, 219]. We note that it also follows directly from the generalized sampling expansion proposed by Papoulis [283]. Although this version of the theorem may be of interest in specific application areas, it is not of practical importance in medical imaging since, in most cases, samples of the derivatives of the original image are not available. Therefore, this issue will not be considered further here.

where f = (f1, ..., fN) ∈ R^N denotes ND frequency, and κ = ∏_{i=1}^{N} F_si^{−1}. By using this ND box-filter, the continuous image I and its sampled version Is are related by: Ĩ(f) = Ĩs(f) H̃(f), where Ĩ and Ĩs are the Fourier transforms of I and Is, respectively. It can easily be verified that, since the ND box-filter can be written as a product of N one-dimensional box-filters, ND image reconstruction in the spatial domain can be carried out by N successive 1D convolutions:

  I(x) = ((( Is(p) ∗ h(x1) ) ∗ h(x2) ) ∗ ··· ) ∗ h(xN),   (6.2)

where the convolution kernel h : R → R is the inverse Fourier transform of the 1D box-filter. By assuming unit distance between the grid points,² this kernel can be derived to be the well-known sinc function:

  h(x) = sinc(x) ≜ sin(πx) / (πx).   (6.3)

Although the sinc function is the theoretically optimal kernel for convolution-based interpolation of originally band-limited images, it is not the ideal kernel in most practical situations. First of all, since the objects that are being imaged have finite spatial extent, the resulting images cannot be strictly band limited. This implies that, in practice, it is not possible for the sampling frequencies to satisfy the Nyquist criterion. Consequently, it is impossible to retrieve the original images exactly from the resulting samples by means of sinc interpolation. Another problem of sinc interpolation is the fact that, since the sinc function has infinite support, Eq. (6.2) cannot be computed in practice, except in the case of periodic images [202, 204, 331], which are not likely to occur in medical imaging. Furthermore, interpolation by means of a band-limiting convolution kernel may result in Gibbs phenomena, which are very disturbing in images. For convolution-based interpolation, the only solution to these problems is to choose an alternative convolution kernel. However, in order for any convolution kernel to actually interpolate the given samples, it must satisfy the following requirements, which are ultimately satisfied by the sinc function:

  h(x) = 1  if x = 0,  and  h(x) = 0  if x ∈ Z, x ≠ 0.   (6.4)

In this chapter, we will refer to kernels satisfying Eq. (6.4) as sinc-approximating kernels, even though there exist infinitely many kernels that satisfy these requirements but do not necessarily “resemble” the sinc function.
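As a minimal sketch of how Eqs. (6.2)–(6.4) are used in practice, the following 1D routine evaluates the interpolant at an arbitrary position for any kernel satisfying Eq. (6.4); the (truncated, hence inexact) sinc kernel of Eq. (6.3) is shown as one possible choice. Function names and the toy signal are illustrative only.

    import numpy as np

    def interpolate_1d(samples, x, kernel, support):
        # Evaluate I(x) = sum_k I_s(k) h(x - k) over the kernel support
        # (unit inter-sample distance assumed, cf. footnote 2).
        value = 0.0
        k0 = int(np.floor(x))
        for k in range(k0 - support + 1, k0 + support + 1):
            if 0 <= k < len(samples):
                value += samples[k] * kernel(x - k)
        return value

    # np.sinc is the normalized sinc of Eq. (6.3): sin(pi x) / (pi x).
    samples = np.cos(0.3 * np.arange(32))              # toy sampled signal
    print(interpolate_1d(samples, 10.0, np.sinc, 16))  # reproduces samples[10] exactly
    print(interpolate_1d(samples, 10.5, np.sinc, 16))  # value between two samples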

6.3 Sinc-Approximating Kernels

In this section, we introduce the sinc-approximating kernels incorporated in the eval- uation. These include the nearest-neighbor and linear interpolation kernel, as well

²This is not a restriction; any function I(x1, ..., xN), x_i ∈ Z(∆_i), can be reparameterized so as to end up with a function I(x1, ..., xN), x_i ∈ Z. For example, spatial or temporal quantities may be expressed in pixels instead of millimeters or seconds.


Figure 6.1. Left: the nearest-neighbor interpolation kernel. Right: the linear interpolation kernel. See Section 6.3.1 for the definitions of these kernels.

as the Lagrange, generalized convolution, cardinal spline, and windowed sinc ker- nels. Since the main purpose of this chapter is to present the results of an empirical evaluation of the performance of interpolation kernels for the specific task of geomet- rically transforming medical images, we do not discuss the application-independent spatial and spectral properties of the kernels in great detail in this section. For more in-depth discussions of these particular properties, we refer to numerous other sources [20,227,228,235,245,249,257,285,286,332,420].

6.3.1 Nearest-Neighbor and Linear Interpolation Kernel

The simplest and computationally cheapest approach to obtain a sinc-approximating kernel that complies with the definition of Eq. (6.4) is to use zeroth-degree or first-degree polynomials, resulting in the nearest-neighbor and linear interpolation kernel, respectively defined as

  ζ(x) = 1  if −½ ≤ x < ½,  and  ζ(x) = 0  otherwise,   (6.5)

and

  φ(x) = 1 − |x|  if 0 ≤ |x| < 1,  and  φ(x) = 0  if 1 ≤ |x|.   (6.6)

Plots of these kernels are provided in Fig. 6.1. As can be concluded from recent literature, the linear interpolation kernel is still the most frequently used kernel in a wide variety of applications [128].
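A direct transcription of Eqs. (6.5) and (6.6) (a sketch, vectorized with NumPy):

    import numpy as np

    def zeta(x):
        # Nearest-neighbor kernel, Eq. (6.5): 1 on [-1/2, 1/2), 0 elsewhere.
        x = np.asarray(x, dtype=float)
        return np.where((x >= -0.5) & (x < 0.5), 1.0, 0.0)

    def phi(x):
        # Linear interpolation kernel, Eq. (6.6): 1 - |x| for |x| < 1, 0 elsewhere.
        ax = np.abs(np.asarray(x, dtype=float))
        return np.where(ax < 1.0, 1.0 - ax, 0.0)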

6.3.2 Lagrange Interpolation Kernels

In order to obtain higher order interpolants, one possibility is to use classical polynomial interpolation formulae. In that case, an interpolant is expressed either in terms of (divided) differences, as proposed by Gregory [126] and Newton [266, 267] in the late 17th century, or directly in terms of the sample values, as in the interpolation formula originally due to Waring [399] and Euler [83], but nowadays usually attributed to Lagrange [201]. In principle, these schemes are all equivalent.³ When using Lagrange central interpolation, an nth-degree interpolant is obtained by evaluating the following sum:

  Σ_{k=k_min}^{k_max} I_s(p_k) L_k^n(x),   (6.7)

where n ≥ 1, k_min = −⌊n/2⌋, k_max = ⌈n/2⌉, p_k = p_0 + k, p_0 = ⌊x⌋, x ∈ R, and L_k^n are the so-called Lagrange coefficients, defined by

  L_k^n(x) ≜ ∏_{i=k_min, i≠k}^{k_max} (x − p_i) / (p_k − p_i).   (6.8)

As shown by Schafer & Rabiner [330], the Lagrange central interpolation formula (6.7) can be rewritten in the form of a convolution:

  Σ_{k=−∞}^{+∞} I_s(p_k) λ^n(x − p_k),   (6.9)

where λ^n is the nth-degree Lagrange central interpolation kernel. As can be observed from (6.7) and (6.9), the explicit form of this kernel is obtained by evaluating the following set of equations:

  λ^n(ξ − k_max) = L_{k_max}^n(ξ),
  ...
  λ^n(ξ − 1) = L_1^n(ξ),
  λ^n(ξ) = L_0^n(ξ),
  λ^n(ξ + 1) = L_{−1}^n(ξ),   (6.10)
  ...
  λ^n(ξ − k_min) = L_{k_min}^n(ξ),

where

  ξ ∈ [0, 1]  if n odd,  and  ξ ∈ [−½, ½]  if n even,   (6.11)

³This can easily be deduced from the fact that for any two polynomial interpolants of degree n, e.g. Î1(x) and Î2(x), x ∈ R, the difference Î1(x) − Î2(x) is a polynomial of at most degree n, while it has n + 1 zeroes, viz., the sample points. According to the fundamental theorem of algebra this can only be true if Î1(x) − Î2(x) = 0, ∀x ∈ R. For more details regarding classical polynomial interpolation we refer to Whittaker & Robinson [411], Hildebrand [148], or Jeffreys & Jeffreys [170].

and by explicitly defining λ^n(x) = 0 for |x| > (n + 1)/2. It is important to respect the requirement expressed in (6.11). If, for any n even, the set of equations given in (6.10) is evaluated in the interval [0, 1], the resulting kernel will not be symmetric. This has led some authors to the (incorrect) conclusion that, in general, even-degree Lagrange kernels are not symmetric and lead to phase distortions [330, 420]. The kernels constituted by these equations were studied e.g. by Schaum [332]. Notice that the Lagrange kernel corresponding to n = 1 is equal to the linear interpolation kernel given in Eq. (6.6). In the evaluation presented in this chapter, we also included the quadratic (n = 2), cubic (n = 3), quartic (n = 4), quintic (n = 5), sextic (n = 6), septic (n = 7), octic (n = 8), and nonic (n = 9) Lagrange central interpolation kernel. See Fig. 6.2 for plots of some of these kernels. As can be appreciated from these plots, λ^n more and more resembles the sinc function as n increases. In fact, it can be shown that for n → ∞, the Lagrange central interpolation kernel converges to the sinc function [166, 245].
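A sketch of nth-degree Lagrange central interpolation, evaluating Eq. (6.7) with the coefficients of Eq. (6.8) directly on the samples (which, as noted above, is equivalent to convolution with λⁿ); the function name and boundary handling are illustrative only.

    import numpy as np

    def lagrange_central(samples, x, n):
        # Eq. (6.7): sum_k I_s(p_k) L_k^n(x), with p_0 = floor(x),
        # k_min = -floor(n/2), k_max = ceil(n/2).
        p0 = int(np.floor(x))
        ks = range(-(n // 2), (n + 1) // 2 + 1)
        value = 0.0
        for k in ks:
            pk = p0 + k
            if not (0 <= pk < len(samples)):
                continue                          # sketch: ignore out-of-range samples
            L = 1.0
            for i in ks:                          # Lagrange coefficient, Eq. (6.8)
                if i != k:
                    L *= (x - (p0 + i)) / (pk - (p0 + i))
            value += samples[pk] * L
        return value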

6.3.3 Generalized Convolution Kernels

The symmetrical piecewise polynomial kernels described in the previous subsections all result in interpolants which are not continuously differentiable. In particular applications, it may be desirable to use smoother interpolation kernels, which allow for the computation of higher order derivatives of the interpolant. In this subsection we describe a class of smooth piecewise polynomial kernels, which contains important special cases that are well known in the literature. In general, piecewise nth-degree polynomial kernels can be written in the form:

  ψ(x) = Σ_{i=0}^{n} a_ij |x|^i  if j − ξ ≤ |x| < j − ξ + 1,  and  ψ(x) = 0  if m ≤ |x|,   (6.12)

where ξ = 1/2 for n even and ξ = 0 for n odd, j = 0, 1, ..., m + ξ − 1, and the parameter m determines the spatial support of the kernel. In the evaluation presented in this chapter, we restricted ourselves to the class of kernels for which n and m are related by n = 2m − 1. This is a rather broad class, which includes the linear interpolation kernel of Eq. (6.6) and all of the Lagrange central interpolation kernels described in the previous subsection. It also includes the quadratic piecewise polynomial kernel due to Dodgson [70]:

  ψ²(x) = 1 − 2|x|²  if 0 ≤ |x| < ½,
  ψ²(x) = 3/2 − (5/2)|x| + |x|²  if ½ ≤ |x| < 3/2,   (6.13)
  ψ²(x) = 0  if 3/2 ≤ |x|.

In the remainder of this subsection we concentrate on a family of odd-degree convolution kernels which are at least C¹. The (n + 1)m coefficients a_ij of the odd-degree polynomial pieces can be solved by imposing constraints on the shape of the kernel. The first and most important constraint was already given in Eq. (6.4).


Figure 6.2. From top left to bottom right: the quadratic, cubic, quartic, quintic, sextic, and septic Lagrange central interpolation kernel. See Section 6.3.2 for the precise definitions of these kernels.

In order for the resulting interpolant to have continuous derivatives, it is also required that ψ^(l)(x) is continuous at |x| = 0, 1, 2, ..., m, where the superscript (l) denotes the lth-order derivative. It can be shown [249] that, given any odd degree n ≥ 3, the maximum allowable value for l that will not result in an over-constrained problem is n − 2, in which case the total number of equations to be solved is (n + 1)m − 1. This implies that the kernels can be expressed in terms of a free parameter, which we denote by α. In order to obtain a unique value for α, one additional constraint needs to be imposed. In this chapter, we used the following constraints: (i) the

Kernel   ας                        α∼                        α[
ψ³       −1                        −3/4                      −1/2
ψ⁵       11/96                     1/13                      3/64
ψ⁷       −1027/452574              −3133/2275008             −71/83232
ψ⁹       34814699/2509872453120    17671607/2324998440576    3829/788235264

Table 6.1. The values of the free parameter α for the cubic, quintic, septic, and nonic convolution kernel described in Section 6.3.3, resulting from the slope constraint (ας), continuity constraint (α∼), and flatness constraint (α[), respectively.

slope constraint [315], which implies that the slope of the kernel is constrained in such a way that it equals the slope of the sinc function at x = 1; (ii) the continuity constraint [352], which implies that the kernel is constrained in such a way that its (n − 1)th-order derivative is continuous at x = 1; (iii) the flatness constraint [249, 285], which implies that the frequency spectrum Ψ̃(f) of the kernel is required to be flat at f = 0. It can be shown that the latter constraint yields the mathematically most precise interpolant, in the sense that the Taylor series expansion agrees in as many terms as possible with the original signal [180, 243]. A well-known member of this family of kernels is the cubic convolution kernel [180, 285, 315, 352]. In fact, the aforementioned constraints were adopted from the literature on cubic convolution. The cubic convolution kernel as a function of the free parameter α is given by

  ψ³(x) = 1 − (α + 3)|x|² + (α + 2)|x|³  if 0 ≤ |x| < 1,
  ψ³(x) = −4α + 8α|x| − 5α|x|² + α|x|³  if 1 ≤ |x| < 2,   (6.14)
  ψ³(x) = 0  if 2 ≤ |x|.

In the literature on visualization and computer graphics, the cubic convolution kernel resulting from the flatness constraint is also known as the Catmull-Rom spline [48] or the modified cubic spline [143], and is sometimes erroneously referred to as the cardinal cubic spline [255, 257]. By analogy with cubic convolution, the kernels from this family are referred to as generalized convolution kernels in this chapter. Apart from the quadratic (n = 2) and cubic (n = 3) convolution kernel, we also included the quintic (n = 5), septic (n = 7), and nonic (n = 9) convolution kernel in the evaluation. See Fig. 6.3 for plots of some of these kernels. For precise definitions of the higher order kernels we refer to an earlier paper [249]. The corresponding values of the free parameter α resulting from the aforementioned constraints are presented in Table 6.1.
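The cubic convolution kernel of Eq. (6.14) as a function of the free parameter α, as a minimal sketch; the default α = −1/2 corresponds to the flatness constraint of Table 6.1:

    import numpy as np

    def cubic_convolution(x, alpha=-0.5):
        # psi^3(x) of Eq. (6.14); alpha = -1/2 gives the flatness-constrained
        # (Catmull-Rom) kernel.
        ax = np.abs(np.asarray(x, dtype=float))
        inner = 1.0 - (alpha + 3.0) * ax**2 + (alpha + 2.0) * ax**3                    # 0 <= |x| < 1
        outer = -4.0 * alpha + 8.0 * alpha * ax - 5.0 * alpha * ax**2 + alpha * ax**3  # 1 <= |x| < 2
        return np.where(ax < 1.0, inner, np.where(ax < 2.0, outer, 0.0))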

6.3.4 Cardinal Spline Kernels

An alternative approach to piecewise polynomial interpolation is spline interpolation, originally proposed by Schoenberg [335, 336], which involves the use of so-called


Figure 6.3. From top left to bottom right: the quadratic convolution kernel, and the cubic, quintic, and septic convolution kernel resulting from the flatness constraint. See Section 6.3.3 for details.

B-splines, recursively obtained by auto-convolution of a rectangular pulse (equal to the nearest-neighbor kernel given in Eq. (6.5)), that is,

  β^n(x) = β^(n−1)(x) ∗ β^0(x),  with β^0(x) = ζ(x).   (6.15)

The explicit form of a B-spline of degree n can be obtained by analyzing the Fourier transform of Eq. (6.15), see e.g. Unser [378], and is given by

  β^n(x) = (1/n!) Σ_{i=0}^{n+1} (−1)^i \binom{n+1}{i} (x − i + (n+1)/2)_+^n,   (6.16)

with

  (x)_+^n ≜ x^n  if x ≥ 0,  and  (x)_+^n ≜ 0  if x < 0.   (6.17)

Since β^n does not satisfy the requirements expressed in Eq. (6.4) for all n ≥ 2, interpolation by means of B-splines requires preprocessing of the raw image data in those cases. This can be done either by matrix manipulations [159, 207], or by means of recursive filtering techniques [379–381], the latter of which are easier to implement and are computationally much more efficient. When using the latter approach, nth-degree spline interpolation is carried out by evaluating the following expression:

  Σ_{k=−∞}^{+∞} ((b^n)^{−1} ∗ I_s)(k) β^n(x − k),   (6.18)

where (b^n)^{−1} denotes the recursive prefilter, also known as the direct B-spline filter (see Appendix 6.B for more details). Although it is never explicitly implemented this way, the double convolution in (6.18) can be rewritten so as to obtain the implicit interpolation kernel:

  η^n(x) = Σ_{k=−∞}^{+∞} (b^n)^{−1}(k) β^n(x − k),   (6.19)

which is known as the cardinal spline of degree n. We note that the cardinal spline of degree one is equal to the linear interpolation kernel of Eq. (6.6). Examples of higher order B-splines and their corresponding cardinal splines are shown in Fig. 6.4. Notice that the cardinal splines satisfy the requirements expressed in Eq. (6.4). We also note that for n → ∞, the cardinal spline converges to the sinc function [4, 337]. In the evaluation presented in this chapter, we included quadratic (n = 2), cubic (n = 3), quartic (n = 4), quintic (n = 5), sextic (n = 6), septic (n = 7), octic (n = 8), and nonic (n = 9) spline interpolation, implemented by using Eq. (6.18). Notice that the corresponding B-spline kernels are piecewise nth-degree polynomial kernels which are non-zero only in the interval (−m, m), where n and m are related by n = 2m − 1, similar to the Lagrange and generalized convolution kernels.
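A direct transcription of the B-spline of Eq. (6.16); note that, per Eq. (6.18), actual spline interpolation additionally requires the recursive prefilter (bⁿ)⁻¹, which is omitted here for brevity (sketch only):

    import math
    import numpy as np

    def bspline(x, n):
        # B-spline of degree n, Eq. (6.16):
        # beta^n(x) = (1/n!) sum_i (-1)^i C(n+1, i) (x - i + (n+1)/2)_+^n
        x = np.asarray(x, dtype=float)
        result = np.zeros_like(x)
        for i in range(n + 2):
            t = x - i + (n + 1) / 2.0
            result += (-1) ** i * math.comb(n + 1, i) * np.where(t >= 0, t**n, 0.0)
        return result / math.factorial(n)

    # e.g. bspline(0.0, 3) gives 2/3, and bspline(x, 1) is the linear kernel of Eq. (6.6)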

6.3.5 Windowed Sinc Kernels

A fundamentally different approach to obtain a sinc-approximating kernel is to multiply the sinc function, defined in Eq. (6.3), with a window function of limited spatial support:

  h(x) = w(x) sinc(x)  if 0 ≤ |x| < m,  and  h(x) = 0  otherwise,   (6.20)

where w(x) denotes the window function and m determines the spatial support of the resulting kernel.


Figure 6.4. Left column: the quadratic, cubic, quartic, and quintic B-spline. Right column: the corresponding cardinal splines. See Section 6.3.4 for details.

Window                      Definition
Bartlett                    wBar ≜ 1 − |x|/m
Blackman                    wBla ≜ 0.42 + 0.50 cos(πx/m) + 0.08 cos(2πx/m)
Blackman-Harris (3-term)    wBH3 ≜ 0.42323 + 0.49755 cos(πx/m) + 0.07922 cos(2πx/m)
Blackman-Harris (4-term)    wBH4 ≜ 0.35875 + 0.48829 cos(πx/m) + 0.14128 cos(2πx/m) + 0.01168 cos(3πx/m)
Bohman                      wBoh ≜ (1 − |x|/m) cos(πx/m) + (1/π) sin(π|x|/m)
Cosine                      wCos ≜ cos(πx/(2m))
Gaussian                    wGau ≜ exp(−½ (α x/m)²)
Hamming                     wHam ≜ 0.54 + 0.46 cos(πx/m)
Hann                        wHan ≜ 0.5 + 0.5 cos(πx/m)
Kaiser                      wKai ≜ I0(β)/I0(α),  with β = α √(1 − (x/m)²)
Lanczos                     wLan ≜ sinc(πx/m)
Rectangular                 wRec ≜ 1
Welch                       wWel ≜ 1 − x²/m²

Table 6.2. Definitions of window functions. In the definition of the Kaiser window, α ∈ R+ is a free parameter, for which values of 5.0, 6.0, 7.0, and 8.0 were used in the evaluation. I0 is the zeroth-order modified Bessel function of the first kind, which can accurately be approximated by using its series expansion [138, 420]. For the free parameter α ∈ R+ in the definition of the Gaussian window, values of 2.5, 3.0, 3.5, and 4.0 were used in the evaluation.

The window functions included in the evaluation are the Bartlett, Blackman, Blackman-Harris (3-term and 4-term), Bohman, Cosine, Gaussian, Hamming, Hann, Kaiser [174], Lanczos [202], Rectangular [420], and Welch [407] windows. Definitions of these windows are given in Table 6.2. Plots of some windows and their corresponding sinc-approximating kernel are shown in Fig. 6.5. For more elaborate discussions of the spectral properties of window functions we refer to Harris [138] or Wolberg [420].
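A sketch of one such kernel, combining the windowed-sinc construction of this subsection with the Hann window of Table 6.2 (function name illustrative):

    import numpy as np

    def hann_windowed_sinc(x, m=3):
        # Hann-windowed sinc kernel: sinc(x) * (0.5 + 0.5 cos(pi x / m))
        # for |x| < m, and 0 elsewhere (cf. Table 6.2 and Section 6.3.5).
        x = np.asarray(x, dtype=float)
        window = 0.5 + 0.5 * np.cos(np.pi * x / m)
        return np.where(np.abs(x) < m, np.sinc(x) * window, 0.0)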

6.4 Quantitative Evaluation

The sinc-approximating kernels described in the previous section were quantitatively evaluated by using them to apply several geometrical transformations to a variety of images, and by computing figures of merit (FOMs) based on the grey-value differences between the transformed images and their corresponding reference images. The computational cost of these kernels was also determined. In this section we present the evaluation strategy and the results.


Figure 6.5. Examples of windows and windowed sinc kernels. Left column: the Rectangular, Kaiser (α = 5.0), Lanczos, and Hann window for m = 3. Right column: the corresponding windowed sinc kernels. See Section 6.3.5 for details.

6.4.1 Evaluation Strategy The medical images used in the evaluation were obtained from 3D brain datasets of dif- ferent modalities, viz., computed tomography (CT), proton-density weighted (PD-w) magnetic resonance imaging (MRI), T1-weighted (T1-w) MRI, T2-weighted (T2-w) MRI, positron emission tomography (PET), single photon emission computed tomog- raphy (SPECT), and 3D rotational angiography (3DRA). Images from 2D cerebral X-ray angiography (XRA) sequences were also included.4 From every subset (eight in total), we selected five datasets. The five CT datasets were of size 512 ×512 times 28, 28, 29, 30, and 33 voxels, respectively, all with a voxel size of 0.65 × 0.65 × 4.0mm3. The PD-w, T1-w, and T2-w MRI datasets (15 in total) were all of size 256 × 256 × 26 voxels, with a voxel size of 1.25 × 1.25 × 4.0mm3. The five PET datasets were of size 128 × 128 × 15 voxels, one with a voxel size of 1.94 × 1.94 × 8.0mm3,andthe others with a voxel size of 2.59 × 2.59 × 8.0mm3. The five SPECT datasets were of size 64 × 64 times 30, 34, 36, 38, and 40 voxels, respectively, all with a voxel size of 3.91 × 3.91 × 3.91 mm3. The five 3DRA datasets were all of size 128 × 128 × 128 voxels, with a voxel size of 0.6 × 0.6 × 0.6mm3. Finally, the five 2D XRA images were all of size 512×512 pixels and were arbitrarily selected from their corresponding image sequences. In order to be able to study the performance of the interpolation kernels in different slice directions, one transversal (axial) and one sagittal slice was selected from each of the 3D datasets. This resulted in a total of 75 different 2D test images. Some examples of test images are shown in Fig. 6.6. The test images were subjected to several geometrical transformations. As ex- plained in the introduction (Section 6.1), we considered only rotations and subpixel translations, as these are the most frequently required transformations in mono- or multimodality registration problems. In the rotation experiments, the 2D test images were successively rotated over 0.7◦,3.2◦,6.5◦,9.3◦,12.1◦,15.2◦,18.4◦,21.3◦,23.7◦, 26.6◦,29.8◦,32.9◦,35.7◦,38.5◦,41.8◦,and44.3◦, which adds up to a total of 360◦.We note that for every test image, these 2D transformations were carried out in the plane of the image. The interpolation errors made in the transversal and sagittal slices are representative for those resulting from a rotation of the entire 3D dataset around its z-andx-axis, respectively. In the subpixel translation experiments, the test images were successively shifted over 0.01, 0.04, 0.07, 0.11, 0.15, 0.18, 0.21, 0.24, 0.26, 0.29, 0.32, 0.35, 0.39, 0.43, 0.46, and 0.49 pixels, which adds up to a total of 4.00 pixels. Similar to the rotations, the subpixel translations were carried out in the plane of the test image. Notice, however, that these are 1D transformations. For the transversal slices, the translations were carried out in the x-direction, while for the sagittal slices they were carried out in the direction corresponding to the through-plane direction

4The CT, MR, and PET datasets were obtained from patients undergoing neurosurgery at Vander- bilt University Medical Center and were originally used in the project “Evaluation of Retrospective Image Registration”, National Institutes of Health, Project Number: 1 R01 NS33926-01, Principal Investigator: Prof. Dr. J. M. Fitzpatrick, Vanderbilt University, Nashville, TN, USA (see West et al. [409] for more details concerning the acquisition of these datasets). The SPECT datasets were obtained from patients with suspected functional abnormalities and were acquired at the University Medical Center Utrecht, the Netherlands, under the authority of the Department of Child Psychiatry (see Stokking [357] for more details). The 3DRA and XRA datasets were obtained from patients with suspected cerebral aneurysms and were also acquired at the University Medical Center Utrecht, the Netherlands. 128 6 Evaluation of Convolution-Based Interpolation Methods


Figure 6.6. Examples of the medical test images used in the experiments described in Section 6.4. For every modality (except XRA, of course), one transversal slice (top image) and one sagittal slice (bottom image) is shown. Note that for display purposes, the images of the sagittal slices of the 3D datasets shown in this figure were scaled so as to correct for the voxel anisotropy.

in the original 3D dataset. The resulting interpolation errors are representative for those resulting from the application of subpixel translations to the entire 3D dataset in these same directions. For every test image, the experiments were repeated for all interpolation kernels. Of the types described in Section 6.3, we used all kernels with a spatial support equal to or less than 10 grid intervals (m ≤ 5), which amounts to a total of 126 kernels (viz., the nearest-neighbor and linear interpolation kernel, the quadratic convolution kernel, the cubic, quintic, septic, and nonic convolution kernel using three different values for the free parameter α, the quadratic, cubic, quartic, quintic, sextic, septic, octic, and nonic Lagrange and spline interpolation kernels, and finally 13 families of windowed sinc kernels (two of which have a free parameter for which we used the four different values indicated in Table 6.2), obtained by using five different settings for m). We note that in order to avoid border problems, all test images were mirrored around the borders in each dimension. For every combination of test image, experiment (rotation or translation), and interpolation kernel, the cumulative interpolation errors in the resulting processed image were determined. Since in these experiments the grid points of the processed images coincided with those of the corresponding original images, a gold standard was available: for the rotation experiments, the reference images were simply the original images, and for the translation experiments the reference images were obtained by translating the original image by four pixels (which requires no interpolation). The errors were summarized in two FOMs: the root-mean-square error (RMSE) and the largest absolute error (LAE). In order to prevent quantization errors from interfering with the results, all computations were carried out with double precision floating-point numbers (12 significant decimals). Finally, the relative computational cost of all interpolation kernels was assessed by carrying out a timing experiment, in which a synthetic 3D test image of size 128 × 128 × 128 voxels was translated over (γ, γ, γ) voxels (where 0 < γ < 1 was an arbitrary, but fixed offset) by using non-separated 3D interpolation operations. For this timing experiment, special attention was paid to computationally optimal implementation of each individual interpolation approach.
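A minimal sketch of the two figures of merit, normalized by the dynamic range of the reference image as described above (function name illustrative):

    import numpy as np

    def figures_of_merit(processed, reference):
        # RMSE and LAE, both expressed as fractions of the dynamic range
        # of the reference image.
        diff = processed.astype(np.float64) - reference.astype(np.float64)
        dynamic_range = float(reference.max() - reference.min())
        rmse = np.sqrt(np.mean(diff ** 2)) / dynamic_range
        lae = np.max(np.abs(diff)) / dynamic_range
        return rmse, lae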

6.4.2 Results The computation of two FOMs for all processed images, resulting from the application of 126 different kernels to perform two types of transformations on 75 different test images, resulted in a total of 37800 error figures. In order to be able to present these results in a compact form, we first make some general observations. In many applications, the important issue is not just accuracy, but the trade- off between accuracy and computational cost. In order to get an impression of the performance of all interpolation kernels in these terms, scatter plots were generated. To this end, the five error figures (either RMSEs or LAEs) resulting from every kernel in a given experiment (either rotation or translation) applied to the slices (either transversal or sagittal) of a given group of five datasets from any of the eight different modalities, were averaged. In order to correct for possible intrinsic differences in the dynamic range of grey values between the images within a group of five, the individual error figures were normalized with respect to the dynamic range of their corresponding image, before being averaged. It was observed that regardless of image modality (CT, PD-w MRI, T1-w MRI, T2-w MRI, PET, SPECT, 3DRA, XRA), slice direction (transversal, sagittal), type of experiment (rotation, translation), or figure of merit (RMSE, LAE), spline interpo- lation constitutes the best trade-off between accuracy and computational cost. That is to say, none of the other approaches is more accurate and at the same time compu- tationally cheaper. The scatter plots resulting from the different experiments carried out on CT data are shown in Fig. 6.7. For all modalities, the scatter plots showing 130 6 Evaluation of Convolution-Based Interpolation Methods averaged, normalized RMSEs resulting from the rotation experiment carried out on transversal slices are presented in Fig. 6.8. As can easily be seen from these plots, the results of spline interpolation constitute a “lower boundary” in all cases. It is important to note that in the timing experiments, kernel values were de- termined by exact computations during convolution. In practice, interpolation oper- ations can be accelerated by using look-up tables of densely sampled, precomputed kernel values, as has been pointed out by e.g. Wolberg [420]. In principle, the addi- tional errors due to the spatial quantization of a kernel can be reduced to any level simply by increasing the density of kernel samples. When using this approach, the relative computational cost of convolution kernels is determined solely by their spatial support. This implies that although spline interpolation seems the best approach in the case of exact computations (as suggested by the plots in Figs. 6.7 and 6.8), it does not necessarily have to be so when using look-up tables. Therefore it makes sense to mutually compare the accuracy of interpolation approaches of which the corresponding convolution kernels have equal spatial support. To this end, the averaged, normalized error figures resulting from all kernels in the different experiments were analyzed separately for m =1, 2, 3, 4, and 5. 
In order to limit the extent of this analysis, kernels with non-integer values of m (the even-degree piecewise polynomial kernels) were included in the group corresponding to the smallest larger integer value of m (e.g., the quartic Lagrange central interpolation kernel, for which m =2.5, was included in the group of kernels for which m =3).Itwasobserved that regardless of image modality, slice direction, type of experiment, or FOM, given the value of m, the corresponding B-spline kernel performed either comparably to, or considerably better than all other kernels with the same spatial support. Therefore, we decided to present only the errors resulting from spline interpolation and to indicate whether or not these errors were statistically significantly smaller. The averaged, normalized RMSEs and LAEs introduced by spline interpolation in the rotation and subpixel translation experiments applied to the test images of different modalities and slice directions (Slc), either transversal (Tr) or sagittal (Sa), are shown in Tables 6.3 and 6.4. Recall from Section 6.3.4 that the relation between the spatial support parameter m and the degree n of the B-spline kernels is n =2m−1. For every modality, slice direction, type of experiment, and FOM, the normalized errors figures resulting from spline interpolation in the five images were compared pairwise to the figures resulting from all other methods with the same spatial support for the corresponding convolution kernel. By using a paired t-test [6], the errors resulting from spline interpolation were found to be statistically significantly smaller (p<0.05), under the null hypothesis that the methods should yield similar results, except in those cases marked by the “?” symbol in Table 6.4.

6.5 Discussion

In the literature, several alternative approaches have been proposed for the evaluation or comparison of the accuracy of interpolation methods. In this section, we first discuss these approaches and explain why we have chosen the strategy described in the previous section. We also discuss the results of the present evaluation.


Figure 6.7. For explanation, see Page 135.


Figure 6.8. For explanation, see Page 135.

Averaged RMSEs (subpixel translation experiments)
Modality    Slc   m = 1    m = 2    m = 3    m = 4    m = 5
CT          Tr    1.30%    0.13%    0.05%    0.04%    0.03%
            Sa    5.99%    3.06%    2.43%    2.12%    1.94%
PD-w MRI    Tr    3.23%    1.45%    1.08%    0.90%    0.78%
            Sa    6.58%    3.56%    2.82%    2.46%    2.23%
T1-w MRI    Tr    3.22%    1.48%    1.11%    0.92%    0.80%
            Sa    6.59%    3.66%    2.97%    2.63%    2.42%
T2-w MRI    Tr    3.57%    1.96%    1.54%    1.31%    1.16%
            Sa    6.23%    3.84%    3.07%    2.65%    2.39%
PET         Tr    1.72%    0.34%    0.25%    0.22%    0.20%
            Sa    8.76%    4.16%    3.30%    2.88%    2.62%
SPECT       Tr    2.94%    0.33%    0.18%    0.14%    0.12%
            Sa    3.39%    0.75%    0.59%    0.50%    0.43%
3DRA        Tr    3.52%    2.28%    1.86%    1.61%    1.45%
            Sa    3.47%    2.02%    1.61%    1.39%    1.24%
XRA               1.26%    0.61%    0.48%    0.41%    0.37%

Averaged RMSEs (rotation experiments)
Modality    Slc   m = 1    m = 2    m = 3    m = 4    m = 5
CT          Tr    1.72%    0.15%    0.08%    0.07%    0.06%
            Sa    6.28%    2.84%    2.16%    1.85%    1.65%
PD-w MRI    Tr    3.90%    1.61%    1.22%    1.07%    0.98%
            Sa    7.27%    3.60%    2.84%    2.49%    2.27%
T1-w MRI    Tr    3.86%    1.68%    1.30%    1.13%    1.04%
            Sa    7.26%    3.66%    2.94%    2.62%    2.42%
T2-w MRI    Tr    4.27%    2.36%    1.94%    1.74%    1.63%
            Sa    6.93%    4.05%    3.31%    2.95%    2.73%
PET         Tr    2.28%    0.42%    0.32%    0.29%    0.27%
            Sa    8.59%    3.62%    2.81%    2.43%    2.19%
SPECT       Tr    4.07%    0.40%    0.23%    0.19%    0.17%
            Sa    4.96%    0.75%    0.51%    0.41%    0.35%
3DRA        Tr    4.00%    2.58%    2.18%    1.99%    1.87%
            Sa    4.34%    2.67%    2.24%    2.03%    1.91%
XRA               1.60%    0.79%    0.68%    0.63%    0.61%

Table 6.3. Averaged, normalized RMSEs introduced by linear (m = 1), cubic (m = 2), quintic (m = 3), septic (m = 4), and nonic (m = 5) spline interpolation in the subpixel translation (top) and rotation (bottom) experiments. 134 6 Evaluation of Convolution-Based Interpolation Methods

Averaged LAEs (subpixel translation experiments)
Modality    Slc   m = 1     m = 2     m = 3     m = 4     m = 5
CT          Tr    21.61%    3.86%     1.89%     1.31%     ?1.06%
            Sa    45.57%    24.71%    19.17%    16.63%    14.73%
PD-w MRI    Tr    42.35%    19.36%    13.27%    10.92%    9.31%
            Sa    48.36%    32.67%    ?24.90%   ?20.16%   16.98%
T1-w MRI    Tr    44.86%    21.69%    15.59%    12.17%    10.00%
            Sa    52.70%    31.93%    ?25.75%   ?21.71%   ?18.92%
T2-w MRI    Tr    42.66%    22.03%    16.26%    13.23%    ?11.11%
            Sa    38.58%    27.31%    ?20.42%   ?16.07%   ?14.26%
PET         Tr    11.21%    2.38%     1.75%     1.39%     1.16%
            Sa    34.94%    18.84%    14.75%    11.82%    10.57%
SPECT       Tr    13.91%    2.10%     1.37%     1.02%     0.81%
            Sa    22.42%    7.19%     4.57%     3.54%     ?2.97%
3DRA        Tr    38.77%    19.77%    14.07%    ?11.29%   ?9.46%
            Sa    29.26%    14.11%    9.53%     7.89%     6.89%
XRA               37.75%    20.56%    13.96%    ?10.57%   ?8.60%

Averaged LAEs (rotation experiments)
Modality    Slc   m = 1     m = 2     m = 3     m = 4     m = 5
CT          Tr    25.32%    4.23%     2.55%     ?2.30%    ?2.11%
            Sa    46.17%    22.82%    17.49%    14.69%    ?12.66%
PD-w MRI    Tr    47.50%    19.13%    14.03%    ?12.14%   ?11.27%
            Sa    51.72%    ?33.65%   ?26.76%   ?22.47%   ?19.76%
T1-w MRI    Tr    47.32%    23.95%    ?17.85%   ?14.69%   ?12.99%
            Sa    54.01%    29.45%    24.12%    21.34%    ?18.86%
T2-w MRI    Tr    50.70%    ?25.32%   18.51%    ?15.69%   ?14.07%
            Sa    49.75%    ?33.51%   ?26.55%   ?22.13%   ?19.63%
PET         Tr    14.33%    2.92%     2.36%     2.16%     2.04%
            Sa    35.86%    ?18.06%   12.82%    10.25%    8.97%
SPECT       Tr    18.95%    2.33%     1.72%     1.41%     1.21%
            Sa    23.97%    ?8.28%    5.37%     ?4.18%    ?3.49%
3DRA        Tr    43.76%    ?24.03%   ?17.39%   14.35%    13.35%
            Sa    41.56%    ?25.42%   ?20.01%   ?17.11%   ?15.34%
XRA         Tr    53.95%    ?36.07%   ?28.75%   ?24.97%   ?22.79%

Table 6.4. The averaged, normalized LAEs introduced by linear (m = 1), cubic (m = 2), quintic (m = 3), septic (m = 4), and nonic (m = 5) spline interpo- lation in the subpixel translation (top) and rotation (bottom) experiments. See Section 6.4.2 for details, including the meaning of the “?”symbol. 6.5 Discussion 135

Figure 6.7 (Page 131). Scatter plots showing interpolation error (ordinates) versus computational cost (abscissae) for the different interpolation kernels applied to CT data. The label “A/B/C” on top of each plot provides details concerning the results shown, where A indicates the type of interpolation error (RMSE or LAE), B the type of experiment (rotation (Rot) or translation (Tra)), and C the type of slice (transversal (Tr) or sagittal (Sa)) on which the experiment was carried out. For every kernel in each plot, the presented FOM is an average of the individual FOMs (expressed as fractions of the dynamic range of grey values) resulting from the five datasets. Notice that the computational costs (shown here in seconds per voxel) were obtained from separate experiments. Open circles indicate the results of spline interpolation, where the left-most circle corresponds to zeroth-degree and the right-most to ninth-degree spline interpolation. See Section 6.4.2 for details.

Figure 6.8 (Page 132). Scatter plots showing interpolation error (ordinates) versus computational cost (abscissae) for the different interpolation kernels applied to all modalities (see the label on top of each plot) incorporated in this study. For every kernel in each plot, the FOM presented on the ordinate axis is an average of the RMSEs (expressed as fractions of the dynamic range of grey values) resulting from the rotation experiment carried out on five transversal slices. Notice that the computational costs (shown here in seconds per voxel) were obtained from separate experiments. As in Fig. 6.7, open circles indicate the results of spline interpolation, where the left-most circle corresponds to zeroth-degree and the right-most to ninth- degree spline interpolation. See Section 6.4.2 for details.

6.5.1 Discussion of Evaluation Strategies A frequently used approach to the evaluation of interpolation kernels is to compare the spatial and spectral properties of these kernels to those of the sinc function, either by discussing their low-frequency band-pass and high-frequency suppression capabilities [228,286], or by using such metrics as “sampling and reconstruction (SR) blur” [284, 332], “smoothing”, “post-aliasing”, or “overshoot” [235], “truncation er- ror”, or “non-sinc error” [227], to mention but a few. These approaches are based on the fundamental assumption that in all cases, the sinc function is the optimal in- terpolation kernel. As such, they provide insight in the theoretical behavior of these kernels as low-pass filters. However, the conclusions of such evaluations are often not easily translated to specific image processing tasks. Alternatively, interpolation ker- nels may be compared by subjective visual inspection of image quality, after having used the kernels to perform certain resampling operations [70,159,286,338,420], or by analyzing their abilities to reconstruct certain mathematical test functions [180,352]. However, given an image processing task, the most useful evaluation is obtained by applying the kernels to perform that task and then to compare the results to what is considered the gold standard. In a recently published paper by Grevera & Udupa [128], an elaborate comparison of a number of well-known scene-based and object-based interpolation methods was presented. In the evaluation, 3D medical images from different modalities were first subsampled in the slice direction with a factor of two. Next, the subsampled images 136 6 Evaluation of Convolution-Based Interpolation Methods were supersampled with the same factor in order to restore the original dimensions, where the supersampling was carried out by using the different interpolation methods. The subsampled-supersampled images were then compared to their originals by using different FOMs. We note that this evaluation approach was designed to assess the performance of interpolation methods for a specific task: increasing the number of slices for the purpose of improved 3D object quantification or visualization. The con- clusions of this study can not simply be generalized to other interpolation problems, such as those occurring in e.g. image registration. In addition, two properties of this evaluation strategy are questionable. First, it is known from Fourier analysis that subsampling introduces aliasing artifacts which are not easily corrected by interpola- tion. Because of the low spatial resolution, this is especially true for the slice direction. These aliasing errors may have influenced the results and conclusions. For example, in some cases the cubic convolution kernel resulting from the flatness constraint (referred to as the modified cubic spline) performed statistically significantly worse than linear interpolation, while it is known from many other studies [180, 249, 278, 285, 286, 420] (including the present one) that the former kernel is generally superior. Second, the evaluation does not assess the performance of entire kernels, but only of a few distinct function values of these kernels. For example, in the evaluation of the cubic convo- lution kernel, only the values at x = −1.5, −0.5, 0.5, 1.5 are taken into consideration. This implies that any other function that has the same values at these points would have given the same results. 
A frequently used alternative approach to study the performance of interpolation kernels for the purpose of applying geometrical transformations, is to apply these transformations to a number of test images, followed by the inverse transformation so as to bring the images back in their original position [65,134, 213, 227, 249, 278]. Ideally, the forward-backward transformed images should be identical to their re- spective originals, so that a quantitative performance measure can be based on the grey-value differences between the images. Although this approach may be of value when comparing certain families of interpolation kernels, its use is limited in the case of a large number of fundamentally different kernels, since the negative effects of a kernel in the forward transformation may be canceled out by the backward transfor- mation. This occurs e.g. when employing a nearest-neighbor interpolation scheme in a forward-backward subpixel translation operation. While we know that this type of interpolation yields very large errors in the forward transformation, the backward transformed image is nevertheless exactly identical to the original image. In the research described in this chapter, we used an alternative evaluation strat- egy. Rather than analyzing the spatial and spectral properties of interpolation kernels compared to the sinc function, we studied the actual performance of these kernels for the specific task of applying geometrical transformations to real medical image data. The strategy is a refined version of an approach used by Unser et al. [382], who con- sidered rotation over 16 × 22.5◦ = 360◦. The approach is entirely objective in the sense that it does not involve artificially created gold standards. It circumvents the aforementioned problems with other approaches: the test images are treated at their intrinsic resolution, thereby avoiding additional aliasing artifacts due to subsampling. Furthermore, by taking into consideration a large number of different rotation angles and translation vectors, interpolation errors are contributed to by the entire shape 6.5 Discussion 137 of the kernels, not just by a limited number of kernel values.5 Finally, only forward transformations are applied in order to better avoid cancellation of errors.

6.5.2 Discussion of the Results

As follows from the results presented in Section 6.4.2, the RMSEs introduced by spline interpolation were statistically significantly smaller than those caused by all other convolution-based interpolation approaches, regardless of image modality (CT, PD-w MRI, T1-w MRI, T2-w MRI, PET, SPECT, 3DRA, or XRA), slice direction (transversal or sagittal), or type of transformation (rotation or translation). However, this was not always the case for the LAEs. Although according to this figure of merit, linear interpolation (first-degree spline interpolation) performed statistically significantly better than all other kernels with a spatial support of two grid intervals (m = 1), cubic spline interpolation did not perform statistically significantly better than cubic convolution and cubic Lagrange interpolation in the cases marked by the “?” symbol in Table 6.4 (“m = 2” columns). In the higher-degree non-significant cases, especially the Welch, Cosine, Kaiser, and Lanczos windowed sinc kernels (in that order) performed comparably to spline interpolation. However, since these alternative methods did not perform statistically significantly better than spline interpolation, nothing is lost by using spline interpolation in these cases.

An explanation for the superiority of spline interpolation in the vast majority of cases may be obtained from approximation theory: it has been shown recently by Blu & Unser [20,21] that spline interpolation has the largest possible order of approximation, given the spatial support of the B-spline convolution kernel. This implies that, given the samples of any originally continuous input image, the interpolated image resulting from splines converges most rapidly to the original image as the inter-sample distance vanishes. Although there exist other interpolation kernels with this property, such as the Lagrange central interpolation kernels, splines have the unique additional property that they also yield the smoothest interpolant: in contrast with all other approaches considered in this chapter, nth-degree spline interpolation results in an interpolant which is n−1 times continuously differentiable. See Table 6.5 for an overview of the convolution kernels incorporated in this study and their corresponding properties as mentioned in this paragraph: spatial support, smoothness or regularity, and the rate of convergence of the resulting interpolant.

In order to give an impression of the errors introduced by spline interpolation of different degrees, the results of the rotation experiment for a transversal slice of a CT and a T1-weighted MRI dataset, as well as a sagittal slice of a PET dataset, are shown in Figs. 6.9, 6.10, and 6.11, respectively. (Notice that in the latter figure, the displayed images were scaled so as to visually correct for the voxel anisotropy.)
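To make the notion of approximation order used above concrete, the standard Strang-Fix type error bound can be sketched as follows (an indicative form only; the constant C and the exact norms are not specified in this chapter):

% Indicative approximation-order bound: for a kernel of approximation order L
% and a sufficiently smooth image f, the error of the interpolant decays like
% \Delta^L as the inter-sample distance \Delta vanishes; for nth-degree spline
% interpolation, L = n + 1 (cf. Table 6.5).
\bigl\| f - \tilde{f}_{\Delta} \bigr\|_{L_2}
  \;\leq\; C \,\Delta^{L}\, \bigl\| f^{(L)} \bigr\|_{L_2},
  \qquad \Delta \rightarrow 0 .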

5We note that in this evaluation we have considered only subpixel translations over less than 0.5 pixels and rotation angles smaller than 45°. We claim that this is sufficient to demonstrate the performance of the interpolation kernels. It can easily be seen that when performing a translation over k + γ pixels, with k ∈ Z and γ ∈ [0, 1) ⊂ R, the required kernel values are determined by γ, not by k. Furthermore, due to the symmetry of all kernels, a translation over 0.5 ≤ γ < 1.0 pixels involves the same kernel values as a translation over 1.0 − γ pixels. Similarly, when rotating an image around its center over 90κ + ϕ degrees, with κ ∈ Z and ϕ ∈ [0, 90) ⊂ R, the required kernel values are determined by ϕ, not by κ, and due to the symmetry of the operation, a rotation over 45 ≤ ϕ < 90 degrees involves the same kernel values as a rotation over 90 − ϕ degrees.

Kernel                          Support   Smoothness   Convergence   Notes
Nearest-neighbor                1         —            O(∆^1)
Linear                          2         C^0          O(∆^2)
Lagrange                        n+1       C^0          O(∆^(n+1))    n ≥ 1 odd
                                n+1       —            O(∆^(n+1))    n ≥ 2 even
Dodgson                         3         C^0          O(∆^2)
Generalized convolution         n+1       C^(n−2)      O(∆^3)        n ≥ 3 odd, α = α♭
                                n+1       C^(n−2)      O(∆^1)        n ≥ 3 odd, α ≠ α♭
B-spline                        n+1       C^(n−1)      O(∆^(n+1))    requires prefiltering
Bartlett, Blackman-Harris,
  Hamming windowed sinc         2m        C^0          O(∆^0)
Gaussian windowed sinc          2m        C^0          O(∆^0)        ∀ α ∈ R+
Kaiser windowed sinc            2m        C^0          O(∆^0)        ∀ α ∈ R+, α < ∞
Cosine, Lanczos,
  Welch windowed sinc           2m        C^1          O(∆^0)
Blackman, Bohman,
  Hann windowed sinc            2m        C^2          O(∆^0)

Table 6.5. The convolution kernels described in Section 6.3 and some of their properties: spatial support, smoothness or regularity, and the rate of convergence of the resulting interpolant. In the second-last column, “O” denotes Landau's order symbol and ∆ is the inter-sample distance as described in Section 6.2. See Section 6.5.2 and Appendix 6.A for more details.

As was to be expected from the figures in Tables 6.3 and 6.4, the interpolation errors in the CT image are smallest. We note that the errors made in the rotation and subpixel translation experiments are cumulative errors. That is, they are considerably larger than the errors in practical interpolation problems of the same nature; one usually does not perform e.g. a rotation by successive intermediate rotations. Nevertheless, the experiments give a representative impression of the average relative performance of the different interpolation kernels.

As can be observed from the results of the subpixel translation experiment shown in Tables 6.3 and 6.4, the errors in the through-plane direction can be much larger than those made in the in-plane direction in images with a relatively large voxel anisotropy (in our experiments notably the CT, MRI, and PET images). This can be explained from sampling theory: the lower the sampling frequency, the more pre- and post-aliasing artifacts can be expected to be introduced by sampling and non-ideal reconstruction operations. The results indicate that, in order to reduce interpolation errors when performing 3D geometrical transformations, it is inefficient to choose a larger (more expensive) kernel for the in-plane interpolations if nothing is done to considerably improve the through-plane interpolations.

Figure 6.9. Visual impression of the errors resulting from the rotation experiment carried out on a transversal slice of a CT dataset (top left), when using (from top middle to bottom right) nearest-neighbor or zeroth-degree spline interpolation, linear or first-degree spline interpolation, and cubic, quintic, and septic spline interpolation, respectively.

To give an example, for the CT and PET images considered in this evaluation, about ninth-degree spline interpolation was required in the through-plane direction in order to obtain RMSEs similar to those of linear interpolation in the in-plane direction (Table 6.3). For the other modalities, the difference between in-plane and through-plane interpolation errors was less drastic, due to the smaller voxel anisotropy.

Finally, a note concerning the computational cost of spline interpolation. As explained in Section 6.3.4, interpolation by means of B-spline convolution kernels requires prefiltering of the raw image data for all degrees n ≥ 2. Although the timing experiments indicated that spline interpolation (including the prefiltering) is computationally cheaper than windowed sinc interpolation, it is somewhat more expensive than the alternative piecewise polynomial schemes. When using look-up tables, as discussed in Section 6.4.2, the required prefiltering causes spline interpolation to be the computationally most expensive approach.

Figure 6.10. Visual impression of the errors resulting from the rotation experiment carried out on a transversal slice of a T1-weighted MRI dataset (top left), when using (from top middle to bottom right) nearest-neighbor or zeroth-degree spline interpolation, linear or first-degree spline interpolation, and cubic, quintic, and septic spline interpolation, respectively.

However, since the prefiltering operations can always be carried out separably, their computational cost becomes relatively small in higher-dimensional interpolation problems. Moreover, in applications where many transformations have to be applied to the original image, such as in registration and visualization, the prefiltering needs to be carried out only once, so that the additional cost becomes negligible. Therefore, in the plots shown in Figs. 6.7 and 6.8, only the computational costs of the actual convolution operations were used.

6.6 Conclusions

In this chapter, we presented the results of a quantitative evaluation of sinc-approximating kernels for convolution-based medical image interpolation. The evaluation comprised the application of geometrical transformations (rotations and subpixel translations) to medical images from different modalities (CT, MRI, PET, SPECT, 3DRA, and XRA), by using the different kernels. The interpolation errors in the resulting transformed images were analyzed by computing the root-mean-square and the largest absolute deviation from the corresponding reference images.

Figure 6.11. Visual impression of the errors resulting from the rotation experiment carried out on a sagittal slice of a PET dataset (top left), when using (from top middle to bottom right) nearest-neighbor or zeroth-degree spline interpolation, linear or first-degree spline interpolation, and cubic, quintic, and septic spline interpolation, respectively.

The evaluation was designed in such a way that the original images could be used as references. A total of 126 different kernels were evaluated. These included piecewise polynomial kernels (nearest-neighbor, linear, Lagrange, generalized convolution, and B-spline kernels) and a large number of windowed sinc kernels (Bartlett, Blackman, Blackman-Harris, Bohman, Cosine, Gaussian, Hamming, Hann, Kaiser, Lanczos, Rectangular, and Welch windows), with spatial supports ranging from 2 to 10 grid intervals.

The combined results of the accuracy and timing experiments showed that regardless of image modality, slice direction (transversal or sagittal), type of transformation (rotation or translation), or figure of merit (RMSE or LAE), spline interpolation constitutes the best trade-off between accuracy and computational cost. That is to say, none of the other approaches included in this study was more accurate and at the same time computationally cheaper. In addition, pairwise comparisons of the error figures resulting from kernels with equal spatial support indicated that spline interpolation is statistically significantly better in the vast majority of cases. Therefore we conclude that spline interpolation is to be preferred over all other methods.

The results also revealed that, especially in images with a relatively large voxel anisotropy (in our experiments notably the CT, MRI, and PET images), the errors caused by interpolation in the through-plane direction are considerably larger than those resulting from interpolation in the in-plane direction. This implies that, in general, higher-degree spline interpolation is required in the through-plane direction in order to obtain errors similar to those of linear interpolation in the in-plane direction.

When comparing different degrees of spline interpolation, it can be concluded that cubic spline interpolation results in a considerable (28%–91%) reduction of interpolation errors as compared to linear interpolation (first-degree spline interpolation). Even better results (66%–98% reduction) are obtained with higher-degree spline interpolation, albeit at a considerable increase in computational cost.

6.A Appendix: Piecewise Polynomial Interpolators and B-Splines

In Section 6.5.2, some of the theoretical properties of the interpolation kernels described in Section 6.3 were discussed briefly and summarized in Table 6.5. By comparing the figures presented in this table, it was concluded that the interpolants obtained by B-spline interpolation are smoother and converge faster to the original continuous images than those resulting from any of the other convolution-based interpolation methods described in this chapter.

When using B-splines of at most degree n, it is possible to reproduce any piecewise polynomial of the same degree, including the alternative piecewise polynomial kernels mentioned in this chapter. The purpose of this appendix is to give more insight into the properties of the alternative piecewise polynomial kernels by explicitly expressing these kernels in terms of B-splines. The relations presented here were obtained by computing the Fourier transform of the kernels and by factoring out sinc^(n+1)(f), which corresponds to β^n(x) in the spatial domain.

As already mentioned in Section 6.3.4, the zeroth-degree and first-degree B-splines are identical to, respectively, the nearest-neighbor and linear interpolation kernel:

\zeta(x) = \beta^0(x), \qquad (6.21)

and

\phi(x) = \beta^1(x). \qquad (6.22)

For the Lagrange central interpolation kernels described in Section 6.3.2, the following relations can be derived:

\lambda^1(x) = \beta^1(x), \qquad (6.23)

\lambda^2(x) = \left(1 - \frac{1}{8}\frac{\partial^2}{\partial x^2}\right)\beta^2(x), \qquad (6.24)

\lambda^3(x) = \left(1 - \frac{1}{6}\frac{\partial^2}{\partial x^2}\right)\beta^3(x), \qquad (6.25)

\lambda^4(x) = \left(1 - \frac{5}{24}\frac{\partial^2}{\partial x^2} + \frac{3}{128}\frac{\partial^4}{\partial x^4}\right)\beta^4(x), \qquad (6.26)

\lambda^5(x) = \left(1 - \frac{1}{4}\frac{\partial^2}{\partial x^2} + \frac{1}{30}\frac{\partial^4}{\partial x^4}\right)\beta^5(x), \qquad (6.27)

\lambda^6(x) = \left(1 - \frac{7}{24}\frac{\partial^2}{\partial x^2} + \frac{259}{5760}\frac{\partial^4}{\partial x^4} - \frac{5}{1024}\frac{\partial^6}{\partial x^6}\right)\beta^6(x), \qquad (6.28)

\lambda^7(x) = \left(1 - \frac{1}{3}\frac{\partial^2}{\partial x^2} + \frac{7}{120}\frac{\partial^4}{\partial x^4} - \frac{1}{140}\frac{\partial^6}{\partial x^6}\right)\beta^7(x), \qquad (6.29)

\lambda^8(x) = \left(1 - \frac{3}{8}\frac{\partial^2}{\partial x^2} + \frac{47}{640}\frac{\partial^4}{\partial x^4} - \frac{3229}{322560}\frac{\partial^6}{\partial x^6} + \frac{35}{32768}\frac{\partial^8}{\partial x^8}\right)\beta^8(x), \qquad (6.30)

\lambda^9(x) = \left(1 - \frac{5}{12}\frac{\partial^2}{\partial x^2} + \frac{13}{144}\frac{\partial^4}{\partial x^4} - \frac{41}{3024}\frac{\partial^6}{\partial x^6} + \frac{1}{630}\frac{\partial^8}{\partial x^8}\right)\beta^9(x). \qquad (6.31)

As can be observed from Eqs. (6.23)–(6.31), any nth-degree Lagrange central interpolation kernel can be expressed in terms of a B-spline of the same degree and its derivatives. The presence of higher-order derivatives does not influence the approximation order of the composition; it equals that of the corresponding B-spline. This can be confirmed by testing the so-called Strang-Fix conditions, as described by Blu & Unser [20,21] and also by Thévenaz et al. [367]. The derivatives do, however, reduce the smoothness of the composition. Since the kernels are symmetric, only even-order derivatives are involved, which explains why the odd-degree kernels are continuous and the even-degree kernels are not (see also Table 6.5).

Dodgson's quadratic convolution kernel described in Section 6.3.3 can be expressed in terms of first- and second-degree B-splines as follows:

\psi^2(x) = 2\beta^2(x) - \tfrac{1}{2}\left[\beta^1(x - \tfrac{1}{2}) + \beta^1(x + \tfrac{1}{2})\right]. \qquad (6.32)

For the odd-degree generalized convolution kernels described in that same section, the following relations exist:

\psi^3(x) = 3\beta^3(x) - \left[\beta^2(x - \tfrac{1}{2}) + \beta^2(x + \tfrac{1}{2})\right], \qquad (6.33)

\psi^5(x) = -\frac{3}{8}\int_{-\infty}^{x}\!\int_{-\infty}^{t_1}\Bigl\{\, 110\,\beta^3(t) + 15\left[\beta^3(t-1) + \beta^3(t+1)\right] - 67\left[\beta^2(t-\tfrac{1}{2}) + \beta^2(t+\tfrac{1}{2})\right] - 3\left[\beta^2(t-\tfrac{3}{2}) + \beta^2(t+\tfrac{3}{2})\right] \Bigr\}\, dt\, dt_1, \qquad (6.34)

\psi^7(x) = \frac{5}{578}\int_{-\infty}^{x}\!\int_{-\infty}^{t_3}\!\int_{-\infty}^{t_2}\!\int_{-\infty}^{t_1}\Bigl\{\, 87038\,\beta^3(t) + 26880\left[\beta^3(t-1) + \beta^3(t+1)\right] + 497\left[\beta^3(t-2) + \beta^3(t+2)\right] - 62512\left[\beta^2(t-\tfrac{1}{2}) + \beta^2(t+\tfrac{1}{2})\right] - 8313\left[\beta^2(t-\tfrac{3}{2}) + \beta^2(t+\tfrac{3}{2})\right] - 71\left[\beta^2(t-\tfrac{5}{2}) + \beta^2(t+\tfrac{5}{2})\right] \Bigr\}\, dt\, dt_1\, dt_2\, dt_3, \qquad (6.35)

\psi^9(x) = -\frac{35}{684232}\int_{-\infty}^{x}\!\int_{-\infty}^{t_5}\!\cdots\!\int_{-\infty}^{t_1}\Bigl\{\, 293240340\,\beta^3(t) + 126402003\left[\beta^3(t-1) + \beta^3(t+1)\right] + 8421606\left[\beta^3(t-2) + \beta^3(t+2)\right] + 34461\left[\beta^3(t-3) + \beta^3(t+3)\right] - 228578875\left[\beta^2(t-\tfrac{1}{2}) + \beta^2(t+\tfrac{1}{2})\right] - 50983407\left[\beta^2(t-\tfrac{3}{2}) + \beta^2(t+\tfrac{3}{2})\right] - 1912129\left[\beta^2(t-\tfrac{5}{2}) + \beta^2(t+\tfrac{5}{2})\right] - 3829\left[\beta^2(t-\tfrac{7}{2}) + \beta^2(t+\tfrac{7}{2})\right] \Bigr\}\, dt\, dt_1 \cdots dt_5. \qquad (6.36)

Notice that Eqs. (6.33)–(6.36) hold true only for α = α♭. Contrary to the Lagrange central interpolation kernels, any nth-degree generalized convolution kernel is composed of (integrated versions of) second- and third-degree B-splines. The integration operations do not influence the approximation order of the composition; for all kernels, it equals that of the second-degree B-spline, as can be confirmed by testing the aforementioned Strang-Fix conditions. Integration does, however, increase the smoothness of the resulting kernels (see also Table 6.5).
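As a quick numerical illustration (a sketch of my own, not code from the thesis), the right-hand sides of Eqs. (6.32) and (6.33) can be built from the standard closed-form expressions for the centered B-splines of degree one, two, and three, and checked for the interpolation property, i.e., value one at the origin and zero at all other integers:

    import numpy as np

    def beta1(x):
        x = np.abs(x)
        return np.where(x < 1.0, 1.0 - x, 0.0)

    def beta2(x):
        x = np.abs(x)
        return np.where(x <= 0.5, 0.75 - x**2,
               np.where(x <= 1.5, 0.5 * (1.5 - x)**2, 0.0))

    def beta3(x):
        x = np.abs(x)
        return np.where(x <= 1.0, 2.0/3.0 - x**2 + 0.5*x**3,
               np.where(x <= 2.0, (2.0 - x)**3 / 6.0, 0.0))

    def psi2(x):   # right-hand side of Eq. (6.32): Dodgson's quadratic kernel
        return 2.0*beta2(x) - 0.5*(beta1(x - 0.5) + beta1(x + 0.5))

    def psi3(x):   # right-hand side of Eq. (6.33): generalized cubic kernel
        return 3.0*beta3(x) - (beta2(x - 0.5) + beta2(x + 0.5))

    k = np.arange(-3, 4)
    print(psi2(k))   # -> 0 0 0 1 0 0 0 (up to rounding)
    print(psi3(k))   # -> 0 0 0 1 0 0 0 (up to rounding)

For what it is worth, the values produced by psi3 coincide with those of the familiar Catmull-Rom cubic (the cubic convolution kernel with a = −1/2), which is consistent with the third-order convergence listed in Table 6.5 for α = α♭; this identification is my own observation and is not stated in the text above.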

6.B Appendix: Implementation of Direct B-Spline Filters

Since the conclusion of the evaluation presented in this chapter is that spline interpolation is to be preferred in general, it may be useful to provide some more details concerning the efficient implementation of this type of interpolation. The outline of the treatise presented here was first described by Unser et al. [379–381].

As explained in Section 6.3.4, nth-degree spline interpolation is carried out separably in every dimension by means of convolution according to:

\sum_{k=-\infty}^{+\infty} c(k)\,\beta^n(x - k), \qquad x \in \mathbb{R}, \qquad (6.37)

where β^n is the nth-degree B-spline as defined in Eq. (6.16), and c(k), k ∈ Z, are the so-called B-spline coefficients, to be determined. In order for the convolution (6.37) to actually interpolate given samples s(k), k ∈ Z, the following must hold:

\sum_{l=-\infty}^{+\infty} c(l)\,\beta^n(k - l) = s(k), \qquad \forall k \in \mathbb{Z}. \qquad (6.38)

The z-transform of Eq. (6.38) reads C(z)B^n(z) = S(z), and consequently, the B-spline coefficients can be found by evaluating

C(z) = \left(B^n(z)\right)^{-1} S(z). \qquad (6.39)

In this equation, (B^n(z))^{-1} is called the direct B-spline filter of degree n. Since, by definition, B^n(z) = Σ_{k∈Z} β^n(k) z^{-k}, this filter can be obtained simply by inserting Eq. (6.16). When doing so, it turns out that (B^n(z))^{-1} = 1 for n = 0 and n = 1, which implies that in these cases C(z) = S(z), that is to say, c(k) = s(k). For any n ≥ 2, however, (B^n(z))^{-1} is a digital high-pass filter that corrects for the blurring introduced by the corresponding B-spline convolution kernel, due to the fact that this kernel does not possess the interpolation property expressed in Eq. (6.4).

In order to obtain an efficient implementation of the direct B-spline filter corresponding to any n ≥ 2, the best approach is to factorize it. Factorization of (B^n(z))^{-1} involves the computation of its poles. Due to the fact that all B-spline kernels are symmetric, we have that (B^n(z))^{-1} = (B^n(z^{-1}))^{-1} for any degree n. This implies that all poles come in reciprocal pairs, and the filter can be written as

\left(B^n(z)\right)^{-1} = c_n \prod_{i=1}^{\lfloor n/2 \rfloor} G(z; z_i), \qquad (6.40)

where

c_n = \frac{1}{\beta^n(\lfloor n/2 \rfloor)}, \qquad (6.41)

and

G(z; z_i) = \frac{1}{z^{-1}(z - z_i)(z - z_i^{-1})} = \frac{-z_i}{(1 - z_i z^{-1})(1 - z_i z)} \qquad (6.42)

is the factor corresponding to the pole pair {z_i, z_i^{-1}}, with |z_i| < 1. Since the poles of (B^n(z))^{-1} are the zeroes of B^n(z), they are obtained by solving B^n(z) = 0. Notice that for n ≥ 6, this can only be done numerically (Abel's theorem [2]), since in those cases the degree of the resulting algebraic equation is larger than four. Numerical representations of the poles |z_i| < 1 of the direct B-spline filters of degree n = 0, 1, ..., 9, as used in this chapter, are shown in Table 6.6.

Efficient implementation of the factors G(z; z_i) is obtained by a further factorization according to:

G(z; z_i) = G^{+}(z; z_i)\, G^{-}(z; z_i), \qquad (6.43)

with

G^{+}(z; z_i) = \frac{1}{1 - z_i z^{-1}} \qquad \text{and} \qquad G^{-}(z; z_i) = \frac{-z_i}{1 - z_i z}. \qquad (6.44)

By using the z-transform property s(k + l) ↔ z^l S(z), it can easily be derived that G(z; z_i), implemented by successively applying G^+(z; z_i) and G^-(z; z_i) as described above, gives rise to the following recursive filters in the spatial domain:

s^{+}(k) = s(k) + z_i\, s^{+}(k - 1), \qquad \text{(causal filter)} \qquad (6.45a)

s^{-}(k) = z_i \left( s^{-}(k + 1) - s^{+}(k) \right), \qquad \text{(anti-causal filter)} \qquad (6.45b)

where the s(k) are input samples, the s^+(k) are intermediate output samples resulting from the causal filter, and the s^-(k) are the final output samples resulting from the subsequent application of the anti-causal filter.

Degree   Poles                              Degree   Poles
n = 0    —                                  n = 1    —
n = 2    z1 = −1.71572875254 · 10^−1        n = 3    z1 = −2.67949192431 · 10^−1
n = 4    z1 = −3.61341225900 · 10^−1        n = 5    z1 = −4.30575347100 · 10^−1
         z2 = −1.37254292973 · 10^−2                 z2 = −4.30962882033 · 10^−2
n = 6    z1 = −4.88294589303 · 10^−1        n = 7    z1 = −5.35280430796 · 10^−1
         z2 = −8.16792710762 · 10^−2                 z2 = −1.22554615192 · 10^−1
         z3 = −1.41415180833 · 10^−3                 z3 = −9.14869480961 · 10^−3
n = 8    z1 = −5.74686909249 · 10^−1        n = 9    z1 = −6.07997389169 · 10^−1
         z2 = −1.63035269297 · 10^−1                 z2 = −2.01750520193 · 10^−1
         z3 = −2.36322946948 · 10^−2                 z3 = −4.32226085405 · 10^−2
         z4 = −1.53821310642 · 10^−4                 z4 = −2.12130690318 · 10^−3

Table 6.6. Numerical representations (12 significant decimals) of the poles |z_i| < 1, i = 1, ..., ⌊n/2⌋, of the direct B-spline filters of degree n = 0, 1, ..., 9.
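The tabulated values can be reproduced numerically. The sketch below (illustrative naming of my own, not code from the thesis) samples the centered B-spline β^n at the integers using the standard one-sided power formula, forms the polynomial whose zeros coincide with those of B^n(z), and keeps the roots inside the unit circle:

    import numpy as np
    from math import comb, factorial

    def bspline(x, n):
        # centered B-spline beta^n(x) via the standard one-sided power formula
        t = x + (n + 1) / 2.0
        return sum((-1)**j * comb(n + 1, j) * max(t - j, 0.0)**n
                   for j in range(n + 2)) / factorial(n)

    def bspline_poles(n):
        m = n // 2
        samples = [bspline(k, n) for k in range(-m, m + 1)]   # beta^n(-m), ..., beta^n(m)
        roots = np.roots(samples)                             # zeros of z^m * B^n(z)
        return sorted(z.real for z in roots if abs(z) < 1.0)

    print(bspline_poles(3))   # [-0.26794919...], cf. the n = 3 entry of Table 6.6
    print(bspline_poles(7))   # three poles, cf. the n = 7 entry of Table 6.6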

Since in practice these filters will be applied to images of finite extent, it remains to describe how to compute the initial values of the recursions given in Eqs. (6.45a) and (6.45b). That is, given samples s(k), k = 0, 1, ..., K − 1, where K denotes the total number of samples, how to compute s^+(0) and s^-(K − 1)? The former may be obtained by writing the application of G^+(z; z_i) to S(z) as a convolution, rather than a recursion, in the spatial domain:

s^{+}(k) = \sum_{l=-\infty}^{+\infty} g^{+}(l; z_i)\, s(k - l), \qquad (6.46)

where g^+(k; z_i) is the inverse z-transform of G^+(z; z_i) which, by using the z-transform pair a^k u(k) ↔ z/(z − a), which holds true for |z| > |a| (see e.g. Kwakernaak & Sivan [200, p. 492]), can be derived to be

g^{+}(k; z_i) = z_i^{k}\, u(k), \qquad \text{with} \quad u(k) = \begin{cases} 1 & \text{if } k \geq 0, \\ 0 & \text{if } k < 0. \end{cases} \qquad (6.47)

By substituting (6.47) into (6.46) and by using mirror-boundary conditions, that is, s(−k)=s(k), which turns the original finite-extent signal into an infinite-extent one with period 2K − 2, we have that

s^{+}(0) = \sum_{l=0}^{+\infty} z_i^{l}\, s(l) = \sum_{k=0}^{+\infty} \left(z_i^{2K-2}\right)^{k} \sum_{l=0}^{2K-3} z_i^{l}\, s(l) = \frac{1}{1 - z_i^{2K-2}} \sum_{l=0}^{2K-3} z_i^{l}\, s(l). \qquad (6.48)

The initial value for the anti-causal recursion, s^-(K − 1), may be obtained by writing the application of the compound filter G(z; z_i) to S(z) as a convolution in the spatial domain:

s^{-}(k) = \sum_{l=-\infty}^{+\infty} g(l; z_i)\, s(k - l), \qquad (6.49)

where g(k; z_i) is the inverse z-transform of G(z; z_i), which may be obtained by using the partial-fraction expansion of the right-hand side of Eq. (6.42):

G(z; z_i) = \frac{-z_i}{1 - z_i^{2}} \left( \frac{1}{1 - z_i z^{-1}} + \frac{1}{1 - z_i z} - 1 \right), \qquad (6.50)

in combination with the following z-transform pairs: a^k u(k) ↔ z/(z − a), which holds for |z| > |a|; furthermore −a^k u(−k) ↔ a/(z − a), which holds for |z| < |a|; and finally 1 ↔ δ(k), which holds for all z ∈ C (see e.g. Kwakernaak & Sivan [200, p. 492]). Together, this results in

g(k; z_i) = \frac{-z_i}{1 - z_i^{2}} \left( z_i^{k}\, u(k) + z_i^{-k}\, u(-k) - \delta(k) \right). \qquad (6.51)

By substituting (6.51) into (6.49) and by again using mirror-boundary conditions, that is, s(K − 1 + l) = s(K − 1 − l), it can easily be derived that

s^{-}(K - 1) = \frac{-z_i}{1 - z_i^{2}} \left( 2\, s^{+}(K - 1) - s(K - 1) \right). \qquad (6.52)

In summary, interpolation by means of a B-spline kernel of degree n ≥ 2 requires prefiltering of the raw data in order to correct for the blurring nature of the kernel. This prefiltering is accomplished by carrying out the following operations on the given samples s(k), k = 0, 1, ..., K − 1:

1) Compute the initial value s^+(0) for the causal filter by evaluating the right-hand side of Eq. (6.48),6 using z_i = z_1 as given in Table 6.6.

2) Apply the causal filter, Eq. (6.45a), for k = 1, 2, ..., K − 1, using z_i = z_1.

3) Compute the initial value s^-(K − 1) for the anti-causal filter by using z_i = z_1 in evaluating Eq. (6.52).

4) Apply the anti-causal filter, Eq. (6.45b), for k = K − 2, ..., 0, using z_i = z_1.

5) Repeat steps 1) – 4) for the remaining poles z_i, i = 2, ..., ⌊n/2⌋, belonging to degree n, as given in Table 6.6.

6) Multiply the resulting samples by the factor c_n given in Eq. (6.41).

In the case of multiple dimensions, steps 1) – 6) must be repeated separably in all dimensions. That is, first to the rows, then to the resulting columns, etc.

6Notice that if the number of samples, K, is sufficiently large, the summation may be terminated earlier than at l = 2K − 3, since the contributions of the samples corresponding to larger values of l are negligible due to the exponential decay of z_i^l.
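To make the above recipe concrete, the following sketch implements steps 1) – 6) for a one-dimensional signal in NumPy (naming and structure are mine; plain Python loops are used for clarity rather than speed). The cubic pole z1 = −2 + √3 used in the example agrees with the n = 3 entry of Table 6.6:

    import numpy as np

    def direct_bspline_filter_1d(s, poles, gain):
        # Direct B-spline prefilter with mirror boundary conditions.
        # poles: the values |z_i| < 1 from Table 6.6; gain: c_n of Eq. (6.41).
        c = np.asarray(s, dtype=float).copy()
        K = c.size
        for z in poles:                                  # step 5): one pass per pole
            x = c.copy()                                 # input samples s(k) of this pass
            # step 1): initial value of the causal filter, Eq. (6.48)
            mirrored = np.concatenate((x, x[-2:0:-1]))   # mirror-periodic extension, period 2K-2
            c[0] = np.dot(z ** np.arange(2*K - 2), mirrored) / (1.0 - z ** (2*K - 2))
            # step 2): causal recursion, Eq. (6.45a)
            for k in range(1, K):
                c[k] = x[k] + z * c[k - 1]
            # step 3): initial value of the anti-causal filter, Eq. (6.52)
            c[K - 1] = (-z / (1.0 - z*z)) * (2.0 * c[K - 1] - x[K - 1])
            # step 4): anti-causal recursion, Eq. (6.45b)
            for k in range(K - 2, -1, -1):
                c[k] = z * (c[k + 1] - c[k])
        return gain * c                                  # step 6): multiply by c_n

    # Example: cubic B-spline (n = 3), single pole z1 = -2 + sqrt(3), gain c_3 = 6
    z1 = -2.0 + np.sqrt(3.0)
    s = np.array([0., 1., 4., 2., 1., 0.])
    coef = direct_bspline_filter_1d(s, [z1], 6.0)
    # Check Eq. (6.38) at the interior samples: (c(k-1) + 4c(k) + c(k+1))/6 = s(k);
    # the two border values differ because np.convolve zero-pads the coefficients.
    print(np.convolve(coef, [1/6., 4/6., 1/6.], mode='same'))

The resulting coefficients c(k) can be used directly in Eq. (6.37); in multiple dimensions the same routine is simply applied along each axis in turn, as prescribed above. Essentially the same prefiltering is what, for example, scipy.ndimage.spline_filter1d performs.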

Certain authors, speaking of their works, say: “My book”, “My commentary”, “My history”, etc. (...) They would do better to say: “Our book”, “Our commentary”, “Our history”, etc., because there is in them usually more of other people's than their own.

— Blaise Pascal, Pens´ees ()

Bibliography

[1] I. E. Abdou & W. K. Pratt, “Quantitative Design and Evaluation of Enhancement / Thresh- olding Edge Detectors”, Proceedings of the IEEE, vol. 67, no. 5, pp. 753–763, 1979. [2] N. H. Abel, “Beweis der Unm¨oglichkeit Algebraische Gleichungen von H¨oheren Graden als dem Vierten Allgemein Aufzul¨osen”, Journal f¨ur die Reine und Angewandte Mathematik,vol.1, pp. 65–84, 1826. [3] J. K. Aggerwal & N. Nandhakumar, “On the Computation of Motion from Sequences of Images — A Review”, Proceedings of the IEEE, vol. 76, no. 8, pp. 917–935, 1988. [4] A. Aldroubi, M. Unser, M. Eden, “Cardinal Spline Filters: Stability and Convergence to the Ideal Sinc Interpolator”, Signal Processing, vol. 28, no. 2, pp. 127–138, 1992. [5] R. Althof, M. G. J. Wind, J. T. Dobbins, “A Rapid and Automatic Image Registration Algorithm with Subpixel Accuracy”, IEEE Transactions on Medical Imaging, vol. 16, no. 1, pp. 308–316, 1997. [6]D.G.Altman,Practical Statistics for Medical Research, Chapman & Hall, London, UK, 1991. [7] A. A. Amini, “A Scalar Function Formulation for Optical Flow: Applications to X-Ray Imag- ing”, in Proceedings of the IEEE Workshop on Biomedical Image Analysis, IEEE Computer Society Press, Los Alamitos, California, USA, pp. 117–123, 1994. [8] R. E. Anderson, R. A. Kruger, R. G. Sherry, J. A. Nelson, P. Liu, “Tomographic DSA using Temporal Filtration: Initial Neurovascular Application”, American Journal of Neuroradiology, vol. 5, no. 3, pp. 277–280, 1984. [9] A. Antoniou, Digital Filters: Analysis, Design, and Applications, 2nd ed., McGraw-Hill, New York, USA, 1993. [10] R. Anxionnat, S. Bracard, J. Macho, E. Da Costa, R. Vaillant, L. Launay, Y. Trousset, R. Romeas, L. Picard, “3D Angiography: Clinical Interest. First Applications in Interven- tional Neuroradiology”, Journal of Neuroradiology, vol. 25, no. 4, pp. 251–262, 1998. [11] K. Astr¨˚ om & A. Heyden, “Stochastic Analysis of Image Acquisition and Scale-Space Smooth- ing”, in Gaussian Scale-Space Theory, J. Sporring, M. Nielsen, L. Florack, P. Johansen (eds.), vol. 8 of Computational Imaging and Vision, Kluwer Academic Publishers, Dordrecht, the Netherlands, Ch. 9, pp. 129–136, 1997. [12] Aurelius Augustinus, De Civitate Dei, 413-426 A.D. English translation: City of God,Penguin Books, London, UK, 1984. [13] D. I. Barnea & H. F. Silverman, “A Class of Algorithms for Fast Digital Image Registration”, IEEE Transactions on Computers, vol. 21, no. 2, pp. 179–186, 1972. [14] M. S. Bartlett, “Periodogram Analysis and Continuous Spectra”, Biometrika, vol. 37, pp. 1–16, 1950. [15] G. Bavinzski, B. Richling, A. Gruber, M. Killer, D. Levy, “Endosaccular Occlusion of Basilar Artery Bifurcation Aneurysms using Electrically Detachable Coils”, Acta Neurochirurgica, vol. 134, no. 3/4, pp. 184–189, 1995. 150 Bibliography

[16] F. J. Beekman, E. T. P. Slijpen, W. J. Niessen, “Selection of Task-Dependent Diffusion Filters for the Post-Processing of SPECT Images”, Physics in Medicine and Biology, vol. 43, no. 6, pp. 1713–1730, 1998. [17] T. Beier & S. Neely, “Feature-Based Image Metamorphosis”, Computer Graphics (SIG- GRAPH’92 Conference Proceedings), vol. 26, no. 2, pp. 35–42, 1992. [18] J. Big¨un, G. H. Granlund, J. Wiklund, “Multidimensional Orientation Estimation with Ap- plications to Texture Analysis and Optical Flow”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 8, pp. 775–790, 1991. [19] R. B. Blackman & J. W. Tukey, The Measurement of Power Spectra: From the Point of View of Communications Engineering, Dover publications, New York, USA, 1959. [20] T. Blu & M. Unser, “Quantitative Fourier Analysis of Approximation Techniques: Part I — Interpolators and Projectors”, IEEE Transactions on Signal Processing, vol. 47, no. 10, pp. 2783–2795, 1999. [21] T. Blu & M. Unser, “Quantitative Fourier Analysis of Approximation Techniques: Part II — Wavelets”, IEEE Transactions on Signal Processing, vol. 47, no. 10, pp. 2796–2806, 1999. [22]H.G.Bogren,J.A.Seibert,H.H.Hines,B.A.Porter,“TheBeneficialEffectsofShortPulse Width Acquisition and ECG-Gating in Digital Angiocardiography”, Investigative , vol. 19, no. 4, pp. 284–290, 1984. [23] H. Bohman, “Approximate Fourier Analysis of Distribution Functions”, Arkiv F¨or Matematik, vol. 4, no. 10, pp. 99–157, 1961. [24] F. L. Bookstein, “Principle Warps: Thin-Plate Splines and the Decomposition of Deforma- tions”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 6, pp. 567–585, 1989. [25] Z. Bosanac, R. J. Miller, M.Jain, “Rotational Digital Subtraction Carotid Angiography: Tech- nique and Comparison with Static Digital Subtraction Angiography”, Clinical Radiology, vol. 53, no. 9, pp. 682–687, 1998. [26] L. M. Boxt, “Intravenous Digital Subtraction Angiography of the Thoracic and Abdominal Aorta”, CardioVascular and , vol. 6, pp. 205–213, 1983. [27] R. P. Brent, Algorithms for Minimization without Derivatives, Prentice-Hall, Englewood Cliffs, New Jersey, USA, 1973. [28] W. R. Brody, “Hybrid Subtraction for Improved Arteriography”, Radiology, vol. 141, no. 3, pp. 828–831, 1981. [29] W. R. Brody, “Digital Subtraction Angiography”, IEEE Transactions on Nuclear Science, vol. 29, no. 3, pp. 1176–1180, 1982. [30] W. R. Brody, G. Blutt, A. Hall, A. Macovski, “A Method for Selective Tissue and Bone Visualization using Dual Energy Scanned Projection Radiography”, ,vol.8, no. 3, pp. 353–357, 1981. [31] W. R. Brody, D. R. Enzmann, L.-S. Deutsch, A. Hall, N. Pelc, “Intravenous Carotid Arte- riography using Line-Scanned Digital Radiography”, Radiology, vol. 139, no. 2, pp. 297–300, 1981. [32] L. G. Brown, “A Survey of Image Registration Techniques”, ACM Computing Surveys, vol. 24, no. 4, pp. 325–376, 1992. [33] E. Buonocore, T. F. Meaney, G. P. Borkowski, W. Pavlicek, J. Gallagher, “Digital Subtraction Angiography of the Abdominal Aorta and Renal Arteries”, Radiology, vol. 139, no. 2, pp. 281– 286, 1981. [34] F. H. Burbank, D. Enzmann, G. S. Keyes, W. R. Brody, “Hybrid Intravenous Digital Sub- traction Angiography of the Carotid Bifurcation”, Radiology, vol. 152, no. 3, pp. 725–729, 1984. [35] T. M. Buzug, C. Lorenz, J. Weese, “Improvement of Vessel Segmentation by Elastically Com- pensated Patient Motion in Digital Subtraction Angiography Images”, in Computer Analysis of Images and Patterns (CAIP’97), G. Sommer, K. 
Daniilidis, J. Pauli (eds.), vol. 1296 of Lecture Notes in Computer Science, Springer-Verlag, Berlin, Germany, pp. 106–113, 1997. Bibliography 151

[36] T. M. Buzug & J. Weese, “Improving DSA Images with an Automatic Algorithm based on Template Matching and an Entropy Measure”, in Computer Assisted Radiology (CAR’96), H. U. Lemke, M. W. Vannier, K. Inamura, A. G. Farman (eds.), vol. 1124 of International Congress Series, Elsevier Science, Amsterdam, the Netherlands, pp. 145–150, 1996. [37] T. M. Buzug & J. Weese, “Similarity Measures for Subtraction Methods in Medical Imag- ing”, in Proceedings of the 18th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 140–141, 1996. [38] T. M. Buzug & J. Weese, “Image Registration for DSA Quality Enhancement”, Computerized Medical Imaging and Graphics, vol. 22, no. 2, pp. 103–113, 1998. [39] T. M. Buzug & J. Weese, “Voxel-Based Similarity Measures for Medical Image Registration in Radiological Diagnosis and Image Guided Surgery”, Journal of Computing and Information Technology, vol. 6, no. 2, pp. 165–179, 1998. [40] T. M. Buzug, J. Weese, C. Fassnacht, C. Lorenz, “Using an Entropy Similarity Measure to Enhance the Quality of DSA Images with an Algorithm based on Template Matching”, in Visualization in Biomedical Computing (VBC’96), K.-H. H¨ohne & R. Kikinis (eds.), vol. 1131 of Lecture Notes in Computer Science, Springer-Verlag, Berlin, Germany, pp. 235–240, 1996. [41] T. M. Buzug, J. Weese, C. Fassnacht, C. Lorenz, “Elastic Matching based on Motion Fields obtained with a Histogram-Based Similarity Measure for DSA-Image Correction”, in Computer Assisted Radiology and Surgery (CAR’97),H.U.Lemke,M.W.Vannier,K.Inamura(eds.), vol. 1134 of International Congress Series, Elsevier Science, Amsterdam, the Netherlands, pp. 139–144, 1997. [42] T. M. Buzug, J. Weese, C. Fassnacht, C. Lorenz, “Image Registration: Convex Weighting Functions for Histogram-Based Similarity Measures”, in CVRMed-MRCAS’97, J. Troccaz, E.Grimson,R.M¨osges (eds.), vol. 1205 of Lecture Notes in Computer Science, Springer- Verlag, Berlin, Germany, pp. 203–212, 1997. [43] T. M. Buzug, J. Weese, C. Lorenz, “Weighted Least Squares for Point-Based Registration in Digital Subtraction Angiography (DSA)”, in Medical Imaging: Image Processing,K.M.Han- son (ed.), vol. 3661 of Proceedings of SPIE, The International Society for Optical Engineering, Bellingham, Washington, USA, pp. 139–150, 1999. [44] T. M. Buzug, J. Weese, C. Lorenz, W. Beil, “Histogram-Based Image Registration for Digital Subtraction Angiography”, in Image Analysis and Processing (ICIAP’97), A. Del Bimbo (ed.), vol. 1311 of Lecture Notes in Computer Science, Springer-Verlag, Berlin, Germany, pp. 380– 387, 1997. [45] T. M. Buzug, J. Weese, K. C. Strasters, “Motion Detection and Motion Compensation for Digital Subtraction Angiography Image Enhancement”, Philips Journal of Research, vol. 51, no. 2, pp. 203–229, 1998. [46] L. Campeau & J. Saltiel, “Rotational Cineangiocardiography”, American Journal of Roentgenology, Radium Therapy, and , vol. 91, no. 3, pp. 544–549, 1964. [47] J. F. Canny, “A Computational Approach to Edge Detection”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–698, 1986. [48] E. Catmull & R. Rom, “A Class of Local Interpolating Splines”, in Computer Aided Geometric Design, R. E. Barnhill & R. F. Riesenfeld (eds.), Academic Press, New York, USA, pp. 317–236, 1974. [49] F. Catt´e, P.-L. Lions, J.-M. Morel, T. Coll, “Image Selective Smoothing and Edge Detection by Nonlinear Diffusion”, SIAM Journal on Numerical Analysis, vol. 29, no. 1, pp. 182–193, 1992. [50] H. Chen & J. 
Hale, “An Algorithm for MR Angiography Image Enhancement”, Magnetic Resonance in Medicine, vol. 33, no. 4, pp. 534–540, 1995. [51] J. Y. Chiang & B. J. Sullivan, “Coincident Bit Counting — A New Criterion for Image Registration”, IEEE Transactions on Medical Imaging, vol. 12, no. 1, pp. 30–38, 1993. [52] W. A. Chilcote, M. T. Modic, W. A. Pavlicek, J. R. Little, A. J. Furian, P. M. Duchesneau, M. A. Weinstein, “Digital Subtraction Angiography of the Carotid Arteries: A Comparitive Study in 100 Patients”, Radiology, vol. 139, no. 2, pp. 287–295, 1981. 152 Bibliography

[53] R. T. Chin & C.-L. Yeh, “Quantitative Evaluation of Some Edge-Preserving Noise-Smoothing Techniques”, Computer Vision, Graphics and Image Processing, vol. 23, no. 1, pp. 67–91, 1983. [54] K.-S. Chuang, “Comparison of Interpolation Methods in Three-Dimensional Surface Display”, in Medical Imaging IV: Image Processing, vol. 1233 of Proceedings of SPIE, The International Society for Optical Engineering, Bellingham, Washington, USA, pp. 443–452, 1990. [55] R. Close & J. S. Whiting, “Motion-Compensated Signal and Background Estimation from Coronary Angiograms”, in Medical Imaging: Image Processing, M. H. Loew (ed.), vol. 2434 of Proceedings of SPIE, The International Society for Optical Engineering, Bellingham, Wash- ington, USA, pp. 185–194, 1995. [56] R. A. Close & J. S. Whiting, “Decomposition of Projection Image Sequences into Moving Layers”, in Computer Assisted Radiology and Surgery (CAR’98),H.U.Lemke,M.W.Vannier, K. Inamura, A. G. Farman (eds.), vol. 1165 of International Congress Series, Elsevier Science, Amsterdam, the Netherlands, pp. 143–146, 1998. [57] R. A. Close & J. S. Whiting, “Comments on “Retrospective Motion Correction in Digital Subtraction Angiography: A Review” ”, IEEE Transactions on Medical Imaging, vol. 18, no. 6, p. 556, 1999. [58] R. A. Close & J. S. Whiting, “Decomposition of Coronary Angiograms into Non-Rigid Moving Layers”, in Medical Imaging: Image Processing, K. M. Hanson (ed.), vol. 3661 of Proceedings of SPIE, The International Society for Optical Engineering, Bellingham, Washington, USA, pp. 1515–1520, 1999. [59] G. Cohen, L. K. Wagner, E. N. Rauschkolb, “Evaluation of a Digital Subtraction Angiography Unit”, Radiology, vol. 144, no. 3, pp. 613–617, 1982. [60] J. Cohen, “Weighted Kappa: Nominal Scale Agreement with Provision for Scaled Disagreement or Partial Credit”, Psychological Bulletin, vol. 70, pp. 213–220, 1968. [61] E. U. Condon & H. Odishaw, Handbook of Physics, McGraw-Hill, New York, USA, 1958. [62] G. Cornelis, A. Bellet, B. van Eygen, Ph. Roisin, E. Libon, “Rotational Multiple Sequence Roentgenography of Intracranial Aneurysms”, Acta Radiologica, vol. 13, no. 1, pp. 74–76, 1972. [63] G. S. Cox & G. de Jager, “Automatic Registration of Temporal Image Pairs for Digital Sub- traction Angiography”, in Medical Imaging: Image Processing, M. H. Loew (ed.), vol. 2167 of Proceedings of SPIE, The International Society for Optical Engineering, Bellingham, Wash- ington, USA, pp. 188–199, 1994. [64] A. B. Crummy, C. M. Strother, J. F. Sackett, D. L. Ergun, C. G. Shaw, R. A. Kruger, C. A. Mistretta,W.D.Turnipseed,R.P.Lieberman,P.D.Myerowitz,F.F.Ruzicka,“Computer- ized : Digital Subtraction for Intravenous Angiocardiography and Arteriography”, American Journal of Roentgenology, vol. 135, no. 6, pp. 1131–1140, 1980. [65] P.-E. Danielsson & M. Hammerin, “High-Accuracy Rotation of Images”, CVGIP: Graphical Models and Image Processing, vol. 54, no. 4, pp. 340–344, 1992. [66] L. S. Davis, Z. Wu, H. Sun, “Contour-Based Motion Estimation”, Computer Vision, Graphics and Image Processing, vol. 23, no. 3, pp. 313–326, 1983. [67] P. Dawson, “Digital Subtraction Angiography — A Critical Analysis”, Clinical Radiology, vol. 39, no. 5, pp. 474–477, 1988. [68] K. de Jong, “Adaptive System Design: A Genetic Approach”, IEEE Transactions on Systems, Man, and Cybernetics, vol. 10, no. 9, pp. 566–574, 1980. [69] N. de Vries, F. J. Miller, M. M. Wojtowycz, P. R. Brown, D. R. Yandow, J. A. Nelson, R. A. 
Kruger, “Tomographic Digital Subtraction Angiography: Initial Clinical Studies using Tomosynthesis”, Radiology, vol. 157, no. 1, pp. 239–241, 1985. [70] N. A. Dodgson, “Quadratic Interpolation for Image Resampling”, IEEE Transactions on Image Processing, vol. 6, no. 9, pp. 1322–1326, 1997. Bibliography 153

[71] Y. P. Du & D. L. Parker, “Vessel Enhancement Filtering in Three-Dimensional MR Angiog- raphy”, Journal of Magnetic Resonance Imaging, vol. 5, no. 3, pp. 353–359, 1995. [72] Y. P. Du & D. L. Parker, “Vessel Enhancement Filtering in Three-Dimensional MR Angiograms using Long-Range Signal Correlation”, Journal of Magnetic Resonance Imaging, vol. 7, no. 2, pp. 447–450, 1997. [73] Y. P. Du, D. L. Parker, W. L. Davis, D. D. Blatter, “Contrast-to-Noise-Ratio Measurements in Three-Dimensional Magnetic Resonance Angiography”, Investigative Radiology, vol. 28, no. 11, pp. 1004–1009, 1993. [74] Y. P. Du, D. L. Parker, W. L. Davis, G. Cao, “Reduction of Partial-Volume Artifacts with Zero- Filled Interpolation in Three-Dimensional MR Angiography”, Journal of Magnetic Resonance Imaging, vol. 4, no. 5, pp. 733–741, 1994. [75] S. M. Dunn, P. F. van der Stelt, A. Ponce, K. Fenesy, S. Shah, “A Comparison of Two Registration Techniques for Digital Subtraction Radiography”, Dentomaxillofacial Radiology, vol. 22, no. 2, pp. 77–80, 1993. [76] K. Ebina, T. Shimizu, M. Sohma, T. Iwabuchi, “Clinico-Statistical Study on Morphological Risk Factors of Middle Cerebral Artery Aneurysms”, Acta Neurochirurgica, vol. 106, no. 3/4, pp. 153–159, 1990. [77] W. F. Eddy, M. Fitzgerald, D. C. Noll, “Improved Image Registration by using Fourier Interpolation”, Magnetic Resonance in Medicine, vol. 36, no. 6, pp. 923–931, 1996. [78]O.E.H.Elgersma,P.C.Buijs,A.F.J.W¨ust,Y.vanderGraaf,B.C.Eikelboom,W.P.T.M. Mali, “Maximum Internal Carotid Arterial Stenosis: Assessment with Rotational Angiography versus Conventional Intraarterial Digital Subtraction Angiography”, Radiology, vol. 213, no. 3, pp. 777–783, 1999. [79] M. Eliasziw, R. F. Smith, N. Singh, D. W. Holdsworth, A. J. Fox, H. J. M. Barnett, “Further Comments on the Measurement of Carotid Stenosis from Angiograms”, , vol. 25, no. 12, pp. 2445–2449, 1994. [80] K.-H. Englmeier, U. Fink, T. Hilbertz, “Automated Pixel Shifting in Digital Subtraction Angiography — An Application of Cepstral Filtering”, in Computer Assisted Radiology (CAR’93), H. U. Lemke, K. Inamura, C. C. Jaffe, R. Felix (eds.), Springer-Verlag, Berlin, Germany, p. 795, 1993. [81] D. R. Enzmann, W. T. Djang, S. J. Riederer, W. F. Collins, A. Hall, G. S. Keyes, W. R. Brody, “Low-Dose, High-Frame-Rate versus Regular-Dose, Low-Frame-Rate Digital Subtrac- tion Angiography”, Radiology, vol. 146, no. 3, pp. 669–676, 1983. [82] D. R. Enzmann & R. Freimarck, “Head Immobilization for Digital Subtraction Angiography”, Radiology, vol. 151, no. 3, p. 801, 1984. [83] L. Euler, “De Eximio Usu Methodi Interpolationum in Serierum Doctrina”, in Opuscula Analytica, vol. 1, Academia Imperialis Scientiarum, Petropoli, pp. 157–210, 1783. Can also be found in Leonhardi Euleri Opera Omnia. Series Prima: Opera Mathematica, vol. 15, Teubner, Lipsiae, pp. 435-498, 1927. [84] European Carotid Surgery Trialists’ Collaborative Group, “MRC European Carotid Surgery Trial: Interim Results for Symptomatic Patients with Severe (70-99%) or with Mild (0-29%) Carotid Stenosis”, The Lancet, vol. 337, no. 8752, pp. 1235–1243, 1991. [85] European Carotid Surgery Trialists’ Collaborative Group, “Endarterectomy for Moderate Symptomatic Carotid Stenosis: Interim Results from the MRC European Carotid Surgery Trial”, The Lancet, vol. 347, no. 9015, pp. 1591–1593, 1996. 
[86] European Carotid Surgery Trialists’ Collaborative Group, “Randomised Trial of Endarterec- tomy for Recently Symptomatic Carotid Stenosis: Final Results of the MRC European Carotid Surgery Trial (ECST)”, The Lancet, vol. 351, no. 9113, pp. 1379–1387, 1998. [87] R. Fahrig, A. J. Fox, D. W. Holdsworth, “Characterization of a C-Arm Mounted XRII for 3D Image Reconstruction during Interventional Neuroradiology”, in Medical Imaging: Physics of Medical Imaging, R. L. van Metter & J. Beutel (eds.), vol. 2708 of Proceedings of SPIE,The International Society for Optical Engineering, Bellingham, Washington, USA, pp. 351–360, 1996. 154 Bibliography

[88] R. Fahrig, A. J. Fox, S. Lownie, D. W. Holdsworth, “Use of a C-Arm System to Generate True Three-Dimensional Computed Rotational Angiograms: Preliminary In Vitro and In Vivo Results”, American Journal of Neuroradiology, vol. 18, no. 8, pp. 1507–1514, 1997. [89] R. Fahrig & D. W. Holdsworth, “Three-Dimensional Computed Tomographic Reconstruction using a C-Arm Mounted XRII: Image-Based Correction of Gantry Motion Nonidealities”, Medical Physics, vol. 27, no. 1, pp. 30–38, 2000. [90] R. Fahrig, H. Nikolov, A. J. Fox, D. W. Holdsworth, “A Three-Dimensional Cerebrovascular Flow Phantom”, Medical Physics, vol. 26, no. 8, pp. 1589–1599, 1999. [91] L. A. Feldkamp, L. C. Davis, J. W. Kress, “Practical Cone-Beam Algorithm”, Journal of the Optical Society of America. A. Optics and Image Science, vol. 1, no. 6, pp. 612–619, 1984. [92] A. Fernandez Zubillaga, G. Guglielmi, F. Vi˜nuela, G. R. Duckwiler, “Endovascular Occlusion of Intracranial Aneurysms with Electrically Detachable Coils: Correlation of Aneurysm Neck Size and Treatment Results”, American Journal of Neuroradiology, vol. 15, no. 5, pp. 815–820, 1994. [93] C. L. Fink, R. E. Flandry, R. A. Pratt, C. B. Early, “A Comparative Study of Performance Characteristics of Cerebral Aneurysm Clips”, Surgical Neurology, vol. 11, no. 3, pp. 179–186, 1979. [94] U. Fink, S. H. Heywang, T. Hilbertz, K. Fisher, E. Jenner, W. Buchsteiner, “Peripheral DSA with Automated Stepping”, European Journal of Radiology, vol. 13, no. 1, pp. 50–54, 1991. [95] J. M. Fitzpatrick, “The Existence of Geometrical Density-Image Transformations Correspond- ing to Object Motion”, Computer Vision, Graphics and Image Processing, vol. 44, no. 2, pp. 155–174, 1988. [96] J. M. Fitzpatrick & J. J. Grefenstette, “Genetic Algorithms in Noisy Environments”, Machine Learning, vol. 3, pp. 101–120, 1988. [97] J. M. Fitzpatrick, J. J. Grefenstette, D. R. Pickens, M. Mazer, J. M. Perry, “A System for Image Registration in Digital Subtraction Angiography”, in Image Processing in Medical Imaging, C. N. de Graaf & M. A. Viergever (eds.), Plenum Press, New York, USA, pp. 415–435, 1988. [98] J. M. Fitzpatrick & M. R. Leuze, “A Class of One-to-One Two-Dimensional Transformations”, Computer Vision, Graphics and Image Processing, vol. 39, no. 3, pp. 369–382, 1987. [99] J. M. Fitzpatrick, D. R. Pickens, H. Chang, Y. Ge, M. Ozkan,¨ “Geometrical Transforma- tions of Density Images”, in Science and Engineering of Medical Imaging,M.A.Viergever (ed.), vol. 1137 of Proceedings of SPIE, The International Society for Optical Engineering, Bellingham, Washington, USA, pp. 12–21, 1989. [100] J. M. Fitzpatrick, D. R. Pickens, J. J Grefenstette, R. R. Price, A. E. James, “Technique for Automatic Motion Correction in Digital Subtraction Angiography”, Optical Engineering, vol. 26, no. 11, pp. 1085–1093, 1987. [101] J. M. Fitzpatrick, D. R. Pickens, V. R. Mandava, J. J. Grefenstette, “The Reduction of Motion Artifacts in Digital Subtraction Angiography by Geometrical Image Transformations”, in Medical Imaging II: Image Formation, Detection, Processing, and Interpretation,R.H. Schneider & S. J. Dwyer III (eds.), vol. 914 of Proceedings of SPIE, The International Society for Optical Engineering, Bellingham, Washington, USA, pp. 379–386, 1988. [102] J. M. Fitzpatrick, D. R. Pickens, J. M. Perry, Y. Ge, “Experimental Results of Image Registra- tion in Digital Subtraction Angiography with an In Vivo Phantom”, in Medical Imaging III: Image Processing, R. H. Schneider, S. J. Dwyer III, R. G. Jost (eds.), vol. 
1092 of Proceedings of SPIE, The International Society for Optical Engineering, Bellingham, Washington, USA, pp. 200–213, 1989. [103] J. L. Fleiss, “Measuring Nominal Scale Agreement amoung Many Raters”, Psychological Bulletin, vol. 76, pp. 378–382, 1971. [104] J. L. Fleiss, Statistical Methods for Rates and Proportions, 2nd ed., Wiley Series in Probability and Mathematical Statistics, Wiley, New York, USA, 1981. Bibliography 155

[105] L. Florack, W. Niessen, M. Nielsen, “The Intrinsic Structure of Optic Flow incorporating Measurement Duality”, International Journal of Computer Vision, vol. 27, no. 3, pp. 263– 286, 1998. [106] L. M. J. Florack, Image Structure,vol.10ofComputational Imaging and Vision,Kluwer Academic Publishers, Dordrecht, the Netherlands, 1997. [107] J. Flusser, “An Adaptive Method for Image Registration”, Pattern Recognition, vol. 25, no. 1, pp. 45–54, 1992. [108] L. J. Fogel, “A Note on the Sampling Theorem”, IRE Transactions on Information Theory, vol. 1, pp. 47–48, 1955. [109] S. V. Fogel, “The Estimation of Velocity Vector Fields from Time-Varying Image Sequences”, CVGIP: Image Understanding, vol. 53, no. 3, pp. 253–287, 1991. [110] J. Foley, A. van Dam, S. K. Feiner, J. F. Hughes, Computer Graphics: Principles and Practice, 2nd ed., Systems Programming Series, Addison-Wesley, Reading, Massachusetts, USA, 1990. [111] W. D. Foley, G. S. Keyes, D. F. Smith, B. Belanger, L. E. Sieb, T. L. Lawson, M. K. Thorsen, E. T. Stewart, “Work in Progress: Temporal Energy Hybrid Subtraction in Intravenous Digital Subtraction Angiography”, Radiology, vol. 148, no. 1, pp. 265–271, 1983. [112] W. D. Foley, E. T. Stewart, J. R. Milbrath, M. SanDretto, M. Milde, “Digital Subtraction Angiography of the Portal Venous System”, American Journal of Roentgenology, vol. 140, no. 3, pp. 497–499, 1983. [113] A. J. Fox, “How to Measure Carotid Stenosis”, Radiology, vol. 186, no. 2, pp. 316–318, 1993. [114] D. A. Francis, J. J. Sheldon, K. Soila, J. Tobias, “Carotid Artery and Aortic Arch Imaging with ECG Gating in DSA”, Radiology, vol. 155, no. 3, p. 827, 1985. [115] A. S. Frangakis & R. Hegerl, “Nonlinear Anisotropic Diffusion in Three-Dimensional Electron Microscopy”, in Scale-Space Theories in Computer Vision,M.Nielsen,P.Johansen,O.F. Olsen, J. Weickert (eds.), vol. 1682 of Lecture Notes in Computer Science, Springer-Verlag, Berlin, Germany, pp. 386–397, 1999. [116] A. F. Frangi, W. J. Niessen, K. L. Vincken, M. A. Viergever, “Multiscale Vessel Enhancement Filtering”, in Medical Image Computing and Computer-Assisted Intervention (MICCAI’98), W. M. Wells, A. Colchester, S. Delp (eds.), vol. 1496 of Lecture Notes in Computer Science, Springer-Verlag, Berlin, Germany, pp. 130–137, 1998. [117] G. Gerig, O. K¨ubler, R. Kikinis, F. A. Jolesz, “Nonlinear Anisotropic Filtering of MRI Data”, IEEE Transactions on Medical Imaging, vol. 11, no. 2, pp. 221–232, 1992. [118] J. J. Gibson, The Perception of the Visual World, Houghton Mifflin, Boston, USA, 1950. [119] O. Glasser, Wilhelm Conrad R¨ontgen und die Geschichte der R¨ontgenstrahlen,3rded., Springer-Verlag, Berlin, Germany, 1995. [120] E. Gmelin, H. D. Weiss, F. Buchmann, “Cardiac Gating in Intravenous DSA”, European Journal of Radiology, vol. 6, no. 1, pp. 24–29, 1986. [121] R. C. Gonzalez & P. Wintz, Digital Image Processing, no. 13 in Applied Mathematics and Computation, Addison-Wesley, Reading, Massachusetts, USA, 1977. [122] A. Goshtasby, “Piecewise Linear Mapping Functions for Image Registration”, Pattern Recog- nition, vol. 19, no. 6, pp. 459–466, 1986. [123] A. Goshtasby, “Piecewise Cubic Mapping Functions for Image Registration”, Pattern Recog- nition, vol. 20, no. 5, pp. 525–533, 1987. [124] A. Goshtasby, G. C. Stockman, C. V. Page, “A Region Based Approach to Digital Image Reg- istration with Subpixel Accuracy”, IEEE Transactions on Geoscience and Remote Sensing, vol. 24, no. 3, pp. 390–399, 1986. [125] M. Grass, R. Koppe, E. Klotz, R. Proksa, M. H. Kuhn, H. 
Aerts, J. op de Beek, R. Kemkers, “3D Reconstruction of High Contrast Objects using C-Arm Image Intensifier Projection Data”, submitted for journal publication, 1999. 156 Bibliography

[126] J. Gregory, “Letter to J. Collins. (St. Andrews, 23 November 1670)”, in James Gregory Tercentenary Memorial Volume, H. W. Turnbull (ed.), G. Bells & Sons, London, UK, pp. 118– 137, 1939. [127] G. J. Grevera & J. K. Udupa, “Shape-Based Interpolation of Multidimensional Grey-Level Images”, IEEE Transactions on Medical Imaging, vol. 15, no. 6, pp. 881–892, 1996. [128] G. J. Grevera & J. K. Udupa, “An Objective Comparison of 3-D Image Interpolation Methods”, IEEE Transactions on Medical Imaging, vol. 17, no. 4, pp. 642–652, 1998. [129] G. Guglielmi, F. Vi˜nuela, G. Duckwiler, J. Dion, P. Lylyk, A. Berenstein, C. Strother, V. Graves, V. Halbach, D. Nichols, N. Hopkins, R. Ferguson, I. Sepetka, “Endovascular Treat- ment of Posterior Circulation Aneurysms by Electrothrombosis using Electrically Detachable Coils”, Journal of Neurosurgery, vol. 77, no. 4, pp. 515–524, 1992. [130] J.-F. Guo, Y.-L. Cai, Y.-P. Wang, “Morphology-Based Interpolation for 3D Medical Image Reconstruction”, Computerized Medical Imaging and Graphics, vol. 19, no. 3, pp. 267–279, 1995. [131] D. F. Guthaner, W. R. Brody, B. D. Lewis, G. S. Keyes, B. F. Belanger, “Clinical Applica- tions of Hybrid Subtraction Digital Angiography: Preliminary Results”, CardioVascular and Interventional Radiology, vol. 6, pp. 290–294, 1983. [132] D. F. Guthaner, L. Wexler, D. R. Enzmann, S. J. Riederer, G. S. Keyes, W. F. Collins, W. R. Brody, “Evaluation of Peripheral Vascular Disease using Digital Subtraction Angiography”, Radiology, vol. 147, no. 2, pp. 393–398, 1983. [133] P. Haaker, E. Klotz, R. Koppe, R. Linda, H. M¨oller, “A New Digital Tomosynthesis Method with Less Artifacts for Angiography”, Medical Physics, vol. 12, no. 4, pp. 431–436, 1985. [134] J. V. Hajnal, N. Saeed, E. J. Soar, A. Oatridge, I. R. Young, G. M. Bydder, “A Registration and Interpolation Procedure for Subvoxel Matching of Serially Acquired MR Images”, Journal of Computer Assisted Tomography, vol. 19, no. 2, pp. 289–296, 1995. [135] R. W. Hamming, Digital Filters, 3rd ed., Signal Processing Series, Prentice-Hall, Engelwood Cliffs, New Jersey, USA, 1989. [136] D. P. Harrington, “Renal Digital Subtraction Angiography”, CardioVascular and Interven- tional Radiology, vol. 6, pp. 214–223, 1983. [137] D. P. Harrington, L. M. Boxt, P. D. Murray, “Digital Subtraction Angiography: Overview of Technical Principles”, American Journal of Roentgenology, vol. 139, no. 4, pp. 781–786, 1982. [138] F. J. Harris, “On the Use of Windows for Harmonic Analysis with the Discrete Fourier Transform”, Proceedings of the IEEE, vol. 66, no. 1, pp. 51–83, 1978. [139] E. Hascheck & O. Th. Lindenthal, “Ein Beitrag zur praktischen Verwerthung der Photographie nach R¨ontgen”, Wiener klinische Wochenschrift, vol. 9, no. 4, pp. 63–64, 1896. [140] N. Hayashi, T. Sakai, M. Kitagawa, R. Inagaki, N. Sadato, Y. Ishii, Y. Nishimoto, M. Tanaka, T. Fukushima, H. Komuro, H. Ogura, H. Kobayashi, T. Kubota, “Nonlinear Geometric Warp- ing of the Mask Image: A New Method for Reducing Misregistration Artifacts in Digital Sub- traction Angiography”, CardioVascular and Interventional Radiology, vol. 21, no. 2, pp. 138– 141, 1998. [141] J. F. Heautot, E. Chabert, Y. Gandon, S. Croci, R. Romeas, R. Campagnolo, B. Chereul, J. M. Scarabin, M. Carsin, “Analysis of Cerebrovascular Diseases by a New 3-Dimensional Computerised X-Ray Angiography System”, Neuroradiology, vol. 40, no. 4, pp. 203–209, 1998. [142] P. S. Heckbert, “Survey of Texture Mapping”, IEEE Computer Graphics and Applications, vol. 6, no. 11, pp. 
Samenvatting

The theme of this thesis is the enhancement of digital X-ray angiography images. In contrast to earlier developments in this field, the emphasis is not on further improvement of image acquisition techniques, but on the development and evaluation of digital image processing techniques for retrospective enhancement of images acquired with existing acquisition techniques. In the context of this thesis, the term "enhancement" should be interpreted broadly: it denotes not only the improvement of image quality by reducing disturbing artifacts and noise, but also the minimization of possible degradation of image quality and loss of quantitative information caused by image processing operations that cannot be avoided.

The first three chapters of this thesis deal with image enhancement in digital subtraction angiography (DSA). With this imaging technique, a series of two-dimensional (2D) X-ray fluoroscopic images is acquired, at a rate of, for example, two images per second, after injection of contrast material into one of the vessels feeding the part of the vasculature to be diagnosed. Image acquisition usually starts one to two seconds before the contrast material arrives in the vessels of interest, so that these vessels are not visible in the first images of the series. In the post-processing step, one of these so-called mask images is subtracted from all subsequent contrast images in the series, in order to remove background structures such as shadows of bone and soft tissue from the latter. Complete removal of background structures is possible only if the patient did not move during the acquisition. Since most patients react physically to the contrast material, this is almost never the case. As a consequence, DSA images frequently show motion artifacts (see, e.g., the bottom-left image in Fig. 1.1 on page 3), which may adversely affect the diagnosis.
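To make the subtraction step concrete, the following is a minimal sketch in Python/NumPy, not the clinical implementation described in this thesis; the array layout, the choice of the first frame as mask, and the logarithmic transform (customary in DSA, so that the subtraction isolates the attenuation added by the contrast material) are assumptions of the example.

```python
import numpy as np

def dsa_subtract(frames, mask_index=0, eps=1e-6):
    """Subtract a mask frame from every frame of a fluoroscopic run.

    frames : (T, H, W) array of detector intensities. The subtraction is
    done on log-transformed values, so the result reflects only the extra
    attenuation caused by the arriving contrast material.
    """
    frames = np.asarray(frames, dtype=np.float64)
    log_frames = np.log(frames + eps)      # avoid log(0)
    mask = log_frames[mask_index]          # pre-contrast "mask" image
    return mask - log_frames               # background structures cancel out

# Hypothetical usage:
# run = np.random.rand(10, 256, 256) + 1.0
# dsa = dsa_subtract(run)   # dsa[0] is numerically zero; vessels show up
#                           # in later frames where contrast is present
```

In practice the mask and contrast frames must first be brought into spatial correspondence whenever the patient has moved, which is exactly the registration problem addressed in Chapters 2–4.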
The literature review of Chapter 2 shows that a considerable amount of research has already been carried out in the field of (semi-)automatic reduction of patient motion artifacts in DSA images. To the best of our knowledge, however, this has not yet led to techniques that are sufficiently fast and robust for routine use in clinical practice. Chapter 3 describes a new, fully automatic technique, which builds on the conclusions of Chapter 2. In addition to a description of the functionality of the various components of the algorithm, special attention is paid to its computationally efficient implementation. The results of initial experiments indicate that the technique is faster and at the same time more effective than other techniques published to date. The results also indicate that the technique is most effective in cerebral and peripheral DSA. A demonstration of the image quality improvement obtained by applying the technique to a cerebral DSA image is given in Fig. 1.1 on page 3 (compare the original DSA image at the bottom left with the enhanced version at the bottom right).

A clinical evaluation of the fully automatic motion correction technique presented in Chapter 3 is described in Chapter 4. The evaluation involved 104 cerebral DSA images, which were corrected both by the automatic technique and by means of pixel shifting, a manual correction technique currently used in clinical practice. The quality of the DSA images enhanced by the two techniques was assessed by four observers, who compared the images both with their corresponding originals and with each other. The results of the evaluation indicate that the difference in the degree of image quality improvement achievable with the two techniques is statistically significant. The mutual comparison shows that, on average, in 95% of the cases the automatic technique performs comparably to, better than, or even much better than the manual technique currently in use. In the remaining 5% of the cases, the residual artifacts turn out to be located at the border of the image, in regions that are usually not diagnostically relevant. Finally, the results show that the computation time required by the automatic technique (on average about one second per DSA image) is considerably less than the time typically required by the manual technique (on average some 12 seconds per DSA image).
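Pixel shifting, the manual reference technique in the evaluation above, amounts to translating the mask image over a small integer offset before subtraction so that artifacts in a region of interest are suppressed. The sketch below is only an illustration of that idea under stated assumptions (an exhaustive search over shifts of at most max_shift pixels, the energy of the subtraction image as the artifact measure, and wrap-around at the image border); it is not the automatic registration algorithm of Chapter 3.

    import numpy as np

    def pixel_shift_subtract(mask, contrast, region, max_shift=5):
        # `region` is a (slice, slice) pair selecting the area in which artifacts
        # should be suppressed (in practice indicated by the radiologist).
        best_diff, best_energy = None, np.inf
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                # np.roll wraps around at the border; a real implementation
                # would handle the border explicitly.
                shifted = np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
                diff = contrast.astype(np.float64) - shifted
                energy = np.sum(diff[region] ** 2)
                if energy < best_energy:
                    best_energy, best_diff = energy, diff
        return best_diff

Because the whole mask is translated by the same amount, such a correction is necessarily local and purely translational, which is one reason a fully automatic, spatially varying correction can outperform it.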
Chapter 5 deals with a recently developed imaging technique for the visualization and quantification of vascular anomalies, namely three-dimensional rotational angiography (3DRA). As with DSA, a series of 2D X-ray projection images is first acquired after injection of contrast material. In contrast to DSA, however, the acquisition takes place during a rotation over 180° of the C-arm on which the X-ray source and detector are mounted, with the volume of interest positioned in the iso-center. The rotational acquisition usually takes about eight seconds and yields some 100 projection images, from which the final 3DRA image is computed by means of a 3D reconstruction algorithm. Compared to most other imaging techniques, 3DRA yields high-resolution, isotropic datasets. The images, however, contain a considerable amount of noise, as well as undesired background structures caused by surrounding tissue. In order to obtain satisfactory visualizations, the application of noise reduction techniques is unavoidable (compare the volume rendering of an original 3DRA image in the left part of Fig. 1.2 on page 5 with the rendering of a filtered version of the same image in the right part). Chapter 5 studies the effects of several linear and nonlinear filtering techniques on the visualization and subsequent quantification of vascular anomalies in 3DRA images. The study focuses on frequently occurring anomalies, such as a narrowing (stenosis) of the carotid artery (arteria carotis interna) and a widening (aneurysm) of intracranial arteries. The results of experiments on anthropomorphic vascular phantoms show that nonlinear anisotropic diffusion of the gray values generally yields the largest improvement in image quality, without noticeably compromising the accuracy of quantitative measurements. The results also indicate, however, that the practical applicability of this technique is at present still limited by the required computation time and memory.

Finally, Chapter 6 addresses the problem of interpolation of sampled data, which arises whenever geometric transformations must be applied to image data in order to achieve spatial correspondence or to allow visualization. In most practical situations, interpolation followed by resampling on a transformed coordinate grid results in a loss of gray-value information. The degree of image degradation or loss of quantitative information depends on the image content, but also on the interpolation technique employed (see, for example, the right part of Fig. 1.3 on page 6, which shows the difference between the left and center parts, showing a slice of a 3DRA image after rotation over 5.0° using linear and cubic spline interpolation, respectively). The choice of interpolation technique may therefore influence the outcome of qualitative and quantitative analyses based on the gray-value information contained in transformed images. Although many interpolation techniques have been developed over the past decades, thorough evaluations and comparisons of these techniques for the purpose of geometric transformation of medical image data are lacking in the literature. Such a comparative evaluation study is described in Chapter 6. The study is limited to interpolation techniques based on the principle of convolution, since these are the ones most frequently used for this purpose. Because the interpolation problem occurs in many medical image processing and analysis tasks, the evaluation involves not only X-ray projection and 3DRA images, but also images from a large number of other medical imaging modalities. The results of the study show that spline interpolation provides the best trade-off between accuracy and computational cost, regardless of the imaging modality, and that this technique is therefore to be preferred over all other techniques.
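The rotation experiment behind Fig. 1.3 can be reproduced in outline with SciPy, whose ndimage.rotate function resamples an image using spline interpolation of a chosen order (order 1 corresponds to linear interpolation, order 3 to cubic spline interpolation). The sketch below uses a synthetic test image as a stand-in for a 3DRA slice; the function name rotation_difference and the use of random data are assumptions made for this example only.

    import numpy as np
    from scipy import ndimage

    def rotation_difference(slice_2d, angle=5.0):
        # Rotate the same slice with linear (order 1) and cubic spline (order 3)
        # interpolation; reshape=False keeps the original image size so that the
        # two results can be compared pixel by pixel, as in Fig. 1.3.
        linear = ndimage.rotate(slice_2d, angle, reshape=False, order=1)
        cubic = ndimage.rotate(slice_2d, angle, reshape=False, order=3)
        return cubic - linear

    slice_2d = np.random.rand(256, 256)   # stand-in for a 3DRA slice
    diff = rotation_difference(slice_2d)
    print("mean absolute difference:", float(np.abs(diff).mean()))

The difference image makes visible how much gray-value information depends on the interpolation kernel alone, which is the kind of effect quantified systematically in Chapter 6.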
In summary, this thesis deals with the improvement of image quality and the reduction of the loss of both qualitative and quantitative image information. The successive chapters describe techniques for the reduction of patient motion artifacts in DSA images, noise reduction techniques for improved visualization and quantification of vascular anomalies in 3DRA images, and interpolation techniques for accurately carrying out geometric transformations of medical image data. The results and conclusions of the evaluations described in this thesis provide general guidelines for the applicability and practical use of these techniques.

Publications

Publications in International Journals:

• E. H. W. Meijering, W. J. Niessen, M. A. Viergever, "Retrospective Motion Correction in Digital Subtraction Angiography: A Review", invited paper, IEEE Transactions on Medical Imaging, vol. 18, no. 1, pp. 2–21, January 1999.
• E. H. W. Meijering, K. J. Zuiderveld, M. A. Viergever, "Image Reconstruction by Convolution with Symmetrical Piecewise nth-Order Polynomial Kernels", IEEE Transactions on Image Processing, vol. 8, no. 2, pp. 192–201, February 1999.
• E. H. W. Meijering, K. J. Zuiderveld, M. A. Viergever, "Image Registration for Digital Subtraction Angiography", International Journal of Computer Vision, vol. 31, no. 2/3, pp. 227–246, April 1999.
• E. H. W. Meijering, W. J. Niessen, J. Bakker, A. J. van der Molen, G. A. P. de Kort, R. T. H. Lo, W. P. Th. M. Mali, M. A. Viergever, "Reduction of Patient Motion Artifacts in Digital Subtraction Angiography: Evaluation of a Fast and Fully Automatic Technique", Radiology, accepted for publication, May 2000.
• E. H. W. Meijering, W. J. Niessen, M. A. Viergever, "Quantitative Evaluation of Convolution-Based Methods for Medical Image Interpolation", Medical Image Analysis, accepted for publication, June 2000.

Publications in International Conference Proceedings:

• E. H. W. Meijering, K. J. Zuiderveld, M. A. Viergever, "A Fast Technique for Motion Correction in DSA using a Feature-Based, Irregular Grid", in Medical Image Computing and Computer-Assisted Intervention — MICCAI'98 (first international conference, held in Cambridge, MA, USA, October 11–13, 1998), W. M. Wells, A. Colchester, S. Delp (eds.), vol. 1496 of Lecture Notes in Computer Science, Springer-Verlag, Berlin, Germany, pp. 590–597, 1998.
• E. H. W. Meijering, W. J. Niessen, J. P. W. Pluim, M. A. Viergever, "Quantitative Comparison of Sinc-Approximating Kernels for Medical Image Interpolation", in Medical Image Computing and Computer Assisted Intervention — MICCAI'99 (second international conference, held in Cambridge, UK, September 19–22, 1999), C. Taylor & A. Colchester (eds.), vol. 1679 of Lecture Notes in Computer Science, Springer-Verlag, Berlin, Germany, pp. 210–217, 1999.
• E. H. W. Meijering, K. J. Zuiderveld, W. J. Niessen, M. A. Viergever, "A Fast Image Registration Technique for Motion Artifact Reduction in DSA", in IEEE International Conference on Image Processing — ICIP'99 (sixth international conference, held in Kobe, Japan, October 24–28, 1999), IEEE Computer Society Press, Los Alamitos, California, USA, vol. III, pp. 435–439 (CD-ROM: paper 27AP4.10), 1999.
• E. H. W. Meijering, W. J. Niessen, M. A. Viergever, "Piecewise Polynomial Kernels for Image Interpolation: A Generalization of Cubic Convolution", in IEEE International Conference on Image Processing — ICIP'99 (sixth international conference, held in Kobe, Japan, October 24–28, 1999), IEEE Computer Society Press, Los Alamitos, California, USA, vol. III, pp. 647–651 (CD-ROM: paper 27PO4.1), 1999.
• E. H. W. Meijering, W. J. Niessen, M. A. Viergever, "The Sinc-Approximating Kernels of Classical Polynomial Interpolation", in IEEE International Conference on Image Processing — ICIP'99 (sixth international conference, held in Kobe, Japan, October 24–28, 1999), IEEE Computer Society Press, Los Alamitos, California, USA, vol. III, pp. 652–656 (CD-ROM: paper 27PO4.2), 1999.
• E. H. W. Meijering, "Spline Interpolation in Medical Imaging: Comparison with Other Convolution-Based Approaches", invited paper for the European Signal Processing Conference — EUSIPCO 2000 (10th international conference, held in Tampere, Finland, September 5–8, 2000).
• J. B. A. Maintz, E. H. W. Meijering, M. A. Viergever, "General Multimodal Elastic Registration based on Mutual Information", in Medical Imaging 1998: Image Processing (international conference, held in San Diego, CA, USA, February 23–26, 1998), K. M. Hanson (ed.), vol. 3338 of Proceedings of SPIE, The International Society for Optical Engineering, Bellingham, Washington, USA, pp. 144–154, 1998.
• S. A. M. Baert, W. J. Niessen, E. H. W. Meijering, A. F. Frangi, M. A. Viergever, "Guide Tracking in Interventional Radiology", in Computer Assisted Radiology and Surgery — CARS 2000 (14th international conference, held in San Francisco, USA, June 28 – July 1, 2000), H. U. Lemke, M. W. Vannier, K. Inamura, A. G. Farman, K. Doi (eds.), vol. 1214 of International Congress Series, Elsevier Science, Amsterdam, the Netherlands, pp. 537–542, 2000.
• S. A. M. Baert, W. J. Niessen, E. H. W. Meijering, A. F. Frangi, M. A. Viergever, "Guide Wire Tracking during Endovascular Interventions", in Medical Image Computing and Computer Assisted Intervention — MICCAI 2000 (third international conference, to be held in Pittsburgh, USA, October 11–14, 2000).

Curriculum Vitae

The author was born in Heemskerk, the Netherlands, on July 31, 1971. At the age of four, he moved with his family to Alkmaar. After completing general secondary education (MAVO, Bloemendaal), he attended an intermediate technical school (MTS, Alkmaar), from which he obtained a diploma in 1990. In the following year, he passed the propaedeutic exam at the college of technology (HTS, Alkmaar), after which he moved to Delft.

In June 1996, he received a Master of Science degree (cum laude) in Electrical En- gineering from Delft University of Technology. His graduation project concerned the segmentation of 3D medical image data by means of volume-growing techniques and was carried out at the Laboratory for Clinical and Experimental Image Processing (LKEB), Leiden University Medical Center.

In the subsequent month, he started as a Ph.D. student (“AiO”) at the Image Sciences Institute, University Medical Center Utrecht, on a research project concerning the optimization of information extraction from X-ray image sequences. The project was carried out in cooperation with the Department of XRD Predevelopment of Philips Medical Systems (Best, the Netherlands). The results are described in this thesis.

He was awarded a fellowship by the Netherlands Organization for Scientific Research (NWO), for research on adaptive interpolation, to be carried out at the Biomedical Imaging Group, Swiss Federal Institute of Technology (Lausanne, Switzerland). This project will start in November 2000.

During his Ph.D. study, there were also joyful events in his private life: he married Greetje Kruyswijk, and they had two children, Marjolein and Daniël.