THE RECONSTRUCTION AND

MANIPULATION OF OBJECT-BASED 3D

X-RAY IMAGES

by

Simant Prakoonwit

This thesis is submitted in partial fulfilment of the requirements for the Degree of Doctor of Philosophy (Ph.D.) and the Diploma of Imperial College (D.I.C.)

Department of Electrical and Electronic Engineering
Imperial College of Science, Technology and Medicine

University of London

May 1995

ABSTRACT

Computers and graphic peripherals have come to play a significant part in scientific visualisation. In medical applications, many non-invasive techniques have been developed for visualising the inner entities of the human body. A major interest has arisen in representing these entities by digital objects that can be manipulated and displayed using methods developed in Computer Graphics.

A novel method of 3D reconstruction is presented, based on the assumption that all the entities of interest can be represented as a set of discrete objects. An object is characterised by its common characteristic. Each object is reconstructed from the projections of its surface curves in about 10 conventional 2D X-ray images taken at suitable projection angles. The method automatically generates an optimum number of appropriately distributed surface points for the object. The method also determines the object's closed surface from these surface points. Each solid reconstructed object is then represented by a Boundary-representation (B-rep) scheme, which is compatible with any standard display and manipulation technique, and can be manipulated separately.

Experiments were performed on both computer-generated objects and physical objects. Results show that the method can be used in a wide range of applications as an addition to existing techniques, and can allow a dramatic decrease in radiation exposure and economy in data manipulation.

To my mum, my dad and my sister.

CONTENTS LIST

ABSTRACT 2
DEDICATION 3
CONTENTS LIST 4
FIGURE LIST 10
TABLE LIST 20
GLOSSARY AND NOTATION 21
ACKNOWLEDGEMENTS 23

CHAPTER 1: INTRODUCTION 24
1.1 3D Representation 24
1.1.1 Wire-Frames 25
1.1.2 Constructive Solid Geometry (CSG) 26
1.1.3 Volumetric Representation 26
1.1.4 Boundary Representation (B-rep) 27
1.1.5 Discussion 27
1.2 3D Reconstruction 28
1.2.1 Penetrating imaging modalities 32
1.2.2 Non-penetrating imaging modalities 33
1.2.3 3D reconstruction from penetrating imaging modalities 33
1.2.4 3D reconstruction from non-penetrating imaging modalities 35
1.3 X-ray based imaging 37

4 Contents list

1.4 Motivation for this research 40
1.4.1 Discussion of the previous approaches to 3D reconstruction 40
1.4.2 Motivation 44

1.5 The scope of this research 45
1.6 Organisation of this thesis 46
1.7 References 51

PART 1: CONCEPT

CHAPTER 2: CONCEPT OF OBJECT-BASED 3D X-RAY IMAGING 60
2.1 Introduction 60

2.2 Topological properties of an object 62
2.3 Geometrical properties of an object 64
2.3.1 Smooth and non-smooth objects 64
2.3.2 Convex and non-convex objects 65
2.4 Projection of an object 66
2.4.1 Projection 66
2.4.2 Curves in X-ray imaging 67
2.4.3 Relations between the contrast curves in 2D and the surface in 3D 71
2.5 Intuitive concept of Object-Based 3D X-ray Imaging 73
2.5.1 Tangent curves 73
2.5.2 Singular curves 74
2.5.3 3D reconstruction 74
2.5.4 Multiple objects and object identification 77
2.6 Descriptions and definitions 78
2.7 References 95


PART 2: IMPLEMENTATION

CHAPTER 3: DATA ACQUISITION 97
3.1 Introduction 97

3.2 Projection system 98
3.2.1 Projection directions 98

3.2.2 Co-ordinate and projection systems 102
3.3 X-ray systems 105
3.3.1 System overview 106
3.3.2 System used in the project 108
3.4 Phantoms 109

3.4.1 Computer-generated objects 109
3.4.2 Physical objects 118
3.5 References 124

CHAPTER 4: CURVE REPRESENTATION 125
4.1 Introduction 125
4.2 Contrast curve representation 126
4.3 Contrast curve determination 127
4.4 Occluding contrast curve determination 131
4.5 References 135

CHAPTER 5: OBJECT SURFACE CURVE RECONSTRUCTION 137
5.1 Introduction 137

5.2 Common tangent plane 140
5.2.1 Parallel projection 141
5.2.2 Conical projection 143
5.3 Determination of tangent points 145
5.3.1 Parallel projection 146
5.3.2 Conical projection 152


5.4 Determination of common tangent points 158
5.4.1 Tangent point matching 159
5.4.2 Determination of common tangent points 160
5.5 Verification of a common tangent point 164
5.6 Distribution of tangent points 171
5.6.1 Uniformly distributed projection directions 172
5.6.2 Non-uniformly distributed projection directions 179
5.7 Reconstruction of tangent and singular curves 179
5.7.1 Reconstruction of a tangent or singular curve 179
5.7.2 Tangent or singular curve linking 191
5.8 Multiple objects and object identification 194
5.8.1 Object identification 194
5.8.2 Curve identification 202
5.9 References 206

CHAPTER 6: OBJECT SURFACE RECONSTRUCTION 208
6.1 Introduction 208

6.2 General primary surface reconstruction 212
6.2.1 Background principles 212
6.3 Implementation 216

6.3.1 Main connected network 217
6.3.2 Reference plane 218
6.3.3 Definition of the first primary polyhedron's face 219
6.3.4 Definition of the rest of the primary faces 223
6.4 Bounding primary surface reconstruction 225
6.4.1 Definition of the first primary face 226
6.4.2 Definition of the rest of the primary faces 226
6.5 Face triangulation 227
6.6 References 233


CHAPTER 7: MANIPULATION AND VISUALISATION 236
7.1 Introduction 236
7.2 Object manipulation and visualisation 237
7.3 Some essential tools for manipulation and visualisation 239
7.3.1 Viewing functions 240

7.3.2 Surfaces and materials 240
7.3.3 Lighting and shading 241
7.3.4 Rendering 243

7.4 Manipulation and visualisation software 246
7.5 References 247

CHAPTER 8: RESULTS 249
8.1 Introduction 249
8.2 Single objects 249
8.2.1 Ellipsoid 250

8.2.2 Dimple-shaped object 252
8.2.3 Bone-shaped object 254
8.3 Multiple objects 257

CHAPTER 9: DISCUSSION AND CONCLUSIONS 266
9.1 Discussion 266
9.1.1 General discussion 266

9.1.2 Data acquisition 268
9.1.3 Tangent and singular curve reconstruction 270
9.1.4 Surface reconstruction 271
9.1.5 Incomplete objects 272
9.2 Conclusions 273

9.3 Further work 273
9.3.1 Data acquisition 273
9.3.2 Incomplete objects 274


9.3.3 Process automation and system integration 275
9.3.4 Surface formation 276

APPENDICES

APPENDIX A: TOPOLOGICAL PROPERTIES OF SURFACES 277
APPENDIX B: SURFACES AND CONTRAST CURVES 283
APPENDIX C: PROJECTION DIRECTIONS 286
APPENDIX D: GEOMETRICAL TOOLS 292
APPENDIX E: OVERVIEW ON COMPUTER SYSTEMS 300

FIGURE LIST

Fig 1.2a Digital 3D imaging. 29

Fig 1.2b 2D-2D image processing (low-level). 31
Fig 1.2c 2D-2D image processing (high-level). 31
Fig 1.2d 3D reconstruction. 31
Fig 1.2e 2D imaging modalities. 31
Fig 1.5a The scope of this research. 47
Fig 1.6a Flow of operations in this thesis. Relevant chapter numbers are in parentheses. 50

Fig 2.2a Sphere and bone are topologically equivalent; torus and coffee mug are topologically equivalent. 64
Fig 2.3a Non-smooth objects. 64
Fig 2.3b (1) Convex object. (2) Non-convex object. 65
Fig 2.4a Contrast curve from a tangent curve. 67
Fig 2.4b Contrast curve from tangent curve and discontinuity. 68
Fig 2.4c Contrast curve and occluding contrast curve. 70
Fig 2.4d Curves with branches and junction points A, B. 70
Fig 2.4e Curves with swallowtails and crossing point A. 71
Fig 2.4f Curve with a butterfly and crossing point B and junction points A, C. 71
Fig 2.4g Singular points. 72

10 Figure list

Fig 2.5a (1) Contrast curves on a sphere from two projection directions. (2) Contrast curves from more projection directions. (3) Contrast curves as a wire-frame representation of the sphere. 74
Fig 2.5b Contrast curves A, D from tangent curves B, C respectively; points o and m are the projections of point n. 75
Fig 2.5c Tangent plane. 76
Fig 2.5d Singular curve and tangent plane. 77
Fig 2.5e Identifying an object's contrast curves by two tangent planes. 78
Fig 2.6a (1) Cartesian co-ordinate system in E^2. (2) Right-handed triad Cartesian co-ordinate system in E^3. 79
Fig 2.6b World co-ordinate system (x, y, z) in E^3, 3D projection plane co-ordinate system (x_P, y_P, z_P) in E^3_P, and 2D projection plane co-ordinate system (x_P, y_P) in E^2_P. 80
Fig 2.6c Projection cone and projection cylinder, in the case where the projection planes are bounded by rectangular closed curves. 81
Fig 2.6d The set of projection planes in the example. 82
Fig 2.6e Geometric projection in parallel projection and conical projection. 83
Fig 2.6f Common tangent plane in parallel projection. 84
Fig 2.6g Common tangent plane in conical projection. 84
Fig 2.6h (1) One tangent curve, one singular curve and two contrast curves. (2) One tangent curve, one isolated singular point in E^3, one contrast curve and one isolated singular point in E^2_P. (3) One tangent curve with one singular curve and contrast curves. (4) One tangent curve, one singular point in E^3 and one contrast curve with one singular point in E^2_P. 86
Fig 2.6i (1) One tangent point in one common tangent plane. (2) Two tangent points in one common tangent plane. 87
Fig 2.6j Common tangent point and tangent points. 88
Fig 2.6k Point P is outside sphere C, and its projections. 89


Fig 2.6l Distribution of tangent and common tangent points. 92
Fig 2.6m Pair (ψ0, ψ1). 92
Fig 2.6n Pair (γ0, γ1). 92
Fig 2.6o Distribution of plane common tangent points. 93

Fig 3.1a Flow of operations in the data acquisition. Relevant section numbers are in parentheses. 98
Fig 3.2a Five regular polyhedra: (1) cube (2) octahedron (3) tetrahedron (4) icosahedron (5) dodecahedron. 100

Fig 3.2b Tetrahedron and the four projection directions. 101

Fig 3.2c (1) World co-ordinate system (x, y, z). (2) 3D projection plane co-ordinate system (x_P, y_P, z_P). (3) 2D projection plane co-ordinate system (x_P, y_P). 103

Fig 3.3a Digital X-ray system overview. 106
Fig 3.3b Transferring data from the X-ray machine into a workstation. 108

Fig 3.4a Computer-generated objects: (1) a dimple-shaped object (2) a bone-shaped object. 110
Fig 3.4b Process to extract some information from a DXF file. 111
Fig 3.4c X-ray image intensity of an object. 112
Fig 3.4d Intersection of a ray and a triangular face. 115
Fig 3.4e Two cases when a ray intersects more than one triangular face. 116
Fig 3.4f A ray always has an even number of intersection points. 117
Fig 3.4g A ray arrives at a projection plane at an angle θ. 117
Fig 3.4h Co-ordinate systems in the experiment. 119
Fig 3.4i Phantom's shell and the World co-ordinate system. 120
Fig 3.4j Phantom and base. 121
Fig 3.4k Position of the phantom. 122
Fig 3.4l X-ray image dimension. 123

Fig 3.4m Dimensions of the X-ray machine used in this project. 123


Fig 4.1a Flow of operations in this chapter. Relevant section numbers are in parentheses. 126
Fig 4.3a A graph. 128
Fig 4.3b (1) Two contrast curves pass across each other with a node. (2) Two contrast curves pass across each other without a node. 129
Fig 4.3c (1) Non-planar graph. (2) Planar graph. (3) Occluding curve. 129
Fig 4.4a (1) Two bones and their contrast curves. (2) Density profile of the two bones. 134
Fig 4.4b All possible closed contrast curves of the two bones in Fig 4.4a. 135

Fig 5.1a Flow of operations in this chapter. Relevant section numbers are in parentheses. 139
Fig 5.2a Common tangent plane ξ in parallel projection. 141
Fig 5.2b Tangent planes touch an object's surface (in parallel projection). 142
Fig 5.2c Touching point and its projections (in conical projection). 144
Fig 5.2d Common tangent planes in conical projection. 145
Fig 5.3a Tangent curves and contrast curves of a spherical object. 146
Fig 5.3b Determination of an object's tangent and common tangent points by using a common tangent plane. 147
Fig 5.3c Common intersection line and tangent angle in parallel projection. 152
Fig 5.3d Common intersection line l' in conical projection. 154
Fig 5.3e Tangent point in conical projection. 156
Fig 5.4a Common tangent planes and tangent points. 159
Fig 5.4b One common tangent point on a common tangent plane. 163
Fig 5.4c More than one common tangent point on a common tangent plane. 164
Fig 5.5a Projection of a point in parallel projection. 166
Fig 5.5b Projection of a point in conical projection. 167
Fig 5.5c Point location problem: a point is on a curve. 170
Fig 5.5d Point location problem: P' is not outside C and no tangent point is involved. 171


Fig 5.5e Point location problem: a tangent point is involved. 171
Fig 5.6a Characteristics of tangent lines on a projection plane, when N = 3: (1) Set of projection planes. (2) Pair ψ0, ψ1 and a tangent line. (3) Pair ψ0, ψ2 and a tangent line. (4) Tangent lines on ψ0. (5) Distribution of the tangent lines. 173
Fig 5.6b Distribution of tangent lines when N = 4. 173
Fig 5.6c Distribution of plane tangent points. 174
Fig 5.6d Distribution of plane tangent points in Fig 5.6c with a different object position. 175
Fig 5.6e Graph showing the relationship between distances (between two plane tangent points) and the inverse curvature. 175
Fig 5.6f Contrast curve with a singular point. 176
Fig 5.6g Contrast curve with a reflection point. 176
Fig 5.6h Distribution of common tangent points when the tangent curve is parallel to a projection plane. 177
Fig 5.6i Geometry of conical projection and the deviation of the tangent angle. 178
Fig 5.6j Plane tangent point distributions in parallel projection and in conical projection. 178
Fig 5.7a Plane tangent points on contrast curve C and the corresponding common tangent points on tangent curve C'. 180
Fig 5.7b Contrast curve segment TP1TP2 and the corresponding tangent curve segment CTP1CTP2. 183
Fig 5.7c Three-line approximation of a curve. 183
Fig 5.7d Points and projector lines. 185
Fig 5.7e Reconstruction of a tangent curve segment in parallel projection. 185
Fig 5.7f Reconstruction of a tangent curve segment in conical projection. 185
Fig 5.7g The first tangent curve segment reconstruction. 186


Fig 5.7h Reconstruction of a curve with a reflection point. 187
Fig 5.7i Order of the points. 188
Fig 5.7j Two tangent curves project onto the same contrast curve. 190
Fig 5.7k The tangent and common tangent points. 190
Fig 5.7l Two reconstructed tangent curves share the same common tangent point CTP. 191
Fig 5.7m Reconstructed tangent or singular curve linking. 192
Fig 5.7n Tangent and contrast curves of a bean-shaped object. 193
Fig 5.7o First reconstructed tangent curve. 193
Fig 5.7p Second reconstructed tangent curve. 194
Fig 5.8a Object identification by using common tangent planes. 196
Fig 5.8b Tree of occluding curves. 197
Fig 5.8c (1) Many objects can generate the same occluding contrast curve. (2) and (3) Example showing that two bones can share the same contrast curves at a projection angle. 198
Fig 5.8d A part of an occluding contrast curve tree. 198
Fig 5.8e An example of an occluding contrast curve tree. 200
Fig 5.8f Invalid occluding contrast curve. 201
Fig 5.8g Dimple-shaped object with sharp edges. 202
Fig 5.8h An invalid curve is matched. 202
Fig 5.8i Contrast curves of two bean-shaped objects. 205
Fig 5.8j Contrast curves of two bean-shaped objects when there is a connected point. 206
Fig 5.8k Contrast curves of two bean-shaped objects when one curve is completely inside the other. 206


Fig 6.1a Flow of operations in this chapter. Relevant section numbers are in parentheses. 211
Fig 6.2a A sphere represented by four curved triangular faces. 213
Fig 6.2b Plane representation of the four curved triangular faces in Fig 6.2a. 213
Fig 6.2c Modified plane representation of the four curved triangular faces in Fig 6.2a. 213
Fig 6.2d U in E^3 is diffeomorphic to U'. 216
Fig 6.3a To create a main connected network, the points that have degree less than 3 are removed. 217
Fig 6.3b To create a main connected network, the segment that has a degree-1 end point is removed. 218
Fig 6.3c (1) First edge AB of face F0. (2) Face F0 and the adjacent faces. 221
Fig 6.3d Projections of Nbdp and Nbde. 222
Fig 6.3e Determination of the next edge in a face. 223
Fig 6.4a Two crossing reconstructed tangent curves. 225
Fig 6.5a Primary face. 228
Fig 6.5b Secondary face. 229

Fig 6.5c Cutting edge at corner point P. 231
Fig 6.5d Corner point P removal. 231
Fig 6.5e (1) Secondary face SF. (2) Cutting edges at primary points used to establish triangular faces. (3) Cutting edges at a new set of corner points. (4) The shortest cutting edge is used to establish a new triangular face. 233

Fig 8.2a (0)-(9) Simulated X-ray images and contrast curves of the ellipsoid at projection directions 0-9 in Table C4 in Appendix C. 251


Fig 8.2b The 90 common tangent points on the surface of the ellipsoid reconstructed from the contrast curves in Fig 8.2a: (1) The common tangent points. (2) Top view. (3) Side view. (4) Front view. (5) The common tangent points superimposed on the ellipsoid's surface. (6) Top view. (7) Side view. (8) Front view. 252
Fig 8.2c (0)-(3) Simulated X-ray images and contrast curves of the dimple at projection directions 0-3 in Table C2 in Appendix C. 253
Fig 8.2d The dimple-shaped object reconstructed by using the contrast curves in Fig 8.2c. 253
Fig 8.2e (0)-(9) Simulated X-ray images and contrast curves of the bone-shaped object at projection directions 0-9 in Table C4 in Appendix C. 255

Fig 8.2f The reconstructed bone-shaped object (total vertices: 315, total triangular faces: 626), reconstructed from the contrast curves in Fig 8.2e. 256
Fig 8.2g The reconstructed bone-shaped object (total vertices: 301, total triangular faces: 620), reconstructed from 10 misaligned X-ray images. 257
Fig 8.3a (0)-(9) X-ray images and contrast curves of the synthetic knee-joint phantom taken at projection directions 0-9 in Table C4 in Appendix C. 258
Fig 8.3b Tangent point C on contrast curve AB on X-ray image number 5 (projection plane ψ5) in Fig 8.3a, and tangent point F on contrast curve DE on X-ray image number 7 (projection plane ψ7) in Fig 8.3a, are determined by common tangent plane ξ. 260
Fig 8.3c Distribution of tangent points on contrast curve GH. 261


Fig 8.3d Left: the reconstructed outer surface of the femur (total vertices: 529, total triangular faces: 1054). Right: the inner surface of the femur (total vertices: 490, total triangular faces: 980), reconstructed from the contrast curves in Fig 8.3a. 261
Fig 8.3e Top: the reconstructed outer surface of the patella (total vertices: 34, total triangular faces: 64). Left: the reconstructed outer surface of the tibia and fibula (total vertices: 833, total triangular faces: 1699). Right: the reconstructed inner surface of the tibia and fibula (total vertices: 750, total triangular faces: 1499). All of these objects were reconstructed from the contrast curves in Fig 8.3a. 262
Fig 8.3f All the reconstructed outer surfaces in one scene, viewed from two angles. 263
Fig 8.3g Rendered scene of the left knee joint in Fig 8.3f. 264
Fig 8.3h Rendered scene of the right knee joint in Fig 8.3f. 265
Fig 8.3i The opacity of the objects can be changed to visualise a hidden object (the patella in this figure). 265

Fig 9.1a Range of 10 uniformly distributed projection directions: (1) Example of projection directions (d) which cannot be used for a cylindrical volume (a leg in this case). (2) The range of θ is from -6.47° to 46.70° and the range of φ is from 0° to 360°. (3) X-ray sources (CP) need an angular space of only 53.17° (the range of θ). This allows us to cope with a cylindrical volume such as a leg. 270
Fig 9.3a Example of a data-acquisition machine for Object-Based 3D X-ray Imaging, showing some X-ray sources and receptors positioned on the inner surface of the exposure chamber. 274


Fig A1 Surface classification. 278

Fig A2 Point and open disk. 279
Fig A3 Simple polygon. 279
Fig A4 Non-2-manifold surface. 280

Fig C1 Cube. 286
Fig C2 Octahedron. 287
Fig C3 Tetrahedron. 288
Fig C4 Icosahedron. 289
Fig C5 Dodecahedron. 290

Fig E1 Computer systems used in this project. 301

TABLE LIST


Table B1 (1)-(18) Zero-genus surfaces. 284
Table B2 (1)-(18) Contrast curves of the surfaces in Table B1. 284
Table B3 (1)-(18) Occluding contrast curves of the surfaces in Table B1. 285

Table C1 Cube's vertices' positions, faces and projection directions derived from the cube. 287
Table C2 Octahedron's vertices' positions, faces and projection directions derived from the octahedron. 288
Table C3 Tetrahedron's vertices' positions, faces and projection directions derived from the tetrahedron. 289
Table C4 Icosahedron's vertices' positions, faces and projection directions derived from the icosahedron. 290
Table C5 Dodecahedron's vertices' positions, faces and projection directions derived from the dodecahedron. 291

GLOSSARY AND NOTATION

Object: 2-manifold, closed and orientable surface in 3D.
Contrast surface: Discontinuities of the density function in the volume of interest in 3D. If this surface is 2-manifold, closed and orientable then it is called an object.
Tangent curve: Curve on a contrast surface where the surface is tangential to the X-rays in 3D.
Singular curve: Discontinuity on a contrast surface where the radius of curvature is equal to zero in 3D.
Projection plane: 2D X-ray image.
Contrast curve: Projection of a tangent curve or a singular curve in a 2D projection plane.
Common tangent plane: Plane that is tangential to a contrast surface and contains the two centres of projection (X-ray sources) of a pair of projection planes.
Tangent point: Point where a common tangent plane touches a contrast curve.
Common tangent point: Point where a common tangent plane touches a contrast surface.

E^d: d-dimensional Euclidean space.
E^3: 3D world space.
E^3_P: 3D projection plane space.
E^2_P: 2D projection plane space.
Ψ: Set of projection planes.
ψ: Projection plane.

21 Glossary and notation

Γ: Set of centres of projection planes.
OP: Centre of a projection plane.
Θ: Set of projection directions.
d: Projection direction.
Γ_S: Set of centres of projection.
CP: Centre of projection.
Π: Geometric projection.
Ξ: Set of common tangent planes.
ξ: Common tangent plane.
CTS: Set of tangent and singular curves.
CTS: Tangent or singular curve.
CC: Set of contrast curves.
CC: Contrast curve.
CTP: Set of common tangent points.
CTP: Common tangent point.
TP: Set of tangent points.
TP: Tangent point.
PTP: Set of plane tangent points.
d: Vector d in homogeneous co-ordinates.
FV: Set of primary polygonal faces (vertices).
FE: Set of primary polygonal faces (edges).
FV: Set of vertices of a primary face.
FE: Set of edges of a primary face.
SFV: Set of secondary polygonal faces (vertices).
SFE: Set of secondary polygonal faces (edges).
SFV: Set of vertices of a secondary face.
SFE: Set of edges of a secondary face.

ACKNOWLEDGEMENTS

I would like to thank my supervisor, Prof. Ralph Benjamin. He has given me enormous support in this project and made me fully enjoy doing research. I consider myself a lucky one to have such a supervisor. Having known him for almost six years, I am sure that he is absolutely one of my real gurus. Moreover, he and his family made my Christmas time in England so meaningful and memorable.

I wish to acknowledge the profound help throughout the project of Prof. Richard I. Kitney, my supervisor. My thanks to all my colleagues in the Biomedical Systems Group, Electrical and Electronic Engineering Department at Imperial College, especially Mr. Ioannis Matalas, for their friendship and helpful assistance.

I am grateful to Dr. John Stevens of St. Mary's Hospital, Paddington, for generously providing some experimental facilities, and to Dr. David M. Marsh of St. James's Hospital, Dublin, for very useful advice about data transfer. I cannot forget to mention the staff at Maida Vale Hospital and the National Hospital at Queen Square, who spent their valuable time helping me to do some tedious experiments at the early stages of my research. I am also happy to recognise the assistance given by some engineers from Philips Medical Systems, London, and my colleague, Mr. Joao Batista, who helped me complete crucial experiments at St. Mary's Hospital. The phantoms used in this project were marvellously built by Mr. Ray Thomson at Imperial College's EEE workshop.

I must acknowledge the British Council for giving me a great opportunity to study in Britain and for wonderful financial support.

Finally, I must thank Mr. Pakoon, my beloved father, Mrs. Preeda, my beloved mother, Miss Thitikant, my beloved sister, and all my relatives back home in Thailand for their enormous support, understanding and particularly their patience during my long absence.

CHAPTER 1: INTRODUCTION

1.1 3D representation

In this thesis the word representation stands for representation of information concerning the structure of physical objects. In imaging, a physical object is perceived. The information describing this physical object may then be processed into a form more appropriate for a specific purpose, and can then be represented by a suitable method. In each case we have to consider what is the most appropriate type of information, how to handle this information effectively, and what tools should be used to implement the whole process.

Digital computers provide the most promising approach for handling this kind of information, for example in robotic applications, image compression and enhancement, medical applications, and image understanding.

24 Chapter 1: Introduction

In 3D imaging, which includes reconstruction, manipulation, and representation of a physical object, computers offer many advantages:

- Large computational power is required in 3D imaging.
- Computers can easily be programmed to suit a particular task, such as a reconstruction algorithm or a manipulation algorithm.
- Most of the tools which are important to 3D imaging have already been developed on computers.
- Computer technology has developed rapidly, resulting in higher performance at reduced cost.

Hence this project is based on a digital computer approach and all information is handled digitally. In this section only the representation of physical objects is considered, because the reconstruction and manipulation algorithms depend very much on the representation scheme selected.

For physical objects, which are 3D, the four dominant representation schemes in use today are Wire-Frames, Constructive Solid Geometry (CSG), Volumetric Representation and Boundary Representation (B-rep). Although other methods also exist [1], they can be regarded as variations on these fundamental methods. Each of these has its advantages and disadvantages when compared to the others.

1.1.1 Wire-frames

A wire-frame representation of a 3D object consists of a finite set of points and connecting edges which define the object adequately and facilitate subsequent visualisation. An edge may well be an arc of a circle or any other well-defined space curve which is required for a good wire-frame representation. A computer representation of a wire-frame structure consists essentially of two types of information. The first is termed geometric data, which relate to the 3D co-ordinate positions of the wire-frame node points in space. The second is concerned with the connectivity or topological data, which relate pairs of points together as edges. This representation does not contain surface information, and therefore is not complete, since the solid portion of the wire-frame is not defined in this structure. Details on wire-frames can be found in [2] et al.
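The two kinds of information just described can be sketched as a small data structure. This is an illustrative sketch only (the class and method names are assumptions, not a scheme used in this thesis): geometric data as a list of 3D points, topological data as index pairs forming edges.

```python
from dataclasses import dataclass, field

@dataclass
class WireFrame:
    points: list = field(default_factory=list)  # geometric data: (x, y, z) positions
    edges: list = field(default_factory=list)   # topological data: (i, j) index pairs

    def add_point(self, x, y, z):
        self.points.append((x, y, z))
        return len(self.points) - 1

    def add_edge(self, i, j):
        self.edges.append((i, j))

# A unit square in the z = 0 plane as a wire-frame:
wf = WireFrame()
ids = [wf.add_point(x, y, 0.0) for x, y in [(0, 0), (1, 0), (1, 1), (0, 1)]]
for a, b in zip(ids, ids[1:] + ids[:1]):
    wf.add_edge(a, b)
# Note: no surface information is stored anywhere, which is exactly the
# incompleteness of the wire-frame scheme discussed above.
```

Nothing in the structure says which side of the square is "solid", illustrating why a wire-frame alone cannot define a solid object.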

1.1.2 Constructive Solid Geometry (CSG)

CSG is at once a method of representation, a design methodology, and a certain standard set of primitive objects. A CSG model represents a solid object in a computer as a combination of simpler solid objects called standard primitives, such as right circular cylinders, spheres, and boxes. These primitives are often themselves combinations of even simpler entities known as half-spaces. The types of combination available are the (Boolean) set operations of union, intersection, and difference, and the complete solid object is constructed from the solid primitives using these operations. A CSG model is stored as a tree, with leaf nodes representing the primitive solids, internal nodes representing regularised Boolean operations, and arcs enforcing precedence among the operations. There is a substantial literature on CSG, such as [3], etc.
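The tree structure just described can be sketched as follows. This is a minimal hypothetical example (the class names and point-membership test are illustrative assumptions, not a standard CSG API): leaves are primitives, internal nodes apply a Boolean set operation, and a query point is classified by recursing through the tree.

```python
import math

class Sphere:
    def __init__(self, cx, cy, cz, r):
        self.c, self.r = (cx, cy, cz), r
    def contains(self, p):
        return math.dist(p, self.c) <= self.r

class Box:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def contains(self, p):
        return all(l <= v <= h for v, l, h in zip(p, self.lo, self.hi))

class CSG:
    """Internal tree node: a Boolean operation on two sub-solids."""
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right
    def contains(self, p):
        a, b = self.left.contains(p), self.right.contains(p)
        return {"union": a or b,
                "intersection": a and b,
                "difference": a and not b}[self.op]

# A box with a spherical hole cut out of its centre:
solid = CSG("difference", Box((0, 0, 0), (2, 2, 2)), Sphere(1, 1, 1, 0.5))
print(solid.contains((0.1, 0.1, 0.1)))  # True: inside the box, outside the sphere
print(solid.contains((1.0, 1.0, 1.0)))  # False: inside the subtracted sphere
```

The sketch also hints at the limitation discussed in §1.1.5: an anatomical surface would need a very large tree of such primitives.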

1.1.3 Volumetric representation

One way to represent an object is to place it in some reference co-ordinate system and subdivide the volume it occupies into small primitive volume elements, or voxels. Most volumetric descriptions employ box-shaped primitives (cubes or parallelepipeds) as voxels. These voxels, short for volume elements, can be seen as an extension to 3D space of the 2D digital image elements called pixels, short for picture elements. If only one size of voxel is used, such descriptions can occupy a large amount of storage space, since a large number of voxels might be required to approximate a large object with a complicated boundary. To make this scheme more effective, the family of superquadric or superellipsoidal solids has been added to the set of modelling primitives [4]. But this method has not yet been widely used. Most existing work with superquadric primitives has dealt with fitting problems rather than the development of systems employing superquadrics as an object representation scheme.
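A minimal sketch of the single-size voxel idea follows (all names are hypothetical, and a real system would store an occupancy array rather than a set): the volume is subdivided into a cubic grid and a solid is represented by its occupied cells, which makes the storage/resolution trade-off visible directly.

```python
class VoxelGrid:
    def __init__(self, n, size):
        self.n = n             # voxels per axis
        self.h = size / n      # voxel edge length
        self.occupied = set()  # (i, j, k) indices of filled voxels

    def fill_sphere(self, cx, cy, cz, r):
        # Mark every voxel whose centre lies inside the sphere.
        for i in range(self.n):
            for j in range(self.n):
                for k in range(self.n):
                    x, y, z = ((i + 0.5) * self.h,
                               (j + 0.5) * self.h,
                               (k + 0.5) * self.h)
                    if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= r * r:
                        self.occupied.add((i, j, k))

    def volume(self):
        return len(self.occupied) * self.h ** 3

# A sphere of radius 0.4 in a unit cube, at two resolutions:
for n in (8, 32):
    g = VoxelGrid(n, 1.0)
    g.fill_sphere(0.5, 0.5, 0.5, 0.4)
    print(n, len(g.occupied), round(g.volume(), 3))
```

Halving the voxel size multiplies the candidate cell count by eight while only gradually improving the boundary approximation, which is the storage problem noted in the text.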

1.1.4 Boundary representation (B-rep)

A B-rep represents a solid directly through a representation of its bounding surface. Historically, boundary models emerged from the polyhedral models used in computer graphics for representing objects and scenes for hidden line and surface removal. They can be viewed as enhanced wire-frames that attempt to overcome the problems of graphical models by including a complete description of the bounding surfaces of the object. The data structure contains the elements which describe the boundary. These elements are divided into two categories: topological and geometric. The topological elements are linked together in a network or graph which represents their interconnection or connectivity in terms of vertices, edges and faces. This face-edge-vertex graph contains no geometric information about the object. The geometric elements (points, curves and surfaces), which give the object form and fix it in space, are separate.

A B-rep represents faces in terms of explicit nodes of a boundary data structure. Beyond that, many alternatives for representing the geometry and the topology of a boundary model are possible. Reference [5] discusses further alternatives and provides information on the data structures used by a number of researchers. Further details on B-rep can be found in [5] et al.
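The separation of topology from geometry can be sketched as below. This is one hypothetical arrangement among the many alternatives mentioned (not the scheme adopted later in this thesis): vertex co-ordinates are the geometric elements, face loops of vertex indices are the topological ones, and edges are derived from the face loops.

```python
class BRep:
    def __init__(self):
        self.vertices = []  # geometry: (x, y, z) positions
        self.faces = []     # topology: loops of vertex indices

    def add_vertex(self, x, y, z):
        self.vertices.append((x, y, z))
        return len(self.vertices) - 1

    def add_face(self, loop):
        self.faces.append(tuple(loop))

    def edges(self):
        # Derive the undirected edge set from the face loops.
        es = set()
        for f in self.faces:
            for a, b in zip(f, f[1:] + f[:1]):
                es.add(tuple(sorted((a, b))))
        return es

# A tetrahedron: 4 vertices, 4 triangular faces, 6 derived edges.
t = BRep()
for p in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]:
    t.add_vertex(*p)
for loop in [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]:
    t.add_face(loop)
# Euler's formula for a closed polyhedron of genus zero: V - E + F = 2.
print(len(t.vertices) - len(t.edges()) + len(t.faces))  # 2
```

Moving a vertex changes only the geometric list, leaving the face-edge-vertex connectivity untouched, which is the point of keeping the two categories separate.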

1.1.5 Discussion

A CSG scheme is suitable when man-made or geometrical 3D objects are to be represented. However, for general physical objects, including objects that have ambiguous, complicated surfaces, this scheme is not appropriate. Such objects require a large number of primitives and parameters to be properly represented.

Both volumetric representation and B-rep are widely used for representing 3D physical objects in many applications. Volumetric representation differs from B-rep in that it represents the internal structure within an object, not the surface of the object. One metaphor that may be helpful is that of a physical object viewed as a sealed plastic bag filled with liquid. The volumetric representation represents only the liquid; the B-rep represents only the plastic bag. Most CAD/CAM systems currently available make use of B-rep in order to define the solid object. Historically, B-rep models were the first to be implemented in computers, and consequently most of the theory and computer techniques developed for handling and viewing 3D models were initially developed using a B-rep.

Four methods for 3D representation have been presented. Object-Based 3D X-ray Imaging, the subject of this thesis, is based on a surface representation: a wire-frame or B-rep scheme. A wire-frame scheme was used in the previous stage of this project [6]. This representation scheme requires small data storage and can be accessed, manipulated and displayed quickly. But its main disadvantage is that there may be ambiguities in interpreting the representation [3] et al. This scheme cannot represent a perfect 3D object.

1.2 3D reconstruction

Normally, imaging techniques map objects onto images. An object may be defined by a property which is distributed in multidimensional space and whose extent we require to measure. An image can be stated as the measurement (also, in general, multidimensional) which has been made and which is regarded as best representing the object distribution [7] according to our purpose. In general the object can be mathematically represented by the symbol f and the image by the symbol g. Ideally, the best mapping will give, at all locations in the space, f = g. In reality most imaging techniques map 3D objects in the real world into 2D images. This results in ambiguities in locations, which leads to non-optimum interpretation. To understand these images one has to understand how they relate to the actual 3D object. These relationships depend on the imaging modalities used.

The task of understanding 3D objects from mapped 2D images depends very much on mental gymnastics by the users. A problem occurs when the 3D information needs to be transformed into some other format, either to aid comprehension of the object's structure for users with less experience, or to facilitate its automatic manipulation and analysis. To solve this problem, a method to recover 3D information from 2D images is required. As mentioned in the previous section, at present the most convenient approach to this task is a digital one. Fig 1.2a illustrates a digital 3D imaging block diagram. It is useful to decompose the 3D imaging task into a series of subtasks that are usually tackled sequentially and separately in some order.

[Fig 1.2a: block diagram of digital 3D imaging. A volume of interest f is mapped by an input device into 2D information; digital processing produces a 3D description or interpretation, i.e. a digital description of the original volume of interest, which is then passed to presentation, manipulation and further processing.]

Fig 1.2a Digital 3D imaging


The purpose of 3D imaging is to recover or reconstruct the 3D information which was mapped into 2D information in the imaging processes. From the diagram, the original 3D information, or objects in the real world, is mapped by an input device A, which is usually some form of transducer such as a digital camera, computerised tomography (CT) scanner, digital X-ray machine, etc. The selection of the type of transducer, in other words the imaging modality, depends on what kind of properties we want to perceive from the object. The 2D information obtained at this stage may be used with or without further processing. The next stage is to process the 2D information. This stage can be classified into three subprocesses, as shown in Fig 1.2b-d. The first subprocess, Fig 1.2b, is a low-level subprocess: the 2D information is processed into a more appropriate form, for example to remove noise of one form or another, to determine edges of objects in the image, etc. The processed 2D image data is then passed through a high-level subprocess, Fig 1.2c, which is concerned with interpreting or describing the processed 2D image data for subsequent tasks, e.g. to distinguish an object of interest from all objects in a 2D image. The final subprocess, shown in Fig 1.2d, is to reconstruct a 3D description close to the original object in the real world from this processed 2D information. The 3D description can then be represented and manipulated by computer graphics or other methods. If needed, this description can also be transferred to further processing for other purposes.

Applications of 3D imaging can be found in many disciplines, from industry to medicine. Industrial applications such as robot vision, industrial inspection and the interpretation of aerial images all need 3D imaging. In medical applications, for example, physicians are interested in 3D visualisation of the structural and morphological characteristics of organs, in surgical planning, and in radiation treatment planning.


[Fig 1.2b: raw 2D image data → processing algorithm → processed 2D image data (e.g. enhancement, edge extraction).
Fig 1.2c: processed 2D image data → analysing algorithm → 2D description or interpretation.
Fig 1.2d: 2D description or interpretation → reconstruction algorithm → 3D description or interpretation.]

Fig 1.2b 2D-2D image processing (low-level)
Fig 1.2c 2D-2D image processing (high-level)
Fig 1.2d 3D reconstruction

To study 3D reconstruction methods systematically, let us first pay attention to the 2D imaging modalities, because the methods depend very much on the type of 2D information required by the reconstruction processes. The 2D imaging modalities which map objects in the real world into an image world can be broadly classified into two main categories, Fig 1.2e: penetrating imaging modalities and non-penetrating imaging modalities.

[Fig 1.2e: 2D imaging is divided into penetrating modalities (further divided into projectional and cross-sectional) and non-penetrating modalities.]

Fig 1.2e 2D imaging modalities


1.2.1 Penetrating imaging modalities

This type of imaging uses penetrating energy to image objects of interest which are concealed or covered by other objects. Penetrating imaging modalities consist of two types:

Projection imaging

Each element of the 2D image is formed by the joint contribution of the properties of all elements of the spatial object which lie along a line whose direction and shape depend on the physical principle of the imaging modality used. For instance, in projection radiography, or conventional radiography, a pixel on the film is formed by a ray which is continuously attenuated as it penetrates all elements of the object along its path. Therefore, in this type of imaging each pixel on the 2D image represents the integral of the object's properties along the line of the penetrating energy used in imaging. Conventional X-ray imaging, among others, is included in this type.
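This line-integral image formation can be illustrated with a small sketch (our own, not from the thesis; the function name `project` and the toy volume are hypothetical). It computes a parallel-beam projection of a voxel grid of attenuation coefficients, with the recorded intensity following I = I0·exp(−∫μ dz):

```python
import numpy as np

def project(mu, dz=1.0, I0=1.0):
    """Project a 3D attenuation volume mu[x, y, z] onto a 2D image I[x, y].

    Each pixel is the line integral of mu along one ray parallel to z,
    attenuated according to the Beer-Lambert law.
    """
    line_integrals = mu.sum(axis=2) * dz   # integral of mu along each ray
    return I0 * np.exp(-line_integrals)    # unattenuated rays give I0

# A 4x4x4 volume containing a dense 2x2x2 block in one corner.
mu = np.zeros((4, 4, 4))
mu[:2, :2, :2] = 0.5

image = project(mu)
print(image[0, 0])   # ray through the block: exp(-1.0), about 0.368
print(image[3, 3])   # ray through empty space: 1.0
```

Rays that cross the dense block are attenuated, so the image records the accumulated properties along each path rather than the value at any single point.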

Cross-section imaging

In this type, the domain of the mapping is restricted with no change in the third dimension, so that the pixel values are also the true voxel values. The properties at any pixel in the 2D image can be directly related to a single point of the 3D object, i.e. an image can be associated with a plane which intersects the body, and every point on the image can therefore be associated with a point on this same plane. Computerised Tomography (CT), Magnetic Resonance Imaging (MRI), etc., generate images of this type.


1.2.2 Non-penetrating imaging modalities

In principle, this type of imaging is quite closely related to the projection imaging mentioned previously, but it collects information from visible surfaces only. Each pixel in the 2D image represents a mapping of the information of an object's surface in 3D space which is not obscured by any other object. This type of image can be generated by, for example, a digital camera or a range camera.

In this thesis, previous work on static 3D reconstruction is classified according to the 2D imaging modalities used in the reconstruction processes.

1.2.3 3D reconstruction from penetrating imaging modalities

Most of the methods of these types are applied to medical purposes. In the late 70s, 3D medical imaging began to reach routine clinical usage as a diagnostic aid for severe orthopaedic and neurological disorders of the spine. Many reconstruction techniques were developed to determine structure descriptions for the automatic manufacture of anatomic models and custom prostheses. Later, 3D was also extended to applications in general diagnosis, surgical planning and the modelling of anatomical objects.

3D reconstruction from projection imaging data

The volume-based approach to 3D reconstruction of an object from its 2D projection images is very common, especially in medical imaging. In general, reconstruction methods can be roughly classified into two categories:

• Series expansion methods, such as the Algebraic Reconstruction Techniques (ART) and its many variants [8] et al.


• Transform methods, which are usually based on a discretised version of the Radon inversion formulae [8], [10].

Based on these two categories, many methods have been proposed [11]-[18]. A basis-function method was presented by [12] et al. The method for twin-cone beam geometry reconstruction was introduced by [13]. In [14], the basis function is applied to this twin-cone beam geometry reconstruction. Some other aspects are the filtered-backprojection and backprojection-filtering methods of [17] and [18]. In [11], a true 3D cone-beam reconstruction (TTCR) algorithm for the direct volume-based approach to 3D reconstruction is developed for the complete sphere geometry, as an extension of the true 3D reconstruction (TTR) algorithm, which uses a parallel beam [19]-[23]. To improve the efficiency of this method, an implementation of the filtered-backprojection method on hypercube computers is presented in [9]. Implementations of cone-beam reconstruction methods are investigated and discussed in detail in [24].

For surface-based 3D reconstruction, many methods are similar to those categorised under non-penetrating imaging, which will be presented later in this section. Recent work applied to medical purposes is reported in [25]. This work is based on the method described in [26]. To reconstruct a single 3D boundary description of a bone, projection 2D X-ray images from two views (lateral and posteroanterior projections) are required.

3D reconstruction from cross-section imaging

In many applications, a 3D object must be reconstructed from a series of cross-section images. Major applications of this technique can be found in medical imaging [28]-[34] and in other fields such as seismic interpretation [35]. The main task of this type of reconstruction is to form surfaces or volumes by interpolation between the successive slices. There are two distinct major approaches to this reconstruction method.


In volume-based approaches, generally, grey-scene interpolation methods are employed to recover information in the gaps between the cross-section images. All voxels between two consecutive cross-section images are determined by interpolating between the 2D pixel information in the cross-section images. The published grey-scene interpolation methods [38]-[41] all estimate the value to be associated with a new voxel v by linearly interpolating the values associated with the voxels in the given cross-section images. These methods were improved by carefully selecting corresponding points involved in the interpolation process, as presented in [42]. The reconstructed volumes can then be visualised directly from the voxel representation (see [58]); an alternative is to first reconstruct a geometric representation of the surfaces and then to display them with conventional computer graphics techniques [71]. Another approach [36] is to define an object of interest in the 2D cross-section images before performing interpolation only on the object of interest. This method is quite closely related to the surface-based approaches previously mentioned.
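A minimal sketch of the linear interpolation idea (our own, in the spirit of but not reproducing the algorithms of [38]-[41]; `interpolate_slices` is a hypothetical name): voxels in the gap between two consecutive cross-section images are estimated by weighting the two slices by their fractional distance.

```python
import numpy as np

def interpolate_slices(slice_a, slice_b, n_between):
    """Estimate n_between evenly spaced slices between slice_a and slice_b."""
    slices = []
    for k in range(1, n_between + 1):
        t = k / (n_between + 1)             # fractional position in the gap
        slices.append((1.0 - t) * slice_a + t * slice_b)
    return slices

a = np.zeros((4, 4))                        # one cross-section image
b = np.full((4, 4), 100.0)                  # the next cross-section image
mid = interpolate_slices(a, b, 3)
print([float(s[0, 0]) for s in mid])        # [25.0, 50.0, 75.0]
```

Each new voxel value depends only on the two pixels directly above and below it, which is why the improvement in [42], selecting better corresponding points, matters for tilted or shifting structures.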

A new technique, called geometric tomography (GT), to process tomographic projections in order to reconstruct the external and internal boundaries of objects is presented in [69]. This method directly reconstructs the boundaries of the objects from a set of 1D or 2D projections, without using an intermediate pixel or voxel representation of the objects in object space. The main concept is to detect the edges and to trace the external or internal boundaries of the object directly in the original set of projections (the sinogram).

1.2.4 3D reconstruction from non-penetrating imaging modalities

This approach to reconstruction has many major applications in computer vision and robotics. Recent work in computer vision gave new insight into the geometrical problem of reconstructing an object from its projection images. These researchers based their work on a continuous representation of the objects among a set of projections. Giblin and Weiss presented some interesting mathematical properties of the projections of objects in [74], and demonstrated how to directly compute the Gauss and mean curvatures of the surfaces. Vaillant and Faugeras [68] studied how to distinguish an extreme contour from a fixed surface feature among a set of edges extracted from images. According to their results, one can further fuse multiple measurements of a fixed feature on a surface, or define the differential properties of a shape from its contour. The main idea is simply to check whether the 3D positions of an edge estimated from several views are coincident when at least three views are available.

Work on obtaining 3D models from 2D image data is also presented in [44]. The method used range data to obtain a polyhedral description of a scene. Plane equations for determining the polyhedron's faces were also estimated by using light patterns. Later, [45] described a new approach for reconstructing boundary representations of objects by using light striping and generalised cylinders. Range data was used in another scheme, described in [46], based on a single view. This scheme can extract simple blocks from a 2D scene. Methods to obtain solid models from occluding contours, or silhouettes, from multiple projection angles were mentioned in [47] and also in [48], [49] et al. Bolles and Baker [72] tried to reconstruct surfaces from a set of different camera views by using linear camera motion. This method was extended by Marimont [73] to arbitrary motion.

In [50], a matching process between different views was introduced to construct a single surface describing the object. Methods for constructing boundary representations of objects directly from multiple images taken at different projection angles were presented in [43]. These methods are applicable to both range and intensity data. The principle used is not much different from those described in [51] and [25]. The 3D object is reconstructed by projecting a bounded area or contour within each 2D image along the projecting vector. The mutual intersection of these projected areas or contours generates an approximated 3D description of the object. In [67], this 3D description of the object from the mutual intersection is converted into a 3D flexible surface representation. This surface representation is then refined based on the texture information of the object surface. Recent work on reconstructing 3D models from sequences of contours was done by Zheng [70]. In this method, a continuous sequence of images is taken as an object rotates. A smooth convex shape can then be estimated directly and instantaneously from its contour and the first derivative of the contour's movement.
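The mutual-intersection principle can be sketched in two dimensions (our own toy example, not the algorithm of [43] or [51]): each 1D silhouette is swept back along its viewing direction, and the swept regions are intersected, giving a visual-hull style over-approximation of the object.

```python
import numpy as np

# The unknown object on an 8x8 grid: a rectangle plus one isolated cell.
grid = np.zeros((8, 8), dtype=bool)
grid[2:5, 3:7] = True
grid[6, 1] = True

# Two orthogonal "projection images": silhouettes along rows and columns.
sil_cols = grid.any(axis=0)                 # view looking down the rows
sil_rows = grid.any(axis=1)                 # view looking across the columns

# Back-project each silhouette and intersect the swept regions.
hull = np.outer(sil_rows, sil_cols)

print(int(grid.sum()), int(hull.sum()))     # 13 object cells, 20 hull cells
# The hull always encloses the object, but only approximates its shape;
# concavities and isolated parts inflate the estimate.
```

The over-approximation shown here is exactly why contour-based methods need either many views or a refinement stage, such as the flexible surface fitting of [67].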

1.3 X-ray based imaging

X-rays have been widely used in both medical and industrial applications ever since their discovery by Wilhelm Roentgen in 1895. In medical applications, X-rays have been one of the most useful medical imaging techniques for diagnosis, despite the introduction of other imaging modalities which employ other physical phenomena, e.g. magnetic resonance imaging (MRI), ultrasonic imaging, etc. In the United Kingdom alone, every year there are approximately 644 medical and dental radiographic examinations per 1000 population [7] et al.

Conventional X-ray imaging is a projection imaging which creates a 2D projection of the 3D distribution of the X-ray attenuating properties of the volume of interest. X-rays are emitted by an approximate point source. The image is created by the interaction of X-ray photons with a photon detector, e.g. film, an intensifying screen or a fluorescent screen. The photons penetrate a volume of interest which may be represented as a set of voxels, each with an approximately constant absorbing power or attenuating property. The unabsorbed photons, i.e. the residual flux, are recorded by the detector. These photons can either be principal photons, which have penetrated the volume of interest without interacting, or scattered photons, which result from an interaction in the volume of interest. The scattered photons usually will be deflected from their original direction and convey little useful information. The principal photons propagate along straight lines and do carry useful information. The volume of interest can be stated as a density (absorbing power) function defined in 3D. The process of creating the X-ray image can be presented in mathematical form [7] by Eq 1.3a.

I(x,y) dx dy = N ε(E₀) E₀ exp(−∫ μ(x,y,z) dz) dx dy + ∫∫ ε(E,θ) E S(x,y,E,Ω) dΩ dE dx dy        (1.3a)

where I(x,y) dx dy is the energy absorbed in area dx dy of the detector, the line integral is over the volume of interest along the path of the principal photons reaching the point (x,y), and μ(x,y,z) is the linear attenuation coefficient. The scatter distribution function S is defined so that S(x,y,E,Ω) dE dΩ dx dy gives the number of scattered photons in the energy range from E to E + dE and the solid angle range from Ω to Ω + dΩ which pass through area dx dy of the detector. The energy absorption efficiency ε of the receptor is a function of both the photon energy and the angle θ between the photon direction and the z axis. The effects of an anti-scatter device can easily be added to this equation if necessary.

Eq 1.3a determines the manner in which the density function, or linear attenuation coefficient, in 3D is transformed into an optical density or grey scale function defined on the 2D image. Contrast curves on the 2D image identify boundaries between regions of different 2D optical densities, resulting from the discontinuities of the 3D density function at the physical boundaries of smooth surfaces in the volume of interest. The combination of these boundaries is called the contrast surface [52] et al. In a conventional method the users have to extract features like contrast curves from the 2D image to be able to understand the 3D physical surface.

It is possible to obtain a 2D slice of the 3D distribution of attenuating properties using the techniques of either classical tomography or computed tomography [7] et al. However, in this thesis we are concerned solely with projection imaging.


In medical applications, all diagnostic X-rays are classified as low energy (mainly 20-100 keV) [53] et al. As mentioned earlier, the interaction between the X-rays and the volume of interest produces principal photons and scattered photons. The scattered photons are in fact the noise in normal diagnostic radiology, because they degrade the contrast in the 2D image. In general terms, the effects of X-ray voltage on the interaction of the X-rays and the volume of interest can be summarised as follows:

• a lower voltage will produce more contrast between two volumes of interest with different attenuation coefficients, e.g. more contrast between soft tissue and bone, but the X-rays are less penetrating;
• a higher voltage will make the X-rays more penetrating, but may cause more problems due to scattered photons.

To visualise the 2D image, when the image does not change rapidly, photographic film is used extensively. Film has the advantage that it provides a permanent record. The contrast of the image can often be increased by bringing an intensifying screen, [7], [53] et al., into contact with the film emulsion.

In certain radiological procedures, such as when a catheter is threaded through the blood vessels into the heart, a direct observation of the image is necessary. This was originally achieved by directly watching the faint image on a fluorescent screen. This direct observation is seldom used nowadays, for it required a high radiation intensity and exposure over extended time intervals. Where a true real-time presentation of the image is necessary, the radiation intensity, and therefore the radiation dose, can be decreased substantially by the use of an image intensifier.

Over the last decade, digital radiographic imaging technology has played a significant role in diagnostic techniques. Digital X-ray imaging has introduced improvements in image quality and makes more effective use of radiation. Digital X-ray imaging also has advantages in processing, communication and archival storage of the image. There are a large number of digital X-ray imaging systems, such as digital fluoroscopy and photo-stimulable phosphor computed radiography [54] et al.

X-ray imaging uses ionising radiation, which affects living cells: e.g. cells are directly killed or are prevented from functioning properly by an over-dose of radiation. The International Commission on Radiological Protection recommends a system of dose limitation, the main purpose of which is to ensure the proper use of ionising radiation. Details of the system of dose limitation can be found in [53] et al.

1.4 Motivation for this research

1.4.1 Discussion of the previous approaches to 3D reconstruction

Two aspects are discussed in this section:
• surface-based reconstruction and volume-based reconstruction
• 3D reconstruction from projection images and 3D reconstruction from cross-section images

Surface-based reconstruction and volume-based reconstruction

From the object-oriented point of view, we note that the first stage in the development of human perception is the perception of the volume of interest as objects described by their shapes or surfaces [55]. If a volume of interest has a homogeneous property, we will interpret it as one object bounded by its surface. In analysing the complex shapes of natural objects, one of the most important problems is how to describe efficiently the shape or surface of the object [56], [57]. In surface-based 3D reconstruction, objects' surfaces are explicitly defined in the process of reconstruction, but in volume-based 3D reconstruction they are not explicitly computed. This is a disadvantage of the volume-based approach from this point of view. Volumetric data can mostly be used only in visualisation, not in quantitative measurement, 3D physical modelling, etc. To overcome this disadvantage, many schemes have been proposed to derive the surfaces of structures from volumetric data [58], [59]. However, these methods also have limitations, mainly in the coherency and completeness of the surfaces reconstructed. Even where the surface completeness problem has been solved, as in [57], the complexity of the method is still significant. The surface formations have to be extracted in 3D space, while in surface-based reconstruction the boundaries of objects are extracted and defined in 2D space prior to the reconstruction process.

From the computational and data structure viewpoint, the main problem that volume-based 3D reconstruction algorithms present is the long processing time required. This processing time is associated with the huge amount of data to be processed or stored by the algorithms. For example, as stated in [9], 3D reconstruction of the Chaperonin started from 258 2D projection images. The dimensions of the projection images were 32 by 32 pixels, while the reconstruction produced a volume of 32³ voxels. If we need higher resolution, 1000 by 1000 pixel projection images may be required, while the reconstructed volume becomes 1000³ voxels. Advances in computer architecture are especially needed to overcome this problem, as introduced in [9] et al. However, it still loses the sense of the object. The data representing the surface, in surface-based reconstruction, has far fewer elements and typically requires an order of magnitude less processing time and storage than that used in volume-based reconstruction. Moreover, because a volume approach is independent of the shape of the volume of interest, we have to pay the same price for reconstructing simple objects as for reconstructing complex objects in the volume.
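A back-of-envelope comparison makes the storage gap concrete (our own arithmetic, not figures from [9]; one byte per voxel and three 4-byte floats per surface point are assumed):

```python
def volume_bytes(n):
    """Storage for an n x n x n voxel volume at one byte per voxel."""
    return n ** 3

def surface_bytes(n_points):
    """Storage for surface points, each an (x, y, z) triple of 4-byte floats."""
    return n_points * 3 * 4

print(volume_bytes(32))        # 32768 bytes for the 32^3 example above
print(volume_bytes(1000))      # 1000000000 bytes (~1 GB) at 1000^3 voxels
print(surface_bytes(54214))    # 650568 bytes for 54,214 surface points
```

Even the very large surface model of the hand bones quoted later in this chapter fits in under a megabyte, while the high-resolution volume costs about a gigabyte, cubing with resolution rather than roughly squaring.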

From the point of view of 3D display and manipulation methodologies, the volumes of interest reconstructed by surface-based 3D reconstruction methods provide the most efficient data for display and manipulation by most existing 3D display methods [61], [63]. In addition, for surface-based reconstructed objects, typically much less computational work needs to be done by a display system.

Each of the two approaches to 3D reconstruction has advantages and limitations. The choice of any particular type depends mainly on factors particular to the intended application. According to the discussion above, we can conclude that surface-based 3D reconstruction is more suitable in terms of the sense of object, computational load, data storage and compatibility with display and manipulation methods.

3D reconstruction from projection images and 3D reconstruction from cross-section images

According to the discussion in the previous section, we will consider only surface-based methods.

Cross-section images

In much of the aforementioned work, quite a large number of 2D cross-section images is required in the reconstruction processes. For example, [57] shows that, for reconstructing a pelvis, 105 CT cross-sections are required at 2 mm intervals to produce 468,212 surface triangular faces, and in [61], 724 CT cross-sections are needed to reconstruct the bones of a hand with 54,214 surface points. Another difficulty that this approach suffers is how to select vertices on the surface properly for complex objects [62]. Moreover, 3D reconstruction from cross-sections is dependent on the size and shape of the object of interest. The size and shape of the object dictate the intervals between cross-sections and the number of cross-sections required. In many cases prior knowledge of the object is necessary; otherwise we cannot define an appropriate set of cross-sections that avoids the loss of relevant 3D information.


Projection images

In 3D reconstruction from projections, most of the work has been done using non-penetrating imaging modalities. The main problem in existing methods is the completeness of the surface of the reconstructed object. Many methods, e.g. [60] et al., fail to produce closed solid objects. Whether for visualisation and analysis or for later manipulation, it is vital to have proper solid objects rather than disembodied free-floating faces. Although this problem is elucidated in [43], there are still other limitations in the method presented: only polyhedra can be reconstructed, severe errors will occur when the algorithm is applied to curved objects, and the effects on the 3D reconstruction of a set of projection angles are not well considered.

Liedtke's approach [67] produces a reasonably good result using about 12 photographic images, but the object to be reconstructed must be roughly convex. The method does not directly reconstruct an object's surface; it reconstructs the object's bounding volume first and then fits a flexible surface model to this volume. Zheng [70] reconstructed 3D models from sequences of contours. There are still some disadvantages: a large number of photographic images, e.g. 200-300 images, is required, and surfaces that do not satisfy the conditions of smoothness and visibility still cannot be reconstructed.

The methods discussed so far used only non-penetrating imaging modalities to acquire the images. All these approaches suffer the same problem: the occlusion of objects by other objects (or by themselves, for concave objects) prevents us from reconstructing some parts of the scene. These problems disappear if we deal with images from penetrating imaging modalities. Thirion [69] claimed to be the first to have systematically studied 3D reconstruction from projection images applied to X-ray data, i.e. images from penetrating imaging modalities. But in that paper, only objects' boundaries in 2D are reconstructed; the work on 3D surface reconstruction was not completed and was not fully reported in the paper. The work presented in [25] can only represent very rough occluding surfaces without any detail.

1.4.2 Motivation

According to the discussion of previous work, there are many disadvantages and limitations in the existing methods. This project is motivated by these problems. We do not intend to achieve the rather impossible objective of developing a 3D reconstruction method perfect in all aspects.

Therefore our objective is:

1. To develop an object-oriented 3D reconstruction method: The reason why we need this kind of 3D reconstruction is presented in the discussion in the previous section. We wanted to develop a method that reconstructs and manipulates each object separately. Using penetrating images gives us information about overlapping objects in the volume of interest, unlike when we use non-penetrating images.

2. To develop a method whose computational complexity, computational cost and data storage automatically adapt to the complexity of the surface of an object: This is to achieve an optimum 3D reconstruction. The existing methods do not seem to meet this demand. For example, in 3D reconstruction using cross-section images, the user has to specify the intervals between cross-sections and the number of cross-sections required, depending on the complexity, shape and size of the object. The existing methods cannot automatically define an appropriate set of cross-sections to avoid the loss of important 3D information between slices. These approaches do not reconstruct the surface such that the surface point distribution automatically and optimally adapts to the size, shape and complexity of an object's surface.


3. To develop a method that produces an accurate and complete description of an object suitable for display and manipulation by standard techniques: Some methods, such as the ones described in [67], [43] and [70], can produce complete solid objects, but the accuracy of the reconstructed objects is still very limited. These methods cannot easily be applied to X-ray images, which contain not only the object's occluding contours but all the object's contours.

4. To develop a method that does not require any prior knowledge of the object to be reconstructed: We want to reconstruct an unknown object and make the method as automatic as possible. The method should be general enough to cope with any kind of object without any prior knowledge.

5. To reduce the number of 2D images required: As previously mentioned, the number of 2D images required by some methods, e.g. those reported in [57] and [70], is considered to be too large. Therefore, we want to develop a method that can provide a good result but requires a small number of 2D images, e.g. 10 images. The number of 2D images required in X-ray imaging is crucial because X-ray imaging uses ionising radiation that harms living cells.

To achieve this objective, we have developed a new method, called Object-Based 3D X-ray Imaging, which is the topic of this thesis. The method is used to deduce an object's surface from its penetrating projection images, e.g. X-ray images. A discussion of how well we achieve this objective using this new method is presented in Chapter 9.

1.5 The scope of this research

The whole process of 3D imaging is shown in Fig 1.2a. This research project focuses mainly on the 3D reconstruction algorithm, which is a part of process B, digital processing. We assume that the 2D-2D image processing (low-level) algorithms, Fig 1.2b, such as image enhancement and edge extraction, can be performed on relevant images using existing methods. The 2D images used in this project are conventional X-ray images. Some analysis algorithms, e.g. 2D-2D image processing (high-level) in Fig 1.2c, to define all contrast curves in each X-ray image of a volume of interest, can also be completed by existing methods.

The scope of this research is stated in Fig 1.5a. We started with the analysis algorithm (high-level) to match and identify all contrast curves, in all X-ray images, for each object. The full 3D description of each object is reconstructed from this set of matched contrast curves. Both parallel projection and cone-beam projection are considered. The B-rep description of these reconstructed objects is in a standard form ready for any further desired processing, for display and manipulation, by any standard 3D graphics program. The techniques for display and manipulation, e.g. shading or rendering, can be found in many references, [64], [65] et al., and can be performed by any standard commercial 3D graphics software; they are beyond the boundary of this research, but will be mentioned in Chapter 7.

The test objects used in this project consist of both real physical objects and mathematically defined objects.

1.6 Organisation of this thesis

This thesis is roughly divided into two parts. The first part explains the general concept of Object-Based 3D X-ray Imaging. The second part describes a practical implementation of the concept described in the first part.


Part 1: CONCEPT

Chapter 2: Concept of Object-Based 3D X-ray Imaging

This chapter describes the general concept of the project. No details or algorithms are presented in this chapter. It gives the reader an overview of the project as necessary background for the detail and refinements in the next part.

Fig 1.5a The scope of this research: processed 2D image data (all contrast curves in all projection planes) → analysis algorithm → 2D description or interpretation (set of matched contrast curves for each object) → reconstruction algorithm → 3D description or interpretation (B-rep of each object). The analysis and reconstruction algorithms are within the scope of this research.

Part 2: IMPLEMENTATION

The presentation of this part is matched to the basic flow of operations in the project as illustrated in Fig 1.6a. A chapter is devoted to the relevant blocks in the figure. The details, from a practical point of view, pertaining to each block will be presented,

and some interesting properties of the concept described in Chapter 2 will subsequently be explored. In each chapter, a diagram showing the flow of operations is presented. We wrote most of the main computer programs used in this project. Only a small number of the programs used were written by other researchers or were commercial programs. The programs which were not written by us are marked clearly in every diagram. The computer systems employed in this project are explained in Appendix E. The chapters to be presented in this thesis will now be outlined.

Chapter 3: Data acquisition

This chapter shows how the mathematically defined objects are determined, and presents details of the real physical objects used as phantoms in this project. The X-ray systems and the acquisition of the X-ray or projection images at a specific set of projection angles are also described. For the mathematically defined objects, the description for each object is given. For the real physical objects, the image processing steps, such as image enhancement and edge extraction, will be briefly described.

Chapter 4: Curve representation

All the contrast curves found in every image must be represented in an appropriate form. Some rules must be developed to identify which curve is to be used in the 3D reconstruction of which object. The method of determining the analytic representation of the contrast curves will be stated, but not in full detail because, as mentioned earlier in the scope of this research, it is beyond the boundary of this project. The method is based mainly on work by other researchers. The main purpose of this chapter is to explain in full detail how to identify and define a set of contrast curves acquired at different projection angles.


Chapter 5: Object surface curve reconstruction

This chapter and Chapter 6 form the main part of this thesis. In this chapter, the method of identifying an object and determining an object's surface points in 3D is explained. The surface points are found by pairing contrast curves in the set of images. Then a network of surface curves is derived from these surface points. Both parallel projection and cone-beam projection are considered.

Chapter 6: Object surface reconstruction

The objective of this chapter is to state the method of defining a solid description (B-rep) for each object derived from the set of surface points. The first stage is to generate a primary surface. The subsequent stage is to improve the representation by approximating the primary faces by secondary faces and then by sets of triangular faces.

Chapter 7: Manipulation and visualisation

The main object of this chapter is to describe some useful techniques in computer graphics applicable to this project. Most of them are well-known algorithms and are widely used in the display and manipulation of a solid object.

Chapter 8: Results and Chapter 9: Discussion and Conclusions

These chapters present the resulting 3D reconstructed objects, with discussion and conclusions, including further work needed in this project.


Fig 1.6a Flow of operations in this thesis. Relevant chapter numbers are in parentheses: mathematically defined objects / real physical objects (3) → X-ray system simulation / X-ray images (3) → Np projection images at different projection angles → 2D-2D image processing (low-level) (4) → Np processed images → contrast curve identification, 2D-2D image processing (high-level) (4) → set of all contrast curves in each projection image → object identification, analysing algorithm (5) → No found objects → object surface curve reconstruction for each object (5) → object surface reconstruction (6) → 3D display and manipulation (7).


1.7 References

[1] L. d. A. Moura Jr., A system for the reconstruction, handling and display of three dimensional medical structures, PhD dissertation, Imperial College of Science, Technology & Medicine, London, 1988.

[2] J. Rooney and P. Steadman ed., Principles of computer-aided design, Open University & Pitman, London, 1987.

[3] C. M. Hoffmann, Geometric & solid modelling, Morgan Kaufmann, California, 1989.

[4] P. J. Flynn and A. K. Jain, CAD-based computer vision: from CAD models to relational graphs, IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 13, No. 2, 1991, 114-132.

[5] M. Mantyla, Solid modelling, Computer Science Press, 1988.

[6] S. Prakoonwit, Object-oriented X-ray tomography, MSc dissertation, Imperial College of Science, Technology & Medicine, London, 1990.

[7] S. Webb ed., The physics of medical imaging, Adam Hilger, Bristol, 1990.

[8] G. T. Herman, Image reconstruction from projections, Academic Press, London, 1980.

[9] E. L. Zapata, L. Benavides, F. F. Rivera, J. D. Bruguera and J. M. Cerazo, Image reconstruction on hypercube computers, Proceedings of 3rd Symposium on the Frontiers of Massively Parallel Computation, College Park, MD, USA, Oct. 1990, 127-134.


[10] D. A. Heyner and W. K. Jenkins, The missing cone problem in computer tomography, in T. S. Huang ed., Advances in Computer Vision and Image Processing, JAI Press, London, 1984, 83-144.

[11] S. Z. Lee, J. B. Ra, S. K. Hilal and Z. H. Cho, True three-dimensional cone-beam reconstruction (TTCR) algorithm, IEEE Transactions on Medical Imaging, Vol. 8, Iss. 4, 1989, 304-312.

[12] M. D. Altschuler, G. T. Herman and A. Lent, Fully three-dimensional reconstruction from cone-beam sources, in Proceedings of Conference on Pattern Recognition and Image Processing, IEEE Computer Society, May 1978, 194-199.

[13] M. Schlindwein, Iterative three-dimensional reconstruction from twin cone-beam projections, IEEE Transactions on Nuclear Science, Vol. NS-25, 1974, 1135-1143.

[14] G. Kowalski, Multislice reconstruction from twin cone-beam scanning, IEEE Transactions on Nuclear Science, Vol. NS-26, No. 2, 1979, 2895-2903.

[15] L. A. Feldkamp, L. C. Davis and J. W. Kress, Practical cone-beam algorithm, J. Opt. Soc. Amer. A, Vol. 1, 1984, 612-619.

[16] S. Webb, J. Sutcliffe, L. Burkinshaw and A. Horsman, Tomographic reconstruction from experimentally obtained cone-beam projections, IEEE Trans. Med. Imaging, Vol. MI-6, Mar 1987, 67-73.

[17] R. V. Denton, B. Friedlander and A. J. Rockmore, Direct three dimensional image reconstruction from divergent rays, IEEE Trans. Nucl. Sci., Vol. NS-26, 1979, 4695-4703.


[18] F. C. Peyrin, The generalized back-projection theorem for cone-beam reconstruction, IEEE Trans. Nucl. Sci., Vol. NS-32, 1985, 1512-1519.

[19] O. Nalcioglu and Z. H. Cho, Reconstruction of 3-D objects from cone-beam projections, Proc. IEEE, Vol. 66, 1978, 1584-1585.

[20] J. B. Ra and Z. H. Cho, Generalized true 3-D reconstruction algorithm, Proc. IEEE, Vol. 69, 1981, 668-679.

[21] J. B. Ra, C. B. Lim, Z. H. Cho, S. K. Hilal and J. Correl, A true 3-D reconstruction algorithm for the spherical positron emission tomography, Phys. Med. Biol., Vol. 27, 1982, 37-50.

[22] Z. H. Cho, J. B. Ra and S. K. Hilal, True 3-D reconstruction (TTR) - Application of algorithm toward full utilisation of oblique rays, IEEE Trans. Med. Imaging, Vol. MI-2, 1983, 6-18.

[23] Z. H. Cho, Computerised tomography, in Encyclopaedia of Physical Science and Technology, Vol. 3, New York: Academic, 1987, 507-544.

[24] B. D. Smith and J. Chen, Implementation, investigation and improvement of a novel cone-beam reconstruction method, IEEE Trans. on Med. Imaging, Vol. 11, No. 2, 1992, 260-266.

[25] L. Caponetti and M. Fanelli, Computer-aided simulation for bone surgery, IEEE Computer Graphics & Applications, November 1993, 86-92.

[26] W. N. Martin and J. K. Aggarwal, Volumetric descriptions of objects from multiple views, IEEE Trans. PAMI, Vol. 5, No. 2, 1983, 150-158.


[27] W. C. Lin, C. C. Liang and C. T. Chen, Dynamic elastic interpolation for 3-D medical image reconstruction from serial cross sections, IEEE Trans. Med. Imaging, Vol. 7, No. 3, 1988, 225-232.

[28] H. H. Ip, Automatic detection and reconstruction of three-dimensional objects using a cellular array processor, PhD dissertation, University College, London, 1983.

[29] H. N. Christeansen and T. W. Sederberg, Conversion of complex contour line definitions into polygonal element mosaics, Comput. Graphics, Vol. 2, Aug 1987, 187-192.

[30] P. G. Selfridge, Automatic 3D reconstruction from serial section electron micrographs, in Proc. SPIE Conf. on Appl. Artificial Intelligence III, Orlando, FL., Apr. 1986, 521-528.

[31] I. Sobel, C. Levinthal and E. R. Macagno, Special techniques for the automatic computer reconstruction of neuronal structures, Ann. Rev. Biophys. Bioeng., Vol. 9, 1980, 347-362.

[32] E. R. Macagno, C. Levinthal and I. Sobel, Three-dimensional computer reconstruction of neurones and neuronal assemblies, Ann. Rev. Bioeng., Vol. 8, 1979, 323-351.

[33] E. R. Macagno, V. Lopresti and C. Levinthal, Structure and development of neuronal connections in isogenic organisms: Variation and similarities in the optic system of Daphnia magna, Proc. Nat. Acad. Sci., Vol. 7, Jan 1973, 57-61.

[34] P. R. Burton, Computer-assisted three-dimensional reconstruction of the ultrastructure of the frog olfactory axon, Norelco Rep., Vol. 32, May 1985, 1-10.


[35] L. R. Denham, Seismic interpretation, Proc. IEEE, Oct 1984, 1255-1265.

[36] S. P. Raya and J. K. Udupa, Shape-based interpolation of multidimensional objects, IEEE Trans. Med. Imaging, Vol. 9, No. 1, March 1990, 32-42.

[37] S. S. Trivedi, G. T. Herman and J. K. Udupa, Segmentation into three classes using gradients, IEEE Trans. Med. Imaging, Vol. MI-5, 1986, 116-119.

[38] J. K. Udupa, G. T. Herman, P. S. Margasahayam, L. S. Chen and C. R. Meyer, 3D98: a turnkey system for the display and analysis of 3D medical objects, SPIE Proc., Vol. 671, 1986, 154-168.

[39] S. M. Goldwasser, R. A. Reynolds, D. Talton and E. Walsh, Real time display and manipulation of 3D CT, PET and NMR data, SPIE Proc., Vol. 671, 1986, 139-149.

[40] S. L. Wood, Surface definition of 3D objects from CT images, in Proc. 8th Ann. Conf. IEEE Eng. Med. Biol. Soc., Vol. 2, Dallas-Fort Worth, TX, 1986, 1118-1121.

[41] M. Levoy, Display of surfaces from volume data, IEEE Comput. Graphics Appl., May 1988, 29-37.

[42] C. C. Liang, W. C. Lin and C. T. Chen, Intensity interpolation for reconstructing 3-D medical images from serial cross-sections, in Proc. Ann. Inter. Conf. IEEE Eng. Med. and Biol. Soc., Vol. 3, Nov 1988, 1389-1390.

[43] J. R. Sternstrom and C. I. Connolly, Constructing object models from images, International Journal of Computer Vision, Vol. 9, No. 3, 1992, 185-212.


[44] Y. Shirai and M. Suwa, Recognition of polyhedrons with a range finder, Proc. 2nd Intern. Joint Conf. Artif. Intell., 1971, 80-87.

[45] G. J. Agin and T. O. Binford, Computer description of curved objects, 3rd Intern. Joint Conf. Artif. Intell., Stanford, CA., 1973, 629-635.

[46] R. J. Popplestone, C. M. Brown, A. P. Ambler and G. F. Crawford, Forming models of plane-and-cylinder faceted bodies from light stripes, Proc. 4th Intern. Joint Conf. Artif. Intell., September 1975, 664-668.

[47] B. G. Baumgart, A polyhedral representation for computer vision, Proc. AFIPS Nat. Comput. Conf., 1975, 44: 589-596.

[48] C. I. Connolly, Cumulative generation of octree models from range data, IEEE Proc. 1984 Intern. Conf. Robot. Autom., March 1984, 25.

[49] C. H. Chien and J. K. Aggarwal, A volume/surface octree representation, Proc. 7th Intern. Conf. Patt. Recog., Montreal, August 1984, 2: 817-820.

[50] M. Potmesil, Generating models of solid objects by matching 3D surface segments, Proc. 8th Intern. Joint Conf. Artif. Intell., Karlsruhe, 1983, 1089-1093.

[51] P. Srinivasan, P. Liang and S. Hackwood, Computational geometric methods in volumetric intersection for 3D reconstruction, IEEE Proc. Intern. Conf. Robot. Autom., May 1989, Vol. 1, 190-195.

[52] Y. L. Kergosien, Generic sign systems in medical imaging, IEEE Computer Graphics & Applications, Sept. 1991, 46-65.


[53] E. G. A. Aird, Basic Physics for Medical Imaging, Heinemann, Oxford, 1988.

[54] A. R. Cowen, Digital X-ray imaging, IEE Colloquium on Medical Image Processing and Analysis, Mar. 1992, 8/1-3.

[55] M. D. Vernon, The Psychology of Perception, Penguin, Middlesex, 1965.

[56] K. Yamamoto, Future directions in computer vision and image understanding: ETL perspectives, IEEE Proc. 10th Intern. Conf. Patt. Recog., NJ, USA, 1990, Vol. 1, 32-37.

[57] A. Wallin, Constructing isosurfaces from CT data, IEEE Computer Graphics & Applications, Nov. 1991, 28-33.

[58] W. E. Lorensen and H. E. Cline, Marching cubes: a high resolution 3D surface construction algorithm, Computer Graphics (Proc. Siggraph), Vol. 21, No. 4, Aug 1987, 163-169.

[59] G. Wyvill, C. McPheeters and B. Wyvill, Data structure for soft objects, The Visual Computer, Vol. 2, No. 4, 1986, 227-234.

[60] G. J. Agin and T. O. Binford, Computer description of curved objects, 3rd Intern. Joint Conf. Artif. Intell., Stanford, August 1973, 629-635.

[61] O. S. Odesanya, W. N. Waggenspack Jr. and D. E. Thompson, Construction of biological surface models from cross-sections, IEEE Transactions on Biomedical Engineering, Vol. 40, No. 4, April 1993, 329-334.

[62] J. L. Coatrieux and C. Toumoulin, Future trends in 3D medical imaging, IEEE Engineering in Medicine and Biology, Dec 1990, 33-39.


[63] C. Smets and M. DeGroof, Interpretation of 3D medical scenes, Machine Vision for Three-Dimensional Scenes, Academic Press, 1990, 163-193.

[64] D. F. Rogers and J. A. Adams, Mathematical elements for computer graphics, 2nd ed., McGraw-Hill, 1989.

[65] D. F. Rogers, Procedural elements for computer graphics, McGraw-Hill, 1985.

[66] Philips Medical Systems, PCR/PACS: a radiology revolution, Documentation number 4535 983 00987, Shelton, CT, USA, December 1991.

[67] C. E. Liedtke, Shape adaptation for modelling of 3D objects in natural scenes, Proc. 1991 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Maui, HI, USA, June 1991, 704-705.

[68] R. Vaillant and O. D. Faugeras, Using extremal boundaries for 3D object modeling, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 14, No. 2, February 1992, 157-173.

[69] J. P. Thirion, Segmentation of tomographic data without image reconstruction, IEEE Transactions on Medical Imaging, Vol. 11, No. 1, March 1992, 102-110.

[70] J. Y. Zheng, Acquiring 3D models from sequences of contours, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 16, No. 2, February 1994, 163-178.

[71] J. D. Boissonnat, Shape reconstruction from planar cross sections, Computer Vision, Graphics and Image Processing, Vol. 44, 1988, 1-29.


[72] R. C. Bolles and H. H. Baker, Epipolar-plane image analysis: a technique for analysing motion sequences, Proc. 3rd Workshop Comput. Vision: Representation Contr., Bellaire, MI, October 1985, 168-178.

[73] D. H. Marimont, Projective duality and the analysis of image sequences, Proc. IEEE 1st Int. Conf. Comput. Vision, 1987, 7-14.

[74] P. Giblin, Reconstruction of surfaces from profiles, Proc. IEEE 1st Int. Conf. Comput. Vision, 1987, 136-144.

CHAPTER 2: CONCEPT OF OBJECT-BASED 3D X-RAY IMAGING

2.1 Introduction

The central point of this chapter is to describe the whole concept of this thesis in general terms. No algebraic or algorithmic details are described here; they will be explored in the second part of this thesis: implementation. We can roughly divide this chapter into two themes. Studies of basic relevant principles are presented as the first theme; descriptions of how we derive the concept of Object-Based 3D X-ray Imaging from these principles form the second theme. The project's aim is to deduce an object's surface from its X-ray projections.

This thesis is application oriented; mathematics is used not as the main issue but as a tool to reach our aim. We approach the problem in a practical way. Therefore we do not intend to prove the assumptions or statements analytically, but rather put emphasis on showing how these assumptions and statements can be successfully implemented in the project.


However, there are some matters with regard to the form in which the material will be presented in this chapter and throughout the thesis. They are the following:

• We use descriptive geometry rather than analysis wherever possible. This means that, in many cases, figures are used together with some lines of explanatory text that attempt to bring the reader into a state in which a statement or an assumption presented is accepted as true, using the figure to shape the reader's thoughts.
• Wherever necessary, generic examples, that is, essentially representative examples, are used to describe a concept. If possible we try to use the examples to cover all conditions that might appear in the real situations of interest. This is the best way to explain some concepts clearly while minimising the risk of the reader forming a mistaken general idea. For example, many people may think of the occlusion boundary or the visible boundary of an object as a planar curve if a sphere is always used as the paradigmatic case to explain this boundary. This assertion is in fact false for almost any other surface.
• Formalised descriptions and definitions of the concept using mathematical language are also presented simultaneously with verbal and pictorial explanation. This is a formal way to present an idea precisely and concisely for use in the rest of the thesis.
• There exists no formal agreement on some terminologies and notations. Therefore we need to invent some of these for our specific purpose. All non-standard terminology and notations are defined and described before being referred to in the thesis.

To limit the scope of the investigation and the work to follow, the surfaces must be considered only in the space dimension, i.e. in 3D Euclidean space. The time dimension will accordingly be excluded. Non-static (time-varying) or moving surfaces can still be considered under this limitation. By excluding the time dimension, we imply that the surfaces will be static and rigid at the time of observation. A rigid and static


surface in 3D can be described by two properties: its topological property and its geometrical property.

2.2 Topological properties of an object

In this section we present the application to our project of some existing basic principles in topology. All possible shapes of objects can be classified by the properties of their surfaces to suit our reconstruction method, which is based on a surface approach. It is necessary to study and present some properties of the surfaces of interest in order to set up constraints for the reconstruction algorithms. Our reconstruction method, described in the next section, is a surface-based approach. As in the motivation of the research stated in Chapter 1, we use a model to make the observation of the original surface easier in a computer. A model is a digital surface representing the original surface in the real world. This leads to the conclusion that an original surface, to be reconstructed, must satisfy the constraints required by the reconstruction method and the representation method.

The most suitable way to represent the surface reconstructed by our method is a B-rep. A boundary representation is valid if it defines the boundary of a reasonable solid object. According to the theory [1], [2] et al., we are only interested in objects which satisfy the validity criteria of a boundary model, which include the following conditions:

• Facets of the models do not intersect each other except at common vertices or edges.
• The boundaries of faces are simple polygons that do not intersect themselves.
• The set of faces of the boundary model closes, i.e. forms the complete skin of the solid with no missing part.

According to these conditions, only 2-manifold, closed and orientable surfaces can be represented by the boundary representation. Thus the surfaces of interest, called

objects, to be reconstructed by our method, must also be 2-manifold, closed and orientable surfaces.

Definition 2.2a A closed surface S is called an Object if S is 2-manifold and orientable.
NB All the details on the basic principles of Topology are presented in Appendix A.

For example, a sphere and a torus (a sphere with one genus) are objects. But a Möbius strip and a Klein bottle, which are not orientable and cannot be represented in the real world, are not objects. The conditions for being an object must be imposed at every step of the reconstruction algorithm. This will prevent the algorithm from generating a surface that is not an object in this sense.
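As an illustration only (not part of the thesis), the closed, 2-manifold, orientable condition can be checked combinatorially on a triangle-mesh B-rep: every edge must be shared by exactly two faces, and the two faces must traverse it in opposite directions. A minimal Python sketch, where the helper name `is_valid_object` is ours:

```python
from collections import Counter

def is_valid_object(faces):
    """Sketch of the B-rep validity conditions on a triangle mesh: every edge
    must be shared by exactly two faces (closed, 2-manifold), and the two
    faces must traverse it in opposite directions (orientable).  `faces` is a
    list of vertex-index triples, each listed counter-clockwise seen from
    outside the solid."""
    directed = Counter()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            directed[(u, v)] += 1
    # Each directed edge must occur once, and its reverse must occur exactly
    # once in some neighbouring face.
    return all(n == 1 and directed.get((v, u), 0) == 1
               for (u, v), n in directed.items())

# A tetrahedron with outward-oriented faces is a closed orientable 2-manifold:
tetra = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(is_valid_object(tetra))       # True
print(is_valid_object(tetra[:3]))   # False: a face is missing, surface is open
```

A production modeller would check more (e.g. that the faces around each vertex form a single fan), but this half-edge count already rejects open and non-orientable meshes.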

From the definition of an object we can conclude that an object is any surface that can be deformed topologically from a sphere with some number of genera. However, topological properties alone cannot give us a clear understanding of the shape of an object. Imagine an object described by its topological property as a sphere with no genus: this object can range from a normal sphere to a bone, as illustrated by Fig 2.2a. Another example is an object described as a sphere with one genus: this object also has a large variety of shapes; for instance, a torus and a coffee mug are both topologically equivalent to a sphere with one genus, see also Fig 2.2a. It is obvious that we need another property to describe an object of interest. Therefore the geometrical properties are also necessary in understanding the shape of the object. The geometrical properties are presented in detail in the next section.
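The genus mentioned above can be read directly off a valid B-rep through the Euler characteristic, V - E + F = 2 - 2g for a closed orientable surface. A small illustrative sketch (the function name `genus` and the toy meshes are ours, not the thesis's):

```python
def genus(vertices, faces):
    # Euler characteristic of a closed orientable surface: V - E + F = 2 - 2g,
    # so g = (2 - (V - E + F)) / 2.  Edges are counted as undirected pairs.
    edges = {frozenset(e) for f in faces
             for e in zip(f, f[1:] + f[:1])}
    chi = len(vertices) - len(edges) + len(faces)
    return (2 - chi) // 2

# Sphere-like tetrahedron: V=4, E=6, F=4, so genus 0.
tetra_faces = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(genus(range(4), tetra_faces))    # 0

# Minimal torus: a 3x3 grid of quads with wrap-around; V=9, E=18, F=9, genus 1.
v = lambda i, j: (i % 3) * 3 + (j % 3)
torus_faces = [(v(i, j), v(i + 1, j), v(i + 1, j + 1), v(i, j + 1))
               for i in range(3) for j in range(3)]
print(genus(range(9), torus_faces))    # 1
```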


Fig 2.2a Sphere and bone are topologically equivalent; torus and coffee mug are topologically equivalent.

2.3 Geometrical properties of an object

There are many criteria by which to classify the geometrical properties of an object. In fact, it is almost impossible, and certainly not worthwhile, to try to explain all these geometrical properties. Therefore we will deal only with some properties that are essential to the project. These properties are smoothness and convexity.

2.3.1 Smooth and non-smooth object

First let us consider a smooth object. In this project we define a smooth object as an object such that there is no point on the object's surface that has a radius of curvature equal to zero. If an object fails to satisfy this condition the object is non-smooth. A zero radius of curvature appears on the surface where there is a singular point or a sharp edge, as in the examples illustrated in Fig 2.3a.

Fig 2.3a Non-smooth objects: a singular point and a sharp edge.


2.3.2 Convex and non-convexobject

In this section we classify the objects into two classes: convex objects and non-convex objects. The concept of a tangential plane is used in the classification. A tangential plane is a plane situated such that all points on the object's surface lie completely on one side of that plane and there is at least one tangential point, or touching point, between that plane and the surface.

To describe this property, first we consider a smooth object. For a convex object, there will be no tangential plane that has more than one tangential point. If the object fails to satisfy this condition then the object is non-convex. Some examples are illustrated in Fig 2.3b.

Fig 2.3b (1) Convex object. (2) Non-convex object.

In the case of a non-smooth object the same criterion can be applied only if no sharp edge of the object is a straight line segment and there is no planar face on the object surface. Also, for some special smooth objects such as a torus, there is a tangential plane that has a curved line in common with the surface. If there is such an edge or face on the object surface then the criterion must be slightly changed as follows.

For a convex object, there will be no tangential plane that has more than one tangential point, or one straight tangential line segment, or one tangential continuous planar face. If the object fails to satisfy this condition then the object is non-convex.


For example, all Platonic solids are convex objects. A tangential plane can only touch either a vertex, an edge (straight line) or a face (planar face). In the case of the torus, there is a tangential plane that has a curved tangential line, not a straight one. Therefore a torus is a non-convex object.
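For a polyhedron, the tangential-plane criterion above can be tested directly: the solid is convex exactly when the plane of every face is itself a tangential plane, i.e. all vertices lie on one side of it. A hedged sketch (the function name, data layout and tolerance `eps` are our choices for illustration):

```python
def is_convex_polyhedron(vertices, faces, eps=1e-9):
    """Sketch of the tangential-plane test for a polyhedron: convex iff, for
    every face, all vertices lie on one side of that face's plane (points in
    the plane itself are allowed)."""
    for face in faces:
        p0, p1, p2 = (vertices[i] for i in face[:3])
        # Face normal from two edge vectors (cross product).
        u = [p1[k] - p0[k] for k in range(3)]
        w = [p2[k] - p0[k] for k in range(3)]
        n = [u[1] * w[2] - u[2] * w[1],
             u[2] * w[0] - u[0] * w[2],
             u[0] * w[1] - u[1] * w[0]]
        # Signed distances of all vertices from the face plane.
        d = [sum(n[k] * (q[k] - p0[k]) for k in range(3)) for q in vertices]
        if min(d) < -eps and max(d) > eps:   # vertices on both sides
            return False
    return True

cube_v = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
cube_f = [(0, 1, 3, 2), (4, 5, 7, 6), (0, 1, 5, 4),
          (2, 3, 7, 6), (0, 2, 6, 4), (1, 3, 7, 5)]
print(is_convex_polyhedron(cube_v, cube_f))   # True

# Pushing one corner into the interior makes the solid non-convex:
dented = cube_v[:7] + [(0.5, 0.5, 0.5)]
print(is_convex_polyhedron(dented, cube_f))   # False
```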

2.4 Projection of an object

In this section we study some basic characteristics of X-ray projections of an object. This kind of projection gives us information about both the surface and the volume, or density, of the object. Certainly, both surface and volume information are important. In many cases they are closely related: information on one helps to clarify information on the other. However, in this research, the focus is only on the surface information from X-ray images.

2.4.1 Projection

The X-ray based imaging technique maps a volume of interest in 3D onto a projection plane in 2D. The volume is a density function, or attenuation function, which is a piecewise constant function defined in 3D. The function is transformed into an optical density or grey scale function in 2D. The most important property of this projection is that the mapping is surjective: a point in 3D is mapped into 2D by surjection [4]. This means it is not a one-to-one mapping; the codomain being considered and the range can be the same, and many points in 3D may be mapped onto the same point in 2D. The mapping from 3D to 2D eliminates the depth information in the resulting 2D X-ray image. Different shapes in 3D may appear the same in 2D, and some properties of shapes in 3D may change when projected into 2D. To understand this information and how it corresponds to the actual object in 3D we rely on some characteristics of the mapping, discussed in detail in the next section.
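A one-line sketch makes the surjection concrete for parallel projection along the z-axis (an assumption chosen for illustration; cone-beam projection is many-to-one in the same way): the map simply discards depth, so distinct 3D points can share one image point.

```python
# Parallel projection along z: (x, y, z) -> (x, y) is surjective onto the
# projection plane but not injective, so depth information is lost.
def project(point3d):
    x, y, z = point3d
    return (x, y)

p, q = (3.0, 4.0, 0.5), (3.0, 4.0, 9.0)   # two points at different depths
print(project(p) == project(q))            # True: they share one image point
```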


2.4.2 Curves in X-ray imaging

To describe the X-ray projections of a 3D surface, only surface information is considered in this thesis. As described in section 1.3, a contrast surface of the volume of interest occurs at the discontinuities of the density function. For instance, if there is a piece of bone embedded in soft tissue then the surface of the bone is a contrast surface. The sets of contrast curves on a 2D projection plane appear at boundaries between regions of different optical densities resulting from the discontinuities of the density function in 2D. A tangent curve is the curve on a contrast surface where the surface is tangential to the X-rays. If there is a discontinuity on the surface where the radius of curvature is equal to zero, we call such a discontinuity a singular curve. It is crucial to realise that, because they are defined by the relative positions of an object and a centre of projection, tangent curves are not fixed to a surface as singular curves are, but slide over the surface as the centre of projection changes.
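For a smooth implicit surface f(x) = 0 under parallel projection with direction d, the tangent curve defined above is the locus where the surface normal (the gradient of f) is perpendicular to d. A small sketch for the unit sphere (our own example, not from the thesis), where this locus is the equator when viewing along the z-axis:

```python
import math

def on_tangent_curve(p, d, eps=1e-9):
    """For the unit sphere f = x^2 + y^2 + z^2 - 1, a surface point p lies on
    the tangent curve of a parallel projection with direction d when the
    surface normal (grad f) is perpendicular to d: grad f . d = 0."""
    grad = (2 * p[0], 2 * p[1], 2 * p[2])   # gradient of the sphere's f at p
    return abs(sum(g * c for g, c in zip(grad, d))) < eps

d = (0.0, 0.0, 1.0)                          # view along the z-axis
equator_pt = (math.cos(0.3), math.sin(0.3), 0.0)
print(on_tangent_curve(equator_pt, d))       # True: the equator is tangential
print(on_tangent_curve((0.0, 0.0, 1.0), d))  # False: the pole faces the rays
```

Changing `d` moves the whole locus over the sphere, which is exactly the "sliding" behaviour of tangent curves noted above; a singular curve, by contrast, is fixed geometry of the surface itself.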

Fig 2.4a is used to illustrate simple X-ray based imaging, where a contrast curve is the projection of a tangent curve on a contrast surface onto a projection plane. The case where a contrast curve is generated from both a singular curve (sharp edge) on the surface and a tangent curve is shown in Fig 2.4b.

Fig 2.4a Contrast curve from a tangent curve (surface, projection direction, tangent curve, contrast curve, projection plane).


Fig 2.4b Contrast curve from a tangent curve and a discontinuity (singular curve, i.e. sharp edge).

The characteristics of contrast curves can be explored both analytically and descriptively. In this thesis we do not intend to develop a complete theory to describe the characteristics of contrast curves analytically. We rather use a set of generic objects as examples to explain the crucial properties descriptively. This will make the reader familiar with the issue before we present the discussion and assumptions on these characteristics. Certainly, from the set of examples we cannot cover all cases of the contrast curves. However, we have tried to select the objects to be studied to be as varied as possible, to cover all the characteristics vital for the project.

The set of objects that we used to study the characteristics of contrast curves is as follows:

Convex objects:
• an ellipsoid (smooth object)
• a polyhedron: a tetrahedron (non-smooth object)

Non-convex objects:

objects with no genus (topologically equivalent to a sphere):
• furrow-shaped objects (smooth, non-smooth)
• dimple-shaped objects (smooth, non-smooth)
• bell-shaped objects (smooth, non-smooth)
• bone-shaped objects

objects with one genus:
• a torus.

And some descriptions of the non-standard shapes are as follows:

The furrow-shaped object: This may be called a kidney bean or banana surface. Although this surface is not convex, it contains no cavities.

The bell-shaped object: A strongly curved convex wart on a globally convex shape. It may be called a pear surface.

The dimple-shaped object: A concavity in a globally convex shape, which could be called a pot hole or an apple surface.

Together these examples exhaust much of the complexity we may face in practice. Understanding the examples will at least prepare us for exploring the concept of this project. The complete set of examples of the objects' surfaces with their contrast curves and occluding contours is shown in Appendix B. In this section only some examples from these objects are illustrated to clarify important characteristics; the rest may also be used as examples throughout this thesis.

From these examples, we can present some important (though not all) properties of the contrast curves of a single object in a projection direction as follows.

Contrast curves in 2D

• A contrast curve can be either a closed curve or not a closed curve.


• There must be at least one closed contrast curve.
• There must be one closed occluding curve, where the closed occluding curve is the outermost of the object's contrast curves; see the example from a bell-shaped object in Fig 2.4c.

Fig 2.4c Contrast curve and occluding contrast curve.

• A contrast curve can be classified into two classes:

Simple curve: a simple contrast curve is a curve that contains no singular point or branch.

Complex curve: a complex curve is a curve that is not a simple curve. It can be roughly divided into five categories:

Curve with singular points only: this kind of curve can be traced from the beginning to the end without encountering any junction; for example, see the occluding contrast curve in Fig 2.4c.

Curve with branches: there is at least one junction on the curve. Fig 2.4d illustrates some cases of branches from a bone-shaped object. A and B are called junction points. In this case we cannot trace the curve from the beginning to the end without repeating an old part.

Fig 2.4d Curves with branches and junction points A, B.


Curve with a swallowtail: this kind of curve occurs very often in the non-convex surface case. The shape of the curve is shown in Fig 2.4e. We can follow the curve through the crossing point from the beginning to the end without repeating an old part.

Fig 2.4e Curves with swallowtails and crossing point A.

Curve with a butterfly: this is the most complicated contrast curve. In this case we cannot trace every point on the curve from the beginning to the end without repeating an old part. There are both a crossing point and junction points on the curve, as shown in Fig 2.4f. This example is obtained from the projection of a dimple with a sharp edge in Appendix B.


Fig 2.4f Curve with a butterfly, crossing point B and junction points A, C.

2.4.3 Relations between the contrast curves in 2D and the surfaces in 3D

* A contrast curve can be generated from: a projection of a tangent curve, or


a projection of a singular curve, such as a sharp edge or a singular point (where a contrast curve becomes a contrast point). A singular curve always generates a contrast curve in any projection direction.

* One curve in 3D (either a tangent or a singular curve) can generate only one contrast curve in 2D, but one contrast curve can be generated from many curves in 3D. This follows from the basic principle of surjection.

* A singular point on a contrast curve may be generated either from a singular point on a curve in 3D or from a smooth curve with no singular point. Fig 2.4g illustrates examples of both cases. Singular point A is the projection of singular point B on the tangent curve in 3D. Singular points C and D on the contrast curve are not generated from any singular point in 3D.

Fig 2.4g Singularpoints.

* If two contrast curves are disconnected, these two curves are generated from disconnected curves in 3D. But a contrast curve can be generated either from one curve or from several disconnected curves in 3D.

* There must be at least one closed tangent curve in 3D corresponding to a closed contrast curve in 2D.

* An occluding contrast curve is generated from a tangent curve, a combination of tangent curves, or a combination of tangent and singular curves.

* A junction point on a contrast curve is likely to be on the 3D curve (tangent or discontinuity) that generates this contrast curve.

* A lips curve, e.g. contrast curve A in Fig 2.4e, in 2D is normally generated from a cavity or a convex wart on the surface.


2.5 Intuitive concept of Object-based 3D X-ray Imaging

In the last section we presented some important information on the relations between contrast curves and an object's surface. It should be noted that we first consider only one object; multiple objects will be dealt with at a later stage. The main theme of this section is to describe intuitively how the contrast curves in 2D can be used to deduce the corresponding object's surface in 3D, [5]-[7].

In this thesis, the contrast curve is the only 2D information that we are considering. As we know, a contrast curve can be a projection of either a tangent curve or a singular curve. The 3D surface must be deduced from these contrast curves only. Because the information we have is a set of curves, before we try to find the complete surface the most obvious way to approach this task is to deduce their corresponding curves in 3D. These 3D curves are certainly on the object's surface (if we assume that the object is homogeneous). To see how to do this, we first explore some details about tangent curves and singular curves on the surface of an object.

2.5.1 Tangent curves

From the properties discussed in the last section, there exists at least one closed tangent curve for a projection direction. The tangent curve depends on the projection direction. A set of different tangent curves is produced from a set of different projection directions. At this point, one can think about the object as a network of tangent curves lying on the surface of the object. The network itself is a wire-frame representation of the object. Fig 2.5 illustrates the network of tangent curves on a sphere and the network as a wire-frame representation of the sphere.



Fig 2.5a (1) Contrast curves on a sphere from two projection directions. (2) Contrast curves from more projection directions. (3) Contrast curves as a wire-frame representation of the sphere.

Ideally, if we can derive the tangent curves from their projections (contrast curves), then the object can be reconstructed and represented as a wire-frame model.

2.5.2 Singular curves

Unlike the tangent curve, a singular curve is a fixed curve on an object's surface. It does not depend on the projection direction. For example, a sharp edge on an object does not change its position on the surface when we change the projection direction.

Also, a singular curve generates a contrast curve in every projection. If a singular curve can be deduced from its counterpart in 2D (the contrast curve), then this curve serves as another wire on the wire-frame model formed from the tangent curves.

2.5.3 3D Reconstruction

The reconstruction method is to represent an object by another object, called the reconstructed object, which is reconstructed from the given information about the original object. In this project we try to deduce a 3D object from given 2D information. Normally, the information is incomplete, or in a form that needs to be interpreted. A reconstruction method generally employs those techniques that make the


reconstructed object closest to the original one. The degree of accuracy of a reconstruction method depends on both the given information and the method itself. A single object may have many different reconstructed objects that represent its shape, depending on the method and information used in the reconstruction process.

In principle the reconstruction of tangent curves and the reconstruction of singular curves are slightly different, as described in the following:

Tangent curve

It is impossible to directly deduce a complete tangent curve because it depends on the projection direction. We have a different tangent curve for each projection plane, as illustrated in Fig 2.5b. From this figure, contrast curve A alone is not sufficient to determine the corresponding tangent curve B: there are an infinite number of curves in 3D that have curve A as their projection. Therefore it is necessary to find other conditions to make this task possible. We use the condition that if there is an intersection point between two tangent curves, we can define this point from their projections. In Fig 2.5b, point n, which is a point of intersection between tangent curves B and C, can be defined uniquely from its projection points m and o.

Fig 2.5b Contrast curves A, D from tangent curves B, C respectively; points o and m are the projections of point n.

In practice, there is a problem in identifying points m and o because these two points are not pre-marked on the contrast curves. One efficient solution to this problem

is to use the fact that the three points m, n and o are in the same plane. Let us call this plane a common tangent plane, points m and o tangent points, and point n a common tangent point. This plane has point n as a point of contact with tangent curves B and C (and, of course, also a point of contact with the object's surface). Thus the tangent plane also touches contrast curves A and D at points m and o respectively; see Fig 2.5c. From this fact, point n can be determined by first finding the touching points between the tangent plane and contrast curves A and D, then projecting these two points back into 3D space; the intersection point is point n. Pairing with other contrast curves in other projection planes gives more points on the tangent curve. These points may be linked with appropriate lines or curved segments. Thus a sufficient number of projection planes at suitable projection directions will yield an appropriate number of points found at suitable positions along the curve. This is considered to be a reasonable approximation of the tangent curves. The details of how to define the tangent plane are presented in the next section.
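As an illustration of the back-projection step just described (our sketch, not the thesis's implementation; the points and directions are hypothetical), the common tangent point n can be recovered in the parallel-projection case by intersecting the two projector lines through the tangent points m and o:

```python
import numpy as np

def back_project_intersection(p1, d1, p2, d2):
    """Intersect two projector lines L1: p1 + t*d1 and L2: p2 + s*d2.

    Returns the midpoint of the shortest segment between the lines,
    which equals the true intersection when the lines meet, as the
    back-projected rays through m and o do at common tangent point n.
    """
    d1 = np.asarray(d1, float) / np.linalg.norm(d1)
    d2 = np.asarray(d2, float) / np.linalg.norm(d2)
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    # Least-squares parameters t, s minimising |(p1 + t*d1) - (p2 + s*d2)|
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    t, s = np.linalg.solve(A, b)
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

# A hypothetical common tangent point n seen from two projection directions:
n = np.array([1.0, 2.0, 3.0])
d_a, d_b = np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])
m = n - 5.0 * d_a   # a point on the projector ray of n for plane A
o = n - 5.0 * d_b   # a point on the projector ray of n for plane B
print(back_project_intersection(m, d_a, o, d_b))  # → [1. 2. 3.]
```

With noisy measurements the two rays are skew rather than intersecting, so the midpoint of the shortest segment is a natural estimate of n.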

Fig 2.5c Tangent plane.

Singular curve

To reconstruct a singular curve from its contrast curves in 2D, we use the same principle as the one applied in the tangent curve case. In fact, it is necessary to use the same principle because in reality we do not know whether a contrast curve is generated from a singular curve or a tangent curve. Fig 2.5d illustrates an example of deducing a point on

a singular curve B. Unlike the tangent curve case, both contrast curves A and C are projections of the same singular curve B. Therefore, if we know that the curve is singular, it is not necessary for the common plane to be tangential to curve A, B or C.

Fig 2.5d Singular curve and tangent plane.

When the complete network of both tangent curves and singular curves is obtained, the object can be represented as a wire-frame model.

Surface

The perfect surface description of the object may then be deduced from the wire-frame model by applying some topological constraints of a 2-manifold, orientable, closed surface, discussed in section 2.2. The object is thereby represented by a solid model. The surface reconstruction is described in detail later in this thesis.

2.5.5 Multiple objects and object identification

This is the case when there is more than one object in the volume of interest. The contrast curves of these objects may or may not overlap. To reconstruct each object separately, we need to identify the contrast curves belonging to each object. From the definition, since only a closed surface is an object, there must be an occluding contrast curve for each object in each projection plane. Therefore we use the principle illustrated by Fig 2.5e to distinguish contrast curves.


Fig 2.5e Identifying an object's contrast curves by two tangent planes.

In each pair of projection planes, there exist two common tangent planes touching the object's occluding surface at two (or more) extreme points, b and e, as shown in the figure. These planes also touch the objects' occluding contrast curves at the extremities, i.e. points a, d, c and f, which are the projections of b and e. Under this condition, from the figure, the black contrast curve can be matched only with another black contrast curve. It is not possible to match the black curves with the grey curves that belong to the other object. If we use this method to match contrast curves in every possible pair of projection planes, then the contrast curves can be identified for each object.

2.6 Definitions and descriptions

In this section we introduce some of the definitions and descriptions which are used throughout this thesis. We also present some properties and use them to make assumptions useful for this project. The reader may skip this section and refer to it only when the clarification of a definition or an assumption is required.

Let us begin with the space, which is Euclidean space E^d, i.e. d-dimensional Euclidean space: the space of d-tuples (e1, e2, ..., ed) of real numbers with metric (e1² + e2² + ... + ed²)^(1/2). A co-ordinate system of reference is defined so that a point in the space is represented by a vector of Cartesian co-ordinates of an appropriate dimension d. This vector can be written in the form of a d-tuple, as shown earlier, or in matrix form as a column matrix [e1 e2 ... ed]^T.

To be more specific, only E² (two-dimensional Euclidean space), see Fig 2.6a, and E³ (three-dimensional Euclidean space) are required for this project.


Fig 2.6a (1) Cartesian co-ordinate system in E². (2) Right-handed triad Cartesian co-ordinate system in E³.

In E³, there are two co-ordinate systems: the left-handed triad and the right-handed triad.

Both systems have advantages and disadvantages; however, the graphics community has standardised on the right-handed triad and this is the co-ordinate system used throughout this thesis, Fig 2.6a.

Fig 2.6b shows the three main co-ordinate systems employed in this project: the world co-ordinate system E³, the 3D projection plane co-ordinate system E³_pn, and the 2D projection plane co-ordinate system E²_pn, positioned in E³, where pn is a projection plane number; see details in section 3.2.2.

Definition 2.6a A bounded projection plane is a projection plane bounded by a closed curve on the plane, centred at the projection plane's centre OP_pn.

To define a reconstruction space, in each projection direction we generate a projection cylinder or rectangular block (in parallel projection), or a projection cone or rectangular pyramid (in conical projection), from the edge of the bounded projection plane, as in Fig 2.6c. The axis of the projection cylinder or projection cone is parallel to the projection direction.

Definition 2.6b A reconstruction space E_r, E_r ⊂ E³, is a space bounded by the intersections of all projection cylinders in parallel projection, or by the intersections of all projection cones in conical projection. The projection of every point in this space can be seen in every bounded projection plane.

The centre of the reconstruction space is the same as the origin O of the world co-ordinate system. We need to define the reconstruction space because, in this thesis, only objects within this space are under consideration. Beyond this space, we assume that there is no object or part of an object of interest.


Fig 2.6b World co-ordinate system (x, y, z) in E³, 3D projection plane co-ordinate system (xp, yp, zp) in E³_pn, and 2D projection plane co-ordinate system (xp, yp) in E²_pn.

80 Chapler2:Concept of Object-based3D X-ray Imaging


Fig 2.6c Projection cone and projection cylinder, for the case where the projection planes are bounded by rectangular closed curves.

Next, we introduce the projection system and other related topics necessary for the reconstruction method. We begin with the projection system, expressed by Definition 2.6.2c.

Definition 2.6.2c This definition defines the projection system used in the reconstruction method. A set of Np projection planes

Ψ = {ψ_pn}, pn = 0, ..., Np − 1, Np ≥ 3, in E³,

where pn is the projection plane number and each plane is centred at OP_pn ∈ E³, is a set of Np X-ray images taken at a set of projection directions denoted by

Θ = {d_pn}, pn = 0, ..., Np − 1,

where d_pn is a unit vector in E³ and d_pn0 ≠ c·d_pn1 ⟺ pn0 ≠ pn1, c ∈ ℝ.

A set of centres of projection, which is the set of X-ray source positions, can be expressed by

Γ = {CP_pn}, pn = 0, ..., Np − 1.

In a parallel projection, CP_pn is at infinity, so that the projection rays are parallel.

In a conical projection, CP_pn is a point in E³.

81 Chapler2:Concept of Object-based3D X-ray Imaging

OP_pn is the centre of a 3D projection plane co-ordinate system, and d_pn is the inverse of the direction vector of the z-axis of the 3D projection plane co-ordinate system E³_pn. From this definition, it is important to point out that the projection planes are defined at different projection directions, and in any pair the projection planes are never parallel to each other. This condition is required by the reconstruction method. In parallel projection, any two parallel projection planes are reciprocal. In conical projection, we can assume that any two parallel projection planes are approximately reciprocal by ignoring the magnification effects. Thus crucial information is not significantly lost by this condition.

This definition can be clarified by using the projection planes in the example in Fig 2.6d. In general, the projection planes are not necessarily orthogonal to each other.


Fig 2.6d The set of projection planes in the example.

The next definition to be introduced is the geometric projection. All the curves and points are projected under the conditions expressed in this definition.

Definition 2.6.2d For a projection plane ψ_pn ∈ Ψ, a geometric projection Π_pn is a surjection

Π_pn: E³ → E²_pn, GO′ = Π_pn(GO, CP_pn, d_pn, OP_pn): GO ⊂ E³, GO′ ⊂ E²_pn,

where GO is a geometrical object or a configuration, e.g. a point, a curve or a set of geometrical objects. GO′ is a parallel projection or a conical projection of GO on ψ_pn at CP_pn ∈ Γ, d_pn ∈ Θ; see Fig 2.6e.


Fig 2.6e Geometric projection in parallel projection and conical projection.

In this projection, all the points on a geometrical object are projected onto a projection plane. For example, a projected skew curve is a curve that lies in a projection plane in 3D space: all the points on the space curve are projected onto the projection plane. The same principle can also be applied to a 3D surface. All the points on a 3D surface are projected onto a projection plane to form an area which lies in the plane.

This is a surjection, which is not a one-to-one mapping but a many-to-one mapping. Therefore one projected geometric object in a projection plane may be projected from more than one geometric object in E³. For example, in Fig 2.4e, GO′ is the projection of two distinct geometric objects.
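A minimal sketch of this geometric projection for a single point, under the definitions above: a plane through its centre OP_pn with unit normal n, parallel rays along d_pn, or conical rays from the source CP_pn. The function name and argument layout are ours, for illustration only:

```python
import numpy as np

def project_point(P, Op, n, d=None, CP=None):
    """Project 3D point P onto the projection plane through Op with unit
    normal n. Pass d for a parallel projection (ray direction) or CP for
    a conical projection (X-ray source position)."""
    P, Op, n = (np.asarray(a, float) for a in (P, Op, n))
    if CP is None:
        d = np.asarray(d, float)            # parallel: ray P + t*d
        t = ((Op - P) @ n) / (d @ n)
        return P + t * d
    CP = np.asarray(CP, float)              # conical: ray CP + t*(P - CP)
    ray = P - CP
    t = ((Op - CP) @ n) / (ray @ n)
    return CP + t * ray

# Plane z = 0 through the origin, normal along +z:
print(project_point([1, 2, 3], [0, 0, 0], [0, 0, 1], d=[0, 0, -1]))   # → [1. 2. 0.]
print(project_point([1, 2, 3], [0, 0, 0], [0, 0, 1], CP=[0, 0, 10]))  # conical case
```

The many-to-one character of the mapping is visible here: every point on the ray through P maps to the same GO′ in the plane.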

In the last section we left open the details of how to define a common tangent plane. Definition 2.6.2e presents the geometric conditions for the plane.

83 Chapler2:Concept of Object-based3D X-ray Imaging

Definition 2.6.2e A set of common tangent planes Φ_pn0,pn1 = {φ_i}, i = 0, ..., N_φ − 1, N_φ ≠ 0, is defined for a pair of projection planes ψ_pn0, ψ_pn1 ∈ Ψ, where N_φ is the number of common tangent planes found in this pair. In a parallel projection, a common tangent plane φ_i is any plane that is perpendicular to both ψ_pn0 and ψ_pn1 in the pair and touches the tangent curves or the singular curve, Fig 2.6f. In a conical projection, a common tangent plane φ_i is any plane that contains the line (CP_pn0, CP_pn1): CP_pn0, CP_pn1 ∈ Γ, and touches the tangent curves or the singular curve, Fig 2.6g.

Fig 2.6f Common tangent plane in parallel projection.

Fig 2.6g Common tangent plane in conical projection.


Definition 2.6f A set of common intersection lines {l_i}, i = 0, ..., N_l − 1, is the set of intersection lines between a common tangent plane φ ∈ Φ_pn0,pn1 and the projection planes ψ_pn ∈ Ψ, where pn = pn0, pn1; see Figs 2.6f and 2.6g.

Next, we present the definitions of the curves.

Definition 2.6g For a projection plane ψ_pn ∈ Ψ, let

C^ts_pn = {C^ts_i}, i = 0, ..., N^ts_pn − 1, N^ts_pn ≠ 0,

be a set of N^ts_pn tangent or singular curves in E³, and let

C^c_pn = {C^c_j}, j = 0, ..., N^c_pn − 1, N^c_pn ≠ 0,

be a set of contrast curves in E²_pn, where

C^c_pn = Π_pn(C^ts_pn, CP_pn, d_pn, OP_pn).

This definition can be illustrated graphically by Fig 2.6h. In this example the number of tangent or singular curves N^ts_pn is equal to the number of contrast curves N^c_pn. Note that it is not necessary for N^ts_pn to equal N^c_pn: one contrast curve may be the result of more than one tangent or singular curve. Fig 2.6h(1), (2) represents the case when a singular curve becomes a singular point. We treat this point with the same principle as for a curve, and therefore we still use the same notation. All the curves mentioned in this definition can be either closed curves or curve segments.

The next definition formalises the characteristics of tangent points on tangent or singular curves.


Definition 2.6h A set of common tangent points

CTP_pn0,pn1 = {CTP_j}, j = 0, ..., N^ctp − 1, N^ctp ≠ 0,

is a set of points of contact CTP_j in E³ where the common tangent plane φ_i ∈ Φ_pn0,pn1 touches the contrast surface, i.e. each point occurs at the intersection point between two tangent curves or on a singular curve. A set of tangent points is defined as

TP_pn = {TP_j}, j = 0, ..., N^tp − 1, N^tp ≤ N^ctp, N^tp ≠ 0,

where TP_j = Π_pn(CTP_j, CP_pn, d_pn, OP_pn) and pn = pn0, pn1, so that the tangent points lie on the contrast curves in ψ_pn0 and ψ_pn1 respectively.


Fig 2.6h (1) One tangent curve, one singular curve and two contrast curves. (2) One tangent curve, one isolated singular point in E³, one contrast curve and one isolated singular point in E²_pn. (3) One tangent curve with one singular curve, and contrast curves. (4) One tangent curve, one singular point in E³, and one contrast curve with one singular point in E²_pn.

Note that when the singular curve becomes a singular point this definition can still be applied, but then not only the plane φ_i that touches the point qualifies: so does any plane that contains this singular point. For a singular curve, different common tangent planes from different pairs of projection planes touch the curve at different points along the singular curve.

This definition leads to some interesting properties of a common tangent point, as expressed in Assumption 2.6a.

Assumption 2.6a For a set of common tangent points CTP_pn0,pn1, the number of common tangent points N^ctp can be classified into two cases.

Case 1: N^ctp > 0 and N^ctp ≠ ∞. This case occurs when plane φ_i touches a contrast surface at a finite number of points. Fig 2.6i illustrates the cases using a furrow-shaped object: (1) when N^ctp = 1 and (2) when N^ctp = 2.


Fig 2.6i (1) One tangent point in one common tangent plane. (2) Two tangent points in one common tangent plane.

Case 2: N^ctp = ∞. This case happens when a part of a contrast surface, or a curve on a contrast surface, lies in plane φ_i. It occurs when there is a flat face or a planar curve lying on a common tangent plane.

We can express a common tangent point in relation to a pair of tangent curves in the next assumption.


Assumption 2.6b If CTP_j ∈ CTP_pn0,pn1 is on C^ts_k ∈ C^ts_pn (pn = pn0, pn1), C^c_l = Π_pn(C^ts_k, CP_pn, d_pn, OP_pn), and TP_j = Π_pn(CTP_j, CP_pn, d_pn, OP_pn), TP_j ∈ TP_pn, then TP_j is a point of contact between l_i and C^c_l.

This assumption is illustrated by Fig 2.6j. CTP_0 ∈ CTP_pn0,pn1 and TP_0 ∈ TP_pn are the points of contact, or tangent points, between C^c_l ∈ C^c_pn and l_i (pn = pn0, pn1). A projector line, such as those shown in Fig 2.6j, is defined by the following definition.

Definition 2.6i If P is a point in E²_pn, in ψ_pn ∈ Ψ, then a projector line L = Π_pn^(-1)(P, CP_pn, d_pn) is a line in E³.

In parallel projection, L is the line that passes through P and is parallel to d_pn.

In conical projection, L is the line that passes through P and CP_pn.

Fig 2.6j Common tangent point and tangent points.

There is a necessary characteristic of a vertex, which we discovered, to be discussed. This characteristic, which is crucial for deciding whether a point in 3D space belongs to the object of interest, is described by the following definitions and assumption.


Definition 2.6j Let C be a closed curve in E²_pn and GO ⊂ E²_pn be a geometric object. Let E²_C ⊆ E²_pn be the region bounded by C (including C itself). Then GO is said to be inside or on C if and only if GO ⊆ E²_C.

Assumption 2.6c If P ∈ E³ is a point on the surface of an object O, then

∀pn (P′ ∈ E²_C: P′ = Π_pn(P, CP_pn, d_pn, OP_pn)),

where C is the occluding contrast curve of the object in plane ψ_pn.

This assumption means that if a point is on the surface of an object, then the projection of this point in every projection plane must be inside or on the occluding contrast curve of the object in the corresponding projection plane. But it is not always true that a point whose projection is inside or on the occluding curve in every projection plane lies on the object's surface: such a point can be outside, inside or on the surface. If the point is inside the surface, its projections are always inside the occluding curves, but never on them, in all projection planes. In the case where the point is outside the object but inside the object's occluding bounded space, generated by the intersections of the projected volumes from all the occluding curves, the point's projections are still inside the occluding curves. This case is illustrated by Fig 2.6k: point P, outside sphere C but inside the occluding bounded space, falls inside the occluding curve of the sphere C in every projection plane.


Fig 2.6k Point P is outside sphere C, and its projections.
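The necessary condition of Assumption 2.6c can be sketched in code. This is an illustration only: the polygon test, the sampled occluding curves and the projection functions below are our assumptions, not the thesis's implementation:

```python
import numpy as np

def inside(point2d, polygon):
    """Even-odd ray-crossing test: is point2d inside the closed polygonal
    occluding contrast curve given as an (N, 2) array of vertices?"""
    x, y = point2d
    result = False
    pts = np.asarray(polygon, float)
    for (x1, y1), (x2, y2) in zip(pts, np.roll(pts, -1, axis=0)):
        if (y1 > y) != (y2 > y):                      # edge crosses the ray's y
            xcross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xcross:
                result = not result
    return result

def passes_occlusion_test(P, views):
    """views: list of (projection_fn, occluding_curve) pairs, one per
    projection plane. Necessary condition of Assumption 2.6c: a surface
    point must project inside every occluding contrast curve."""
    return all(inside(proj(np.asarray(P, float)), curve) for proj, curve in views)

# Two orthogonal parallel projections of a 2x2x2 box-shaped occluding space:
square = np.array([[-1, -1], [1, -1], [1, 1], [-1, 1]], float)
views = [(lambda P: P[[0, 1]], square),   # drop z
         (lambda P: P[[0, 2]], square)]   # drop y
print(passes_occlusion_test([0.0, 0.5, 0.5], views))  # → True
print(passes_occlusion_test([0.0, 0.5, 3.0], views))  # → False
```

As the text explains, passing this test in every view is necessary but not sufficient: a point inside the occluding bounded space but outside the object also passes.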


We cannot directly use Assumption 2.6c to verify whether a point is on an object's surface. With some degree of accuracy, we can modify this assumption by using the approximation that the volume between the object's surface and the surface of the occluding bounded volume is minimised, so that we can ignore this volume. The approximation yields the assumption that if the projections of a point are inside or on the occluding curves, then the point is on or inside the object's surface. If we discard the case where the point is inside the object's surface, which rarely occurs for the points to be verified in this thesis, the following assumption can be stated.

Assumption 2.6d In a suitable set of projection planes Ψ, P ∈ E³ is a point on the surface of an object O if

∀pn (P′ ∈ E²_C: P′ = Π_pn(P, CP_pn, d_pn, OP_pn)),

where C is the occluding contrast curve of the object in plane ψ_pn.

The suitable Ψ may be defined as a set which minimises the volume between an object's surface and the surface of the occluding bounded volume. In practice, Assumption 2.6d should be sufficient for the reconstruction process. Note that not only a point on an object's surface satisfies this assumption, but also, say, a point in an object's hole but outside the object.

To study the distribution of an object's vertices, or to reconstruct a tangent curve, we consider a set of plane tangent points, defined by Definition 2.6k, PTP_pn, on a plane ψ_pn in Ψ. It is the set of all tangent points on a contrast curve in a projection plane, obtained by pairing the projection plane with the rest of the projection planes in Ψ.


Definition 2.6k A set of plane tangent points on a contrast curve C^c ∈ C^c_pn in ψ_pn ∈ Ψ is defined by

PTP_pn = ∪_n PTP_pn,n, where n is an integer, n ∈ [0, Np − 1] and n ≠ pn,

and PTP_pn,n is the set of all tangent points in plane ψ_pn in the pair (ψ_pn, ψ_n), or

PTP_pn,n = ∪_{i=0}^{N^tp − 1} TP_i.

Let us recall the main example to illustrate this definition, as shown in Fig 2.6l. In this example pn = 0. The first projection plane pair, when n = 1, or (ψ_0, ψ_1), is shown in Fig 2.6m:

PTP_0,1 = {p0, p2}

The second projection plane pair, when n = 2, or (ψ_0, ψ_2), is explained by Fig 2.6n:

PTP_0,2 = {p1, p3}

Therefore the set of plane tangent points on ψ_0 is

PTP_0 = ∪ PTP_0,n, n = 1, 2, = PTP_0,1 ∪ PTP_0,2 = {p0, p1, p2, p3}


Fig 2.6l Distribution of tangent and common tangent points.

Fig 2.6m Pair (ψ_0, ψ_1).

Fig 2.6n Pair (ψ_0, ψ_2).


Definition 2.6l A set of plane common intersection lines, or tangent lines, on a contrast curve C^c ∈ C^c_pn in ψ_pn ∈ Ψ is defined by

TL_pn = ∪_n TL_pn,n, where n is an integer, n ∈ [0, Np − 1] and n ≠ pn,

and TL_pn,n is the set of all common intersection lines or tangent lines in ψ_pn in the pair (ψ_pn, ψ_n), or

TL_pn,n = ∪_i l_i,

where l_i is defined by Definition 2.6f in the pair (ψ_pn, ψ_n).

From the main example described in Figs 2.6m and 2.6n, the set of plane tangent lines is

TL_0 = ∪ TL_0,n, n = 1, 2, = TL_0,1 ∪ TL_0,2 = {L0, L1, ...}

On projection plane ψ_pn, a point in PTP_pn is a tangent point, or point of contact, between a contrast curve C^c ∈ C^c_pn and a line in TL_pn. It can be seen that the points in PTP_pn spread along C^c at tangent angles defined by the set of tangent lines TL_pn, as in the example shown in Fig 2.6o.


Fig 2.6o Distribution of plane common tangent points.


To analyse the distribution of these plane tangent points systematically, we first consider the definition of mean curvature.

Definition 2.6m Mean curvature δθ/δs is the quantity obtained from the intrinsic equation of a curve for a finite length of the curve, where δs is the length of an arc of the curve and δθ is the angle between the two tangents at either end of the arc. Its limiting value as δs tends to 0 is the curvature [4].

For a reasonable reconstruction, what we need is a set of plane tangent points in PTP_pn that spread closely along a highly curved arc and widely along a smooth, flat arc: the optimum distribution of the plane tangent points on a closed curve C^c ∈ C^c_pn. From Definition 2.6m the mean curvature can be stated as:

κ = δθ/δs

To make the distribution optimum:

δs ∝ 1/κ

where δs is the length of an arc between any two consecutive points Pa and Pb, where Pa, Pb ∈ PTP_pn, on a contrast curve C^c. The length of an arc must be inversely proportional to the mean curvature. Therefore we can conclude that δθ must be the same angle for any arc on C^c, as stated in the following assumption.

Assumption 2.6e If the angles made by the tangent lines in TL_pn with the positive xp-axis are spread at regular intervals over the range [0, π), then the δθ of any arc on a contrast curve C^c ∈ C^c_pn between two consecutive points Pa and Pb, where Pa, Pb ∈ PTP_pn, is the same angle.
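The optimum-spacing property δs ∝ 1/κ can be illustrated numerically. The sketch below (our illustration, not part of the thesis method) places tangent points on an ellipse at tangent-line normal angles spread at regular intervals, as in Assumption 2.6e, and shows that consecutive points cluster where curvature is high and spread out where the curve is flat:

```python
import numpy as np

# An elongated ellipse: a >> b gives a flat, low-curvature top and
# highly curved ends.
a, b = 5.0, 1.0
phis = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)  # regular normal angles
u = np.stack([np.cos(phis), np.sin(phis)], axis=1)
# Support (tangent) point of the ellipse whose outward normal is u:
# (x/a^2, y/b^2) ∝ u gives pts = (a^2*ux, b^2*uy) / sqrt(a^2*ux^2 + b^2*uy^2).
denom = np.sqrt((a * u[:, 0]) ** 2 + (b * u[:, 1]) ** 2)
pts = np.stack([a * a * u[:, 0], b * b * u[:, 1]], axis=1) / denom[:, None]
gaps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
# Spacing is widest on the flat top (normal near +y) and tightest at the
# sharp ends (normal near +x): arc length varies inversely with curvature.
print(gaps.max() / gaps.min())  # large ratio: points cluster at the curved ends
```

The ratio printed is of the same order as the ratio of extreme curvatures of the ellipse, confirming that regular tangent-angle intervals automatically concentrate sample points where the contrast curve bends most.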

By this assumption we need a set of projection directions Θ that makes PTP_pn spread at regular intervals. This set can be determined by the following assumption.


Assumption 2.6f In parallel projection or conical projection, if the projection directions in Θ spread uniformly over a solid angle of 4π sr or 2π sr, then the angles made by the tangent lines in TL_pn with the positive xp-axis spread, or are most likely to spread, at regular intervals over the range [0, π).

Remark In some sets of projection directions Θ that spread uniformly over 4π sr, there are some parallel projection planes. To satisfy the definition of Θ stated in Definition 2.6.2c, the number of projection planes may then be reduced by half and spread over 2π sr instead of 4π sr.

2.7 References

[1] M. Mantyla, Introduction to solid modelling, Computer Science Press, Rockville, Md., 1988.

[2] C. M. Hoffmann, Geometric & solid modeling, Morgan Kaufmann, California, 1989.

[3] Y. L. Kergosien, Generic sign systems in medical imaging, IEEE Computer Graphics & Applications, September 1991, 46-65.

[4] K. Selkirk, Longman mathematics handbook, Longman, Essex, 1991.

[5] R. Benjamin, IC visiting notes, 1990-1995. (unpublished)

[6] S. Prakoonwit, R. Benjamin, R. I. Kitney, The object-based reconstruction of complex-shaped objects from a small number of X-ray images, Proc. World Congress on Medical Physics and Bio-Medical Engineering, Rio de Janeiro, 21-26 Aug. 1994.

95 Chapter2:Concept of Object-based3D X-ray Imaging

[7] R. Benjamin, Object-based 3D X-ray imaging, Proc. First International Conference on Computer Vision, Virtual Reality and Robotics in Medicine, Nice, France, April 1995.

CHAPTER 3: DATA ACQUISITION

3.1 Introduction

In this chapter we describe how the data used in this research was obtained, as shown in Fig 3.1a. Both computer-generated and real physical objects were created, and the method to determine projection directions was derived. The digital X-rays involved in the experiments in this research are described in terms of both general principles and some specifications, such as the geometrical configuration of the machine. A set of X-ray images of each phantom was obtained for the processes in later stages.

Main original contribution

The main original contribution in this chapter can be described as follows:

* Method for determining a set of optimum projection directions by using regular polyhedra.

* Method for simulating X-ray images from B-rep objects: although this kind of method has already been proposed by some researchers, such as [5], we present a simpler approach to suit this project.


* Phantom design: we have designed a phantom for a physical object to be compatible with the X-ray system employed in this project.


Fig 3.1a Flow of operations in the data acquisition. Relevant section numbers are in parentheses.

3.2 Projection System

3.2.1 Projection directions

To get optimum information and to make the optimum distribution of the surface points, as described in Assumption 2.6f, we need a set of projection directions in Θ spread uniformly over a solid angle of 4π sr (steradian). In some sets of projection directions this condition may be reduced to spreading uniformly over 2π sr, because we assume that X-ray images at projection direction vector d are reciprocal to those at projection direction vector -d.


In this section, some sets of projection directions Θ = {d_pn}, pn = 0, ..., Np − 1 (Definition 2.6.2c), are presented. In each set, the projection directions spread over 4π sr or 2π sr. To find these sets, let us begin with the definition of a solid angle.

Definition 3.2a Solid angle: the 3D concept analogous to an angle in 2D; the cone-shaped region of space subtended by a region of a surface at a point not on the surface. A measure of solid angle is the steradian (sr): the area of the surface of a unit sphere, centred at the vertex of the solid angle, that is covered by the cone defining the solid angle. A whole sphere subtends a solid angle of 4π sr at its centre [7].

From this definition, if we can partition the surface of a sphere into Np regular regions such that all the regions have the same area (the same solid angle), then the normals at the centres of the regions spread uniformly over the solid angle of 4π sr. This suggests a Platonic solid, or regular polyhedron, which is a polyhedron whose faces are all congruent with each other. For a regular polyhedron, every vertex also lies on the surface of the circumscribing sphere. These properties imply that each face has the same corresponding spherical face on the circumscribing sphere; thus every spherical face has the same area, or the same solid angle. Therefore, if each of the projection planes is parallel to a face of the regular polyhedron, the projection directions are spread uniformly (equal solid angles).

There are only five kinds of regular convex polyhedra:

" Thecube: 8 verticesand 6 faces,gives 0 where N. = 3, seeFig 3.2a(l).

" Ae octahedron:6 verticesand 8 faces,gives E) where N. 4, seeFig 3.2aW.

" Ae tetrahedron:4 verticesand 4 faces,gives 0 where N. 4, seeFigl2a(3).

9 Yheicosahedron: 12 verticesand 20 faces,gives 0 where N. = 10, seeFig 3.26(4).


* The dodecahedron: 20 vertices and 12 faces, gives Θ where Np = 12, see Fig 3.2a(5).

Fig 3.2a Five regular polyhedra: (1) cube, (2) octahedron, (3) tetrahedron, (4) icosahedron, (5) dodecahedron.

The vertices and faces of these polyhedra can be defined by methods described in a vast literature, e.g. [1]. For the sake of simplicity, all the polyhedra in this thesis are centred at the origin and found by using two conditions [1]:

* The polyhedron must be in the orientation that allows the vertex co-ordinates to be as simple as possible.

* Use only 0, 1 and the golden ratio φ, where φ = (1 + √5)/2 ≈ 1.618034, to represent the vertices' positions.

Full details of the vertices, faces and the corresponding sets of projection directions are presented in Appendix C.

Each projection direction in set Θ can be represented by the vector normal to a face of the polyhedron. For example, we find four projection directions from a tetrahedron, as shown in Fig 3.2b.


Fig 3.2b Tetrahedron and the four projection directions.

This normal vector is the vector product of two consecutive edge vectors in the face, see Eq 3.2a.

d_pn = (e_i × e_j) / |e_i × e_j|,  where e_i = V_b − V_a and e_j = V_c − V_b,   (3.2a)

and V_a, V_b, V_c are three consecutive vertices in a polyhedron's face Pl_i.

But in this case all regular polyhedra are centred at the origin, so a projection direction is also the normalised vector from the origin to the centre of a polyhedron's face. This can be summarised as shown in Eq 3.2b.

c dp" (3.2b) =, 1C.- P"I

V, the face'scentroid c... V, is the face'svertex and Nis of where N , a number vertices in a face. Note that, in this research,perfectly uniform angular distribution is desirable,but it is by no meansvital, seealso section 5.6.
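As a concrete illustration of Eq 3.2b, the following sketch (the function names and the ±1 vertex layout are our own, chosen to follow the two conditions above) computes the four tetrahedron projection directions as normalised face centroids:

```python
import itertools
import math

# Hypothetical tetrahedron layout using only +/-1 co-ordinates, in the spirit of [1].
verts = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
faces = list(itertools.combinations(range(4), 3))  # every triple of vertices is a face

def normalise(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def face_direction(face):
    # Eq 3.2b: d_pn = c_pn / |c_pn|, where c_pn is the centroid of the face's vertices
    c = tuple(sum(verts[i][k] for i in face) / 3.0 for k in range(3))
    return normalise(c)

directions = [face_direction(f) for f in faces]
```

For the tetrahedron the four directions make equal angles with one another (cos θ = −1/3), i.e. they subtend equal solid angles, as required.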


3.2.2 Co-ordinate and Projection Systems

The main co-ordinate systems in this project are:

1. World co-ordinate system, denoted by E³.
2. 3D projection plane co-ordinate system, denoted by E³_pn.
3. 2D projection plane co-ordinate system, denoted by E²_pn.

These three co-ordinate systems have already been defined in general terms in Section 2.6. In this section we discuss the co-ordinate systems in detail, together with some constraints imposed in the real experiments.

The origin of the world co-ordinate system is the reference for the rest of the systems. To make the real experiment much easier to operate, we define the origin of the world co-ordinates such that the centre of the X-ray beam from every projection angle passes through the origin, and the y_pn-axis of every projection plane intersects the y-axis of the world co-ordinate system. Fig 3.2c illustrates a projection plane γ_pn and the relevant co-ordinate systems.

In Fig 3.2c, consider projection plane γ_pn in E³. The unit vectors x_pn, y_pn and z_pn are the direction vectors of the x_pn-axis, y_pn-axis and z_pn-axis, respectively, of projection plane γ_pn in E³. They form an orthonormal, right-handed triad, with z_pn = −d_pn and y_pn lying in the plane containing d_pn and the world y-axis, and are defined by the following equations:

x_pn = [−sin φ  0  cos φ]^T   (3.2c)

y_pn = [−sin α cos φ  cos α  −sin α sin φ]^T   (3.2d)

z_pn = −d_pn = [−d_pn.x  −d_pn.y  −d_pn.z]^T   (3.2e)

where α = tan⁻¹( d_pn.y / √(d_pn.x² + d_pn.z²) ) and φ = tan⁻¹( −d_pn.z / −d_pn.x ).

Fig 3.2c 1. World co-ordinate system (x, y, z). 2. 3D projection plane co-ordinate system (x_pn, y_pn, z_pn). 3. 2D projection plane co-ordinate system (x_pn, y_pn).

A set of centres of projection planes Γ = {OP_pn}_{pn=0}^{Np−1} can be defined by

OP_pn = r d_pn = [r d_pn.x  r d_pn.y  r d_pn.z]^T   (3.2f)

where r is the radius of the reconstruction region, i.e. the distance between the centre of a projection plane and the origin of the world co-ordinate system.

A set of centres of projection, or X-ray sources, {CP_pn}_{pn=0}^{Np−1} can be determined in the same way, as shown in Eq 3.2g:

CP_pn = −pdist d_pn + OP_pn = (r − pdist) d_pn   (3.2g)

where pdist is the projection distance from the centre of projection CP_pn to the centre of projection plane OP_pn.
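In code, Eqs 3.2f and 3.2g amount to scaling the projection direction (a minimal sketch; the function names are our own):

```python
def centre_of_plane(d_pn, r):
    # Eq 3.2f: OP_pn = r d_pn
    return tuple(r * c for c in d_pn)

def centre_of_projection(d_pn, r, pdist):
    # Eq 3.2g: CP_pn = -pdist d_pn + OP_pn = (r - pdist) d_pn
    return tuple((r - pdist) * c for c in d_pn)
```

The source CP_pn thus sits a distance pdist behind the plane centre OP_pn along the projection direction.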

The relation between the world co-ordinate system and the 3D projection plane co-ordinate system can be expressed by the following transformation matrices. The first is the matrix that transforms a point in the world co-ordinates E³ into a point in the 3D projection plane co-ordinates E³_pn. If a point in the world co-ordinate system is represented in homogeneous co-ordinates [8] as P = [P.x  P.y  P.z  1]^T, then the transformation matrix T^{w→p}_pn is

T^{w→p}_pn = [ x_pn.x  x_pn.y  x_pn.z  Δ1
               y_pn.x  y_pn.y  y_pn.z  Δ2    (3.2h)
               z_pn.x  z_pn.y  z_pn.z  Δ3
                 0       0       0     1  ]

where

Δ1 = −(x_pn.x OP_pn.x + x_pn.y OP_pn.y + x_pn.z OP_pn.z)
Δ2 = −(y_pn.x OP_pn.x + y_pn.y OP_pn.y + y_pn.z OP_pn.z)
Δ3 = −(z_pn.x OP_pn.x + z_pn.y OP_pn.y + z_pn.z OP_pn.z)

and x_pn = [x_pn.x  x_pn.y  x_pn.z]^T, y_pn = [y_pn.x  y_pn.y  y_pn.z]^T, z_pn = [z_pn.x  z_pn.y  z_pn.z]^T.

Thus point P can be transformed into point P_pn = [P_pn.x  P_pn.y  P_pn.z  1]^T in the 3D projection plane co-ordinates by using the following equation:

P_pn = T^{w→p}_pn P   (3.2i)

and point P_pn can be transformed back into point P in the world co-ordinates by

P = (T^{w→p}_pn)^{−1} P_pn.   (3.2j)
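The matrix of Eq 3.2h and the mappings of Eqs 3.2i and 3.2j can be sketched in plain Python (the example axis triad and the function names below are illustrative assumptions, not the thesis's own code):

```python
def world_to_plane_matrix(x_pn, y_pn, z_pn, op_pn):
    """Eq 3.2h: the rows are the plane's axis vectors, and the last
    column holds the offsets Delta_i = -(axis . OP_pn)."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    T = [list(axis) + [-dot(axis, op_pn)] for axis in (x_pn, y_pn, z_pn)]
    T.append([0.0, 0.0, 0.0, 1.0])
    return T

def to_plane(T, p):
    # Eq 3.2i: multiply the homogeneous point [P.x P.y P.z 1] by T
    ph = list(p) + [1.0]
    return tuple(sum(T[i][j] * ph[j] for j in range(4)) for i in range(3))

def to_world(x_pn, y_pn, z_pn, op_pn, p_pn):
    # Eq 3.2j, written out: P = OP_pn + x_pn P_pn.x + y_pn P_pn.y + z_pn P_pn.z
    return tuple(op_pn[k] + x_pn[k] * p_pn[0] + y_pn[k] * p_pn[1] + z_pn[k] * p_pn[2]
                 for k in range(3))

# Example: an orthonormal right-handed triad with z_pn = -d_pn for d_pn = (0, 1, 0), r = 3
x_pn, y_pn, z_pn = (1, 0, 0), (0, 0, 1), (0, -1, 0)
op_pn = (0, 3, 0)
```

Transforming a world point and mapping it back recovers the original point, which is a quick sanity check on the matrix construction.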

We can think of any point on the projection plane γ_pn as a point in the 3D projection plane co-ordinates whose z component is equal to zero. This means that if the point is on the projection plane, we can transform it from the 3D projection plane co-ordinates into the 2D projection plane co-ordinates E²_pn by simply eliminating its z component. The point in 2D can therefore be expressed as P'_pn = [P_pn.x  P_pn.y  1]^T. A point in 2D can also be transformed back into 3D by simply setting the z component to zero.

3.3 X-ray system

We selected an X-ray system that provides the image data in a digital format that can easily be transferred to a computer for processing. The system that satisfies this need is a digital X-ray system.


3.3.1 System overview

Over the last decade, radiological imaging technology has undergone a number of changes resulting from the fusion of new types of X-ray image acquisition technology with powerful microprocessors and inexpensive high-density digital storage media [2]. This is leading to the progressive and accelerating replacement of film-based imaging. The introduction of digital radiographic imaging technology promises consistently high levels of diagnostic image quality, more effective use of radiation and more efficient clinical work practices. Digital X-ray imaging has many facets, encompassing the acquisition, processing, display, communication and archiving of diagnostic information as digital data. This section presents an overview of digital X-ray machines. The diagram in Fig 3.3a shows the main parts of the machine.

Fig 3.3a Digital X-ray system overview.

In this figure, X-rays are generated from an X-ray tube and are directed through an object or a patient to an imaging plate. In conventional radiography, the imaging plate is a cassette with film placed between two intensifier screens (for improving the quality of the image), which directs a specified dose of X-rays through the object to the film. Luminescence, or light, released by the interaction of X-rays that have penetrated the tissue with the phosphor material in the intensifier screen exposes the film with an attenuation pattern of the object's structure. The user develops the film, which is later read using a light box or auto alternator. If we want to digitise this image, a scanner or a video camera with a frame grabber is required to convert the analogue image into digital format. In digital radiography, the imaging plate and the image reader can be roughly categorised into two techniques: digital fluorography and photo-stimulable phosphor computed radiography.

Digital fluorography is used in clinical applications that demand the rapid acquisition of a series of images or the real-time review of the diagnostic images during a procedure. Such devices use the X-ray image intensifier/television combination as the image receptor, and the video signal is digitised in real time. These systems are well established in cardiovascular imaging, such as digital subtraction angiography (DSA).

Photo-stimulable phosphor computed radiography is more suitable for digital X-ray imaging where high spatial fidelity images are required. This technique produces images by using storage phosphor screens, not much different from those used in conventional radiography, but relies on a different physical phenomenon, known as laser-stimulated luminescence, for the image reader. The most commonly used storage phosphor screens consist of small crystals of barium fluorohalide embedded in an organic binder. X-rays are directed through an object to the stimulable phosphor in the image plate. When X-ray photons interact with the barium fluorohalide phosphor, spontaneous luminescence is created as electrons and holes recombine. The image reader then scans the imaging plate with a laser beam of a particular wavelength, producing photostimulated luminescence: following photostimulation, electrons relax back to their ground state, producing the emission of light of intensity proportional to the number of X-ray quanta originally absorbed. The released image is detected by a photomultiplier and digitised to produce a digital image.

The image processor performs tasks to improve the visual perception or conspicuity of object features in the resulting digital image. The processed digital image can then be transformed into an analogue video signal displayed on the viewing monitor. It can also be passed to the image recorder to be compressed or transformed into a suitable format for storage on a magnetic or optical disc for archiving and postprocessing, or to a multiformat camera to produce a hard copy (film) of the image.

3.3.2 System used in the project

In this project, the machine used to obtain the data is the Philips digital X-ray machine at Imperial College's St. Mary's Hospital, Paddington, London. This machine uses the digital fluorography technique described earlier, with an over-table image intensifier of 38 cm diameter. The output image can be displayed on a video screen, recorded on photographic film or stored on a floppy disk.

To transfer the image data from the machine to a computer, we tapped the analogue video signal and frame grabbed it using the programs Video Tools and ImageWorks installed on a Silicon Graphics Indy workstation, as shown in Fig 3.3b.

Fig 3.3b Transferring data from the X-ray machine into a workstation. (Video Panel, Video Tools and ImageWorks are commercial software packages.)


3.4 Phantoms

In this section we explain the phantoms, or test objects, used to test the principles and algorithms of this project. As shown in Fig 3.1a, two types of phantoms are involved:

* computer-generated objects

* physical objects.

3.4.1 Computer-generatedobjects

Object creation

We use a commercial PC-based graphics package, Autodesk 3D Studio version 3.0, to generate objects represented in B-rep form [9]. The objects employed in this project are as follows:

* an ellipsoid

* a dimple-shaped object

* a bone-shaped object

The ellipsoid was created from a sphere by using the 2D scaling function. The dimple-shaped object was defined as a surface of revolution: we used the 2D Shaper program in 3D Studio to create a plane curve in 2D for the surface, then revolved this curve about the central axis in 3D space using the 3D Lofter program in 3D Studio. The number of faces and vertices can also be specified at this stage; see Fig 3.4a(1). The bone-shaped object was created from a combination of two solids of revolution, see Fig 3.4a(2).


Fig 3.4a Computer-generated objects: (1) a dimple-shaped object, (2) a bone-shaped object.

All of these objects are represented in B-rep form by a set of triangular faces. The file format is AutoCAD's DXF format. In this project we need only two main descriptions of the object: geometric information and topological information. The geometric information in this case is the positions of the vertices, and the topological information is how these vertices are linked together to form the object's triangular faces. A DXF file contains more information about the object than we actually need at this stage, e.g. line attributes and the size of the object's bounding box. Therefore a program has been written to extract only the necessary information.

The diagram in Fig 3.4b illustrates the information extracting process. As an example, the extracting program selects the necessary information from a DXF file representing a tetrahedron. The output of the program is the tetrahedron represented by four vertices V_vno, vno = 0, 1, 2, 3, and four triangular faces (V3, V2, V1), (V2, V3, V0), (V1, V0, V3) and (V0, V1, V2). We use the following notation to represent such an object:

O = {V, F}, where V = {V_vno}_{vno=0}^{Nv−1} and F = {F_fno : (Vi, Vj, Vk), Vi, Vj, Vk ∈ V}_{fno=0}^{Nf−1}.

For example, the tetrahedron (Nv = Nf = 4) can be represented as O = {V, F}, where V = {V0, V1, V2, V3} and F = {F0, F1, F2, F3} = {(V0, V1, V2), (V0, V2, V3), ...}. All the objects used in this project had a number of triangular faces Nf large enough, e.g. 1000, that the faceting had no visible effect on the objects' smooth surfaces.


Fig 3.4b Process to extract information from a DXF file.

X-ray image simulation

Computing an X-ray image of a B-rep object has been done by researchers in the past, e.g. [5]. For this project, however, it is not necessary to use the most sophisticated method; we only need a method that works and does not take much time to implement, because this section is not the main part of the project. We therefore developed our own simple method, which can be easily and quickly implemented and integrated into our software system.

First, let us consider the characteristics of an X-ray image. As explained in Section 1.3, we can imagine an X-ray travelling from a point source through a volume of interest. The photons of this ray are absorbed along the way as they penetrate the volume; the absorbing power is called the attenuation coefficient. The unabsorbed photons are detected by a detector. In this section we ignore scattered photons; only the principal photons that move along the ray are under consideration.

If an object is homogeneous, i.e. has the same attenuation coefficient throughout, we can approximate the density of each pixel in an X-ray image from the attenuation coefficient and the distance that the ray travels inside the object. In our simulation, the X-rays are assumed to be linearly absorbed by the objects, and noise in the projections is not included. The density of each pixel can be linearly approximated by Eq 3.4a [5]:

I(x, y) = μ d   (3.4a)

where μ is the object's linear attenuation coefficient, d is the total distance that the ray is inside the object or the volume of interest, and (x, y) is the position of the pixel, see Fig 3.4c.

Fig 3.4c X-ray image intensity of an object.

Ray equation

The main task of this section is to define the distance d. As explained earlier, the object is represented by a set of triangular faces in 3D space. We therefore have to determine the intersections between the triangular faces and the rays. First, we define a ray. Each ray is generated from a centre of projection, or X-ray source, CP_pn (Definition 2.6c) and ends at a point P on a projection plane γ_pn in the world co-ordinate system, Fig 3.4c. The ray's equation can therefore be expressed as follows:

R_pn(t_R) = CP_pn + d_R t_R   (3.4b)

where

d_R = (P − CP_pn) / |P − CP_pn|.

And from Eq 3.2j, P = (T^{w→p}_pn)^{−1} P_pn, where P_pn = [P_pn.x  P_pn.y  0  1]^T with

−width/2 ≤ P_pn.x ≤ width/2,  −height/2 ≤ P_pn.y ≤ height/2,

and height, width are the sizes of projection plane γ_pn.

Intersection of a ray and a triangular face

The next step is to find the intersection between a triangle and a ray in E³. First, consider the intersection of a ray and the plane that contains a triangular face, expressed by n · P = k, where n is the normal vector to the plane, k is a scalar and P is a point on the plane. The normal n to the plane can be determined by

n = (V1 − V0) × (V2 − V0)   (3.4c)

and the scalar k is defined by

k = n · V0.   (3.4d)

A triangular face is defined by three vertices V_vno = [v_vno.x  v_vno.y  v_vno.z]^T, vno = 0, 1, 2. If the ray is defined by Eq 3.4b, then the intersection can be determined by finding the parameter t_R that satisfies the following equation [6]:


t_R = (k − n · CP_pn) / (n · d_R),  provided n · d_R ≠ 0.   (3.4e)

If n · d_R = 0, the line and the plane are parallel and there is no unique point of intersection.
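Eqs 3.4c–3.4e translate directly into a short routine (a sketch; the function name and the epsilon tolerance are our own choices):

```python
def ray_plane_t(v0, v1, v2, cp, d_r, eps=1e-12):
    """Parameter t_R at which the ray R(t) = CP + d_R t meets the plane
    of triangle (V0, V1, V2); None if the ray is parallel to the plane."""
    sub = lambda u, v: tuple(a - b for a, b in zip(u, v))
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    cross = lambda u, v: (u[1] * v[2] - u[2] * v[1],
                          u[2] * v[0] - u[0] * v[2],
                          u[0] * v[1] - u[1] * v[0])
    n = cross(sub(v1, v0), sub(v2, v0))   # Eq 3.4c
    k = dot(n, v0)                        # Eq 3.4d
    denom = dot(n, d_r)
    if abs(denom) < eps:                  # n . d_R = 0: no unique intersection
        return None
    return (k - dot(n, cp)) / denom       # Eq 3.4e
```

For example, a ray fired from (0, 0, 2) straight down the z-axis hits the z = 0 plane at t_R = 2.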

A ray intersects a triangle in E³ if the intersection point is inside the triangle, so the problem becomes how to identify whether the intersection point is in the triangle. To solve this problem, we define a point that is certainly inside the triangle: the centroid of the triangular face, point C in Fig 3.4d. If the intersection point is represented by point A and the triangular face's vertices are V_n, n = 0, 1, 2, the following algorithm can be used to identify whether A is inside the triangular face (see also Fig 3.4d):

1. In E³, define line segment AC by the equation

P_AC(t_AC) = A + d_AC t_AC   (3.4f)

where d_AC = C − A and t_AC ∈ [0, 1].

2. Define line segments e_n, n = 0, 1, 2, by

e_n(t_en) = V_n + d_en t_en   (3.4g)

where d_en = V_(n+1) mod 3 − V_n and t_en ∈ [0, 1].

3. If point A lies on e_0, e_1 or e_2, then A is considered to be inside the triangular face. If not, move to the next step.


4. Find the intersection point between AC and e_n, n = 0, 1, 2. If AC intersects e_0, e_1 or e_2, then point A is outside the triangle. If AC intersects neither e_0, e_1 nor e_2, then point A is inside the triangle (Fig 3.4d). NB Line segment AC intersects line segment e_n if, at the intersection point, t_AC and t_en ∈ [0, 1].

All the basic geometrical methods, such as determining the intersection point of two lines in E³, can be found in Appendix D.
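The four steps above can be sketched as follows. For brevity this version works in 2D, i.e. after the coplanar points have been expressed in the plane of the face; the names and tolerances are our own:

```python
def _cross(u_x, u_y, v_x, v_y):
    # 2D cross product (signed area of the parallelogram spanned by u and v)
    return u_x * v_y - u_y * v_x

def on_segment(a, p, q, eps=1e-9):
    """Step 3: is point a on segment pq?"""
    if abs(_cross(q[0] - p[0], q[1] - p[1], a[0] - p[0], a[1] - p[1])) > eps:
        return False
    return (min(p[0], q[0]) - eps <= a[0] <= max(p[0], q[0]) + eps and
            min(p[1], q[1]) - eps <= a[1] <= max(p[1], q[1]) + eps)

def segments_intersect(p, q, a, b, eps=1e-9):
    """Do segments pq and ab meet with both parameters in [0, 1] (step 4's NB)?"""
    r = (q[0] - p[0], q[1] - p[1]); s = (b[0] - a[0], b[1] - a[1])
    denom = _cross(r[0], r[1], s[0], s[1])
    if abs(denom) < eps:                        # parallel segments
        return False
    dx, dy = a[0] - p[0], a[1] - p[1]
    t1 = _cross(dx, dy, s[0], s[1]) / denom     # parameter along pq (t_AC)
    t2 = _cross(dx, dy, r[0], r[1]) / denom     # parameter along ab (t_en)
    return 0 <= t1 <= 1 and 0 <= t2 <= 1

def inside_triangle(a, tri):
    """Steps 1-4: test point a against triangle tri = (V0, V1, V2) via its centroid C."""
    c = (sum(v[0] for v in tri) / 3.0, sum(v[1] for v in tri) / 3.0)
    edges = [(tri[n], tri[(n + 1) % 3]) for n in range(3)]            # e_n, Eq 3.4g
    if any(on_segment(a, p, q) for p, q in edges):                    # step 3
        return True
    return not any(segments_intersect(a, c, p, q) for p, q in edges)  # step 4
```

Because the centroid is strictly inside the face, the segment AC crosses an edge exactly when A lies outside.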

Fig 3.4d Intersection of a ray and a triangular face (cases: A outside, A inside).

Intersection of a ray and an object

An object is assumed to have the format described earlier, namely that its faces are triangular. We are interested in finding all the values of the parameter t_R in Eq 3.4b corresponding to all the intersection points (if there are any) of the ray with the object's surface; in other words, we are looking for the intersections of the ray with all the triangular faces of the object.

So far, the algorithm for finding an intersection point has worked for one triangular face. In an object, more than one face is connected together, and problems may arise if the ray passes through a vertex or edge of the surface (Fig 3.4e). In these cases we may decide erroneously on the number of intersections, which is used to calculate the projection intensity. For example, in Fig 3.4e(1) the ray passes through an edge of the surface, so the number of intersection points should be one; but the algorithm counts two, because the ray is considered to be inside both triangular faces F1 and F2. The same problem occurs when the ray passes through one of the object's vertices, Fig 3.4e(2): the number of intersection points is five instead of one, as it is supposed to be.

Fig 3.4e Two cases in which a ray intersects more than one triangular face.

This problem is solved by checking the distances along the ray of all the intersection points. If the distance between any pair of intersection points is equal to zero, the two points are welded together and recognised as one point.
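A minimal sketch of this welding step, working on the sorted ray parameters t_R,i (since d_R is a unit vector, equal parameters mean zero distance; the tolerance is our own assumption):

```python
def weld(t_values, eps=1e-9):
    """Merge intersection parameters closer than eps, so a hit on a shared
    edge or vertex is counted once rather than once per adjacent face."""
    welded = []
    for t in sorted(t_values):
        if not welded or t - welded[-1] > eps:
            welded.append(t)
    return welded
```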

In order to calculate the projection intensity I(x, y) due to a single object, we end up with an even number N_I of values for the parameter t_R: t_{R,i}, i = 0, 1, 2, ..., N_I − 1, see Fig 3.4f. From Eq 3.4a, the projected intensity can thus be determined by

I(x, y) = μ Σ_{i=0}^{N_I/2 − 1} d_{2i}   (3.4h)

where d_{2i} = |R_pn(t_{R,2i+1}) − R_pn(t_{R,2i})| is the distance that the ray travels inside the object between the (2i)th and (2i+1)th intersection points.

The projected intensity has to be multiplied by a weighting factor, corresponding to the angle of arrival of the sampling ray, to give the effective received intensity. The weighting factor is equal to cos θ, where θ is the angle that the ray forms with the vector z_pn (see Eq 3.2e), normal to the projection plane γ_pn, as depicted in Fig 3.4g. Therefore

I(x, y) = μ cos θ Σ_{i=0}^{N_I/2 − 1} d_{2i}   (3.4i)

where cos θ = |d_R · z_pn|.

For multiple objects, the projected density is assumed to be the sum of each object's projected density:

I(x, y) = Σ_{n=0}^{N_o − 1} I_n(x, y),  N_o = number of objects.   (3.4j)
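Putting Eqs 3.4h–3.4j together, the per-pixel intensity can be sketched as follows (function names are ours; the t values are assumed already welded and even in number):

```python
def pixel_intensity(t_values, mu, d_r, z_pn):
    """Eqs 3.4h-3.4i: I(x, y) = mu cos(theta) * sum of in-object path lengths.
    Since d_R is a unit vector, each distance d_2i is a difference of parameters."""
    ts = sorted(t_values)
    assert len(ts) % 2 == 0, "a ray enters and leaves a closed surface in pairs"
    path = sum(ts[i + 1] - ts[i] for i in range(0, len(ts), 2))
    cos_theta = abs(sum(a * b for a, b in zip(d_r, z_pn)))
    return mu * cos_theta * path

def total_intensity(per_object_intensities):
    # Eq 3.4j: the projected density is the sum over the N_o objects
    return sum(per_object_intensities)
```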

Fig 3.4f A ray always has an even number of intersection points.

Fig 3.4g A ray arrives at a projection plane at an angle θ.


There is one case in which this method may fail to produce the right pixel value. The error occurs when a ray is tangent to an object's vertex or face; this kind of point should not be counted, but we have not yet developed a method to check whether an intersection point is a tangent point. However, with the computer-generated objects used in this project, the effects of this error were not serious and could be ignored.

3.4.2 Physical objects

In the previous section, the X-ray images were purely simulated by a computer; we ignored all the noise, unwanted signals and image alignment problems that might occur in a real situation. In order to show that our method is capable of being applied to real X-ray images, and to study the problems that may arise in a real situation, we used a physical object, a plastic knee joint, as a test object for obtaining X-ray images. This object was used to build a phantom.

We designed the phantom to suit the X-ray system employed in this research. What we needed was to take a set of X-ray images of the phantom from a set of specified projection directions. The X-ray machine used could not be adjusted easily and accurately to give a precise projection direction, so it was better to fix the positions of the X-ray source and the receptor and adjust the phantom instead in order to obtain a required projection direction. For a projection plane, let the X-ray source CP_pn and the centre of the receptor OP_pn lie on the y_m-axis of a co-ordinate system (x_m, y_m, z_m) called the machine co-ordinate system, as shown in Fig 3.4h. We fixed this machine co-ordinate system as a reference and placed the 3D projection plane co-ordinate system (x_pn, y_pn, z_pn) such that the y_pn-axis is parallel to the z_m-axis in the same direction, the x_pn-axis is parallel to the x_m-axis in the opposite direction, and the z_pn-axis is parallel to the y_m-axis and passes through the origin O (see Fig 3.4h).



Fig 3.4h Co-ordinate systems in the experiment.

We fixed the phantom in the world co-ordinate system (x, y, z), which shares its origin with the machine co-ordinates; taking an X-ray image of this phantom at a specific projection direction could then be done by rotating the (x, y, z) system. In other words, we had to rotate the phantom to obtain a required projection direction.

Since the phantom had to be rotated, the best shape for the phantom is a sphere. In this project, our prototype phantom was built from a commercial globe model. We placed the phantom in the world co-ordinates (x, y, z) with the centre of the phantom at the origin, the north pole of the phantom on the positive y-axis and the south pole on the negative y-axis, Fig 3.4i. The projection directions could thus be marked on the surface of the phantom relative to the world co-ordinate system, see also Fig 3.2c. At any projection direction, the y-axis of the world co-ordinate system (the N and S poles of the phantom) was always in the y_m z_m plane, in order to make the y_pn-axis always intersect the y-axis.

119 Chapter3: Data acquisition

Fig 3.4i Phantom's shell and the world co-ordinate system.

The phantom was placed on a base with a calibrating arc C, Fig 3.4j, which is parallel to the z_m-axis. Each projection direction is marked with two points: the first point, a_0, is the point where the projection direction vector d_pn ∈ Θ intersects the phantom's surface; the second point, a_1, is on the same longitudinal line but 90° towards the S pole from point a_0. The calibrating holes h_0 and h_1 were used to position the phantom at a required projection direction such that a_0 is aligned with h_0 and a_1 is aligned with h_1.

The object was embedded inside the spherical phantom by injecting foam to fix the object's position inside the phantom. In the experiment with the X-ray machine, the phantom and its base had to be positioned such that the centre of the phantom is on the y_m-axis at origin O and the phantom's N and S poles lie on the y_m z_m plane (geometrical constraints required by the reconstruction process). To align the phantom's centre, we used an empty phantom with two copper crosses at the N and S poles; the phantom and base were moved until the centres of the two crosses were at the same positions on the X-ray image. Note that if the X-ray beam were a parallel beam, there would be no need for this alignment; but the X-ray beam is conical in the system used in this project, so the alignment process is needed to make sure that the geometric constraints in the real world satisfy the geometric constraints required by the reconstruction process in a computer. To align the phantom's N and S poles, we used the C arc as a reference: after the centre alignment had been done, the base was rotated until the C arc was parallel to the y_pn-axis. The base was then fixed for the rest of the experiment. All of these alignments were done using the real-time imaging of the X-ray machine. Fig 3.4k illustrates the position of the phantom in the X-ray machine. Note that all of these geometrical set-up procedures were required in the experiment only to let us easily know the direction and position of X-ray image γ_pn; in general, the set-up procedures are not necessary as long as the position and direction of the image are known exactly.

Fig 3.4j Phantom and base (20 cm), with calibrating holes h_0 and h_1.


Fig 3.4k Position of the phantom.

Dimensions of the X-ray machine

In this project, we have to consider two systems of dimensions. The first is in the real world, where we measured all dimensions in centimetres. In the reconstruction process, however, we measure all dimensions in a unit related to the size of an input X-ray image, which we call the pixel unit. For example, in this X-ray system with the frame grabbing system used, the 38 cm receptor generated a 512×512 pixel image, Fig 3.4l. Therefore

1 cm ≈ 14 pixels   (3.4k)

i.e. a resolution of 0.07 cm.

In the reconstruction process, we use 256×256 pixel images; all the images frame grabbed from the X-ray machine were therefore scaled down by 50%, with a grey scale of 8 bits. This results in

1 cm ≈ 7 pixels   (3.4l)
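These unit conversions are just ratios of the 38 cm receptor width to the image size in pixels (a sketch of Eqs 3.4k and 3.4l; note that 512/38 ≈ 13.5 pixels per cm, which the thesis rounds to 14):

```python
receptor_cm = 38.0

def pixels_per_cm(image_size_px):
    # Eq 3.4k / 3.4l: pixel-unit scale of an image of the 38 cm receptor
    return image_size_px / receptor_cm

def resolution_cm(image_size_px):
    # size of one pixel in centimetres
    return receptor_cm / image_size_px
```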


Fig 3.4l X-ray image dimensions: 512 × 512 pixels = 38 cm × 38 cm.

Fig 3.4m Dimensions of the X-ray machine used in this project (in pixel units unless otherwise stated).

i.e. a resolution of 0.15 cm. All the relevant dimensions of the data acquisition system can therefore be represented in pixel units, as described in Fig 3.4m.

3.5 References

[1] J. Blinn, Platonic solids, IEEE Computer Graphics & Applications, Nov 1987, 62-66.

[2] A. R. Cowen, Digital X-ray imaging, IEE Colloquium on Medical Imaging: Image Processing and Analysis, Digest No. 051, March 1992, 8/1-3.

[3] A. E. Hill, R. D. Pilkington, A Complete Guide to AutoCAD Data Book, Prentice Hall, New York, 1990.

[4] D. Raker and H. Rice, Inside AutoCAD: The Complete Guide to AutoCAD, metric edition by M. Beary, New Riders Publishing, Thousand Oaks, Calif., 1992.

[5] I. Matalas, Density determination in object-based X-ray tomography, MSc dissertation, Imperial College of Science, Technology & Medicine, London, Sept. 1992.

[6] I. O. Angell, High-Resolution Computer Graphics Using C, Macmillan, London, 1990.

[7] K. Selkirk, Longman Mathematics Handbook, Longman, Essex, 1991.

[8] D. F. Rogers and J. A. Adams, Mathematical Elements for Computer Graphics, McGraw-Hill, 1989.

[9] S. Elliott, P. Miller and G. Pyros, Inside 3D Studio Release 3, New Riders Publishing, Indianapolis, IN, 1994.

CHAPTER 4: CURVE REPRESENTATION

4.1 Introduction

In this chapter we describe how to extract and analytically define contrast curves, see Fig 4.1a. This part is not fully automatic: the user is required to make a decision at two stages, the contrast curve definition and the determination of a set of occluding contrast curves. At this stage, the processes in this chapter still require the user's intervention, but all the rules and assumptions presented would be useful for automating the process in the future.

Main original contribution

The main original contributions in this chapter can be described as follows:

* Concept of how to define contrast curves: although we use other researchers' computer programs to detect and analytically represent contrast curves, we have developed a concept for how to select and define these curves. This concept is crucial when we have to deal with complicated contrast curves.


* Object identification process: we observed the characteristics of an object's closed occluding contour, and then made an assumption that is used in our object identification process.

Fig 4.1a Flow of operations in this chapter: from Np 2D X-ray images, through image enhancement (4.2), contrast curve representation (4.3) and determination of sensible occluding contrast curves (4.4). Relevant section numbers are in parentheses; dotted boxes represent programs developed by other researchers.

4.2 Contrast curve representation

To obtain the analytically represented contrast curves from an X-ray image, we use the method and the interactive software developed by I. Matalas. All the details that we omit in this thesis can be found in [1].


As shown in Fig 4.1a, each X-ray image is enhanced prior to edge detection in order to reduce the noise level, sharpen the edges and enhance their contrast. Note that an edge in this case is a contrast curve in an X-ray image. Techniques such as adaptive smoothing and adaptive histogram equalisation were used to enhance the image. The edges were then extracted using gradient edge detectors with relaxation labelling, to reinforce the underlying boundaries whilst suppressing spurious edges due to noise. These edges are then fitted by B-spline curves, which can be represented analytically. The B-spline curves are interactively linked to form the set of contrast curves required by the next stage of the reconstruction process.

Each contrast curve C^c is represented by a sequence of 3rd-order parametric curve segments C^c = {C_sn(t); t ∈ [tb_sn, te_sn]}_{sn=0}^{N_sn−1} in E²_pn, where sn is a segment number and N_sn is the total number of curve segments in the contrast curve. C_sn(t) = Σ_{n=0}^{N} A_n t^n; N = 3 for 3rd-order parametric curves, therefore C_sn(t) = A_0 + A_1 t + A_2 t² + A_3 t³, or in matrix form:

[C_sn(t).x]   [a_0.x]   [a_1.x]     [a_2.x]      [a_3.x]
[C_sn(t).y] = [a_0.y] + [a_1.y] t + [a_2.y] t² + [a_3.y] t³

4.3 Contrast curve determination

The program by I. Matalas [1] for extracting and analytically representing contrast curves was developed for general purposes. It is flexible enough to be adapted for use in this research. The user is required to select which contrast curves are supposed to be connected and which ones are not. To do this effectively, we represent contrast curves as a graph and apply graph theory to these contrast curves. A graph is defined by Definition 4.3a.

127 Chapter4: Curve representation

Definition 4.3a In projection plane ψ_pn, a graph G is an ordered pair (N, C), where N is a nonempty set of nodes (vertices), N = {N_nn}_{nn=0}^{N_n−1}, N_n = number of nodes, and C is a set of edges or contrast curves, each of which links nodes N_nn0 and N_nn1 (N_nn0 may be equal to N_nn1), C = {C_cn}_{cn=0}^{N_c−1}, N_c = number of contrast curves [2] et al.

Our graphs always have a finite number of nodes and edges. An example of a graph is shown in Fig 4.3a, where the number of nodes N_n = 2, N = {N_0, N_1}, and the number of contrast curves N_c = 2, C = {C_0^c, C_1^c}. Each contrast curve is represented by a set of 3rd-order polynomials as explained in Section 4.2. Before we move to the next step, let us consider another definition in graph theory:

Definition 4.3b A graph G is called a planar graph if G can be drawn in the plane with its edges intersecting only at nodes of G. NB: In this research, the edges are contrast curves and the contrast curves' positions are fixed in the projection plane.

In a projection plane ψ_pn, if the network of all contrast curves is a planar graph, it means that we never allow the situation in which two contrast curve segments pass across each other without any node at the intersection, Fig 4.3b(1). In this case, the curve is broken by inserting a node at the intersection point, Fig 4.3b(2). Another example is shown in Fig 4.3c: the graph in Fig 4.3c(1) is not a planar graph, but the graph in Fig 4.3c(2) is planar.
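The node-insertion step of Fig 4.3b can be sketched for the simplest case, with straight segments standing in for the B-spline pieces. This is a hypothetical illustration, not the interactive software of [1]: the function finds the crossing point at which a new node would be inserted.

```python
def segment_intersection(p1, p2, p3, p4):
    """Return the intersection point of segments p1-p2 and p3-p4, or None.
    Where two contrast-curve segments cross, a node would be inserted at
    this point so that the resulting graph is planar (Rule 4.3a).
    Straight segments stand in for the cubic B-spline pieces here."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if den == 0:
        return None  # parallel segments: no single crossing point
    s = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    u = ((x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2)) / den
    if 0.0 <= s <= 1.0 and 0.0 <= u <= 1.0:
        return (x1 + s * (x2 - x1), y1 + s * (y2 - y1))
    return None  # the infinite lines cross outside the segments

print(segment_intersection((0, 0), (2, 2), (0, 2), (2, 0)))  # -> (1.0, 1.0)
```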

Fig 4.3a A graph.


Fig 4.3b (1) Two contrast curves pass across each other without a node. (2) Two contrast curves pass across each other with a node at the intersection.

Fig 4.3c (1) Non-planar graph. (2) Planar graph. (3) Occluding curve.

Having studied the surfaces and their associated curves as described in Section 2.4, we decided that, to cope with complex contrast curves with branches, swallowtails, butterflies etc., the best way is to represent the curves by a set of simple curve segments where each segment contains no branch, no crossing point etc. Therefore, in this research, the user must select, link or break contrast curves so that only a planar graph is created. Another reason why we have to create only planar graphs comes from the properties of surjection, or the way tangent and singular curves in E³ are projected onto a projection plane, creating the corresponding contrast curves in E². If two contrast curves intersect or connect to each other in E², it is not necessary that their corresponding tangent or singular curves also intersect or connect to each other in E³.

Considering only planar graphs gives us more freedom to deal with combinations of the graph elements both in E³ and E². In E³, we can reconstruct each curve separately and link them together to form a network of tangent and singular curves by directly considering their positions in E³, not their contrast curve positions in E² (see Chapter 5 for details of how to link these curves). In E², we have more combinations to consider. For instance, if we have to find an occluding contour of the graph in Fig 4.3c(1)-(2), we cannot represent the occluding contour by the curves of Fig 4.3c(1) as they stand; we have to break the curves and then link the proper segments together. This situation is rather difficult compared to finding the contour from the planar graph in Fig 4.3c(2). In that figure, the occluding contour can be simply represented by the curves of the planar graph.

We can summarise the rule that the user must obey when selecting, linking or breaking the contrast curves to create a graph G_pn = (N_pn, C_pn) on ψ_pn as follows:

Rule 4.3a The resulting graph must be a planar graph, and there is a node at every intersection so that no two curves cross.

Note that, according to the rule, 2nd-degree nodes are also allowed. Therefore a smooth curve may be represented by more than one curve segment in some circumstances, depending on the edge detection and edge analytic representation process employed. We thus obtain a planar graph G_pn for the objects in each projection plane ψ_pn, pn = 0, 1, 2, ..., NP−1. In practice, it is also worth breaking any non-smooth curve segment into two smooth segments at a singular point, because if the quality of an image or the edge detection process is not good enough, some segments may be defined wrongly. For example, if there is a gap between C_0^c and N_0 and a gap between C_1^c and N_0, curve C_0^c may be automatically and wrongly linked to C_1^c and become a non-smooth segment with N_0 as a singular point. This new non-smooth segment does not come from the same tangent curve; therefore, if we do not break this kind of curve up, it may cause an error in later processes. On the other hand, if we break a non-smooth curve up into two smooth segments and it turns out that the two segments come from the same tangent curve, it does not matter: the reconstructed curves of these two segments will be automatically linked up in the reconstruction process described in Chapter 5.

Although the process of defining contrast curves needs the user's intervention, the user requires no special skills other than obeying the rule. Invalid contrast curves will be automatically eliminated later.

4.4 Occluding contrast curve determination

The projection in E² of a solid object in E³ is bounded by a curve commonly referred to as the outline or (self-)occluding contour. In visual terms, occluding contours are the projections of curves on the surface of an opaque object that separate visible from invisible regions. Occluding contours are closed contours. The corresponding curves on an object's surface are called folds [3]. In an X-ray image, as described in Chapter 2, the occluding contour, or occluding contrast curve, is the outermost contrast curve of an object. All the contrast curves are the projections of the tangent curves and the singular curves of the object in E³.

We use the property that the occluding contrast curve of an object is a simple closed curve [4] to separate the objects from their surroundings and from parts within objects. A simple closed curve in E² is a curve that divides the plane into only two disjoint connected regions: the bounded interior enclosed by the closed curve and its unbounded complement [5].

In this project, we reconstruct each object separately. Thus it is essential to identify which contrast curves belong to which object. In this section our task is to identify closed occluding contrast curves from a set of contrast curve segments C_sn. If we investigate only a single object, the occluding contrast curves can be easily identified by choosing the outermost contrast curves that separate the set of contrast curves from the background, such that the rest of the contrast curves are not outside this occluding curve.


But if we investigate more than one object and do not know the exact number of objects, defining all closed occluding contrast curves of these objects is no longer a simple task.

One method is to find all possible closed contrast curves; each closed contrast curve is then paired with the closed contrast curves in the other projection planes. The closed contrast curves which are the occluding contrast curves of the same object in all projection planes can thus be matched by the pairing method that will be explained in detail in the next chapter. For example, a set of contrast curves C^c_pn of two separated bones may appear as shown in Fig 4.4a, where

C^c_pn = {C_0^c, C_1^c, C_2^c, C_3^c}.

Let us define a set of closed occluding contours:

Definition 4.4a For a projection plane ψ_pn, let OC_pn = {OC_pn,ocn}_{ocn=0}^{N_occ−1} be a set of occluding contrast curves in E²_pn, where OC_pn,ocn ⊆ C^c_pn, oc stands for occluding contrast curve, ocn = occluding contrast curve number and N_occ = number of occluding contrast curves in ψ_pn.

Therefore, in this example, we begin by defining OC_pn with all possible closed contrast curves, Fig 4.4b:

OC_pn = {OC_pn,ocn}_{ocn=0}^{5}

where each closed curve OC_pn,0, ..., OC_pn,5 is a combination of the segments C_0^c, ..., C_3^c as drawn in Fig 4.4b.

These occluding contrast curve sets are subsequently modified by the object identification process to eliminate the invalid closed curves. Thus the closed curves OC_pn,ocn, ocn = 0, 1, 2, 3, which are not valid occluding curves, are removed from the set. Only OC_pn,4 and OC_pn,5 are left, to be identified as the occluding curves of bone A and bone B respectively.

This approach employs only contrast curve information, regardless of the density or grey scale information available in an X-ray image. It needs to find all the simple closed contrast curves in the graph in every projection plane. If a closed curve does not pass the matching process, it is not a valid curve, so we throw it away and try other closed curves. We continue in this fashion until all possible closed curves have been tried. The trial and error process is theoretically possible, but it is practically difficult: in all but the smallest of graphs, there will simply be too many closed curves to try.
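To see why the exhaustive search grows quickly, one can enumerate the simple cycles (candidate closed curves) of a small undirected graph; even two triangles sharing an edge already yield three candidates. A minimal sketch, with illustrative names not taken from the thesis software:

```python
def simple_cycles(adj):
    """Enumerate the simple cycles of an undirected graph given as an
    adjacency dict {node: set(neighbours)}. Each cycle is reported once,
    starting from its smallest node. This brute-force enumeration
    illustrates why trying every closed contrast curve becomes
    impractical on anything but a small graph."""
    cycles = []

    def dfs(start, node, path):
        for nxt in sorted(adj[node]):
            if nxt == start and len(path) >= 3:
                if path[1] < path[-1]:  # fix a direction to avoid duplicates
                    cycles.append(tuple(path))
            elif nxt > start and nxt not in path:  # start stays the smallest node
                dfs(start, nxt, path + [nxt])

    for start in sorted(adj):
        dfs(start, start, [start])
    return cycles

# Two triangles sharing the edge 1-2: three simple closed curves in total.
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
print(len(simple_cycles(adj)))  # -> 3
```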

As we know, an X-ray image also contains the object's density information. Therefore, instead of using only contrast curve information, we can employ both the density information, which is represented by the grey scale of each pixel in the image, and the contrast curve information. Some researchers have tried to identify objects from this type of image by using both the curves and the density information [6] et al. However, the methods are still very limited and cannot cope very well with complicated contrast curves. We, at this stage, allow the user to intervene in the closed occluding contrast curve selection process. All the contrast curves are superimposed on the grey scale image. The user has to choose all sensible occluding contours from this image.


Although this approach is not fully automatic, it does not depend very much on the user's decision, because it is not necessary that the user selects all the correct curves. The invalid curves are automatically eliminated later. The idea is to reduce the number of occluding contours we have to check. For example, if we superimpose the contrast curves in Fig 4.4a(1) on the grey scale image as shown in Fig 4.4a(2), then the user may select OC_pn,0, OC_pn,4 and OC_pn,5 (Fig 4.4b) as sensible occluding curves. The object identification process subsequently checks only these three sensible curves instead of all six possible curves and identifies OC_pn,0 as an invalid occluding contour.

In some cases, the user may also have to link or break contrast curves in order to form a simple closed occluding contrast curve. This may occur when the quality of a 2D X-ray image is not good enough and the edge detection cannot detect the complete boundary of an object.

Fig 4.4a (1) Two bones (bone A and bone B) and their contrast curves. (2) Density profile of the two bones.


Fig 4.4b All possible closed contrast curves OC_pn,0 to OC_pn,5 of the two bones in Fig 4.4a.

4.5 References

[1] I. Matalas, Contour-based image segmentation, MPhil/PhD Transition Report, Imperial College of Science, Technology and Medicine, London, 1994.

[2] J. L. Gersting, Mathematical Structures for Computer Science, Computer Science Press, New York, 1993.

[3] J. M. H. Beusmans, Computing occluding contours using spherical images, CVGIP: Image Understanding, Vol. 53, No. 1, January 1991, 97-111.

[4] J. R. Stenstrom and C. I. Connolly, Constructing object models from multiple images, International Journal of Computer Vision, 9:3, 1992, 185-212.


[5] S. B. Tor and A. E. Middleditch, Convex decomposition of simple polygons, ACM Transactions on Graphics, Vol. 3, No. 4, October 1984, 244-265.

[6] A. Pauletti, Resolving overlapping objects in X-ray images, MSc dissertation, Imperial College of Science, Technology & Medicine, 1991.

CHAPTER 5: OBJECT SURFACE CURVE RECONSTRUCTION

5.1 Introduction

The aim of this chapter is to explain how to reconstruct a network of the tangent and singular curves of an object. Both tangent curves and singular curves are reconstructed by the same method. We reconstruct each of these curves separately and then subsequently link them together to form a network. The reconstructed object can then be represented by a wire frame model. We describe first how to deal with a single object; then the case when there are multiple objects to be reconstructed is explained. The flow of operations in this chapter is shown in Fig 5.1a.

Main original contribution

R. Benjamin, who is the principal supervisor of this project, proposed a new idea about reconstructing an object's surface points by using common tangent planes in [4], [5]. But the idea was only described in general terms; crucial detail on how to implement it had not yet been given. Therefore the main original contribution in this chapter can be described as follows:

• Determination of common tangent points (or an object's surface points): We present an effective implementation of the idea described in [4], [5]. Both parallel and conical projections are considered. We also analyse the distribution of common tangent points.

• Verification of a common tangent point: We observed and discovered some characteristics of an object's surface points. These characteristics were used to formulate an assumption crucial for the method of verifying a common tangent point. This verification method is also vital for the object identification process.

• Reconstruction of tangent and singular curves: We present a new approach to reconstruct a network of tangent and singular curves on an object's surface. This approach can cope with a complicated network by reconstructing each curve separately. These reconstructed curves are linked together to form a complete network later in 3D. This approach allows us to reconstruct a very accurate wire frame model of an object.

• Object identification: We present a novel approach for identifying an object by using the concept of an object's closed occluding contrast curves and the verification of a common tangent point. This approach allows us to effectively deal with multiple objects in one scene.


[Fig 5.1a is a flow diagram: the sets of contrast curves C^c_pn and of sensible occluding contrast curves OC_pn, pn = 0, 1, 2, ..., NP−1, in E²_pn feed object identification [5.8] and the determination of tangent points [5.3]; object identification yields NO objects, each represented by its occluding contrast curves; the sets of tangent points TP'_pn on each projection plane (X-ray image) enter the determination [5.4] and verification of common tangent points; the set of common tangent points CTP in E³ is analysed for its distribution [5.6] and used in the reconstruction of tangent and singular curves [5.7], giving the set of reconstructed tangent and singular curves RC in E³; object identification [5.8] then yields NO objects, each represented by its reconstructed tangent and singular curves O_on, on = 0, 1, 2, ..., NO−1.]

Fig 5.1a Flow of operations in this chapter. Relevant section numbers are in parentheses.


5.2 Common tangent plane

For a pair of projection planes (ψ_pn0, ψ_pn1), a common tangent plane is defined by Definition 2.6e. It is a plane that touches the object's surface or, in other words, touches the object's tangent or singular curve and, as a consequence, also touches the corresponding contrast curves in the two projection planes. This is why we use the term tangent plane, and the term common indicates that the two projection planes have this plane in common.

In parallel projection, the plane is also perpendicular to both projection planes ψ_pn0 and ψ_pn1. But in conical projection, a common tangent plane is the plane that contains the two centres of projection CP_pn0 and CP_pn1. A common tangent plane is denoted by ξ_t, and a set of common tangent planes is {ξ_t}_{t=0}^{N_t−1}, where N_t is the number of common tangent planes found in this pair.

The common tangent planes are employed as the main tool to obtain an object's surface information. We use the concept of the common tangent plane to identify an object and determine the object's surface points.

In this section, we explore the plane's geometrical details. A common tangent plane in parallel projection is explained first; the more complicated plane in conical projection is subsequently described. Let us begin with the geometrical details of a projection plane. ψ_pn can be defined by the following equation:

d_pn • Q = k_pn   (5.2a)

140 Chapter5: Object surfacecurve reconstruction here k,,, is a scalar,dp,, F- 0 is a projectiondirection which is perpendicularto the plane.Q= [q. x q.y q.z]T E E' is a generalpoint on the plane.Scalar kp,, can be obtainedfrom

kp= dpo OPF (5.2b) where OP, Er is the centreof in E' and canbe defineby using Eq 3.2f

5.2.1 Parallel projection

As we mentioned earlier, in parallel projection the two properties of a common tangent plane are (Fig 5.2a):

1. The plane touches the object's tangent or singular curve.
2. The plane is normal to both projection planes in the pair.

The first property may be expressed alternatively as: the plane must touch the object's surface. The latter property comes from the definition that the plane that touches the surface at a point must also touch the associated contrast curves at the projections of the point on the two projection planes in the pair, Fig 5.2b.

Fig 5.2a Common tangent plane ξ_t in parallel projection.


Fig 5.2b Tangent planes touch an object's surface (in parallel projection).

According to Definition 2.6c, the two projection planes are never parallel, i.e. d_pn0 ≠ c d_pn1 for any real number c. Therefore the line of intersection, or common axis L_ca, always exists, and this axis is perpendicular to the normals of both ψ_pn0 and ψ_pn1. Thus the unit direction of this line can be obtained from the following equation:

d_ca = (d_pn0 × d_pn1) / |d_pn0 × d_pn1|   (5.2b)

Note that vector d_ca is also the normal vector of the common tangent plane ξ_t. Therefore ξ_t can be defined by

d_t • Q_t = k_t, where d_t = d_ca   (5.2c)


Point Q_t is a point on ξ_t, and scalar k_t depends on where the plane touches the object's surface. Intuitively, we can imagine the plane moving along the common axis, searching for touching points with the surface, as k_t varies.

To clarify this concept, Fig 5.2b is used as an example. ξ_t0 moves down along L_ca and touches the surface at P_1. Note that, as a result, ξ_t0 also touches the two tangent curves C_0^t, C_1^t and their corresponding contrast curves C_0^c, C_1^c. Thus the plane ξ_t0 can be defined by using Eq 5.2c:

d_t0 • Q_t0 = k_t0   (5.2d)

where k_t0 = d_t0 • P.

Point P is either P_0, P_1 or P_2. Then the plane moves further down and touches the surface at another point P_4. In the same fashion the plane ξ_t1 can be expressed by the equation:

d_t1 • Q_t1 = k_t1   (5.2e)

where k_t1 = d_t1 • P, and point P is either P_3, P_4 or P_5.

In this example, there are two touching points in the pair. In general, the more complicated the shape, the larger the number of touching points.

5.2.2 Conical projection

In conical projection, the main concept is still the same as in parallel projection, but let us reconsider the second property of the parallel projection case: the plane need not be perpendicular to the two projection planes. By definition, the plane must touch the object's surface and must also contain the projections of the touching point. Consider Fig 5.2c: P_0 is the touching point and its projections are P_1, P_2. CP_pn0 and CP_pn1 are the centres of projection of ψ_pn0 and ψ_pn1 respectively. The common tangent plane ξ_t must contain CP_pn0, CP_pn1, P_0, P_1 and P_2. Intuitively, we can think of a common tangent plane as a plane that rotates about the axis CP_pn0 CP_pn1 searching for the touching points (Fig 5.2d), as in the case of parallel projection. A common tangent plane ξ_t can be generally defined by the following equation [A2] et al.:

((CP_pn0 − CP_pn1) × (P − CP_pn1)) • (Q_t − CP_pn1) = 0   (5.2f)

where Q_t is any point on the plane ξ_t and P is the touching point P_0 or one of its projections (P_1, P_2).
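The plane equation above (Eq 5.2f) is a scalar triple product, so its normal and offset follow from one cross product. A minimal numpy sketch, with illustrative names and sample points chosen only for the demonstration:

```python
import numpy as np

def common_tangent_plane(cp0, cp1, p):
    """Normal n and offset k of the plane through the two centres of
    projection CP_pn0, CP_pn1 and a touching point P (Eq 5.2f):
    ((CP_pn0 - CP_pn1) x (P - CP_pn1)) . (Q - CP_pn1) = 0,
    rewritten as n . Q = k with n the cross product and k = n . CP_pn1."""
    cp0, cp1, p = (np.asarray(v, dtype=float) for v in (cp0, cp1, p))
    n = np.cross(cp0 - cp1, p - cp1)
    return n, n @ cp1

# Two centres of projection on the z axis and a touching point on the
# x axis (assumed layout, for illustration only):
n, k = common_tangent_plane([0, 0, 5], [0, 0, -5], [1, 0, 0])
# The touching point itself satisfies the plane equation n . Q = k:
print(abs(n @ np.array([1.0, 0.0, 0.0]) - k) < 1e-12)  # -> True
```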

Fig 5.2c Touching point and its projections (in conical projection).


Fig 5.2d Common tangent planes in conical projection.

5.3 Determination of tangent points

We now use the concept of the common tangent plane to find an object's surface points. In this section we consider only a single object. As described in the last chapter, the object's information is now represented by a set of contrast curves C^c_pn in the 2D projection plane co-ordinate system E²_pn of each projection plane, pn = 0, 1, ..., NP−1. Our task at this stage is to deduce the points on the object's tangent or singular curves in E³ by using the contrast curves C^c_pn in E²_pn. Each contrast curve segment C_sn in C^c_pn is dealt with separately, one at a time.

Let us use an example that presents a sphere as the object of interest. Fig 5.3a shows the object and its projection system. In this case the object's contrast curve on projection plane ψ_pn is represented by two curve segments C_0^c, C_1^c ∈ C^c_pn in E²_pn. These two contrast curves are the projections of the tangent curve segments C_0^t, C_1^t on the object's surface. For simplicity, we choose C_0^c as an example curve to deduce the object surface points on tangent curve C_0^t by using the common tangent plane concept.


Fig 5.3a Tangent curves and contrast curves of a spherical object.

5.3.1 Parallel projection

A common tangent plane is created by pairing two projection planes. The common tangent plane in this pair is perpendicular to both projection planes and perpendicular to the intersection line, as described in the last section. The two tangent points TP_0, TP_1 in Fig 5.3b, on planes ψ_pn0, ψ_pn1 respectively, are the points of contact between the contrast curves and the two common intersection lines l_0 and l_1, where the common intersection lines are the intersection lines between the common tangent plane and the two projection planes.


Fig 5.3b Determination of an object's tangent and common tangent points by using a common tangent plane.

Let us begin with the common axis of a pair of projection planes ψ_pn0, ψ_pn1, which can be obtained from the following equation:

P_ca(t) = B_ca + d_ca t,  t ∈ R   (5.3a)

where P_ca = [p_ca.x p_ca.y p_ca.z]^T ∈ E³ is a point on the axis at t. Vector d_ca is the direction vector of the axis defined by Eq 5.2b, and B_ca is a point in E³ defined as follows:

[ d_pn0.x  d_pn0.y  d_pn0.z ]          [ k_pn0 ]
[ d_pn1.x  d_pn1.y  d_pn1.z ]  B_ca  = [ k_pn1 ]   (5.3b)
[ Δ_0      Δ_1      Δ_2     ]          [ 0     ]

where

Δ_0 = d_pn0.y d_pn1.z − d_pn0.z d_pn1.y
Δ_1 = d_pn0.z d_pn1.x − d_pn0.x d_pn1.z
Δ_2 = d_pn0.x d_pn1.y − d_pn0.y d_pn1.x

and d_pn = (d_pn.x, d_pn.y, d_pn.z)^T, pn = pn0, pn1.
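The 3×3 system of Eq 5.3b (the two plane equations plus a row orthogonal to the axis direction, which picks out the point of L_ca nearest the origin) can be solved directly. A minimal numpy sketch; the function name is illustrative, not from the thesis software:

```python
import numpy as np

def common_axis_point(d0, k0, d1, k1):
    """A point B_ca on the common axis of the planes d0 . Q = k0 and
    d1 . Q = k1, solving the two plane equations together with
    d_ca . B_ca = 0 (the cross-product row of Eq 5.3b)."""
    d_ca = np.cross(d0, d1)
    A = np.array([d0, d1, d_ca], dtype=float)
    return np.linalg.solve(A, np.array([k0, k1, 0.0]))

# The planes x = 1 and y = 2 intersect in the vertical line (1, 2, z);
# the selected axis point is (1, 2, 0):
print(common_axis_point([1, 0, 0], 1.0, [0, 1, 0], 2.0))
```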

A tangent point TP may be obtained by calculating, in E³, the point of contact between the intersection line l and the contrast curve C^c directly. But C^c is represented in E²_pn, and a transformation of this plane curve in E²_pn into a space curve in E³ is more complicated than the transformation of the common axis L_ca in E³ into a line in E²_pn. Moreover, the geometric operations in E³ are much more costly and more complicated than those in E²_pn. Therefore, to define the tangent points, everything will be handled in E²_pn. Once all the tangent points are defined, a projection or a transformation is used to transform these points into E³ in order to deduce the common tangent points (Definition 2.6.3b) on the object's surface.

It can be easily seen that both l_0, l_1 are also perpendicular to L_ca. This leads to the conclusion that there is no need to calculate the two tangent points in E³. We then transform L_ca into E²_pn. In each projection plane, the tangent point is therefore simply the point on a contrast curve whose tangent line is normal to the projection of L_ca.

Let us express the projections of L_ca, TP and l in E²_pn as L'_ca, TP' and l' respectively. In practice, it is not necessary to algebraically define l, its projections and the touching point between l' and C^c. Only the tangent angle θ of l', which is a constant in each pair of projection planes, is sufficient for determining a tangent point TP'. θ is defined in E²_pn with respect to the x'_pn axis in the anticlockwise direction. Thus TP' is a point on C^c that has a tangent angle equal to θ. We can easily obtain θ by using the projected common axis L'_ca, which is defined as follows.

L'_ca = Π^{w→p}_pn(L_ca)   (5.3c)

It should be noted that L_ca already lies on projection plane ψ_pn. Therefore the projection function Π^{w→p}_pn only performs a transformation of L_ca in E³ into L'_ca in E²_pn. Referring to Eq 5.3a:

P'_ca(t) = B'_ca + d'_ca t   (5.3d)

P'_ca is a point on the axis in E²_pn. B'_ca is expressed in matrix form as

B'_ca = [b'_ca.x]
        [b'_ca.y]

which is the x and y components of the following matrix:

[b'_ca.x]                [b_ca.x]
[b'_ca.y] = T^{w→p}_pn   [b_ca.y]
[b'_ca.z]                [b_ca.z]
[   1   ]                [   1  ]


where B_ca is expressed in homogeneous form. The direction vector d'_ca is also presented in matrix form as

d'_ca = [d'_ca.x]
        [d'_ca.y]

which is the first two components of the following matrix:

[d'_ca.x]                [d_ca.x]
[d'_ca.y] = T^{w→p}_pn   [d_ca.y]
[d'_ca.z]                [d_ca.z]
[   1   ]                [   1  ]

where d_ca is expressed in homogeneous form. The transformation matrix T^{w→p}_pn is the 4×4 matrix defined by Eq 3.2h.

l' is normal to L'_ca. Hence, as shown in Fig 5.3c, the tangent angle θ is easily obtained by

θ = θ_ca + π/2   (5.3e)

where θ_ca is the angle in radians between L'_ca and the x'_pn axis in the anticlockwise direction. Thus tangent point TP' is the point on C^c that has the tangent tan θ. As described in the previous chapter, C^c is represented by a sequence of 3rd-order parametric curve segments C^c = {C_sn(t); t ∈ [tb_sn, te_sn]}_{sn=0}^{N_sn−1} in E²_pn, where sn is a segment number and N_sn is the total number of curve segments in the contrast curve.


C_sn(t) = Σ_{n=0}^{N} A_n t^n   (5.3f)

N = 3 for 3rd-order parametric curves, therefore

C_sn(t) = A_0 + A_1 t + A_2 t² + A_3 t³   (5.3g)

or in matrix form

[C_sn(t).x]   [a_0.x]   [a_1.x]     [a_2.x]      [a_3.x]
[C_sn(t).y] = [a_0.y] + [a_1.y] t + [a_2.y] t² + [a_3.y] t³

Thus, at a tangent point,

(dC_sn(t).y/dt) / (dC_sn(t).x/dt) = (a_1.y + 2a_2.y t + 3a_3.y t²) / (a_1.x + 2a_2.x t + 3a_3.x t²) = tan θ   (5.3h)

which can be rewritten in quadratic form as

(a_1.x tan θ − a_1.y) + 2(a_2.x tan θ − a_2.y) t + 3(a_3.x tan θ − a_3.y) t² = 0.   (5.3i)

If t is a real root of Eq 5.3i in the range [tb_sn, te_sn], then a tangent point is obtained by substituting t into Eq 5.3g: TP' = C_sn(t).
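The root-finding step of Eq 5.3i can be sketched end to end: given the cubic's coefficients and tan θ, solve the quadratic and keep the parameter values inside the segment's range. This is an illustrative implementation, not the thesis software, and the coefficient layout is an assumption.

```python
import numpy as np

def tangent_parameters(ax, ay, tan_theta, tb, te):
    """Parameter values t in [tb, te] at which the cubic segment
    x(t) = ax[0] + ax[1] t + ax[2] t^2 + ax[3] t^3 (and likewise y(t))
    has tangent angle theta, i.e. real roots of Eq 5.3i:
    (a1.x tanT - a1.y) + 2(a2.x tanT - a2.y) t + 3(a3.x tanT - a3.y) t^2 = 0."""
    c0 = ax[1] * tan_theta - ay[1]
    c1 = 2.0 * (ax[2] * tan_theta - ay[2])
    c2 = 3.0 * (ax[3] * tan_theta - ay[3])
    # np.roots expects highest-degree coefficient first
    roots = np.roots([c2, c1, c0]) if c2 != 0 else np.roots([c1, c0])
    return [float(t.real) for t in roots
            if abs(t.imag) < 1e-9 and tb <= t.real <= te]

# x = t, y = t^2 (a parabola): dy/dx = 2t, so tan(theta) = 1 at t = 0.5.
print(tangent_parameters([0, 1, 0, 0], [0, 0, 1, 0], 1.0, 0.0, 1.0))  # -> [0.5]
```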


Fig 5.3c Common intersection line and tangent angle in parallel projection.

5.3.2 Conical projection

In conical projection, we can imagine a common plane rotating about a line CP_pn0 CP_pn1 which joins the two centres of projection CP_pn0 and CP_pn1 in a pair of projection planes ψ_pn0, ψ_pn1. A tangent point is the point of contact between a contrast curve and the plane.

To implement this method, many approaches can be employed. We can define a common plane ξ that rotates about line CP_pn0 CP_pn1 and then find a tangent point, with the whole scheme performed in E³; but this involves complicated computation. For example, a contrast curve C^c is originally represented in E²_pn, so a transformation has to be performed to determine C^c in E³. By this approach, it is difficult to solve all the relevant equations analytically. Therefore an iteration method has to be applied to determine a point of contact between this curve and the common plane. First, a common plane ξ must be defined at a certain position (because ξ is any plane that contains line CP_pn0 CP_pn1). Then find an intersection point between ξ and C^c. If the point is not tangential to ξ, a new common plane ξ at a different position has to be defined, until the intersection point is tangential to ξ. The process goes on until all tangent points are found. This is not very economical as far as computational cost is concerned. In another alternative, instead of transforming C^c into E³, we calculate a common intersection line l between ξ and projection plane ψ_pn in E³, then transform l into l' in E²_pn. If the intersection point between l' and C^c is not tangential, a new ξ must be defined, and the process goes on as explained earlier. This alternative, again, is not very attractive in terms of computational cost.

A simpler and more effective approach can be obtained by using the characteristic of a common intersection line l in each projection plane in a pair. As illustrated in Fig 5.3d, it is obvious that, under the condition imposed by Definition 2.6c, line CP_pn0 CP_pn1 always intersects both projection planes ψ_pn0 and ψ_pn1. The intersection point on ψ_pn in the pair is denoted by CR_pn. As the common plane ξ rotates about line CP_pn0 CP_pn1, line l rotates about point CR_pn on plane ψ_pn. Intuitively, we can think of l as a line rotating in order to search for a tangent point on curve C^t in E³. Let CR'_pn be the point CR_pn in E²_pn; we also know that l' rotates about CR'_pn. We can use this characteristic to easily define a tangent point without iteration.


Fig 5.3d Common intersection line l in conical projection (top view).

To describe the method in detail, we begin with the line CP_pn0 CP_pn1, which can be defined by the parametric equation Eq 5.3j:

P_R(t) = B_R + d_R t,  t ∈ R   (5.3j)

where B_R = CP_pn0 or CP_pn1, and the direction vector

d_R = CP_pn0 − CP_pn1.


Therefore an intersection point CR_pn between this line and a plane ψ_pn can be defined by

CR_pn = B_R + d_R (k_pn − d_pn • B_R) / (d_pn • d_R),  pn = pn0, pn1   (5.3k)

where k_pn is the scalar obtained from Eq 5.2b and d_pn ∈ Θ is the projection direction, which is perpendicular to plane ψ_pn. In this case, d_pn • d_R is never equal to zero because the line and the plane are never parallel.
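The line-plane intersection of Eq 5.3k is a one-line computation. A minimal numpy sketch with an illustrative function name and sample geometry chosen only for the demonstration:

```python
import numpy as np

def rotation_centre(b_r, d_r, d_pn, k_pn):
    """Intersection CR_pn of the line P(t) = B_R + d_R t (through the two
    centres of projection) with the projection plane d_pn . Q = k_pn,
    as in Eq 5.3k."""
    b_r, d_r, d_pn = (np.asarray(v, dtype=float) for v in (b_r, d_r, d_pn))
    denom = d_pn @ d_r
    # denom is never zero here: the line and the plane are never parallel
    return b_r + d_r * (k_pn - d_pn @ b_r) / denom

# A line along z through (0, 0, 5) meets the plane z = 0 at the origin:
print(rotation_centre([0, 0, 5], [0, 0, -1], [0, 0, 1], 0.0))
```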

The point CR_pn can be transformed into E²_pn by the same principle described in Eq 5.3c and Eq 5.3d:

CR'_pn = Π^{w→p}_pn(CR_pn) = [cr'_pn.x]
                             [cr'_pn.y]   (5.3l)

which is the first two components of the following matrix:

[cr'_pn.x]                [cr_pn.x]
[cr'_pn.y] = T^{w→p}_pn   [cr_pn.y]
[cr'_pn.z]                [cr_pn.z]
[   1    ]                [   1   ]

where CR_pn is expressed in homogeneous form.

The fact that line l' rotates about CR'_pn implies that line l' always passes through point CR'_pn. Therefore, at any tangent point TP', line l' is the line that contains both TP' and CR'_pn, as in the example illustrated in Fig 5.3e.


Fig 5.3e Tangent point in conical projection.

As previously mentioned, a closed contrast curve C^c is represented by a sequence of parametric curve segments C^c = {C_sn}. The tangent point TP' is on both curve segment C_sn and line l'. Let Eq 5.3f represent the curve segment, and let l' be a parametric line defined by the following equation:

P_r(t_r) = CR'_pn + d_r t_r,  t_r ∈ R.   (5.3m)

Therefore, at tangent point TP',

P_r(t_r) = C_sn(t),  t ∈ [tb_sn, te_sn].   (5.3n)

If d_r = [d_r.x d_r.y]^T and C_sn(t) is expressed by Eq 5.3g, then the above equation can be rewritten as

cr'_pn.x + d_r.x t_r = a_0.x + a_1.x t + a_2.x t² + a_3.x t³   (5.3o)


and

cr'_pn.y + d_r.y t_r = a_0.y + a_1.y t + a_2.y t² + a_3.y t³.   (5.3p)

Also, at a tangent point TP',

d_r.y / d_r.x = tan θ   (5.3q)

and substituting Eq 5.3h into this equation yields

d_r.y / d_r.x = (a_1.y + 2a_2.y t + 3a_3.y t²) / (a_1.x + 2a_2.x t + 3a_3.x t²).   (5.3r)

Solving the three equations (Eq 5.3o, p and r) for t, we obtain a quartic polynomial equation:

A_0 + A_1 t + A_2 t² + A_3 t³ + A_4 t⁴ = 0. (5.3s)

The coefficients A_0 to A_4 are defined as:

A_0 = a_0.y a_1.x − CR'_pn.y a_1.x + CR'_pn.x a_1.y − a_0.x a_1.y
A_1 = 2a_0.y a_2.x − 2CR'_pn.y a_2.x + 2CR'_pn.x a_2.y − 2a_0.x a_2.y
A_2 = a_1.y a_2.x − a_1.x a_2.y + 3a_0.y a_3.x − 3CR'_pn.y a_3.x + 3CR'_pn.x a_3.y − 3a_0.x a_3.y
A_3 = 2a_1.y a_3.x − 2a_1.x a_3.y
A_4 = a_2.y a_3.x − a_2.x a_3.y


The solution of this equation gives four roots. Only real roots in the range [tb_n, te_n] are considered. Substituting each of these roots for t in Eq 5.3g gives a tangent point TP'.

Note that Eq 5.3s is not valid if

a_1.x + 2a_2.x t + 3a_3.x t² = 0, (5.3t)

or when P is a vertical line. In this case, parameter t may be found directly from the solution of Eq 5.3t. This equation gives two roots; again, only real roots in the range [tb_n, te_n] are considered. A tangent point can then be determined as described earlier.

Note that, to acquire all the tangent points, we proceed in the same fashion for all N_p(N_p − 1)/2 pairs of projection planes. There are many ways of finding the roots of a polynomial; in this thesis, we use the Eigenvalue Method and the computer programs described in [1].
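Since only the real roots of Eq 5.3s inside [tb_n, te_n] matter, a dependency-free sketch (names and tolerances are my own, not the thesis code) can simply scan the interval for sign changes and refine each bracket by bisection. Unlike the eigenvalue method used in the thesis, this sketch misses double roots where the polynomial touches zero without changing sign.

```python
# Sketch: real roots of A0 + A1 t + A2 t^2 + A3 t^3 + A4 t^4 (Eq 5.3s)
# restricted to [tb, te], found by sampling plus bisection.

def quartic(coeffs, t):
    """Evaluate the quartic in Horner form."""
    A0, A1, A2, A3, A4 = coeffs
    return A0 + t * (A1 + t * (A2 + t * (A3 + t * A4)))

def roots_in_range(coeffs, tb, te, samples=200, tol=1e-12):
    roots = []
    step = (te - tb) / samples
    a = tb
    fa = quartic(coeffs, a)
    for i in range(1, samples + 1):
        b = tb + i * step
        fb = quartic(coeffs, b)
        if fa == 0.0:                      # sample landed exactly on a root
            roots.append(a)
        elif fa * fb < 0.0:                # sign change: one root in (a, b)
            lo, hi, flo = a, b, fa
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                fm = quartic(coeffs, mid)
                if flo * fm <= 0.0:
                    hi = mid
                else:
                    lo, flo = mid, fm
            roots.append(0.5 * (lo + hi))
        a, fa = b, fb
    return roots

# Example: t^4 - 1 has real roots -1 and 1; only t = 1 lies in [0, 2].
print(roots_in_range([-1.0, 0.0, 0.0, 0.0, 1.0], 0.0, 2.0))
```

Each returned root corresponds to one candidate tangent point via Eq 5.3g.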

5.4 Determination of common tangent points

All we have so far is a set of tangent points in the 2D projection plane co-ordinate systems E²_ψpn. From this information, we have to deduce their associated common tangent points in the 3D world co-ordinate system E³. In this part, the calculations must be performed in E³. Unlike the last section, where all calculations were performed in E²_ψpn, here all relevant geometric objects, e.g. points and lines, must be transformed into E³; we employ the transformations described in Section 3.2.2. Once we have transformed all the relevant points, the next step is to select or match the tangent points in the two planes of a pair in order to reconstruct a common tangent point.


5.4.1 Tangent points matching

For convenience, let us begin with an example of tangent points in a pair of projection planes. Let C^c denote a contrast curve. Fig 5.4a illustrates this example in conical projection.

Fig 5.4a Common tangent planes and tangent points.

First, we consider common tangent plane ξ_1. Apart from touching the two tangent points TP_2, TP_4, ξ_1 also touches the tangent curves at a point on the surface called a common tangent point. A common tangent point can be determined if there is at least one tangent point on each projection plane in a pair.

From the example, it is obvious that we should match TP_2 with TP_4 in order to deduce a common tangent point in E³. But in the previous section we calculated the tangent points on each projection plane separately: the method has not produced any information showing which point on one projection plane lies on the same common tangent plane as which points on the other plane.


In fact, a common tangent plane can be treated as an imaginary plane, and it is not necessary to define this plane algebraically. We use the fact that if a tangent point is in a common tangent plane, then its projector lines are also in the same common tangent plane and intersect each other. A projector line is defined by the function Λ^{p→w}_ψpn as stated in Definition 2.6i, and this function is explored in greater detail later. We use the condition that the two projectors intersect each other in the tangent point matching process. This condition can be expressed as Definition 5.4a.

Definition 5.4a  In Ψ, for a pair of projection planes (ψ_pn0, ψ_pn1), two tangent points, TP_a ∈ TPP_pn0 on ψ_pn0 and TP_b ∈ TPP_pn1 on ψ_pn1, are said to be preliminarily matched if their projector lines L_a ∈ L_pn0 and L_b ∈ L_pn1 intersect one another.

Note that we use the term preliminarily because the condition stated above is not sufficient for matching tangent points in some special cases. For instance, ξ_1 in Fig 5.4a contains only TP_2 and TP_4, which are matched because L_2 intersects L_4; a common tangent point can be clearly deduced from this pair of tangent points. But on ξ_0 there are four tangent points, TP_0, TP_1, TP_3 and TP_5. The projector of TP_0 intersects both the projectors of TP_3 and TP_5, and the same situation occurs for the projector of TP_1. An ambiguity arises, and there are several ways to match these points. By using Definition 5.4a to preliminarily match them, we end up with TP_0 matched with both TP_3 and TP_5, and similarly for TP_1. The problem is that we do not know, at this stage, which pair will give a valid common tangent point. This problem is dealt with later in this thesis.

5.4.2 Determination of common tangent points

In this section, we divide the problem into two cases: the case when a common tangent plane contains one common tangent point, and the case when a common tangent plane contains more than one common tangent point (but not an infinite number of points).

Case 1 One common tangent point on a common tangent plane

In this case, a common tangent plane touches the object's surface at only one point, and there is only one tangent point on each projection plane. It is therefore quite straightforward to see that the common tangent point is the intersection point of the two tangent points' projectors.

Referring to Fig 5.4b, N_TP,pn0 = N_TP,pn1 = 1, where N_TP,pn is the number of tangent points on ψ_pn, and the two projector lines L_2 and L_4 are

L_2 = Λ^{p→w}_ψpn0(TP_2, CP_pn0, d_pn0) (5.4a)

and

L_4 = Λ^{p→w}_ψpn1(TP_4, CP_pn1, d_pn1). (5.4b)

Before these two lines are described further, let us illustrate the projector line function Λ^{p→w}_ψpn in detail. If P' is a point in E³ on plane ψ_pn, the projector line L is defined by

L = Λ^{p→w}_ψpn(P', CP_pn, d_pn). (5.4c)

This line can be divided into two cases: parallel projection and conical projection. In parallel projection, the line contains point P' and has d_pn as its direction vector. In conical projection, the line contains points P' and CP_pn. The projector line can be expressed in parametric form as:


P_L(t_L) = P' + d_L t_L,   t_L ∈ ℝ. (5.4d)

In this equation, P_L(t_L) represents a point on line L at parameter t_L, and d_L is a direction vector defined by

d_L = d_pn in parallel projection;
d_L = (CP_pn − P') / |CP_pn − P'| in conical projection.

Thus, from Eq 5.4a and 5.4b, lines L_2 and L_4 can be rewritten as

P_Li(t_Li) = TP_i + d_Li t_Li,   i = 2, 4. (5.4e)

At the intersection point between these two lines, P_L2(t_L2) = P_L4(t_L4). The method to solve this equation and find an intersection point can be found in Appendix D. However, in practice, it is unlikely that the two lines will exactly intersect; there is instead a small gap between the two closest points on the two lines. The distance between these two points, which can be determined by Eq D14 and Eq D15 in Appendix D, is the shortest distance between the two lines. Therefore, the common tangent point can be approximated by the middle point between these two points.


Fig 5.4b One common tangent point on a common tangent plane.

Case 2 More than one common tangent point on a common tangent plane

This is a particular case to which we must pay special attention. We use the example in Fig 5.4c to explore it. On common tangent plane ξ_1, there are two tangent points on each projection plane. According to Definition 5.4a, TP_1 is preliminarily matched with both TP_3 and TP_5, because L_1 intersects both L_3 and L_5; the same situation also occurs for tangent point TP_0. Each intersection point is found in the same fashion as described in the previous section, giving four preliminarily matched pairs of tangent points. There are two possibilities: either all the pairs can be used to deduce corresponding common tangent points in E³, or only some of them can; the latter means that some pairs generate invalid common tangent points. As described in Chapter 2, the ambiguity arises because an X-ray projection is a surjection, which is not a one-to-one mapping: one or more common tangent points in E³ can be projected onto the same tangent point in E²_ψpn. Therefore, each of the four intersection points P_n, n = 0, 1, 2, 3, has an equal chance of being a common tangent point based only on the information we have at this stage. To resolve this ambiguity, further information is required. We use Assumption 2.6d to verify each intersection point. For example, if point P_0 satisfies the condition in this assumption, P_0 is a common tangent point and TP_0, TP_3 are said to be matched (not only preliminarily matched). The verification process is described in detail in the next section.

Fig 5.4c More than one common tangent point on a common tangent plane.

5.5 Verification of a common tangent point

We observed some characteristics of an object's surface points, and these characteristics were used to formulate Assumption 2.6c and Assumption 2.6d. The latter assumption is used to check whether an intersection point between two projectors is a common tangent point on the object's surface: a point is a common tangent point if its projection on every projection plane lies inside or on the object's occluding closed curve on that plane. In this section, we describe the implementation of the assumption and the statement that were symbolically expressed in Chapter 2.


Let point P be an intersection between two projector lines in E³, and let P'_pn be its projection on projection plane ψ_pn:

P'_pn = Π^{w→p}(P, CP_pn, d_pn, θ_pn), (5.5a)

as defined by Definition 2.6.2b.

We explore this geometric projection Π^{w→p}_ψpn in detail using the examples in Fig 5.5a (parallel projection) and Fig 5.5b (conical projection). The projector line L can be determined by the following equation:

P_L(t_L) = P + d_L t_L, (5.5b)

where P is a point on L, and

d_L = d_pn in parallel projection;
d_L = (CP_pn − P) / |CP_pn − P| in conical projection, (5.5c)

where d_pn ∈ Θ is the projection direction and CP_pn ∈ Γ is the centre of projection of ψ_pn. In parallel projection, point P'_pn can be determined by transforming the intersection point of L and ψ_pn in E³ into the 2D projection plane co-ordinate system E²_ψpn. Or we simply transform the point into the 3D projection plane co-ordinate system E³_ψpn:

P_ψpn = T^{w→p}_ψpn P = [p_ψpn.x  p_ψpn.y  p_ψpn.z]^T,

where the transformation matrix T^{w→p}_ψpn is defined by Eq 3.2h. Thus P'_pn in E²_ψpn is obtained by eliminating the z component of P_ψpn:

P'_pn = [p_ψpn.x  p_ψpn.y]^T.

In conical projection, we have to determine the point of intersection between L and ψ_pn in order to obtain the projection. At the point of intersection,

t_L = (k_pn − d_pn · P) / (d_pn · d_L), (5.5d)

where d_pn · d_L ≠ 0 and k_pn is defined by Eq 5.2b. Substituting t_L from this equation into Eq 5.5b gives the projection of P in E³. This point can then be transformed into point P'_pn in E²_ψpn by using transformation matrix T^{w→p}_ψpn as explained earlier.
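Both projection cases reduce to intersecting a line with the plane d_pn · X = k_pn. A minimal sketch (my own naming; the 2D transform T^{w→p}_ψpn of Eq 3.2h is omitted, so the result stays in E³):

```python
# Sketch of Eq 5.5b-d: project a world point P onto the plane
# d_pn . X = k_pn, either along the fixed direction d_pn (parallel
# projection) or along the ray towards the centre of projection CP_pn
# (conical projection).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project_point(p, d_pn, k_pn, cp=None):
    if cp is None:                        # parallel projection
        d_l = d_pn
    else:                                 # conical projection
        diff = [c - a for c, a in zip(cp, p)]
        norm = dot(diff, diff) ** 0.5
        d_l = [x / norm for x in diff]
    t = (k_pn - dot(d_pn, p)) / dot(d_pn, d_l)   # Eq 5.5d; d_pn . d_l != 0
    return [a + t * x for a, x in zip(p, d_l)]
```

For example, with the plane z = 0 (d_pn = (0, 0, 1), k_pn = 0), the parallel projection of (1, 2, 3) is (1, 2, 0), while the conical projection towards a centre at (0, 0, 5) is (2.5, 5, 0).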

Fig 5.5a Projection of a point in parallel projection.


Fig 5.5b Projection of a point in conical projection.

Point location problem

According to Statement 2.6d, we need to check whether the point P' is outside the associated occluding contrast curve. We deal with this as a general case: on ψ_pn, given a point P' and a closed curve C in E²_ψpn, check whether P' is not outside C.

Let us consider the three possible cases shown in Fig 5.5c-e. Point P' in Fig 5.5c is on C, so it is obvious that P' is not outside C. In Fig 5.5d, a horizontal line L_h or a vertical line L_v is created; both lines pass through P'. If P' is inside C, the number of intersection points of L_v and C on the upper side or on the lower side of P' must be odd. In the example, there are three intersection points (C, E and F) on the upper side and one point (D) on the lower side. Similarly, if L_h has been created, the number of intersection points on the left or on the right side of P' must be odd. There is a special case when the line touches the closed curve, as shown in Fig 5.5e: the number of intersection points is no longer necessarily odd, even though point P' is inside the curve. We deal with this ambiguity by discarding any touching (tangent) point; we simply do not count this kind of point. Therefore, there is only one intersection point (B) on the right side. The procedure to check not outside on ψ_pn can be summarised as follows:


1. Check: if point P' is on closed curve C, then P' is considered as being not outside C.

2. If P' is not on C, then define a horizontal line L_h that passes through P', or a vertical line L_v that passes through P' (just one line; it does not matter whether it is horizontal or vertical).

3. Find the intersection points of the line and C.

4. If the horizontal line was defined, then count the number of all the intersection points (do not count any tangent point) on the left or on the right side. If the number of intersection points is odd (zero is regarded as an even number), then P' is not outside; otherwise P' is outside. If the vertical line was defined, we proceed in the same fashion but count the points on the upper or the lower side.

In practice, a closed curve C is a closed occluding contrast curve C^c ∈ OCC_ψpn, where OCC_ψpn is the set of closed occluding contrast curves described in the previous chapter. Each contrast curve is defined by a sequence of 3rd order polynomial curve segments C^c = {C_n}, with C_n(t), t ∈ [tb_n, te_n]. The procedure can therefore be rewritten with algebraic details as follows:

1. Check every curve segment in OCC_ψpn. If we find a segment such that

C_n(t) = [c_n(t).x  c_n(t).y]^T = [a_0.x  a_0.y]^T + [a_1.x  a_1.y]^T t + [a_2.x  a_2.y]^T t² + [a_3.x  a_3.y]^T t³ = P', (5.5e)

with t ∈ [tb_n, te_n], then P' is on, and is therefore not outside, OCC_ψpn.


2. If P' is not on OCC_ψpn, then define a horizontal line L_h:

P_Lh(t_Lh) = P' + d_Lh t_Lh, (5.5f)

where d_Lh = [1  0]^T, or a vertical line L_v:

P_Lv(t_Lv) = P' + d_Lv t_Lv, (5.5g)

where d_Lv = [0  1]^T.

3. Find the intersection points by checking every segment of OCC_ψpn. This can be done easily because the line is either horizontal or vertical; therefore, instead of solving two independent equations, solving one equation is enough. This is one reason why we use only horizontal or vertical lines. If the horizontal line was defined, then we solve the following equation for t:

p'.y = a_0.y + a_1.y t + a_2.y t² + a_3.y t³. (5.5h)

If t ∈ [tb_n, te_n], then the line and the curve intersect at point P = [c_n(t).x  p'.y]^T. If the vertical line was defined, then solve

p'.x = a_0.x + a_1.x t + a_2.x t² + a_3.x t³ (5.5i)

for t. If t ∈ [tb_n, te_n], then the line and the curve intersect at point P = [p'.x  c_n(t).y]^T.

4. If the horizontal line was defined, count the number of intersection points P = [p.x  p.y]^T on the left side (counting only the points that have p.x < p'.x) or on the right side (counting only the points that have p.x > p'.x). If the number of intersection points is odd, then P' is not outside OCC_ψpn. If the vertical line was defined, we proceed in the same fashion, but on the upper side we count only the points that have p.y > p'.y, or on the lower side only the points that have p.y < p'.y. Note that if an intersection point P is also a tangent point, i.e.

d c_n(t).y / d c_n(t).x = tan θ, (5.5j)

where θ = 0 for the horizontal line and θ = π/2 for the vertical line, we do not count this type of point.

Now we have a method to verify a common tangent point. Returning to the example in Fig 5.4c, we use this verification method to select the common tangent points from the four intersection points P_0, ..., P_3. A common tangent point is a point whose projection on every projection plane is not outside the object's closed occluding contrast curve on that projection plane.

Fig 5.5c Point location problem: a point is on a curve.


Fig 5.5d Point location problem: P' is not outside C and no tangent point is involved.

Fig 5.5e Point location problem: a tangent point is involved.

5.6 Distribution of tangent points

To find all common tangent points, we pair all the projection planes. If there are N_p projection planes, there are N_p(N_p − 1)/2 pairs. In each pair, a set of common tangent points is found from their corresponding tangent points on the contrast curves. Let us recall the distribution of an object's vertices in Section 2.6. Consider a contrast curve C^c on projection plane ψ_pn: all tangent points on this curve are determined by pairing this projection plane with the rest. These tangent points are called plane tangent points on C^c, and the set of these points is represented by PTP_pn, as defined by Definition 2.6k. In this section, we explore in detail how these plane tangent points are distributed on a contrast curve. Then we discuss the distribution of the common tangent points on an object's surface. For simplicity, we consider parallel projection first and then take into account the effects of conical projection.

Let us consider an arc between two consecutive tangent points on a contrast curve C^c. As defined in Definition 2.6m, the mean curvature is the quantity κ = δθ/δs, where δs is the length of the arc of the curve and δθ is the angle between the two tangents at either end of the arc, i.e. at the two tangent points.

5.6.1 Uniformly distributed projection directions

Parallel projection

To study the distribution of the tangent points, we begin with the characteristics of a set of tangent lines TL_pn (Definition 2.6l) on plane ψ_pn. For simplicity, we first explore, as an example, the case when N_p = 3; the details of the projection planes can be obtained by using a cube, as shown in Appendix A. Let us consider the first projection plane ψ_0. A tangent line on this plane is a line of intersection L between a common tangent plane ξ and ψ_0 (Fig 5.6a). In this case, all the tangent lines in pair (ψ_0, ψ_1) are parallel to L_0, and those in pair (ψ_0, ψ_2) are parallel to L_1. Only the angles of these lines are under our consideration; therefore, lines L_0 and L_1 can be plotted such that both pass through the origin. The angle between these two lines is δθ, which is π/2. We then proceed in the same fashion with the cases of larger N_p, up to 12. The set of tangent lines in the case of N_p = 4 is shown as an example in Fig 5.6b, and the rest of the cases, with numerical details, are shown in Appendix A. We can conclude that if the projection directions are uniformly distributed, then the tangent lines' angles are spread uniformly over 2π (Assumptions 2.6e and f), with the gap between two tangent lines being equal to δθ. The angle δθ can be obtained from

δθ = π / (N_p − 1), (5.6a)

where N_p is the number of projection planes.
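Combining Eq 5.6a with the mean-curvature relation κ = δθ/δs gives the spacing δs = δθ/κ between consecutive plane tangent points. A small numerical sketch (assuming, for illustration, constant curvature over the arc; names are my own):

```python
# Sketch of Eq 5.6a and the spacing relation delta_s = delta_theta / kappa:
# adding projection planes shrinks delta_theta and hence the spacing
# between consecutive plane tangent points everywhere on the curve.
import math

def delta_theta(n_planes):
    """Angular gap between tangent lines (Eq 5.6a)."""
    return math.pi / (n_planes - 1)

def spacing(n_planes, kappa):
    """Arc length between consecutive plane tangent points (kappa > 0)."""
    return delta_theta(n_planes) / kappa

# On a circle of radius r, kappa = 1/r, so the spacing is r * delta_theta.
```

For instance, with N_p = 3 the gap is π/2, while with N_p = 5 and κ = 2 the spacing drops to π/8.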

Fig 5.6a Characteristics of tangent lines on a projection plane when N_p = 3.
(1) Set of projection planes. (2) Pair ψ_0, ψ_1 and tangent line L_0. (3) Pair ψ_0, ψ_2 and tangent line L_1. (4) Tangent lines on ψ_0. (5) Distribution of the tangent lines.

Fig 5.6b Distribution of tangent lines when N_p = 4.


Now we know that if the projection directions are spread uniformly over the 4π or 2π solid angle, then the tangent lines' angles on each projection plane are also spread uniformly over 2π. But what does this tell us about the distribution of the tangent points? Let us consider a curve segment with no inflection point and no singular point, as shown in Fig 5.6c. If N_p = 4, the plane tangent points in PTP_pn on this curve are TP_n, n = 0, 1, 2, 3, and the angle between the tangents at any two consecutive plane tangent points, such as TP_0, TP_1 or TP_1, TP_2, is equal to δθ, which is π/3. This is also true for any position and direction of the curve (see Fig 5.6d). So what does this mean? Consider the definition of the mean curvature again, κ = δθ/δs, where δθ is a constant. Therefore δs decreases when κ increases: the higher the curvature, the shorter the distance between two plane tangent points. This is exactly what we need: the tangent points cluster closely in highly curved parts (high κ) and spread widely in smooth, flat parts (low κ) of the contrast curve. We call this property the optimum distribution of plane tangent points, which can be seen more clearly from the graph in Fig 5.6e. In this graph, δs is proportional to 1/κ and the slope is equal to δθ. In terms of the number of projection planes, which directly affects δθ: the larger the number of projection planes, the smaller the slope, i.e. the smaller the distance between each pair of consecutive tangent points. It also means that the more projection planes, the more tangent points.

Fig 5.6c Distribution of plane tangent points.


Fig 5.6d Distribution of plane tangent points as in Fig 5.6c with a different object position.

Fig 5.6e Graph showing the relationship between the distance between two plane tangent points and the inverse curvature.

So far, we have paid attention only to the distribution on a contrast curve segment with no singular or inflection points. In the case of a singular point, the angle between the two consecutive plane tangent points with a singular point in between is no longer equal to δθ. We can study this case by separating the segment into two segments at the singular point; each segment then becomes a smooth segment of the kind studied earlier. Fig 5.6f illustrates an example of this case, with a singular point between plane tangent points TP_0 and TP_1. The same applies where there is an inflection point between two consecutive plane tangent points and the inflection point itself is not a tangent point. The angle between the two plane tangent points is then not equal to δθ but equal to zero, i.e. κ = 0. We can think of this segment as two normal segments of the kind studied previously, as shown in Fig 5.6g, where the middle point is an inflection point.

Fig 5.6f Contrast curve with a singular point.

Fig 5.6g Contrast curve with an inflection point.

Therefore, we can conclude that if the projection directions are uniformly spread over the 4π or 2π solid angle, then the plane tangent points on a contrast curve on a projection plane are distributed optimally. The distribution depends only on the curvature of the contrast curve, regardless of the direction, position or size of the curve.

The distribution of plane tangent points on projection plane ψ_pn is in the 2D projection plane co-ordinate system E²_ψpn. To determine the distribution of the common tangent points on a tangent curve in the 3D world co-ordinate system E³, we have to deduce the distribution in E³ from the distribution in E²_ψpn. The two distributions are the same if we assume that the tangent or singular curve in E³ is parallel to the projection plane. As shown in Fig 5.6h, tangent curve C^t is parallel to ψ_pn and, as a result, is parallel to contrast curve C^c. The distribution of the tangent points TP_n, n = 0, ..., 3, is the same as the distribution of the common tangent points CTP_n, n = 0, ..., 3. This means the common tangent points are also distributed optimally along C^t in E³. And because a tangent curve lies on an object's surface, and the distribution of the tangent curves on the surface also depends on the set of projection directions, it follows that the common tangent points are also distributed optimally over the surface. In other words, the common tangent points cluster closely on highly curved parts of the surface and spread widely on smooth, flat parts.

Fig 5.6h Distribution of common tangent points when the tangent curve is parallel to a projection plane.

Conical projection

Let us recall the geometry of conical projection, as shown in Fig 5.6i. Radius r stands for the radius of the region of interest on a projection plane ψ_pn. The tangent angle varies from φ to φ ± Δφ, where φ is the tangent angle in parallel projection. The deviation Δφ is equal to tan⁻¹(r/a).


The effects of conical projection on the distribution of plane tangent points can be illustrated by the example in Fig 5.6j. The optimum distribution in parallel projection is not necessarily optimum in conical projection: the greater Δφ, the greater the effect on the distribution. In this project, we assume that the projection distance is large, d_pn >> r, which results in a >> r. Therefore Δφ is small, and the distribution is still assumed to be optimum.

Fig 5.6i Geometry of conical projection and the deviation of the tangent angle.

Fig 5.6j Plane tangent point distributions in parallel projection and in conical projection.


5.6.2 Non-uniformly distributed projection directions

In this case, we cannot predict the distribution of either the plane tangent points in E²_ψpn or the common tangent points in E³. The angle δθ is not a constant. The distribution depends on how the projection directions are spread, and also on the object's position and direction.

5.7 Reconstruction of tangent and singular curves

In this section, we explain how to reconstruct tangent curves and singular curves from their corresponding contrast curves and common tangent points. As described in Chapter 4, each contrast curve contains no branch and no self-crossing. We begin with the reconstruction of a tangent or singular curve from each contrast curve separately; then we link all these reconstructed tangent and singular curves in E³ to form a network. In an X-ray image, we do not know which contrast curve is the projection of a tangent curve and which is the projection of a singular curve; therefore, we reconstruct both tangent and singular curves using the same method.

5.7.1 Reconstruction of a tangent or singular curve

Suppose we have a contrast curve C^c, a set of plane tangent points TP_n, n = 0, 1, 2, and a set of common tangent points CTP_n, n = 0, 1, 2, as shown in Fig 5.7a. Our aim is to reconstruct the corresponding tangent curve C^t. We have to deduce the tangent curve segment between each pair of consecutive common tangent points. For example, segment CTP_1 CTP_2 must be deduced from segment TP_1 TP_2 in Fig 5.7b. The approximation can be done in many ways, such as finding the best projection of curve segment TP_1 TP_2 in E²_ψpn into E³. But in this project, we simplify the problem by representing the curve segment TP_1 TP_2 by three straight lines. This approach is called the three-line approximation of a curve. An optimum way to represent a curve by three lines is illustrated in Fig 5.7c.

Fig 5.7a Plane tangent points on contrast curve C^c and the corresponding common tangent points on tangent curve C^t.

The following algorithm is used to determine the three-line approximation of a curve:

1. Find the point P_0 on the curve segment TP_1 TP_2 whose tangent is equal to d.y/d.x, where

d = [d.x  d.y]^T = (TP_2 − TP_1) / |TP_2 − TP_1|

is the direction vector of line TP_1 TP_2, by using the same approach we used to find a tangent point in parallel projection (see Section 5.3).

2. Define line L_3, which is normal to line TP_1 TP_2 and passes through P_0, by the parametric equation

P_L3(t_L3) = P_0 + d_L3 t_L3, (5.7a)

where

d_L3 = [−d.y/d.x  1]^T if d.x ≠ 0, and d_L3 = [1  0]^T otherwise.

3. Find the intersection point P_3 between line L_3 and line TP_1 TP_2. (The method to find this intersection point can be found in Appendix D.)

4. Points P_4 and P_5 can be found by

P_4 = (P_3 + TP_1) / 2 (5.7b)

and

P_5 = (P_3 + TP_2) / 2. (5.7c)

5. Define lines L_4 and L_5 as

P_L4(t_L4) = P_4 + d_L3 t_L4 (5.7d)

and

P_L5(t_L5) = P_5 + d_L3 t_L5. (5.7e)

6. The intersection points P_1 and P_2 of lines L_4 and L_5 with the curve can be determined by solving the following equation for each n = 4, 5:

[p_n.x + d_L3.x t_Ln   p_n.y + d_L3.y t_Ln]^T = [a_0.x  a_0.y]^T + [a_1.x  a_1.y]^T t + [a_2.x  a_2.y]^T t² + [a_3.x  a_3.y]^T t³, (5.7f)

where the right-hand side is the 3rd order polynomial C(t) representing curve segment TP_1 TP_2, as described earlier.

7. Determine line L_6 by

P_L6(t_L6) = P_1 + d_L6 t_L6, (5.7g)

where

d_L6 = [d_L6.x  d_L6.y]^T = (P_2 − P_1) / |P_2 − P_1|,

and line L_7, which is parallel to L_6 and passes through P_0, by the equation

P_L7(t_L7) = P_0 + d_L6 t_L7. (5.7h)

8. Find the intersection point P_6 between lines L_7 and L_4, and the intersection point P_7 between lines L_7 and L_5.

9. The points SSP_0' and SSP_1' can then be obtained as

SSP_0' = [ssp_0'.x  ssp_0'.y]^T = (P_6 + P_1) / 2 (5.7i)

and

SSP_1' = [ssp_1'.x  ssp_1'.y]^T = (P_7 + P_2) / 2. (5.7j)


Fig 5.7b Contrast curve segment TP_1 TP_2 and the corresponding tangent curve segment CTP_1 CTP_2.

Fig 5.7c Three-line approximation of a curve.

At this stage, contrast curve segment TP_1 TP_2 is approximately represented by three straight lines: TP_1 SSP_0', SSP_0' SSP_1' and SSP_1' TP_2. We project these lines into E³ to represent the corresponding tangent curve segment CTP_1 CTP_2. Points TP_1 and TP_2 are the projections of common tangent points CTP_1 and CTP_2 respectively. Our problem is how to find the points, say SSP_0 and SSP_1, that have SSP_0' and SSP_1' as their projections on projection plane ψ_pn. Both SSP_0 and SSP_1 must lie somewhere on projector lines PL_0 and PL_1 respectively (Fig 5.7d), where

PL_n = Λ^{p→w}_ψpn(SSP_n', CP_pn, d_pn),   n = 0, 1.

A tangent or singular curve C^t in E³ is not necessarily a planar curve, but if we assume that any curve segment between two common tangent points is a planar curve, then


the tangent curve segment CTP_1 CTP_2 lies on the same plane that contains points CTP_1 and CTP_2. Hence, points SSP_0 and SSP_1 are also on this plane. The following algorithm is used to find points SSP_0 and SSP_1 in E³ (see also Fig 5.7e, f):

1. Transform points TP_1, SSP_0', SSP_1' and TP_2 from the 2D projection plane co-ordinate system E²_ψpn into the world co-ordinate system E³ by using Eq 3.2j.

2. Determine projector lines PL_0 and PL_1 as

P_PLn(t_PLn) = SSP_n' + d_n t_PLn,   n = 0, 1, (5.7k)

where

d_n = d_pn, d_pn ∈ Θ, in parallel projection;
d_n = (CP_pn − SSP_n') / |CP_pn − SSP_n'|, CP_pn ∈ Γ, in conical projection.

3. Define plane A, which contains TP_1, CTP_1 and CTP_2, by the equation n_A · P_A = k_A.

4. Define plane B, n_B · P_B = k_B, which is normal to plane A (n_B · n_A = 0) and contains points CTP_1 and CTP_2. Plane B is the plane on which the tangent curve segment CTP_1 CTP_2 is assumed to lie.

5. Find the intersection points between plane B and projector lines PL_0 and PL_1, which are SSP_0 and SSP_1 respectively.

N.B. All the details of how to define a plane equation, find the intersection between a line and a plane, etc. can be found in Appendix D.
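Steps 3 to 5 can be sketched with elementary vector algebra (a hypothetical mini-implementation of my own, not the Appendix D code): the normal of plane A is a cross product of two in-plane vectors, the normal of plane B is n_A × (CTP_2 − CTP_1) so that plane B is perpendicular to plane A while still containing the chord, and each SSP is a line-plane intersection.

```python
# Sketch of steps 3-5: construct plane B and intersect a projector with it.

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def plane_b(tp1, ctp1, ctp2):
    """Plane containing ctp1, ctp2 and normal to plane A = (tp1, ctp1, ctp2)."""
    n_a = cross(sub(ctp1, tp1), sub(ctp2, tp1))   # normal of plane A
    n_b = cross(n_a, sub(ctp2, ctp1))             # normal of plane B
    return n_b, dot(n_b, ctp1)                    # plane: n_B . X = k_B

def line_plane(p, d, n, k):
    """Intersection of line p + t*d with plane n . X = k (n . d != 0)."""
    t = (k - dot(n, p)) / dot(n, d)
    return [a + t * x for a, x in zip(p, d)]
```

By construction, plane B contains CTP_1 and CTP_2 and satisfies n_B · n_A = 0, as required by step 4.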


Fig 5.7d Points and projector lines.

Fig 5.7e Reconstruction of a tangent curve segment in parallel projection.

Fig 5.7f Reconstruction of a tangent curve segment in conical projection.


The tangent curve segment CTP_1 CTP_2 is therefore represented by the three lines CTP_1 SSP_0, SSP_0 SSP_1 and SSP_1 CTP_2. We proceed in the same fashion with almost all the remaining segments of curve C^t, except the first and the last segments. These two segments are a special case: if the beginning point or the end point of C^t in E³ is not a common tangent point, we have to find the position of this point first. For example, in Fig 5.7a, to reconstruct the first segment V_b CTP_0 or the last segment CTP_2 V_e, we assume that each of these two segments is coplanar with its adjacent segment. Hence, for instance, tangent curve segment V_b CTP_0 lies on the same plane, say plane C, as segment CTP_0 CTP_1. Point V_b is the projection of V_b' onto plane C. The three-line approximation can then be applied to this segment as normal, as shown in Fig 5.7g.

Fig 5.7g The first tangent curve segment reconstruction.

There is one situation with which the previous algorithm cannot cope: when there is an inflection point on a contrast curve segment between two plane tangent points. In this case, we divide the segment into two segments at the inflection point (Fig 5.7h), and the three-line approximation algorithm is applied to each segment separately. Note that the two segments are assumed to be coplanar; hence, the inflection point in E³ can be obtained by projecting the inflection point on plane ψ_pn onto plane B.


Fig 5.7h Reconstruction of a curve with an inflection point.

So far, we have presented methods to obtain an object's surface points and to reconstruct tangent curve segments on the object's surface. These surface points can be categorised into two types:

• Primary surface points, which are the common tangent points.

• Secondary surface points, which are the points generated by the three-line approximation algorithm.

These surface points are linked together to form a set of reconstructed tangent and singular curves. We define this set by the following definition.

Definition 5.7a A set of reconstructed tangent and singular curves of an object, obtained from the three-line approximation method using the contrast curves on projection plane ψ_pn, is defined by

RC^ts_pn = {RC^ts_tpn}, tpn = 0, ..., N_ts − 1, where N_ts is the number of reconstructed tangent curves.

And RC^ts_tpn = {V_b, CTP_0, SSP_{0,0}, SSP_{0,1}, CTP_1, ..., CTP_{N_cs−1}, V_e} is a sequence of surface points of the tangent or singular curve, where N_cs is the number of curve segments that are represented by three lines. The order of surface points in the sequence is the same as the order of their corresponding points on the contrast curve, as illustrated in Fig 5.7i. Note that the term reconstructed is used to distinguish our approximations from a set of real tangent or singular curves C^ts_pn.


Fig 5.7i Order of the points.

Special case

There is a special case to be considered. One contrast curve may be the projection of many tangent curves. An example is shown in Fig 5.7j. Contrast curve C^cs is the projection of two tangent curves C^ts_n, n = 0, 1. Hence there exists at least one tangent point on C^cs that belongs to more than one common tangent point; e.g. TP in Fig 5.7j is the projection of both CTP_0 on C^ts_0 and CTP_1 on C^ts_1. To cope with this case, we define the common tangent points on each of the tangent curves as follows.

Let TP_i, i = 0, 1, 2, ..., N − 1, be the N tangent points along contrast curve C^cs, and CTP_{i,j}, j = 0, ..., M_i − 1, be the M_i common tangent points that have TP_i as their common projection. Let d_{i,j} denote the distance between TP_i and CTP_{i,j}, with d_{i,j} < d_{i,j+1}; see Fig 5.7k. If M_i > 1 for some i = 0, 1, ..., N − 1, then the number of reconstructed curves is equal to the maximum of M_i, written max M, and MI is the set of i such that M_i = max M. The reconstructed curves are RC^ts_k, k = 0, ..., max M − 1, where

RC^ts_k = {CTP_{i,n}}, i = 0, ..., N − 1.


To define n for each CTP_{i,n}, we use the following algorithm (written in pseudocode based on the C language):

    set a = m
    while (a ∉ MI) {
        if (m ≠ N − 1)
            increase a by 1, or a = a + 1
        else
            decrease a by 1, or a = a − 1
    }
    /* now a ∈ MI */
    for (b = 0 to M_m − 1) {
        find b such that |d_{m,b} − d_{a,k}| is minimum
    }

Therefore n = b.

Note that this method is also capable of dealing with the complicated situation in which two or more tangent or singular curves share one or more common tangent points, as in the example shown in Fig 5.7l.


Fig 5.7j Two tangent curves are projected onto the same contrast curve.

Fig 5.7k The tangent and common tangent points.


Fig 5.7l Two reconstructed tangent curves share the same common tangent point CTP_{i,1}, where the conditions are: max M = 3, |d_{i,0} − d_{i+1,0}| > |d_{i,1} − d_{i+1,0}| and |d_{i,0} − d_{i+1,2}| > |d_{i,1} − d_{i+1,2}|.

5.7.2 Tangent or singular curve linking

As explained in Chapter 4, on a projection plane ψ_pn all contrast curves in C_pn are planar graphs, and each curve has only two nodes, a beginning node and an end node, with no junction or branch. A tangent or singular curve RC^ts is reconstructed from each contrast curve C^cs ∈ C_pn separately. In this section, we discuss how to link all of these reconstructed tangent and singular curves together to form a network.

The algorithm to link these curves is rather straightforward. The details are as follows:

1. Let pn = 0 and ψ_0 be a reference projection plane. Begin with end point V_b of RC^ts_0 ∈ RC^ts_pn; then, from the rest of the points, find the point or points V_b and/or V_e of RC^ts_f, f = 1, ..., N_ts − 1, whose distances to V_b of RC^ts_0 are less than, say, δ.

2. If such points are found, then weld these points together to become only one welded point; see the example in Fig 5.7m. The position of this welded point is the centroid of all the points that are welded together.

191 Chapter5: Objectsurface curve reconstruction

3. Repeat the procedure with the end points (V_b or V_e) that have not yet been welded. If there are some end points that are not close enough to any other points, just leave them.

4. Repeat the whole procedure for the rest of the projection planes, ψ_pn ∈ T, pn = 1, 2, ..., N_pn − 1.

NB Welding the beginning and end points of the same curve segment is allowed in this algorithm.

For example, tangent curve C^ts_0 produces two contrast curves C^cs_0 and C^cs_1 on the projection plane in Fig 5.7n. C^cs_0 is used to reconstruct RC^ts_0 (Fig 5.7o) and C^cs_1 is used to reconstruct RC^ts_1 (Fig 5.7p). We can clearly see the effect of surjection at junction point P'. The same point on the projection plane may be generated from many points at different positions in E³. This is one reason why we broke a connected contrast curve into segments at junction points and handled them separately. Using the linking algorithm, V_b of RC^ts_0 is welded with V_b of RC^ts_1, and likewise V_e of RC^ts_0 with V_e of RC^ts_1. Therefore RC^ts_0 and RC^ts_1 are linked to represent C^ts_0.

Fig 5.7m Reconstructed tangent or singular curve linking.


Note that a common tangent point CTP is a junction point between two tangent curves. Therefore the common tangent points automatically link all the reconstructed tangent and singular curves together to form a wire-frame model of the object of interest. Only the information on the positions of all surface points, and on how these points are linked, is available at this stage. The method to define the surface from this wire-frame is discussed in the next chapter.

Fig 5.7n Tangent and contrast curves of a bean-shaped object.

Fig 5.7o First reconstructed tangent curve.


Fig 5.7p Second reconstructed tangent curve.

5.8 Multiple objects and object identification

5.8.1 Object identification

So far we have concentrated only on the case in which there is only one object in the volume of interest. This leads to the conclusion that all the contrast curves on all projection planes belong to one object. If there is more than one object in the volume of interest, then the situation is more complex. We have to identify which contrast or tangent curve belongs to which object in order to reconstruct each object separately and correctly. This problem was mentioned in Section 2.5.4, and in this section we discuss in detail how to identify objects from their contrast curves on projection planes.

We beginwith the following assumption.

Assumption 5.8a There always exists a closed occluding contrast curve of the projection of an object on any projection plane ψ_r, r ∈ T.

We use this assumption and the property of a common tangent plane mentioned in Section 2.5.4 to match a closed occluding contrast curve in one projection plane to closed occluding contrast curves on other projection planes. The closed occluding contrast curves that belong to the same object can be matched in every pair of projection planes. Two occluding contrast curves on different projection planes in the pair are said to be matched if each pair of their extreme tangent points on the different projection planes lies on the same common tangent plane. In the example shown in Fig 5.8a, closed occluding contrast curve OC^cs_0 on ψ_0, which belongs to object O, is matched with curve OC^cs_1 on ψ_1, not OC^cs_2 on ψ_1, because the extreme tangent points TP_1 on ψ_0 and TP_2 on ψ_1 are on the same common tangent plane φ_0, and likewise TP_0 on ψ_0 and TP_3 on ψ_1 are both on φ_1. On each plane, the two extreme tangent points are the two tangent points on the curve that have the maximum y' and minimum y' with respect to the y'-axis in the projection plane, where the y'-axis is parallel to the common axis of the pair. This property can be used to formulate the following assumption.

Assumption 5.8b If the closed occluding contrast curves on all projection planes belong to the same object, then these occluding contrast curves can be matched in every pair of projection planes.

In Chapter 4, we already defined a set of sensible closed occluding contrast curves on each projection plane as OC_pn = {OC^cs_{c,pn}}, c = 0, ..., N_oc − 1, pn = 0, ..., N_pn − 1 (see Definition 4.3a).


Fig 5.8a Object identification by using common tangent planes.

The following algorithm was developed to identify objects from their closed occluding contrast curves and to discard the invalid closed occluding contrast curves (those that do not belong to any object). The ideas underlying this algorithm are based on Assumptions 5.8a and 5.8b. We want to identify which closed occluding contrast curve belongs to which object. To make the algorithm easily understood, we present each step together with explanations and examples wherever necessary.

1. Begin with curve OC^cs_{0,0} ∈ OC_0 on ψ_0; call this curve the reference curve.

2. Test whether this reference curve can be matched with any of the curves in OC_pn, pn = 1, 2, 3, ..., N_pn − 1. Create the sets of curves that are matched with the reference curve: MOC_pn = {MOC_{c,pn}}, c = 0, ..., N_match,pn − 1, for pn = 1, 2, 3, ..., N_pn − 1, where N_match,pn is the number of curves on ψ_pn that are matched with the reference curve, and MOC_{c,pn} ∈ OC_pn is a curve that is matched with the reference curve.

3. If there exists MOC_pn = ∅ for some pn = 1, 2, ..., N_pn − 1, then the reference curve is not valid, i.e. it does not belong to any object (Assumption 5.8b); jump to step 8. Otherwise the reference curve is valid; move to step 4.


4. The maximum possible number of objects to which the reference curve may belong is equal to max N_o = N_match,1 × N_match,2 × ... × N_match,Npn−1.

5. Create a tree of all possible sets of curves that could belong to all the possible objects, as shown in Fig 5.8b. Each node represents a matched curve, and each layer is the projection plane that contains the matched curves. There are max N_o paths to travel down from the top layer ψ_0 to the bottom layer ψ_{Npn−1}. This means we have max N_o sets of curves, each of which potentially belongs to an object. The reason for creating the tree is the surjection property: one occluding curve may belong to more than one object, as shown in Fig 5.8c. There are many combinations of these curves to be matched, and hence it is convenient to represent all the combinations by the tree. NB Algorithms to handle such a tree have been discussed extensively in the literature; see [2], et al.

Fig 5.8b Tree of occluding curves.


Fig 5.8c (1) Many objects can generate the same occluding contrast curve. (2) and (3) An example showing that two bones can share the same contrast curves at a projection angle.

6. For each path, or set of curves, such as the example in Fig 5.8d, use Assumption 5.8b to check whether all of the curves in the set really belong to the same object, by pairing all the curves in the set except the reference curve ((N_pn − 1)(N_pn − 2)/2 pairs). If there exists even one pair whose two curves do not match, then this set is invalid. Otherwise an object is identified by this set of occluding contrast curves. Therefore at this stage an object O_ono is represented by its occluding contrast curves on all projection planes:

O_ono = {reference curve} ∪ {OC^cs_pn, pn = 1, ..., N_pn − 1},

where the OC^cs_pn are the curves in the path and ono is the object number.

Fig 5.8d A path in an occluding contrast curve tree.


7. Find more objects by repeating step 6 until all the paths have been checked and all the objects that share the same reference curve are identified.

8. Change the reference curve, then repeat the whole process until all the closed occluding contrast curves in OC_0 on ψ_0 have been used as reference curves.

9. Regard the occluding curves that have never been successfully matched as invalid closed occluding contrast curves and discard them.

To illustrate this algorithm, we use the following example. Let us consider a set of four projection planes T = {ψ_pn}, pn = 0, ..., 3, and on each plane a set of occluding contrast curves: OC_0 = {OC^cs_{c,0}} on ψ_0, OC_1 = {OC^cs_{c,1}} on ψ_1, OC_2 = {OC^cs_{c,2}} on ψ_2 and OC_3 = {OC^cs_{c,3}} on ψ_3.

1. Begin with OC^cs_{0,0} ∈ OC_0 on ψ_0 as a reference curve.

2. Pair this reference curve with the rest of the curves on the rest of the projection planes, for instance:


pair of planes      matched curves MOC_pn found
(ψ_0, ψ_1)          OC^cs_{0,1}, OC^cs_{2,1}
(ψ_0, ψ_2)          OC^cs_{4,2}, OC^cs_{6,2}
(ψ_0, ψ_3)          OC^cs_{1,3}

In this case, MOC_pn ≠ ∅ for pn = 1, 2, 3; therefore the reference curve is valid. We then proceed to the next step.

4. The maximum number of objects that may possibly be found at this stage is max N_o = 2 × 2 × 1 = 4.

5. Create a tree as shown in Fig 5.8e. There are four sets of occluding curves, i.e. four paths from the top to the bottom of the tree. These sets are {OC^cs_{0,0}, OC^cs_{0,1}, OC^cs_{4,2}, OC^cs_{1,3}}, {OC^cs_{0,0}, OC^cs_{0,1}, OC^cs_{6,2}, OC^cs_{1,3}}, etc. Each set may represent an object.

Fig 5.8e An example of an occluding contrast curve tree.

6. In each set, check the validity by pairing the curves in the set. The number of pairs we have to check is three. For set {OC^cs_{0,0}, OC^cs_{0,1}, OC^cs_{4,2}, OC^cs_{1,3}}, the pairs are (OC^cs_{0,1}, OC^cs_{4,2}), (OC^cs_{0,1}, OC^cs_{1,3}) and (OC^cs_{4,2}, OC^cs_{1,3}). If every pair is matched, then this set is valid and represents an object: O_0 = {OC^cs_{0,0}, OC^cs_{0,1}, OC^cs_{4,2}, OC^cs_{1,3}}.

7. We then proceed in the same fashion with the rest of the sets.

8. Change the reference curve to OC^cs_{1,0}, then repeat the whole process again.

NB Suppose we are dealing with the images of two simple bones. The contrast curves of these bones on ψ_pn are shown in Fig 5.8f. A curve such as OC^cs_3 is not an occluding curve of either of those bones. It will not pass the matching process, and the algorithm will therefore automatically define it as an invalid curve.

Fig 5.8f Invalid occluding contrast curve.

This algorithm has been designed to cope with the worst case in identifying objects from contrast curves. Every possible pair of curves is checked to make sure that an object will not be represented by an invalid set of occluding contrast curves, because in some cases, in some pairs of projection planes, it is possible for a valid occluding contrast curve to be matched with an invalid one. If this situation occurred, it would cause errors in the later reconstruction process. To clarify this point, let us consider the contrast curves of a sharp-edged dimple, Fig 5.8g. In some pairs, such as the one shown in Fig 5.8h, invalid curve OC^cs_0 is matched with valid curve OC^cs_1. Curve OC^cs_0 has to be eliminated by pairing with other curves.


Fig 5.8g Dimple-shaped object with sharp edges.

Fig 5.8h Invalid curve is matched.

However, in general, there are only one or two paths in a tree to be checked, so the algorithm does not require excessive computational cost to search the paths of a tree.

5.8.2 Curve identification

Thus, an object is now represented by a set of closed occluding contrast curves. However, we are dealing with X-ray images of objects, and the contrast curves are not only the occluding curves. The problem we faced was how to know which contrast curve (other than the occluding curves) belongs to which object. Let us consider the contrast curves of two bean-shaped objects in Fig 5.8i. Occluding contrast curve OC^cs_0 represents the first object and OC^cs_1 represents the second object. In this projection plane, there are more than these two occluding curves; there are also contrast curves C^cs_2 and C^cs_3. We cannot simply conclude that C^cs_3 belongs to the first object just because it connects to OC^cs_0. The example in Fig 5.8j illustrates this point. The problem of how to identify contrast curves arises from the property of a surjection, which is not a one-to-one mapping. If two contrast curves are connected in a 2D projection plane co-ordinate system E²_pn, it is not necessarily true that their corresponding tangent curves are also connected in the 3D world co-ordinate system E³.

This problem is tackled in E³ by using information from other projection planes. Extending Assumption 2.6d to a curve in E³ (which is, in fact, a series of points in E³) gives

Assumption 5.8c In a suitable set of projection planes T, if C^ts is a tangent or singular curve on the surface of an object O, then, for every pn, the projection of C^ts on plane ψ_pn is not outside OC_pn, where OC_pn is the set of occluding contrast curves of the object on plane ψ_pn.

This assumption can be expressed in plain words as follows: the projection of a curve on an object's surface onto any projection plane must not be outside the object's occluding contrast curve on that projection plane.

Curve identification

An algorithm to identify a contrast curve was developed based on this assumption. Given a contrast curve C^cs on projection plane ψ_pn, the steps of the algorithm are as follows.

1. Define a set of objects, PO, to which C^cs may belong, by the following steps:

• In E²_pn, determine some points, e.g. 4 or 5 points, at regular intervals on C^cs.

• Use the verification method (in Section 5.5) to check each of the points. If all of these points are not outside an object's occluding curve, then this object becomes a member of PO.

• Check the points against the occluding contrast curves of all the remaining objects in ψ_pn.


2. Determine the reconstructed curve RC^ts of C^cs. (RC^ts is represented by a set of points in E³ as described in Definition 5.7a.)

3. Define a set of objects, RPO, to which RC^ts belongs, by the following steps:

• Use the verification method to check each point on RC^ts. If all the projections of these points are not outside the occluding contrast curves of an object in PO on all projection planes except ψ_pn, then RC^ts and C^cs belong to that object, and this object becomes a member of RPO.

• Check all the points against the occluding curves of the rest of the objects in PO.

For example, in Fig 5.8i, if we consider C^cs_3, the algorithm will define PO = {bean0, bean1}, because C^cs_3 is not outside the occluding curve of either bean0 or bean1. But RPO will have only bean0 as a member. This means that C^cs_3 and its corresponding RC^ts_3 belong to bean0.

There is a special case to be noted. When one object is completely inside another object, such as bean1 entirely inside bean0 as shown in Fig 5.8k, curve C^cs_3, which in fact belongs to bean1, will be identified by the algorithm as a contrast curve that belongs to both objects (bean0 and bean1). To solve this problem, we use the following criterion: the contrast curve belongs to the object whose occluding contrast curve is completely inside the other object's occluding contrast curve in every projection plane. By using this criterion, the algorithm will identify C^cs_3 as a contrast curve of bean1.

In the case of multiple objects, we can therefore summarise the procedure as follows:

1. Identify objects by using the occluding contrast curves.


2. For each object, reconstruct tangent and singular curves from the object's occluding contrast curves. If only the bounding surface of the object is required, then go to step 4.

3. Reconstruct and identify each of the rest of the tangent and singular curves from the rest of the contrast curves.

4. For each object, link the reconstructed curves together.

Thus, at this stage, each object is represented by its contrast curves and its reconstructed tangent and singular curves.

Fig 5.8i Contrast curves of two bean-shaped objects.


Fig 5.8j Contrast curves of two bean-shaped objects when there is a connected point.

Fig 5.8k Contrast curves of two bean-shaped objects when one curve is completely inside the other.

5.9 References

[1] W. H. Press, Numerical Recipes: The Art of Scientific Computing, Cambridge University Press, 1989.

[2] R. Sedgewick, Algorithms in C, Addison-Wesley, Reading, Mass., 1990.

[3] I. O. Angell, High-Resolution Computer Graphics Using C, Macmillan, London, 1990.

206 Chapter5: Object surfacecurve reconstruction

[4] R. Benjamin, IC visiting notes, 1990-1995 (unpublished).

[5] R. Benjamin, Object-based 3D X-ray imaging, Proc. First International Conference on Computer Vision, Virtual Reality and Robotics in Medicine, Nice, France, April 1995.

CHAPTER 6: OBJECT SURFACE RECONSTRUCTION

6.1 Introduction

From the last chapter, what we have now is the network of the reconstructed tangent and singular curves. This is a wire-frame representation: only vertices and the edges linking the vertices are represented. As discussed in Chapter 1, although this representation scheme requires little storage and can be accessed quickly (as line drawings), it may cause ambiguities in interpreting the object and cannot be processed to improve visualisation of the object, e.g. by hidden face removal or surface rendering.

In this chapter, surface reconstruction algorithms are presented. Strictly speaking, the problem considered can be stated as follows: given an object's network of reconstructed tangent and singular curves, reconstruct, to the extent possible, an appropriate surface that approximates the unknown object's surface. As defined in Chapter 2, by a surface we mean a closed, compact, connected and orientable two-manifold embedded in 3D space E³.

208 Chapter 6: Object surface reconstruction

Surface reconstruction methods can be classified, according to the way the surfaces are formed, into two approaches:

1. Global approach.

2. Local approach.

The first approach is to define a model that is topologically equivalent to the object. This model can be an elastic, deformable model, which is then bent, twisted, compressed and stretched to fit the object's wire-frame or a set of the object's surface points. For example, we can imagine the wire-frame of a sphere being wrapped by a rubber sheet to create a spherical surface. There is a vast literature on this approach. Some authors applied superquadric models to fit the data set, e.g. [1], [2]. B-spline surface patches were also used to estimate the surface, such as in the methods described in [3], et al. Another option is to fit the data with an elastically deformable model, as presented in [4].

The second approach is to reconstruct a surface locally, face by face, to form the whole object's surface. For instance, to reconstruct a sphere's surface, we can build a set of polygons as surface patches and then glue these patches together to form the whole surface.

The first approach normally requires prerequisite information on the topology of an object, or needs a large number of vertices, such as data from range scanning. These requirements are difficult to meet with the information contained in the reconstructed tangent and singular curves. Therefore we have chosen the second approach and have implemented algorithms that allow us to reconstruct a surface from a wire-frame model. The surface reconstruction method which we have developed consists of two main algorithms. The first is called the General Surface Reconstruction Algorithm; it is able to reconstruct surfaces of arbitrary topology, depending on the structure of the wire-frame model. The second is called the Occluding Surface Reconstruction Algorithm. Unlike the first, it reconstructs surfaces from occluding contrast curves only, and the resulting surfaces are restricted to surfaces that are homeomorphic to a sphere (genus 0). Both algorithms produce valid solid objects.

We chose the B-rep scheme to represent the final reconstructed objects. Each object is represented by a set or sets of triangular faces. This is the reason why we represented each contrast, tangent or singular curve segment with three straight lines in the last chapter. Triangular faces are employed because they are heavily used in computer graphics as a means of displaying and manipulating surfaces. This not only simplifies some of the computations involved in the renderings that occur later, but also allows us to take advantage of the fact that most sophisticated graphics platforms and software packages have highly efficient built-in functions for computing the visibility of triangular faces and for rendering them using various lighting and shading models. These programs can render a triangular mesh significantly faster than a set of arbitrary polygonal faces or curved surface patches ([5], [6]).

Each of our surface reconstruction algorithms consists of two main stages. In the first stage, we determine an object's polygonal faces. Each face is not necessarily a triangle or a planar polygonal face. The difference between the two algorithms arises at this stage. The second stage is the triangulation, which triangulates the faces that are not triangular. Both algorithms use the same triangulation method. The flow of operations in this chapter is shown in Fig 6.1a.

Main original contribution

The main contributions of this chapter can be described as follows:

• General primary surface reconstruction: We present a novel method to derive a complete B-rep model from a wire-frame model, generated by the method described in Chapter 5, by using the topological properties of an object's surface. We used the work presented in [11] and [12] only to reduce the dimension of the space in which we are working, i.e. E³ to E².

• Bounding primary surface reconstruction: We present an effective implementation of the idea described in [16] and [17].

• Face triangulation: We invented a new method to triangulate a non-planar polygonal face in the general surface reconstruction method. This method also works for the faces generated by the bounding primary reconstruction method.

[Fig 6.1a: for each object, a set of reconstructed tangent and singular curves O = {RC^ts} (occluding curves only, on the bounding path) enters either general primary surface reconstruction (6.2, 6.3) or bounding primary surface reconstruction (6.4), giving a set of primary polygonal faces; secondary surface identification (6.5) gives a set of secondary polygonal faces; face triangulation (6.5) gives the final set of triangular faces O = {T}.]

Fig 6.1a Flow of operations in this chapter. Relevant section numbers are in parentheses.


6.2 General primary surface reconstruction

The aim of this section is to build a set of polygonal faces from the reconstructed tangent and singular curves of an object.

6.2.1 Background principles

Topological principles

As described by Definition 2.2a, an object of interest is a closed, 2-manifold and orientable surface. Some of these properties are explained at a fundamental level in Appendix A. In this part, we explore the properties in detail and show how we use them to develop the algorithms.

Any surface can be represented by a set of triangles without losing any topological properties. Let us restate Definition A3 of Appendix A here:

Definition A3 A 2-manifold surface S is orientable if the surface satisfies the following condition:

All triangles in a triangulation of S can be given an orientation in such a way that two triangles with a common edge are always oriented coherently, i.e. one edge occurs in its positive orientation in the direction chosen for its triangle, and the other one in its negative orientation.

Therefore all triangular faces of an object are said to be coherently oriented. For example, let us consider a sphere represented by four triangular faces in Fig 6.2a. For simplicity, all of these triangles can be illustrated by a directed planar graph, or plane representation, as shown in Fig 6.2b. All the vertices around each face occur in a consistent direction (say clockwise) as viewed from outside the object. In every pair of faces, the common edge has a different direction in each face; e.g. e_4 in F_0 has direction P_0 → P_2 but e_4 in F_1 has direction P_2 → P_0.


Fig 6.2a A sphere represented by four curved triangular faces.

Fig 6.2b Plane representation of the four curved triangular faces in Fig 6.2a.

Fig 6.2c Modified plane representation of the four curved triangular faces in Fig 6.2a.


We can extend this property to the case in which the object is represented by a set of polygons, not necessarily triangles [7], et al. For instance, the sphere in Fig 6.2a can be represented by a four-sided polygon and two triangles (Fig 6.2c) instead of four triangles (Fig 6.2b), while all the topological properties remain the same. We have modified and summarised all the important properties in Definition 6.2a.

Definition 6.2a If S is a closed, 2-manifold and orientable surface, i.e. S is an object, then S can be represented by a polyhedron. All the polygonal faces of the polyhedron satisfy the following conditions ([9], et al.):

• The polyhedron is consistently oriented; that is, the edges around each face in the plane representation of the polyhedron must occur in a consistent direction (say, counter-clockwise).

• Every edge belongs to exactly two faces.

• Faces may not intersect each other except at common edges or vertices.

• Every vertex is surrounded by a single cycle of edges.

NB The vertices of the polyhedron are the object's surface points, and the number of vertices N_v, edges N_e and faces N_f and the genus N_g of the polyhedron are related by the Euler formula, which states that N_v − N_e + N_f = 2 − 2N_g [7] et al.

The conditions in this definition are the conditions imposed on the object's faces in E³. We have found it difficult and complicated to use these conditions directly in E³ to develop the reconstruction algorithm. Thus, to reduce the complexity of the problem, we use the idea proposed in [11], et al. The idea is to use explicitly the fact from topology that the object's surface points lie on a surface that is assumed to be, at least locally, diffeomorphic to E², where:

A diffeomorphism, [18] et al., is a one-to-one continuously differentiable mapping f: A → B of a differentiable manifold A (e.g. of a domain in a Euclidean space) into a differentiable manifold B for which the inverse mapping is also continuously differentiable. If f(A) = B, we say that A and B are diffeomorphic. In differential topology, diffeomorphic manifolds have the same properties, and one is interested in a classification of manifolds up to a diffeomorphism (this classification is not identical with the coarser classification up to a homeomorphism, except for cases involving small dimensions).

In our case, if we can exhibit such a diffeomorphic property, then we can flatten or unfold the surface and reduce the dimension of the space in which we are working, i.e. E³ to E². This idea is used in many subjects, especially in geography, when one wants to project 3D data (in E³) onto a plane (E²). Note that this projection has to be injective, which is impossible in some cases, e.g. for closed surfaces; and in this research we are dealing with closed surfaces, since the objects of interest are closed surfaces. Therefore the idea cannot be applied directly. Referring to [11], although this approach does not work for the whole object, it works locally under some conditions, as summarised in the following proposition.

Proposition 6.2a Let S be the surface of an object defined by Definition 2.2a in E³ (three-dimensional space) whose principal radii of curvature exceed R at every point. Let S' denote the orthogonal projection of S onto a plane T at a point P on the surface. Plane T is a plane such that there exists an open set U, a neighbourhood of point P on S, such that U → U' is a diffeomorphism from U onto any disk lying in T whose centre is P' and whose radius is smaller than R (see Fig 6.2d).

NB: This proposition implies that U' (the projection of U) cannot fold over itself.

The proposition above gives the size of a domain in which this projection is a diffeomorphism: so, for that domain, a triangulation in T provides a triangulation of S. The proofs of this proposition come from differential geometry and are omitted here.

Chapter 6: Object surface reconstruction

Fig 6.2d U in E³ is diffeomorphic to U'.

Here a remark of importance for the sequel must be made. The accuracy of the proposition above depends on the number of surface points in the neighbourhood of point P. More precisely, in the neighbourhood of P, the discretisation of surface points must be finer than the smallest radius of curvature at this point. In this research we assume that the discretisation is fine enough because, referring to Section 5.6, we implied that the distribution of the surface points is optimum: the density of the surface points is approximately proportional to the curvature of the surface (assuming that the number of projection planes is reasonably large, e.g. 10).

6.3 Implementation

In this section, we describe the algorithm to reconstruct the polyhedron that represents an object by implementing the topological principles and conditions presented in the previous section. Given a set of connected networks of the reconstructed tangent and singular curves of an object, the algorithm reconstructs a polyhedron that satisfies the conditions in Definition 6.2a from each connected network separately. Note that, in general, there is only one connected network for one object, except in some special cases. The algorithm can be divided into four major parts.

1. Create a main connected network from the given connected network.


2. Determine a reference plane at a given point in the main network (plane T in Proposition 5.6a).
3. Define the first polyhedron face.
4. Define the rest of the faces.

These four steps are presented in detail in the following sections.

6.3.1 Main connected network

To create a main network from a given connected network, we remove all the points in the given network that have degree less than 3, where the degree of a point, D(P), is the number of lines directly connected to that point. Then we link the points that have degree more than 2 together and call these points primary points. (All points in the main network therefore have degree more than 2.) For example, in Fig 6.3a, on segment P₁P₂, where D(P₁) = D(P₂) = 3, we delete P₃ and P₄ (D(P₃) = D(P₄) = 2), then link P₁ and P₂ together. In a special case, if there is a point P with D(P) = 1 at the end of a segment, then we delete that segment, as shown in Fig 6.3b. Also, redundant points, i.e. points that have the same position, are welded together and recognised as one point.
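The reduction above can be sketched as a small graph routine. This is a hypothetical illustration (the thesis gives no code): the network is represented as an adjacency dictionary, chains of degree-2 points are bypassed, and dangling segments ending in a degree-1 point are deleted.

```python
# Hypothetical sketch of Section 6.3.1: collapse a connected network to its
# "main" network by removing points of degree < 3. Assumes an undirected
# graph without self-loops.
def main_network(adj):
    """adj: dict point -> set of directly connected points (undirected)."""
    adj = {p: set(n) for p, n in adj.items()}
    changed = True
    while changed:
        changed = False
        for p in list(adj):
            deg = len(adj[p])
            if deg == 2:                       # interior chain point: bypass it
                a, b = adj[p]
                adj[a].discard(p); adj[b].discard(p)
                adj[a].add(b); adj[b].add(a)
                del adj[p]
                changed = True
            elif deg <= 1:                     # dangling end: delete the segment
                for n in adj[p]:
                    adj[n].discard(p)
                del adj[p]
                changed = True
    return adj          # every remaining (primary) point has degree > 2
```

For instance, subdividing one edge of a complete graph K4 with a two-point chain and attaching a dangling two-point segment, the routine recovers the original K4 adjacency.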


Fig 6.3a To create a main connected network, the points that have degree less than 3 are removed.



Fig 6.3b To create a main connected network, the segment that has a degree-1 end point is removed.

6.3.2 Reference plane at a given point in a main network

To define a reference plane at a given point in a main network, we base ourselves on the method described in [12]. The reference plane T(P_r), associated with point P_r, is represented as a point C_r, called the centre, together with a unit normal vector n_r. The signed distance of an arbitrary point X ∈ E³ to T(P_r) is defined to be Dist_r(X) = (X − C_r) · n_r. The centre and normal of T(P_r) are determined by gathering together k neighbourhood points of P_r; this set is denoted by Nbdp(P_r), and in the next section we discuss how to define these neighbourhood points. The centre and the unit normal are computed so that the plane Dist_r(X) = 0 is the least-squares best-fitting plane to Nbdp(P_r). That is, the centre C_r is taken to be the centroid of Nbdp(P_r), and the normal n_r is determined using principal component analysis. To compute n_r, we form the covariance matrix of Nbdp(P_r). This is the symmetric 3×3 positive semi-definite matrix:

CV = Σ_{Q ∈ Nbdp(P_r)} (Q − C_r) ⊗ (Q − C_r)    (6.3a)

where ⊗ denotes the outer product vector operator [13]. If λ₁ ≥ λ₂ ≥ λ₃ denote the eigenvalues of CV associated with unit eigenvectors v₁, v₂, v₃ respectively, we then choose n_r to be either v₃ or −v₃. The selection determines the orientation of T(P_r), and it must be done so that nearby reference planes are consistently oriented. Details on how to select n_r are discussed in the following section.
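The plane fit of this section can be sketched as follows. The thesis specifies only the centroid and a principal component analysis of CV; the power iteration on the shifted matrix trace(CV)·I − CV used here to extract the smallest-eigenvalue eigenvector is this sketch's own implementation choice, and it assumes a non-degenerate neighbourhood (at least three non-collinear points).

```python
# Sketch of the reference-plane fit of Section 6.3.2: the centre C_r is the
# centroid of Nbdp(P_r) and the normal n_r is the eigenvector of the 3x3
# covariance matrix CV (Eq. 6.3a) with the smallest eigenvalue. Power
# iteration on B = trace(CV)*I - CV, whose dominant eigenvector is that
# smallest-eigenvalue direction, is this sketch's own choice.
def reference_plane(points):
    k = len(points)
    c = [sum(p[i] for p in points) / k for i in range(3)]         # centroid C_r
    cv = [[sum((p[i] - c[i]) * (p[j] - c[j]) for p in points)     # CV, Eq. (6.3a)
           for j in range(3)] for i in range(3)]
    tr = cv[0][0] + cv[1][1] + cv[2][2]
    b = [[tr * (i == j) - cv[i][j] for j in range(3)] for i in range(3)]
    n = [1.0, 1.0, 1.0]
    for _ in range(200):                                          # power iteration
        n = [sum(b[i][j] * n[j] for j in range(3)) for i in range(3)]
        s = sum(x * x for x in n) ** 0.5
        n = [x / s for x in n]
    return c, n

def signed_dist(x, c, n):
    """Dist_r(X) = (X - C_r) . n_r"""
    return sum((x[i] - c[i]) * n[i] for i in range(3))
```

For points lying on the plane z = 2, the fitted centre has z = 2 and the normal converges to ±(0, 0, 1).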

6.3.3 Definition of the first primary polyhedron's face

Referring to Definition 6.2a, each face of the polyhedron representing an object must occur in a consistent direction. This condition is rather too complicated to be implemented in E³ unless the face is planar. As mentioned previously, to reduce the complexity of the problem, we use the diffeomorphic property to reduce the dimension of the working space by locally mapping the relevant points and edges onto an appropriately defined plane. For instance, if the plane representation of a part of a main connected network is as illustrated in Fig 6.3c (1), the procedure for determining the first face in the clockwise direction, e.g. F₀ in Fig 6.3c (2), can be described as follows:

1. Begin with an edge in the main connected network and call this edge the reference edge e_r, which has the end point called the reference point P_r, e.g. e_r = AB and P_r = B in Fig 6.3c (1).

2. To create the reference plane T(P_r), which is represented as C_r and n_r at P_r, we use the union of the set of points that are directly connected to P_r with the set of the face's vertices found so far as the set of k neighbourhood points Nbdp(P_r), e.g. Nbdp(B) = {A, C, E, D, F} in Fig 6.3c, and the set of associated edges Nbde(P_r) = CPE(P_r) ∪ FE, where CPE(P_r) is the set of all edges that begin at P_r and end at the points directly connected to P_r. Note that CPE(P_r) does not include −e_r, e.g. CPE(B) = {e_E, e_F, e_C, e_D} in Fig 6.3c (2). FE is the set of all the face's edges found so far; in this set the edges are organised in sequence (see the example at P_r = G in Fig 6.3c (2)). To define n_r: if e_r is the first edge of the face, then n_r can be either eigenvector v₃ or −v₃. If e_r is not the first edge, then we select n_r such that n_r · n_{r−1} ≥ 0, where n_{r−1} is the normal of the reference plane at the previous reference point P_{r−1}.

NB: Adding the face's vertices to Nbdp, e.g. Nbdp(J) = {points directly connected to J} ∪ {A, B, C, G} in Fig 6.3c (2), gives the best fit for making diffeomorphic projections possible not only in the area around P_r but also over the face.

3. Orthogonally project Nbdp and Nbde onto T(P_r), where the projected sets are represented by Nbdp' and Nbde', with Nbde' = CPE' ∪ FE', as in the example in Fig 6.3d.

4. On T(P_r), at P_r', select the next edge of the face, e_f', which can be determined by the following steps:

4.1 e_f' ∈ CPE' and (e_r' × e_f') / |e_r' × e_f'| = n_r    (6.3b)

such that the angle between e_r' and e_f' is maximum, or

max θ = cos⁻¹( (e_r' · e_f') / (|e_r'| |e_f'|) ),  0 ≤ θ ≤ π    (6.3c)

Fig 6.3e (1) may help to picture the situation.

4.2 If there is no edge in CPE' that satisfies the condition in 4.1, then e_f' is the edge in CPE' that has the minimum angle between e_r' and e_f', or

min θ = cos⁻¹( (e_r' · e_f') / (|e_r'| |e_f'|) ),  0 ≤ θ ≤ π    (6.3d)

as in the example shown in Fig 6.3e (2).


5. Add e_f to FE, where e_f' is the projection of e_f; then set e_f as the new e_r.

6. Repeat steps 2-5 for further consecutive edges until the starting point is reached.

7. The face can therefore be represented by the set of edges FE or by the sequence of vertices FV, e.g. for F₀ in Fig 6.3c (2), FV = {A, B, C, G, I, J}.
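Steps 4.1 and 4.2 can be sketched in the projected plane. This is a hypothetical 2D reading of conditions (6.3b)-(6.3d): the projected plane normal is taken as the +z direction, and the projected edges at P_r' are represented as 2D vectors.

```python
import math

# Hypothetical 2D sketch of steps 4.1-4.2 after projection onto T(P_r):
# among the candidate edges e_f' in CPE', prefer one turning in the
# consistent direction (cross product e_r' x e_f' aligned with the plane
# normal, here +z) with MAXIMUM angle to e_r' (6.3c); if no candidate turns
# that way, fall back to the MINIMUM-angle candidate (6.3d).
def next_edge(er, candidates):
    """er, candidates: 2D vectors; assumes at least one candidate."""
    def angle(u, v):
        dot = u[0] * v[0] + u[1] * v[1]
        return math.acos(max(-1.0, min(1.0, dot / (math.hypot(*u) * math.hypot(*v)))))
    def cross(u, v):
        return u[0] * v[1] - u[1] * v[0]
    turning = [e for e in candidates if cross(er, e) > 0]
    if turning:
        return max(turning, key=lambda e: angle(er, e))       # condition (6.3c)
    return min(candidates, key=lambda e: angle(er, e))        # fallback (6.3d)
```

With e_r' = (1, 0), the candidate (−1, 1) turns consistently and makes the largest angle (135°), so it is selected.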


Fig 6.3c (1) First edge AB of face F₀. (2) Face F₀ and the adjacent faces.



Fig 6.3d Projections of Nbdp and Nbde.



Fig 6.3e Determination of the next edge in a face.

6.3.4 Determination of the rest of the primary faces

The determination of the rest of the faces is obtained by defining the adjacent faces from an already-defined face, one by one, until we complete a closed surface. For example, in Fig 6.3c (2), let F₀ be the first face, which is already defined; we then define the adjacent faces F_i, i = 1, 2, ..., 7, one by one. These faces may be called the first layer. Then we proceed to the next layer by defining the adjacent faces of F₁, F₂, and so forth, until the whole closed surface is complete. The following procedure, which is based on the conditions in Definition 6.2a, explains in detail how each adjacent face can be defined. Let F_i be an already-defined face with the set of edges FE_i and the set of vertices FV_i; to define F_j, which is an adjacent face of F_i:

1. The reference edge e_r of F_j is −e_i, where e_i ∈ FE_i and has degree less than 2. The degree of an edge is the number of defined faces that share this edge so far. NB: e_r = −e_i because every face must occur consistently oriented.

2. Do steps 2 and 3 in Section 6.3.3.

3. Do steps 4 and 5 in Section 6.3.3 and check the following conditions:

• e_f must have degree less than 2.

• e_f must not be an edge of an adjacent face of F_j that also has e_r as an edge. (Two faces cannot have two consecutive edges in common.)

If the newly formed e_f does not satisfy these two conditions, then substitute n_r with −n_r and repeat this step.

4. Repeat steps 2 and 3 for further consecutive edges of F_j, until the starting point is reached.

5. The face F_j is therefore represented by the set FE_j or FV_j.

6. Move to the next e_i in FE_i, then repeat the whole process again.

We use this algorithm to define the polyhedron's faces, proceeding layer by layer until all the edges in the network have degree exactly 2, or, in other words, all edges have been shared by exactly two faces.
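The stopping condition, every edge shared by exactly two faces, can be sketched as a small check over faces given as ordered vertex sequences (an illustrative helper, not from the thesis):

```python
from collections import defaultdict

# Sketch of the stopping test for the layer-by-layer face construction:
# every undirected edge must end up shared by exactly two faces (degree 2).
def is_closed(faces):
    """faces: list of ordered vertex sequences, one per face."""
    degree = defaultdict(int)
    for fv in faces:
        for a, b in zip(fv, fv[1:] + fv[:1]):     # consecutive edges, wrapped
            degree[frozenset((a, b))] += 1
    return all(d == 2 for d in degree.values())
```

A tetrahedron's four faces pass the check; removing any face leaves edges of degree 1, so the surface is not yet closed.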


6.4 Bounding primary surface reconstruction

In some cases, only the approximate shape or only the position of an object of interest is required, or we have prior knowledge that the object of interest is convex, or roughly convex with minor concavities. It is then not necessary to reconstruct every detail of the object's surface; we can reconstruct only the object's bounding surface. This can be done by using only the set of occluding contrast curves; the rest of the contrast curves are ignored. In this situation, the reconstruction of the bounding surface is much simpler than the general primary surface reconstruction. Before we proceed to develop an algorithm to reconstruct a bounding surface, let us consider a main connected network created by the method described in Section 6.3.1. If only occluding contrast curves are used to create this network, then all the points in the network are common tangent points with degree 4, and each point is also a crossing point of two different tangent curves; see Fig 6.4a. With these properties, the following algorithm to reconstruct a bounding surface from the main network was developed.


Fig 6.4a Two crossing reconstructed tangent curves.


6.4.1 Definition of the first primary face

1. Begin with an edge in the main connected network and call this edge the reference edge e_r, which has the end point called the reference point P_r, e.g. e_r = AB and P_r = B in Fig 6.4a. Call point A the face's beginning point P_b.

2. At P_r, select the next edge of the face, e_f ∈ CPE(P_r), such that:

• e_f is not in the same reconstructed tangent curve as e_r, and
• the end point of e_f is, compared with those of the other edges in CPE(P_r), the closest to P_b.

NB: P_r always has degree 4, and there are only three edges as candidates in CPE(P_r). For example, in Fig 6.4a, P_r = B, which has degree 4, and CPE(B) = {e₁, e₂, e₃}. We do not consider e₁ because it is in the same reconstructed tangent curve as e_r. Therefore only e₂ and e₃ are the candidates for e_f. If E is closer to P_b than D is, then e₃ becomes e_f, the next edge of the face.

3. Add e_f to FE; then e_f is the new e_r.

4. Repeat steps 2 and 3 for further consecutive edges until P_b is reached.

5. The face can thus be represented by the set of edges FE or the sequence of vertices FV.
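The candidate selection of step 2 can be sketched as follows; the tuple representation of the candidates and the curve identifiers are hypothetical conveniences, not notation from the thesis.

```python
import math

# Hypothetical sketch of step 2 of the bounding-surface face construction:
# at P_r (always of degree 4), discard the candidate lying on the same
# reconstructed tangent curve as e_r, then take the remaining candidate
# whose end point is closest to the face's beginning point P_b.
def bounding_next_edge(candidates, er_curve, p_b):
    """candidates: list of (end_point, curve_id); er_curve: curve id of e_r."""
    usable = [(end, cid) for end, cid in candidates if cid != er_curve]
    return min(usable, key=lambda ec: math.dist(ec[0], p_b))
```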

6.4.2 Definition of the rest of the primary faces

For the rest, we proceed layer by layer as described in the general surface reconstruction method. Let F_i be an already-defined face with the set of edges FE_i and the set of vertices FV_i; to define F_j, which is an adjacent face of F_i:


1. The reference edge e_r of F_j is −e_i, where e_i ∈ FE_i and has degree less than 2 (the degree being the number of already established faces that share this edge so far).

2. Do step 2 in the algorithm for the first face and check the additional conditions:

• e_f must have degree less than 2, and

• e_f must not be an edge of an adjacent face of F_j that also has e_r as an edge. (Two faces cannot have two consecutive edges in common.)

If the newly found e_f does not satisfy these two conditions, then select the other candidate edge.

3. Repeat step 2 for further consecutive edges of F_j, until the starting point of F_j is reached.

4. Face F_j is therefore represented by the set FE_j or FV_j.

5. Move to the next e_i in FE_i, then repeat the whole process again.

We then use this algorithm to define the rest of the faces, face by face, layer by layer, until all the edges in the network have degree 2.

6.5 Face triangulation

Thus we now have an object which is represented by a set of polygonal faces called primary faces. All these faces were established from a main connected network; in this type of network, we removed all the surface points that have degree less than 3. In this section we discuss how to establish secondary faces. A secondary polygonal face is defined from a primary polygonal face by inserting back all the degree-2 points between every two adjacent points that have degree more than 2. For example, in Fig 6.5a, F is a primary face which is represented by a set of ordered primary points FV = {P_i, 0 ≤ i ≤ 2}; the corresponding secondary face (Fig 6.5b) is represented by the ordered set of points SFV and the set of directed edges SFE.


Fig 6.5a Primary face.

228 Chapter6: Object surfacereconstruction


Fig 6.5b Secondary face.

As discussed earlier, we need to convert all arbitrary polygonal faces into a set of triangular faces. To do this, an algorithm for an optimum triangulation must be derived. An optimum triangulation is a partition of a geometric input that is best according to some criterion that measures the size, shape, or number of triangles. A vast literature on triangulations has been written, such as [14], [5], [15]. In this case, the geometric input is a set of ordered points SFV, and the criteria are as follows:

• The triangulation must use the edges of the boundaries (the directed edges in SFE) as edges in the triangulation; any added edges must be as short as possible, and no extra points may be added.
• The triangulation must generate triangular faces that satisfy the conditions in Definition 6.2a.

Although there are many existing triangulation methods, we found that it was easier to invent a new algorithm to deal specially with the secondary faces than to try to implement existing methods to cope with our specific data structure. Therefore a triangulation algorithm that satisfies these criteria was developed; it can be described as follows:

Given a secondary face represented by the sets SFV and SFE:

1. Define all the primary points in SFV as corner points.

2. At each corner point P, create a cutting edge ce(P), where ce(P) is an edge that links the two adjacent points of P together in the consistent direction, as shown in Fig 6.5c.

3. Thus at each corner point P a triangular face is established, e.g. triangular face T_P = {P₀, P, P₁} in Fig 6.5c.

4. Remove all the corner points that have already been used to establish triangular faces, to form a new SFV, and link the two adjacent points of a removed corner point P by the inverse-direction cutting edge, i.e. −ce(P) in Fig 6.5d, to form a new SFE.

5. Define all the end points of the cutting edges in SFE as a new set of corner points.

6. Among all the corner points, at the corner point P that has the shortest cutting edge ce(P), establish a new triangular face.

7. Repeat steps 4 to 6 until SFV and SFE represent a triangular face.
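A simplified sketch of this corner-cutting idea follows. It assumes a convex face and treats every point as a corner point, always cutting the one with the shortest cutting edge; the thesis algorithm, by contrast, cuts all primary corner points first and distinguishes primary from secondary points.

```python
import math

# Simplified sketch of the face triangulation of Section 6.5: repeatedly cut
# off the corner whose cutting edge ce(P) (the segment joining P's two
# neighbours) is shortest, until only one triangle remains. Convexity is
# assumed so that every cut is valid.
def triangulate(poly):
    """poly: ordered 2D points of a convex polygon; returns list of triangles."""
    pts = list(poly)
    tris = []
    while len(pts) > 3:
        def ce_len(i):
            a, b = pts[i - 1], pts[(i + 1) % len(pts)]
            return math.dist(a, b)                 # length of ce(pts[i])
        i = min(range(len(pts)), key=ce_len)       # shortest cutting edge
        tris.append((pts[i - 1], pts[i], pts[(i + 1) % len(pts)]))
        del pts[i]                                 # corner point removed
    tris.append(tuple(pts))
    return tris
```

An n-sided face yields n − 2 triangles, as a triangulation without added points must.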



Fig 6.5c Cutting edge at corner point P.


Fig 6.5d Removal of corner point P.

The following example, shown in Fig 6.5e (1), may serve to illustrate the algorithm more fully.

Let us consider a secondary face (Fig 6.5e):

SFV = {P₀, P₀,₀, P₀,₁, P₁, P₁,₀, P₁,₁, P₂, ...} and SFE = {P₀P₀,₀, P₀,₀P₀,₁, ...}.

1. Define the set of corner points {P₀, P₁, P₂}.
2. Create cutting edges ce(P₀), ce(P₁) and ce(P₂), as shown in Fig 6.5e (2).
3. Thus triangular faces T_{P_n}, n = 0, 1, 2, are established.
4. Remove all the old corner points; an inner face is created, as shown in Fig 6.5e (3).
5. The new corner points from the newly established triangular faces are therefore {P₀,₀, P₀,₁, P₁,₀, P₁,₁, P₂,₀, P₂,₄}.


6. At the corner point with the shortest cutting edge, a new triangular face is established, Fig 6.5e (4).
7. Proceed in the same fashion until the triangulation is completed.

Thus a reconstructed object is now represented by a set of triangular faces in E³, i.e. object O = Δ, where Δ = {T_t}, t = 0, ..., N_t − 1, is a collection of triangular faces in E³. According to [5] et al., Δ, with the set of vertices V = {V_v}, v = 0, ..., N_v − 1, can be described by a variety of B-rep data structures, including a list of N_t integer triples {(α_t, β_t, γ_t)}, t = 0, ..., N_t − 1, such that V_{α_t}, V_{β_t}, V_{γ_t} are the ordered three vertices of the t-th triangular face. We choose this data structure to represent a reconstructed object because it is compatible with the software used to visualise the object. Also, this data structure can easily be converted into other B-rep data structures.
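The chosen data structure can be sketched as follows, with illustrative names; a tetrahedron is used as the example object.

```python
# Sketch of the chosen B-rep data structure: a vertex list V and a list of
# N_t integer triples (alpha_t, beta_t, gamma_t) giving, in order, the three
# vertices of the t-th triangular face. All names are illustrative.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
            (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]               # V_0 .. V_3
triangles = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]    # (alpha, beta, gamma)

def face_vertices(t):
    """Ordered three vertices of the t-th triangular face."""
    a, b, g = triangles[t]
    return vertices[a], vertices[b], vertices[g]

# Deriving another B-rep structure (here an undirected edge list) from the
# integer triples is straightforward:
edges = {frozenset(e) for tri in triangles
         for e in zip(tri, tri[1:] + tri[:1])}
```

The tetrahedron has 4 faces and 6 distinct edges, each shared by exactly two faces, as required of a closed surface.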



Fig 6.5e (1) Secondary face SF. (2) Cutting edges at primary points used to establish triangular faces. (3) Cutting edges at a new set of corner points. (4) The shortest cutting edge is used to establish a new triangular face.

6.6 References

[1] F. Solina and R. Bajcsy, Recovery of parametric models from range images: the case for superquadrics with global deformations, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 12, No. 2, Feb 1990, 131-147.

[2] N. Yokoya, M. Kaneta and K. Yamamoto, Recovery of superquadric primitives from a range image using simulated annealing, Proc. 11th Inter. Conf. on Pattern Recognition, Vol. 1, Hague, Netherlands, 30 Aug. - 3 Sept 1992, 168-172.


[3] C. W. Liao and G. Medioni, Representation of range data with B-spline surface patches, Proc. 11th IAPR Inter. Conf. on Pattern Recognition, Vol. III, Conference C: Image, Speech and Signal Analysis, Hague, Netherlands, Sept 1992, 745-748.

[4] D. Terzopoulos, J. Platt, et al., Elastically deformable models, ACM Computer Graphics, Vol. 21, No. 4, July 1987, 205-214.

[5] L. L. Schumaker, Triangulations in CAGD, IEEE Computer Graphics & Applications, January 1993, 47-52.

[6] T. A. Foley, D. A. Lane, et al., Visualizing functions over a sphere, IEEE Computer Graphics & Applications, Jan 1990, 32-40.

[7] P. J. Giblin, Graphs, surfaces and homology, Chapman and Hall, London, 1977.

[8] H. Ferguson, A. Rockwood and J. Cox, Topological design of sculptured surfaces, SIGGRAPH '92 Conf. Proc., Vol. 26, No. 2, Chicago, July 1992, 149-156.

[9] M. Mantyla, Boolean operations of 2-manifolds through vertex neighborhood classification, ACM Trans. on Comp. Graphics, Vol. 5, No. 1, January 1986, 1-29.

[10] I. D. Faux and M. J. Pratt, Computational geometry for design and manufacture, Halsted Press, New York, 1981.

[11] J. D. Boissonnat, Geometric structures for three-dimensional shape representation, ACM Transactions on Graphics, Vol. 3, No. 4, October 1984, 266-286.

[12] H. Hoppe, T. DeRose, et al., Surface reconstruction from unorganized points, SIGGRAPH '92 Conference Proceedings, Vol. 26, No. 2, Chicago, July 1992, 71-78.


[13] E. L. Allgower and P. H. Schmidt, An algorithm for piecewise-linear approximation of an implicitly defined manifold, SIAM Journal on Numerical Analysis, Vol. 22, April 1985, 322-346.

[14] M. Bern and D. Eppstein, Mesh generation and optimal triangulation, in Computing in Euclidean Geometry, D.-Z. Du, ed., World Scientific, Singapore, 1992.

[15] F. P. Preparata and M. I. Shamos, Computational geometry: an introduction, Springer-Verlag, New York, 1985.

[16] R. Benjamin, IC visiting notes, 1990-1995 (unpublished).

[17] R. Benjamin, Object-based 3D X-ray imaging, Proc. First International Conference on Computer Vision, Virtual Reality and Robotics in Medicine, Nice, France, April 1995.

[18] M. Hazewinkel, ed., Encyclopaedia of Mathematics, Reidel, Dordrecht, 1988.

CHAPTER 7: MANIPULATION AND VISUALISATION

7.1 Introduction

In the previouschapter, each reconstructed object was representedby a B-rep or polygonalmodel individually. The datarepresenting this object is in numericalform, i. e. vertices' positionsand faces'indices. It dependson the user how to dealwith this data. It canbe storedor transferredfor further processing,such as using CAD/CAM to analyse,obtain someobject's features or control machinesto createthe real physical model of the object.But if the userneeds to understandthe object visually, it is impossibleto go through all of thesenumerical data and figure out or understandthe object's structure.Thus we needsome tools or methodsto help us convert this data into a form that canbe easilyunderstood and helps us to extractuseful information from this object.The tools areused to manipulateand visualise the object.

In this chapter, we do not intend to invent or develop any new method to manipulate and visualise the object. Instead, we emphasise the use of existing methods to satisfy our demand. Therefore we discuss some background on the essential methods required, as well as the existing software employed to implement the methods.

7.2 Object manipulation and visualisation

As mentioned earlier, one main purpose of object manipulation and visualisation is to assist our understanding of the object structure, or to convey as much 3D information as possible to the observer. There have been many attempts to develop techniques for better 3D viewing, for better understanding, and for better use of the data. According to [1] and [2], the human visual system beholds and understands the world in 3D using both physiological and psychological depth indicators. Physiological depth indicators consist of:

• accommodation, i.e. change in focus of the eye lens,
• motion parallax, i.e. image changes due to motion of the viewer,
• convergence, i.e. inward rotation of the eyes,
• binocular disparity, i.e. differences between the left- and right-eye images.

Psychological depth indicators include:

• linear perspective, i.e. objects at a distance appear smaller,
• colour, i.e. distant objects look darker,
• shading and shadowing, i.e. they indicate positions relative to the light source,
• occlusion, i.e. more distant objects are hidden by nearer objects,
• aerial perspective, i.e. distant objects appear less distinct or cloudy,
• texture gradient, i.e. distant objects show less detail.

Many display techniques have been developed to provide some or all of these depth indicators in order to present a semblance of 3D, [1], [3], et al. These techniques are:


Stereoscopic CRT In this approach, the left-eye image is presented to the left eye only, while the right-eye image is presented to the right eye only. It provides stereopsis but still lacks motion parallax and large angles of view, and requires rendering twice, once for each eye.

Head-Tracking Technologies These technologies, associated with stereoscopic CRT approaches (head-fixed displays), provide motion parallax and wide angles of view, with the added benefit of unlimited volume. But they still have disadvantages arising from the need to render separately for each eye and the current physical intrusiveness of the technology, e.g. cumbersome headgear.

Varifocal Mirror Devices By integrating a vibrating reflective surface with a point-plotting CRT, the varifocal mirror gives true 3D perception of a distribution of glowing points of light.

Computer-Generated Holography This method provides horizontal parallax and a moderate horizontal angle of view, but no vertical parallax. It requires a huge computational burden for a small display volume, since rendering and calculating the holographic interference patterns have to be done for each individual angle of view [4], [5].

Rotating-Screen Devices These devices slice a physical 3D volume with a rotating 2D screen and display on that screen the corresponding slices from a 3D data set. There are many techniques used to build these devices, as discussed in [1], [6], et al. This approach generates images directly in a volume, supporting both physiological and psychological depth indicators. But some technical challenges for this technology, such as image intensity, update rate and data throughput, are yet to be overcome.

2D Image Plane or CRT Screen At present, this scheme seems to be the most widely used, and in this project we employ it to visualise and manipulate the reconstructed objects. Certainly, representing 3D objects on a 2D image plane inevitably obscures part of the information: the physiological depth indicators supplied by an actual 3D object cannot be provided by this scheme. Therefore a number of techniques have been developed to deal with this problem. These techniques mainly create illusory 3D images and scenes on 2D image planes by computing and displaying psychological depth indicators, such as calculating perspective, removing hidden lines and surfaces, adding shading, lighting and shadows, and so forth. The objects can also be manipulated in many ways to aid visualisation. If one object is nested inside another, the user can reveal the inner object by making the outer one transparent, or move the inner object to be investigated separately. To see the whole objects, the computer may keep the object fixed and move the viewer around it, or fix the viewer and rotate the object. Also, the user may use a cutting plane or a more elaborate probe to remove portions of the object to reveal its internal structure. There are two main approaches to manipulating and visualising 3D objects:

• the surface-oriented approach, and
• the volume-oriented approach.

The volume-based approach was discussed in [13], et al. Both approaches have advantages and disadvantages, but in this thesis we use only the surface-oriented approach, because our objects are represented by B-rep models.

7.3 Some essential tools for manipulation and visualisation

There are numerous graphics packages that can be used to manipulate and visualise solid objects represented by B-rep models, or, more specifically, polygonal models, with a wide choice of functions or tools. In this section, we broadly discuss some essential tools required in this project. These methods are already widely implemented and used; we study them only to know some basic concepts in order to use them efficiently. We do not intend to go deeply into detail or to try to improve or develop any new methods. Therefore only basic concepts are presented, with references for those who are interested in more information.

7.3.1 Viewing functions

To see a better view of the objects, the computer may keep the object fixed and move the viewer around it. We choose to fix the objects and move the viewer around them because, in the real situation, we assume that all the objects are located in the world co-ordinate system E³, which acts like a reference co-ordinate system. The user is allowed to wander around but not allowed to move any object without keeping a record of its original position. Details on these viewing functions can be found in [9], [10], et al.

7.3.2 Surface and materials

Appropriate handling of the object's surface (triangular faces in this project) helps the user to easily understand the object. From a given point of view, those surfaces that are completely unseen by the user are called hidden surfaces. All others are called visible surfaces, even if only a part of the surface is actually visible. Hidden surfaces normally need not be displayed by the computer on the screen. This means that a picture can be drawn more quickly if the hidden surfaces are removed; also, an object represented on the screen by a line drawing is much easier to understand if the hidden surfaces are removed. Methods to determine which surfaces are hidden are called hidden-surface removal methods. One widely used method, called the painter's algorithm, is to draw all the surfaces in order, from the farthest away to the nearest. Those surfaces that are not at all visible at the end of the process are hidden.
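A minimal sketch of the painter's algorithm just described, approximating each face's depth by its centroid (a common simplification that can fail for interpenetrating faces) and assuming that a larger z means farther from the viewer:

```python
# Sketch of the painter's algorithm: draw surfaces from the farthest to the
# nearest, so nearer faces overwrite (hide) farther ones. Depth is
# approximated by the centroid z of each face, and the viewer is assumed to
# look along -z, so larger z is farther away.
def painters_order(faces):
    """faces: list of triangles, each a sequence of (x, y, z) vertices."""
    def depth(tri):
        return sum(v[2] for v in tri) / len(tri)
    return sorted(faces, key=depth, reverse=True)   # far (large z) first
```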

The optical appearance of a surface depends on the material from which it is made. Except in line drawing, a material description must be assigned to each surface. As our objects are reconstructed purely from shape information, no material detail is available. Therefore the user is allowed to freely assign material descriptions to the objects' surfaces, as long as this makes the objects easier to understand. A material's appearance may be categorised into two general categories: reflection and transparency (or transmission). The most significant property involved in reflection is surface colour. This property controls which frequencies of light are reflected by the surface, which in turn determines what colour is perceived by a user looking at the object. Light that is not absorbed may be reflected in two different ways: diffuse and specular. Diffuse reflection is usual for matte objects, e.g. a piece of rough cloth; this kind of material scatters the incoming light uniformly in all directions. In contrast, specular reflection is typical of shiny objects: the light bounces perfectly off the surface. A material description normally includes diffuse and specular reflection coefficients.

Transmitted light is the light that has passed through a transparent object. This type of light is normally affected by the body colour or opaqueness of the object. Specular transmission is the passage of light through a material with a very regular internal structure. Diffuse transmission appears when light passes through a material with a non-regular or rough structure; the light is scattered in all directions with equal colour intensity upon exiting from the object.

7.3.3 Lighting and shading

Objects in a scene generated by a computer must be illuminated to be visible, just as in the real world. The object illumination is combined with the material specification to determine shading, which is the task of finding the colour and intensity of light leaving a point on a surface. The radiation of a light source can be described by a plot called a goniometric diagram [11], which specifies the light intensity in various directions away from the source. A further specification of the light is a goniometric colour diagram, which describes the colour of the light in various directions.

Ambient or indirect light usually appears in real scenes. It is the light that bounces around the scene so that every object receives some light. The appropriate simulation of this kind of light can make an image look more realistic, but it can be a very time-consuming process.

To calculate the colour of light leaving a point on a surface, the illumination and the material description are combined in a process called shading. The result is the intensity and colour of the light leaving the point. This light is a blending of reflection (diffuse and specular) and transmission (also diffuse and specular).

Lambert (or faceted) shading employs only the face normal to calculate this light. Each point receives almost the same shading because every point on a polygon has the same face normal. The result is that the object appears to be made of flat surfaces. Although Lambert shading is a very fast technique, the resulting images have poor quality compared with the following techniques.

Gouraud or shade-interpolating shading attempts to smooth the appearance of a polygonal model by blending shades across the polygon. Each vertex is shaded using the lighting model and the vertex normal. The colours computed at each vertex are blended over the face of the polygon.

Phong (or normal-interpolating) shading attempts to make a polygonal model appear smoother. The method interpolates the vertex normals across the face of the object. When a point needs to be shaded, a new surface normal is built at that point from a combination of the vertex normals. Then that point is shaded with the newly built normal. Phong shading produces much more realistic images, but the method is far slower than either Lambert or Gouraud shading. All three shading techniques share the problem that the silhouette of a polygonal surface is not smooth, and this can give away the underlying polygonal model if the polygons are big enough to be noticeable.

Chapter 7: Manipulation and visualisation

7.3.4 Rendering

The process of mapping objects in E³ into E², or the 2D image plane, is called projection; when the projection is intended for rendering, it is called the viewing projection. The orthogonal projection maps objects by sending every visible point in the viewing volume to the image plane along a line that intersects the screen at a normal angle. A more realistic projection includes perspective, which leads to the perspective projection. In this projection, the sharpness of the perspective can be controlled by specifying the half-angle or full-angle at the tip of the viewing pyramid; a larger angle gives a greater perspective. There are only a few major classes of techniques for rendering (from a user's point of view). Scan-line algorithms produce rendered images by filling in pixels one after the other across a horizontal scan line. These algorithms often derive much of their efficiency by reusing information from pixel to pixel as they work. Some curved surfaces can only be rendered with subdivision algorithms, which divide the object into small pieces to make it easier to render. An algorithm that works on each point or pixel on the screen independently is called a random-sampling algorithm; different pieces of the screen can be rendered by entirely different machines. All of these techniques may be preceded by a pre-processing algorithm, which enhances an object with some additional information before rendering to improve its quality in some respect.

There are a number of methods for rendering. We discuss only three popular techniques, Z-buffer, ray tracing and radiosity, because most rendering methods share at least some of the principles used in these three; they represent a cross-section of the widely used and available methods [11].

The Z-buffer algorithm is of the scan-line variety. It makes use of the Z-buffer, which stores on a pixel-by-pixel basis the Z-depth of the nearest object encountered so far at that pixel. That object appears in the image buffer, which is originally cleared to a background colour.


The basic Z-buffer technique does not support shadows, transparency or reflections, although all of those facilities can be added at the cost of extra computational time and program complexity. Simplicity, versatility and the fact that they can handle an unlimited number of objects are the advantages of Z-buffer techniques. Shadows can be added to the Z-buffer method by rendering the scene once from the point of view of each light source. The result is stored in an illumination buffer for that light.

When rendering the scene from the eye's point of view, points may be transformed to the point of view of each light, and from there compared with the appropriate value in the shadow buffer for that light. This comparison can reveal whether or not a point is in shadow with respect to that light. One drawback of this scheme is the great increase in computational time and memory space required.
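The depth test at the heart of the Z-buffer can be sketched as follows; this is an illustrative fragment (not the thesis software), and the 4×4 buffer and integer "object id" colours are hypothetical:

```c
#include <assert.h>

/* Each candidate fragment carries a Z-depth; it replaces the stored pixel
 * only if it is nearer than everything drawn there so far. */
#define ZB_W 4
#define ZB_H 4
#define ZB_FAR 1e30

static double zbuf[ZB_W * ZB_H];
static int image[ZB_W * ZB_H];       /* 0 = background colour */

void zbuffer_clear(void)
{
    int i;
    for (i = 0; i < ZB_W * ZB_H; i++) {
        zbuf[i] = ZB_FAR;            /* nothing drawn yet */
        image[i] = 0;
    }
}

void zbuffer_plot(int x, int y, double z, int object_id)
{
    int i = y * ZB_W + x;
    if (z < zbuf[i]) {               /* nearer than the current occupant? */
        zbuf[i] = z;
        image[i] = object_id;
    }
}
```

Because the test is purely local to each pixel, objects can be plotted in any order and in any number, which is the source of the simplicity and versatility noted above.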

Another rendering technique is ray tracing, which is of the random-sampling variety. Ray tracing starts with an eye ray that begins at the eye and travels into the scene through the computer screen. When the ray hits an object, a transmitted ray and a reflected ray are sent out into the volume of interest to determine the colour of the light that will be reflected and transmitted by this object back to the observer. Each of these rays follows the same process of striking an object's surface and sending out additional rays.

The result is a ray tree, whose depth may be limited before rendering to increase speed at the expense of some shading quality. If a ray never hits an object, it will finally strike the boundary sphere, which gives the ray a background colour. Shadows are created through illumination rays. These travel from the shading point to a light source. If an object obstructs the ray before it reaches the light, then the shading point is in shadow with respect to that light; otherwise that light's illumination is added in to the incident light at the shading point. Once the ray tree has been constructed, it is shaded from the bottom up, finally ending with a colour for the eye ray, which is placed into the appropriate pixel on the screen.
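The recursive structure of the ray tree can be sketched as follows; this is an illustrative fragment only, in which the scene, the intersection test and all the colour weights are hypothetical stubs chosen just to show the control flow:

```c
#include <assert.h>
#include <math.h>

/* Every hit spawns a reflected and a transmitted ray; a miss returns the
 * background colour from the boundary sphere; the tree is pruned at a
 * fixed depth. */
#define MAX_DEPTH 3
#define BACKGROUND 0.1

static int hit_scene(int depth)
{
    return depth < 2;                /* stub: rays deeper than 2 miss */
}

double trace(int depth)
{
    double local, reflected, transmitted;
    if (depth > MAX_DEPTH)
        return 0.0;                  /* prune the ray tree */
    if (!hit_scene(depth))
        return BACKGROUND;           /* ray escaped to the boundary sphere */
    local = 0.2;                     /* stand-in for local shading */
    reflected = trace(depth + 1);    /* follow the reflected ray */
    transmitted = trace(depth + 1);  /* follow the transmitted ray */
    return local + 0.25 * reflected + 0.25 * transmitted;
}
```

Shading "from the bottom up" corresponds to the recursion unwinding: the deepest rays resolve first, and their colours are blended into the results returned to their parents until the eye ray's colour emerges.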

Radiosity is based on the principles of energy transfer, which describe how light bounces among parts of an environment. This method can be used to handle diffuse inter-reflections or colour bleeding. The scene is first divided into small elements, which together form the original model. Each pair of elements is examined, and a form-factor is calculated from their relative geometry. The form-factor describes the percentage of light radiated from one surface that will strike the other. Once the form-factors are computed, we can begin to balance the light in the environment. This operation is done in a balancing loop.

The first step is to determine the emittance of each object in the volume of interest. Then a guess is made to approximate the incident light at that point, and from that value the reflected light is calculated. The total radiosity of that element is the sum of the emitted and reflected light. Then each pair of elements in the scene is examined. The light leaving each object's surface is scaled by the form-factor and applied to the other as incident light. Once this has been done for all pairs of elements, each element has a new guess for its incident light. The computer recalculates the radiosity of each element, and the loop is repeated.

After a while the system converges, meaning that it has become balanced. At this point we have a solution, which describes just how much light is bleeding from every element to every other. The model is then modified to take these diffuse inter-reflections into account, and can then be rendered. Part of this process is that objects in the scene cast shadows on other objects even when they are not in the direct path of a light source. Radiosity can greatly enhance the realism of a rendered scene by taking into account light balancing and shadows.
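The balancing loop can be sketched for two hypothetical elements; this is an illustrative fragment only, and the emittances, reflectances and form-factors are invented for the example:

```c
#include <assert.h>
#include <math.h>

/* form[j][i] is the form-factor: the fraction of light leaving element j
 * that strikes element i. Each pass recomputes the incident light from
 * the current radiosities; the loop repeats until the radiosities stop
 * changing, i.e. the system has converged. */
#define N_ELEM 2

void radiosity_balance(const double emit[N_ELEM], const double refl[N_ELEM],
                       const double form[N_ELEM][N_ELEM], double b[N_ELEM])
{
    double change = 1.0;
    int i, j;
    for (i = 0; i < N_ELEM; i++)
        b[i] = emit[i];                      /* first guess: emittance */
    while (change > 1e-12) {
        change = 0.0;
        for (i = 0; i < N_ELEM; i++) {
            double incident = 0.0, updated;
            for (j = 0; j < N_ELEM; j++)
                incident += form[j][i] * b[j];   /* light arriving at i */
            updated = emit[i] + refl[i] * incident;
            change += fabs(updated - b[i]);
            b[i] = updated;
        }
    }
}
```

For two facing elements that each reflect half of the incident light, with the first emitting one unit, the loop settles at radiosities 4/3 and 2/3: the fixed point of the mutual-illumination equations.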


For photorealism, ray tracing and radiosity are usually the rendering methods of choice, because they closely approximate the real world, although they are also quite fixed in their approach. On the other hand, the Z-buffer technique is often more flexible and can offer more new kinds of expression in rendering.

All rendering methods must take into account the aliasing problem. The effects of this problem can be reduced by using an anti-aliasing algorithm, which can be roughly considered as a carefully controlled type of blurring. It is based on mathematical theory and an understanding of the human visual system, so that when an image has been properly anti-aliased, all the edges will appear clean, instead of staircased, from a distance. In the ray-tracing approach, anti-aliasing can be handled with an extension of ray tracing called stochastic ray tracing. Problems that cannot be solved with this method are caustics, i.e. bright spots of focused light, and diffuse inter-reflections.
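One way to realise this "carefully controlled blurring" is stochastic supersampling; the sketch below is illustrative only, with a hypothetical scene consisting of a single vertical edge and a small stand-in random-number generator:

```c
#include <assert.h>

/* Several jittered rays are cast through each pixel and their colours
 * averaged, so a pixel straddling an edge receives an intermediate shade
 * rather than a hard staircase step. */
static double scene_colour(double x, double y)
{
    (void)y;
    return x < 0.5 ? 0.0 : 1.0;     /* black left of the edge, white right */
}

static double next_jitter(unsigned *state)
{
    *state = *state * 1664525u + 1013904223u;   /* tiny LCG */
    return (*state >> 8) / 16777216.0;          /* value in [0, 1) */
}

double pixel_colour(int px, int py, int samples, unsigned seed)
{
    double sum = 0.0;
    int s;
    for (s = 0; s < samples; s++) {
        double x = px + next_jitter(&seed);     /* random point inside */
        double y = py + next_jitter(&seed);     /* the pixel square */
        sum += scene_colour(x, y);
    }
    return sum / samples;
}
```

A pixel split down the middle by the edge averages to roughly mid-grey, while a pixel wholly on one side keeps its pure colour, so the staircase is softened only where it occurs.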

Full details of the methods described in this section can be found in [11] and [12].

7.4 Manipulation and visualisation software

There are a large number of commercial graphics packages available for the manipulation and visualisation of reconstructed objects. In this research, we have chosen Autodesk 3D Studio version 3.0, which is a PC-based graphics package. 3D Studio is capable of 3D modelling, rendering and animation, and creates images and animations that appear photorealistic. We can create complex scenes with the advanced modelling tools of 3D Studio. (Note that we used these modelling tools to create the test objects described in Chapter 3.) Textures, surfaces and reflections can be added to give a 3D object depth and a photographic quality. The reconstructed objects can be converted into the DXF file format and exported to 3D Studio to be manipulated and visualised.


Three basic functions of this package are modelling, rendering and animation. Modelling is the process of building or creating 3D scenes. Rendering uses the scenes that were modelled and generates still pictures. Animation involves choreographing the movement of objects. The program creates many still images and then plays them back at high speed, much as motion pictures are created and shown.

7.5 References

[1] T. E. Clifton III and F. L. Wefer, Direct volume display devices, IEEE Computer Graphics & Applications, July 1993, 57-65.

[2] D. F. McAllister, 3D displays, Byte, Vol. 17, No. 5, May 1992, 183-188.

[3] H. Fuchs, M. Levoy and S. M. Pizer, Interactive visualisation of 3D medical data, IEEE Computer, August 1989, 46-51.

[4] S. A. Benton, Experiments in holographic video imaging, SPIE Institute Series, Vol. IS8, SPIE, Bellingham, 1990, 247-246.

[5] S. A. Benton and M. Lucente, Interactive computation of display holograms, Proc. Computer Graphics International 92, Springer-Verlag, New York, 1992.

[6] L. D. Sher, The oscillating-mirror technique for realizing true 3D, in Stereo computer graphics and other true 3D technologies, D. McAllister, ed., Princeton University Press, 1993.

[7] C. Barillot, Surface and volume rendering techniques to display 3D data, IEEE Engineering in Medicine and Biology, March 1993, 111-119.

[8] M. Arrott and S. Latta, Perspectives on visualisation, IEEE Spectrum, September 1992, 61-65.


[9] I. O. Angell, High-resolution computer graphics using C, Macmillan, London, 1990.

[10] D. F. Rogers and J. A. Adams, Mathematical elements for computer graphics, 2nd ed., McGraw-Hill, 1989.

[11] A. S. Glassner, 3D computer graphics, 2nd rev. ed., Herbert, London, 1991.

[12] A. H. Watt, 3D computer graphics, 2nd ed., Addison-Wesley, Wokingham, 1993.

[13] S. L. Wood, Visualization and modeling of 3D structures, IEEE Engineering in Medicine & Biology, June 1992, 72-79.

CHAPTER 8: RESULTS

8.1 Introduction

In this chapter we present the results from the reconstruction of the phantoms described in Chapter 3. We have conducted experiments on both single objects and multiple objects. In the experiments, we first tested our method on single computer-modelled objects. Then we applied the method to multiple physical objects. All the X-ray images or projection planes used in the experiments were 256×256, 8-bit, grey-scale images. The software was written in standard C.

8.2 Single objects

We use three computer-modelled objects (parallel projections): an ellipsoid, a dimple-shaped object and a bone-shaped object. The ellipsoid, which is a convex object, was used in order to see the distribution of common tangent points. The dimple-shaped object was reconstructed to demonstrate how our method deals with a surface containing a smooth cavity. We also reconstructed a more complicated object, the bone-shaped object, and used it as an example to see the effects of geometrical misalignment of the X-ray images on the reconstruction method.


8.2.1 Ellipsoid

An ellipsoid was modelled in a computer and used as an object to simulate a set of 10 X-ray images, or a set of projection planes, shown in Fig 8.2a. The method of X-ray image simulation was explained in Chapter 3. The projection directions spread uniformly, as shown in detail in Table C4 in Appendix C. Then we used the method and the software described in Chapter 4 to define the contrast curves in these X-ray images. The results are also shown in Fig 8.2a.

In this case, there is only one contrast curve in each image, so there is no need for the user to intervene to define sensible occluding contrast curves. These contrast curves from the 10 X-ray images were then processed by our method to determine a set of common tangent points, with the result shown in Fig 8.2b.



Fig 8.2a (0)-(9) Simulated X-ray images and contrast curves of the ellipsoid at projection directions 0-9 in Table C4 in Appendix C.

Fig 8.2h Error distance distribution of the result in Fig 8.2b. At each common tangent point, the error distance is defined as the shortest distance from the common tangent point to the surface of the computer-created ellipsoid. (Histogram axes: number of vertices (%) against error distance in pixels.)


Fig 8.2b The 90 common tangent points on the surface of the ellipsoid reconstructed from the contrast curves in Fig 8.2a: (1) The common tangent points. (2) Top view. (3) Side view. (4) Front view. (5) The common tangent points superimposed on the ellipsoid's surface. (6) Top view. (7) Side view. (8) Front view.

8.2.2 Dimple-shaped object

A dimple-shaped object was modelled in a computer. It is used as an object to simulate a set of 4 X-ray images, as shown in Fig 8.2c, by using the method described in Chapter 3. The projections of these X-ray images spread uniformly, as shown in detail in Table C2 in Appendix C. We then used the method and the software described in Chapter 4 to define the contrast curves from the X-ray images. The results are also shown in Fig 8.2c.


Fig 8.2c (0)-(3) Simulated X-ray images and contrast curves of the dimple at projection directions 0-3 in Table C2 in Appendix C.

Fig 8.2d The reconstructed dimple-shaped object, reconstructed by using the contrast curves in Fig 8.2c.


By using these contrast curves, our reconstruction method automatically reconstructed two disconnected surfaces representing the dimple-shaped object. These two reconstructed surfaces are the occluding surface and the cavity of the dimple; see Fig 8.2d, where the thin wire frame represents the occluding surface and the thick wire frame represents the cavity.

8.2.3 Bone-shaped object

A bone-shaped object was modelled in a computer and used as an object to simulate the set of 10 X-ray images, or the set of projection planes, shown in Fig 8.2e. The method of X-ray simulation is described in Chapter 3. The projection directions spread uniformly, as shown in detail in Table C4 in Appendix C. The method and the software described in Chapter 4 were used to define the contrast curves from these X-ray images. The results are also shown in Fig 8.2e.

Fig 8.2i Error distance distribution of the result in Fig 8.2d. At each vertex, the error distance is defined as the shortest distance from the vertex to the surface of the computer-created dimple. (Histogram axes: number of vertices (%) against error distance in pixels.)



Fig 8.2e (0)-(9) Simulated X-ray images and contrast curves of the bone-shaped object at projection directions 0-9 in Table C4 in Appendix C.


Fig 8.2f The reconstructed bone-shaped object (total vertices: 315, total triangular faces: 626), reconstructed from the contrast curves in Fig 8.2e.

Then we used our method described in Chapters 5 and 6 to reconstruct this bone-shaped object; the result is shown in Fig 8.2f.

Fig 8.2j Error distance distribution of the result in Fig 8.2f. At each vertex, the error distance is defined as the shortest distance from the vertex to the surface of the computer-created bone. (Histogram axes: number of vertices (%) against error distance in pixels.)


Fig 8.2g The reconstructed bone-shaped object (total vertices: 301, total triangular faces: 620), reconstructed from 10 misaligned X-ray images.

To see the effects of geometrical misalignment of the projection planes on our reconstruction method, each of the X-ray images in Fig 8.2e was randomly rotated in an anticlockwise or clockwise direction within a range of 10 degrees and translated within a range of 10 pixels from the origin of the original image. The bone-shaped object reconstructed from these misaligned images is shown in Fig 8.2g.

8.3 Multiple objects

We took 10 X-ray images or projection planes of the synthetic knee joint phantom described in Chapter 3. These images, taken at uniformly distributed projection directions (Table C4 in Appendix C), are shown in Fig 8.3a. Then we used the method and the software explained in Chapter 4 to define the contrast curves in these X-ray images. The results are also shown in Fig 8.3a.


Fig 8.3a (0)-(9) X-ray images and contrast curves of the synthetic knee-joint phantom taken at projection directions 0-9 in Table C4 in Appendix C.

In this experiment we manually defined the set of sensible closed occluding contrast curves. We superimposed the corresponding contrast curves on each of the X-ray images and selected and linked the contrast curves as appropriate to form a set of closed occluding contrast curves. As described in Chapter 4, it is not necessary for the user to define only the valid occluding contrast curves, because our method automatically eliminates any invalid occluding contrast curves in the object identification stage. Then we used our method described in Chapter 5 to identify and reconstruct each object separately. In this experiment, five objects were identified and reconstructed: the outer and inner surfaces of the Femur (Fig 8.3d), the Patella's outer surface, and the outer and inner surfaces of the Tibia and the Fibula (Fig 8.3e). Note that the Tibia and the Fibula are identified as one connected object.

We use Fig 8.3b to illustrate how a tangent point is determined in practice. For example, a common tangent plane φ, rotating about the line CP5CP7, intersects projection plane or X-ray image ψ5 (line l5 is the intersection line) and touches contrast curve AB at point C, which is a tangent point. Fig 8.3c is used to describe how tangent points spread along contrast curve GH in a projection plane ψi. All the tangent points on this contrast curve were determined by pairing ψi with the rest of the projection planes.

Fig 8.2k Error distance distribution of the result in Fig 8.2g. At each vertex, the error distance is defined as the shortest distance from the vertex to the surface of the computer-created bone. (Histogram axes: number of vertices (%) against error distance in pixels, 0-60.)


Fig 8.3b Tangent point C on contrast curve AB on X-ray image number 5 (projection plane ψ5) in Fig 8.3a, and tangent point F on contrast curve DE on X-ray image number 7 (projection plane ψ7) in Fig 8.3a, are determined by common tangent plane φ.


Fig 8.3c Distribution of tangent points on contrast curve GH

These reconstructed objects can not only be displayed and manipulated separately but can also be put together, displayed and manipulated in the same scene, as in the example shown in Fig 8.3f, where the reconstructed outer surfaces were displayed in one scene at two different viewing angles.

Fig 8.3d Left: the reconstructed outer surface of the Femur (total vertices: 529, total triangular faces: 1054). Right: the inner surface of the Femur (total vertices: 490, total triangular faces: 980), reconstructed from the contrast curves in Fig 8.3a.



Fig 8.3e Top: the reconstructed outer surface of the Patella (total vertices: 34, total triangular faces: 64). Left: the reconstructed outer surface of the Tibia and Fibula (total vertices: 833, total triangular faces: 1699). Right: the reconstructed inner surface of the Tibia and Fibula (total vertices: 750, total triangular faces: 1499). All of these objects were reconstructed from the contrast curves in Fig 8.3a.


Fig 8.3f All the reconstructed outer surfaces in one scene, viewed from two angles.

To improve the visibility of these objects, in this experiment we assigned a non-smooth grey plastic (high opacity) as the material for the objects' surfaces and rendered the scene in Fig 8.3f by adding some light sources and using Phong shading. The results are presented in Figs 8.3g-h. The material assigned to a surface can be changed as appropriate. For example, we assigned blue glass (low opacity) to the reconstructed Femur and red glass (also low opacity) to the reconstructed Tibia and Fibula to visualise the hidden Patella, as shown in Fig 8.3i.



Fig 8.3g Rendered sceneof the left knee joint in Fig 8.3f.


Fig 8.3h Rendered scene of the right knee joint in Fig 8.3f.

Fig 8.3i The opacity of the objects can be changed to visualise a hidden object (the Patella in this figure).

CHAPTER 9: DISCUSSION AND CONCLUSIONS

9.1 Discussion

9.1.1 General discussion

In the previous chapter we have shown how our object-based 3D X-ray imaging can be used to reconstruct different kinds of objects. An ellipsoid was reconstructed to demonstrate the distribution of surface points. It can easily be seen that the distribution was optimum; in other words, the common tangent points lie close together on highly curved parts of the surface and spread widely on smooth, flat parts. We also reconstructed a dimple-shaped object in order to show the problem that occurs with an object's cavity. This problem will be discussed later in this chapter. A more complicated object, a bone-shaped object, was also considered, both in normal conditions and in a special condition in which the X-ray images were misaligned. We used a knee joint phantom to demonstrate how well our method could cope with a real physical phantom, and with the case where there is more than one object in the volume of interest.


The results have shown that we have indeed achieved the objective stated in Section 1.4.2 in Chapter 1, and that the method can be a useful tool in the 3D reconstruction of complete and still solid objects. This method can be used in medical applications, e.g. medical visualisation, organ modelling and surgical planning, or in computer vision, e.g. object recognition and geometric reasoning (path planning, obstacle avoidance, etc.). No prior knowledge about an object is required, and the method automatically and efficiently adapts to the complexity of the object's surface. We use 2D projection images, which provide continuous information, unlike cross-section images, in which some important information between slices may be missed.

In terms of the number of 2D images required, our method can produce a reasonably good result for a knee joint by using only 10 images, while others, such as the work reported in [1], required about 100 cross-section images at 2 mm intervals, and GE's latest 3D X-ray machine (using the ART method) requires at least 30-40 2D X-ray projection images.

At this stage, the method gives its best results for highly contrasted objects such as bone. We have considered only objects which have well-defined surfaces; amorphous features have not yet been contemplated. This project has been evaluated visually. More work needs to be done to evaluate the method quantitatively. However, the method worked very well with the test objects, where the results can be visualised.

In this research, the stability of the algorithms and computation was emphasised over speed, though the computation time required (which depends on the object's complexity) is rather small; e.g. the reconstruction of the knee joint required about 3 minutes on a DEC 3520, not including the time spent by the user intervening in some processes. To evaluate the project in terms of speed, the whole system needs to be properly integrated and effective user interfaces have to be created. Such things are beyond the scope of this research.


9.1.2 Data acquisition

In this research we assumed that all the 2D images used were acquired correctly and precisely in terms of alignment and image quality.

In terms of alignment, the algorithms have not yet been developed to cope with serious alignment problems. As can be seen from the results in Section 8.2, the bone reconstructed from misaligned X-ray images was distorted. In these circumstances, the method could not exactly locate the right positions of the surface points. In other words, the tangent points on each pair of the images could not be matched appropriately, and as a result the reconstructed tangent curves were mispositioned. This effect can also be seen on the surface of the reconstructed knee joint, where some parts were distorted. For this knee joint, the misalignment problem occurred because we changed the projection directions by adjusting the phantom manually. This could not guarantee 100% precision, and as a result some images were still slightly misaligned.

In terms of image quality, in this research we assumed that every tangent or singular curve in 3D must appear as a contrast curve in a 2D image. The quality of the X-ray image is therefore vital for the reconstruction process. The effect can be clearly seen from the reconstruction of the knee joint in Chapter 8. The Patella's inner surface was not identified as an object because the occluding contrast curves of the surface could not be fully detected in some images. This is crucial because our method is based on the assumption that there always exists a closed occluding contrast curve of an object on any projection plane (Assumption 5.8a). If the occluding contrast curves cannot be fully detected on all projection planes, then the object cannot be identified and, as a result, will not be reconstructed.

In practice, a problem concerning image quality which we faced was image data transfer. The data could not be transferred directly from the X-ray machine used in the experiment to our workstation because we did not know the data format. Having tried without success to obtain the data format, the only thing we could do within the given PhD time frame was to tap the images from the machine's video monitor, as explained in Chapter 3. We knew very well that this method greatly degraded the image quality, but we had no other choice.

There is another point worth mentioning. In real situations, especially in medical applications, volumes of interest are usually cylindrical, e.g. a limb or an abdomen. Even though we require uniformly distributed projection directions, and there may be a projection direction like d1 or d2 in Fig 9.1a(1) which is impossible to implement for a cylindrical volume in practice, this should cause no problem for our method. We can see that, by using the set of 10 projection directions in Table C4 in Appendix C, the range of θ in Fig 9.1a(2) is from -6.47° to 46.7°. This means that if the cylindrical volume's major axis is the x-axis, as shown in Fig 9.1a(2), then the X-ray sources need an angular space of only 53.17°, Fig 9.1a(3). Therefore, by appropriately adjusting the set of projection directions, we can avoid the situation where a projection direction is parallel or almost parallel to the major axis of a cylindrical volume, like d1 or d2 in Fig 9.1a(1).



Fig 9.1a Range of 10 uniformly distributed projection directions: (1) Example of projection directions (d) which cannot be used for a cylindrical volume (a leg in this case). (2) Range of θ is from -6.47° to 46.7° and range of φ is from 0° to 360°. (3) X-ray sources (CP) need an angular space of only 53.17° (the range of θ). This allows us to cope with a cylindrical volume such as a leg.

9.1.3 Tangent and singular curve reconstruction

The method presented in Chapter 5 can reconstruct any tangent or singular curve as long as it appears as a contrast curve in a 2D X-ray image. This allows us to reconstruct some important features of an object, such as a crack on a fractured bone.


The crack is, in fact, a singular curve. However, there is still a case that the method cannot cope with. The algorithms in Chapter 5 will fail to find common tangent points or surface points if a common tangent plane touches an object's surface at an infinite number of points (Case 2 in Assumption 2.6a). This case occurs when there is a flat face or a planar curve which lies on a common tangent plane. In the reconstructions in this research we simply ignore this case and consider only common tangent planes that touch an object's surface at a finite number of points.

We did not have any serious problem when we reconstructed the test objects. But for a man-made object, such as a polyhedral object, the problem may cause the failure of the whole reconstruction process if we do not adjust the set of projection directions appropriately. For example, imagine we are trying to reconstruct a cube by using three projection planes which are orthogonal to each other, with each projection plane also parallel to a face of the cube. Every common tangent plane will touch the cube's surface at an infinite number of points, and therefore no surface will be found. However, in our opinion, this extreme case is unlikely to happen in practice, especially when we are dealing with natural objects. But if it does happen, readjusting the set of projection directions such that not all common tangent planes touch the surface at an infinite number of points would solve the problem.

9.1.4 Surface reconstruction

The algorithms presented in Chapter 6 have worked very well with the test objects. Nevertheless, there are some crucial points to be discussed. The general primary surface reconstruction is based on Proposition 6.2a, in which we assume that an object's surface is at least locally diffeomorphic to E². This proposition can be used only when surface points are optimally distributed, or when projection directions are uniformly distributed and the number of projection planes is large enough. We have not yet developed any method to check whether the points in a main connected network satisfy the condition in Proposition 6.2a. The reference plane at each point in the network can be adjusted if the point and its neighbours do not satisfy the condition.

Although our method has been designed to deal with objects with arbitrary topological properties, it still has a limitation: a main connected network, or wire frame, must be connected. Our algorithm treats separate networks as separate objects. For example, the reconstructed dimple-shaped object shown in Chapter 8 was supposed to be one object, but it was treated as two separate surfaces: the outer boundary and the cavity. This occurred because the contrast curves of these two parts were not connected, and therefore the networks were not connected.

9.1.5 Incomplete objects

In this research we have assumed that all the objects are completely inside the volume of interest or, in other words, that the objects' occluding contrast curves are fully visible from all projection planes. Some distortion occurs if an object is not completely in the volume of interest. This effect can be clearly seen in the reconstructed knee joint shown in Chapter 8. The Femur, the Tibia and the Fibula are not completely in the volume of interest; e.g. the ends of the Tibia and Fibula cannot be seen in projection plane 0, and the end of the Femur does not appear in projection plane 8 in Fig 8.3a. This resulted in the distortion at the ends of these bones, as shown in the results. In this case the effect is not serious, because only small parts of the knee joint are outside the volume of interest, and the algorithms can still tolerate it. However, if a large part of an object is out of the volume, then the effect will be much more notable. We use the closed occluding contrast curves of objects in the object identification process. This means that if these contrast curves are not complete, the object will not be identified and therefore will not be reconstructed.


9.2 Conclusions

In this thesis we present a new method for 3D reconstruction using 2D X-ray images or other forms of penetrating radiation. Tangent and singular curves on an object's surface in 3D are reconstructed from their contrast curves in 2D. The object's surface is then derived from the network of these tangent and singular curves. The experimental results indicate that the proposed method can reconstruct each complete object surface separately, and with precision, while using a small number of 2D X-ray images, e.g. 10. This results in dramatic savings in radiation exposure and in data acquisition, processing and storage. The method constitutes a solid basis from which new applications in medicine, computer vision, etc., can be further explored. This research mainly aims at establishing the validity of the method; hence a number of refinements have yet to be accomplished.

9.3 Further work

9.3.1 Data acquisition

As we discussed earlier regarding the effect of misalignment, the data acquisition task is very crucial to the project. In the experiments we fixed the positions of the X-ray source and the receptor and rotated the phantoms. In practice this cannot be done, because the position of an object must be fixed. We need either to find a way to adapt existing machines effectively for this task, or to design a new machine specially for it.

If we use an existing machine, the positions of the machine's X-ray source and receptor must be precisely controllable to give the exact projection directions required. The source and the receptor must also be easy and quick to reposition, because it is sometimes rather difficult to keep an object at a standstill for a long period of time, especially when the object is a living entity. If we design a new machine, a machine that can produce all the required X-ray images simultaneously may be a good idea. The shape of the machine might look like the one in Fig 9.3a. All the X-ray sources and receptors needed can be positioned inside a partially-spherical exposure chamber at specific projection directions. When a volume of interest is placed at an appropriate position inside this chamber, all the required X-ray images can then be taken at the same time.


Fig 9.3a Example of a data-acquisition machine for Object-Based 3D X-ray Imaging, showing some X-ray sources and receptors positioned on the inner surface of the exposure chamber.

We may also need software to correct some distortion due to an object's movement or to 2D image misalignment (if the machine cannot produce precise projection directions). By pairing all the 2D markers, iterative algorithms could be applied to correct the distortion. This is rather important to the project and is worth exploring in more detail.

9.3.2 Incomplete objects

The problem of incomplete objects may be tackled by using the volume of interest's surface, which can be called a bounding surface. An incomplete object could be considered a complete object by ignoring the part that is outside this bounding surface, i.e. outside the volume of interest. We reconstruct this closed bounding surface and use it as a reference. Then all the object's surface points found outside this bounding surface are discarded, and the object is fully reconstructed using only the surface points found inside the bounding surface. This suggestion may sound simple, but in reality it could be very tricky to implement. As discussed earlier in Section 9.1.5, the object identification process proposed in this thesis is based on the use of an object's closed occluding contrast curves. If the occluding curves are not complete, as in the case of incomplete objects, then we cannot directly use the process to identify and reconstruct the objects. This case might be solved by first detecting whether there is any incomplete object and then applying a special object identification process to it. The problem of incomplete objects is very important to the project if we want to use this research for wider applications in practice.
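The clipping idea above can be sketched in a few lines. This is only a minimal illustration: it assumes, hypothetically, a spherical bounding surface, whereas the proposal here is a general reconstructed closed bounding surface; the function names are invented for the sketch.

```python
# Sketch of bounding-surface clipping (Section 9.3.2). A spherical bounding
# surface is assumed purely for illustration; helper names are hypothetical.

def inside_bounding_sphere(point, centre, radius):
    """True if a surface point lies on or inside the bounding sphere."""
    dx, dy, dz = (p - c for p, c in zip(point, centre))
    return dx * dx + dy * dy + dz * dz <= radius * radius

def clip_to_volume(points, centre, radius):
    """Keep only the surface points found inside the bounding surface."""
    return [p for p in points if inside_bounding_sphere(p, centre, radius)]

points = [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
kept = clip_to_volume(points, centre=(0.0, 0.0, 0.0), radius=2.0)
```

The surviving points would then be handed to the surface formation stage unchanged.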

9.3.3 Processautomation and systemintegration

The processes presented in Chapter 4 still depend very much on the user's intervention. We considered only information from the contrast curves in each 2D image and ignored the intensity information. The role of the user may be reduced dramatically if we develop algorithms that combine contrast curve information and intensity information. The process of finding sensible occluding contrast curves could then be completely automated.

This research mainly concentrates on the validity of the method; therefore we have not yet integrated the processes or created a user-friendly interface. It will certainly be much more convenient for a user if we integrate all the processes and run them on one machine, instead of on many different machines as we have done in this research.


9.3.4 Surface formation

We still need an efficient algorithm to solve the problem of an object being represented by two or more disconnected networks of surface points, e.g. the reconstructed dimple-shaped object in Chapter 8. This problem might be solved by including density information to help in making decisions when linking disconnected networks. If the networks become connected, then the surface can be derived as one object.

It might also be interesting to explore further the global methods mentioned in Chapter 6. Flexible models could be used to fit the networks, but a lot more work has to be done to make sure that the model used has the same topological properties, e.g. genus, as the object being reconstructed. Another important requirement is that, after the fitting process, all important tangent or singular curves, e.g. cracks on a bone, must still appear on the object's surface. If we can solve all of these problems, flexible models could be implemented and compared with the surface formation method proposed in this thesis.


APPENDIX A: TOPOLOGICAL PROPERTIES OF SURFACES

Al Introduction

In this section we present the application to our project of some existing basic principles of topology.

To study any kind of surface systematically, a surface can be divided into triangles. The triangles can be either curved or flat. No generality is lost by specifying that the edges of the triangles should be straight [1]. This is no restriction for compact surfaces, e.g. surfaces which are closed subsets of a bounded region of Euclidean space. Indeed, with the more general concept of infinite triangulation a wider class of surfaces can be triangulated. These matters are dealt with in detail in [1].

Definition A1 Let Vn be a set of triangles which represents a surface S.

When a surface is triangulated, the properties used to describe the surface can be divided into two categories: topological and geometric. The topological properties are linked together in a network or graph which represents the interconnections, or connectivity, of the vertices, edges and facets of the surface's triangles, i.e. how they are related to each other. These properties contain no geometric information about the surface. The geometric properties specify quantitative information, such as vertex co-ordinates and face and curve equations.

To define the properties of a class of surfaces, we consider only the topological properties of surfaces. Fig A1 shows how topological properties can be categorised. Manifold surfaces will be described first; orientable surfaces, which are a subset of manifold surfaces, will then be characterised. Finally, closed surfaces will be explained in detail.

[Fig A1 is a classification tree: a surface is either a 2-manifold or a non-2-manifold surface; a 2-manifold surface is either orientable or non-orientable; and each of these is either a closed or a non-closed surface.]

Fig A1 Surface classification.

A2 Manifold surfaces

In this thesis, only 2D (E²) and 3D (E³) Euclidean spaces are under consideration, so a manifold surface in E³ means a 2-manifold surface.

Definition A2 A surface S is a manifold surface if and only if the surface satisfies the following condition.


* If P is a point on the surface, there are other points of the surface close to P which make up a patch which is homeomorphic to a plane of E² or an open disk of E². That is, we can deform the patch into a plane without either merging separate points or tearing the disk (Fig A2). In terms of triangulation this means that the edges opposite P in the triangles of Vn that have P as a vertex form a simple polygon. A polygon is simple if there is no pair of non-consecutive edges sharing a point (Fig A3).

NB. Point P is any point on the surface, including points along the surface's edges, if there are such points.

Fig A2 Point and open disk

Fig A3 Simple polygon

Thus, from Definition A2, where surfaces are joined to one another or to themselves at an edge or vertex, or touch at a point or along a curve segment, the neighbourhoods of such points are not simple polygons, and so these surfaces are non-2-manifold surfaces (Fig A4).


Fig A4 Non-2-manifold surface
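Part of Definition A2 can be checked mechanically on a triangulation: in a 2-manifold (possibly with boundary) no edge may belong to more than two triangles, which is exactly what fails at the joined edge of Fig A4. A sketch of this necessary (but not sufficient) edge test, with hypothetical function names:

```python
from collections import Counter

def edge_use_counts(triangles):
    """Count how many triangles share each undirected edge."""
    counts = Counter()
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            counts[frozenset(e)] += 1
    return counts

def passes_edge_manifold_test(triangles):
    """Necessary condition for a 2-manifold triangulation:
    no edge is used by more than two triangles."""
    return all(n <= 2 for n in edge_use_counts(triangles).values())

# The boundary of a tetrahedron is a 2-manifold surface.
tetrahedron = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
```

A full check of Definition A2 would additionally verify that the link of every vertex is a simple polygon.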

A3 Orientable surfaces

There are 2-manifold surfaces that do not have physical counterparts in E³, i.e. that cannot be constructed in real-world three-dimensional space at all, e.g. the Klein bottle. These non-realisable 2-manifold surfaces can be distinguished from realisable ones [2] by the concept of orientability.

Definition A3 A 2-manifold surface S is orientable if the surface satisfies the following condition.

All triangles in Vn can be given an orientation in such a way that two triangles with a common edge are always oriented coherently, i.e. the common edge occurs in its positive orientation in the direction chosen for one triangle, and in its negative orientation for the other.

From these definitions we can check whether a surface is orientable by considering any point P and arbitrarily defining a clockwise orientation around it. Maintaining this orientation, move along any closed path on the surface. If there exists a path such that it is possible to return to P with the opposite orientation, then the surface is not orientable; otherwise it is orientable. For example, a sphere and a torus are orientable, but a Möbius strip and a Klein bottle are not.
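Definition A3 suggests a direct algorithm: pick an orientation for one triangle and propagate it across shared edges, flipping neighbours so that each common edge is traversed in opposite directions by its two triangles; if a contradiction ever appears, the surface is not orientable. A sketch for a connected triangulation (the function name is illustrative):

```python
from collections import defaultdict, deque

def is_orientable(triangles):
    """Propagate a coherent orientation (Definition A3) over a connected
    triangulation; return False if two coherence requirements conflict."""
    edge_to_tris = defaultdict(list)
    for t, (a, b, c) in enumerate(triangles):
        for e in ((a, b), (b, c), (c, a)):
            edge_to_tris[frozenset(e)].append(t)

    def directed(tri):  # the three directed edges of an oriented triangle
        return {(tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])}

    orient = {0: tuple(triangles[0])}
    queue = deque([0])
    while queue:
        t = queue.popleft()
        for e in directed(orient[t]):
            for u in edge_to_tris[frozenset(e)]:
                if u == t:
                    continue
                a, b, c = triangles[u]
                keep, flip = (a, b, c), (a, c, b)
                # coherence: u must traverse the shared edge the opposite way
                want = flip if e in directed(keep) else keep
                if u not in orient:
                    orient[u] = want
                    queue.append(u)
                elif orient[u] != want:
                    return False
    return True
```

Run on the boundary of a tetrahedron this succeeds, while the classic five-triangle Möbius band fails, as expected.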


A4 Closed surfaces

In this section we define a closed surface; closed surfaces will then be categorised topologically into standard classes.

Definition A4 A 2-manifold surface S is a closed surface if

* Vn satisfies the intersection condition, and

* Vn is connected, and

* for every vertex P of a triangle v in Vn, the link of P is a simple closed polygon.

Intersection condition:

Any two triangles in Vn either are disjoint, or have one vertex in common, or have two vertices, and consequently the entire edge joining them, in common.

Connectedness condition:

Vn, satisfying the intersection condition, is called connected if there is a path along the edges of the triangles from any vertex to any vertex. A link is the set of edges opposite a vertex P in Vn satisfying the intersection condition.

It is very important to try to classify general surfaces into standard classes, because to tackle the problem of surface reconstruction we should know how many possible types of surface we have to face. Then we can decide how many types of surface we should concentrate on. All closed surfaces can be divided topologically into two standard classes.


Theorem A1 Every closed surface S is homeomorphic to one of the following surfaces:

* a sphere with g handles (i.e. of genus g), g ≥ 0;
* a sphere with c cross-caps, c ≥ 1.
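For the orientable class of Theorem A1, the genus can be read off a triangulation through the Euler characteristic, chi = V - E + F, which equals 2 - 2g for a sphere with g handles (a standard result, not stated explicitly above). A sketch:

```python
def euler_characteristic(triangles):
    """chi = V - E + F for a triangulated closed surface."""
    vertices = {v for tri in triangles for v in tri}
    edges = {frozenset(e) for a, b, c in triangles
             for e in ((a, b), (b, c), (c, a))}
    return len(vertices) - len(edges) + len(triangles)

def genus(triangles):
    """Genus of a closed orientable surface: chi = 2 - 2g."""
    return (2 - euler_characteristic(triangles)) // 2

# Octahedron: 6 vertices, 12 edges, 8 faces -> chi = 2, genus 0 (a sphere).
octahedron = [(0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),
              (2, 0, 5), (1, 2, 5), (3, 1, 5), (0, 3, 5)]
```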

A5 References

[1] P. J. Giblin, Graphs, surfaces and homology, Chapman and Hall, London, 1977.

APPENDIX B: SURFACES AND CONTRAST CURVES

In this appendix we present some zero-genus surfaces. These surfaces were used as examples in order to study the characteristics of contrast curves and occluding contrast curves.



Table B1 (1)-(18) Zero-genus surfaces


Table B2 (1)-(18) Contrast curves of the surfaces in Table B1.



Table B3 (1)-(18) Occluding contrast curves of the surfaces in Table B1.

APPENDIX C: PROJECTION DIRECTIONS

This appendix contains sets of uniformly distributed projection directions derived using regular polyhedra.

C1 Cube

Fig C1 Cube

Edge/Inter-radius = √2, Dihedral angle = 90°
Number of vertices (NV) = 8
Number of faces (NF) = 6
Number of projection directions (NP) = 3


Vertices V (nv = 0..NV-1, V = [x y z]'):
  0: ( 1,  1,  1)    1: ( 1,  1, -1)    2: ( 1, -1,  1)    3: ( 1, -1, -1)
  4: (-1,  1,  1)    5: (-1,  1, -1)    6: (-1, -1,  1)    7: (-1, -1, -1)

Faces F (nf = 0..NF-1, F = {nv, ...}):
  0: {1, 0, 2, 3}    1: {4, 5, 7, 6}    2: {0, 1, 5, 4}
  3: {3, 2, 6, 7}    4: {2, 0, 4, 6}    5: {1, 3, 7, 5}

Projection directions d (np = 0..NP-1, d = [x y z]'):
  0: (1.000000, 0.000000, 0.000000)
  1: (0.000000, 1.000000, 0.000000)
  2: (0.000000, 0.000000, 1.000000)

Table C1 Cube's vertex positions, faces and projection directions derived from the cube.
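The projection directions in Table C1 are the unit face normals of the polyhedron, with each antipodal pair counted once (the cube's 6 faces give 3 directions). A sketch of this derivation, using the cube data of Table C1 (function names are illustrative):

```python
import math

# Cube vertices and faces as in Table C1.
V = [(1, 1, 1), (1, 1, -1), (1, -1, 1), (1, -1, -1),
     (-1, 1, 1), (-1, 1, -1), (-1, -1, 1), (-1, -1, -1)]
F = [(1, 0, 2, 3), (4, 5, 7, 6), (0, 1, 5, 4),
     (3, 2, 6, 7), (2, 0, 4, 6), (1, 3, 7, 5)]

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def projection_directions(vertices, faces, eps=1e-9):
    """One unit direction per antipodal pair of face normals."""
    dirs = []
    for f in faces:
        n = unit(cross(sub(vertices[f[1]], vertices[f[0]]),
                       sub(vertices[f[2]], vertices[f[1]])))
        # append only if neither n nor -n has been seen already
        if all(max(abs(a - b) for a, b in zip(n, d)) > eps and
               max(abs(a + b) for a, b in zip(n, d)) > eps for d in dirs):
            dirs.append(n)
    return dirs
```

The same routine, given the octahedron, tetrahedron, icosahedron or dodecahedron data below, would yield the corresponding direction sets.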

C2 Octahedron

Fig C2 Octahedron

Edge/Inter-radius = 2, Dihedral angle = 109°28'
Number of vertices (NV) = 6
Number of faces (NF) = 8
Number of projection directions (NP) = 4


Vertices V (nv = 0..NV-1, V = [x y z]'):
  0: ( 1,  0,  0)    1: (-1,  0,  0)    2: ( 0,  1,  0)
  3: ( 0, -1,  0)    4: ( 0,  0,  1)    5: ( 0,  0, -1)

Faces F (nf = 0..NF-1, F = {nv, ...}):
  0: {0, 2, 4}    1: {2, 0, 5}    2: {3, 0, 4}    3: {0, 3, 5}
  4: {2, 1, 4}    5: {1, 2, 5}    6: {1, 3, 4}    7: {3, 1, 5}

Projection directions d (np = 0..NP-1, d = [x y z]'):
  0: ( 0.577350,  0.577350,  0.577350)
  1: ( 0.577350,  0.577350, -0.577350)
  2: ( 0.577350, -0.577350,  0.577350)
  3: ( 0.577350, -0.577350, -0.577350)

Table C2 Octahedron's vertex positions, faces and projection directions derived from the octahedron.

C3 Tetrahedron

Fig C3 Tetrahedron

Edge/Inter-radius = 2√2, Dihedral angle = 70°32'
Number of vertices (NV) = 4
Number of faces (NF) = 4
Number of projection directions (NP) = 4


Vertices V (nv = 0..NV-1, V = [x y z]'):
  0: ( 1,  1,  1)    1: ( 1, -1, -1)    2: (-1,  1, -1)    3: (-1, -1,  1)

Faces F (nf = 0..NF-1, F = {nv, ...}):
  0: {3, 2, 1}    1: {2, 3, 0}    2: {1, 0, 3}    3: {0, 1, 2}

Projection directions d (np = 0..NP-1, d = [x y z]'):
  0: (-0.577350, -0.577350, -0.577350)
  1: (-0.577350,  0.577350,  0.577350)
  2: ( 0.577350, -0.577350,  0.577350)
  3: ( 0.577350,  0.577350, -0.577350)

Table C3 Tetrahedron's vertex positions, faces and projection directions derived from the tetrahedron.

C4 Icosahedron

Fig C4 Icosahedron

Edge/Inter-radius = √5 - 1, Dihedral angle = 138°11'
Number of vertices (NV) = 12
Number of faces (NF) = 20
Number of projection directions (NP) = 10


Vertices V (nv = 0..NV-1, V = [x y z]'):
  0: ( 1.6,  1,  0)    1: (-1.6,  1,  0)    2: ( 1.6, -1,  0)    3: (-1.6, -1,  0)
  4: ( 1,  0,  1.6)    5: ( 1,  0, -1.6)    6: (-1,  0,  1.6)    7: (-1,  0, -1.6)
  8: ( 0,  1.6,  1)    9: ( 0, -1.6,  1)   10: ( 0,  1.6, -1)   11: ( 0, -1.6, -1)

Faces F (nf = 0..NF-1, F = {nv, ...}):
  0: {0, 8, 4}     1: {0, 5, 10}    2: {2, 4, 9}     3: {2, 11, 5}
  4: {1, 6, 8}     5: {1, 10, 7}    6: {3, 9, 6}     7: {3, 7, 11}
  8: {0, 10, 8}    9: {1, 8, 10}   10: {2, 9, 11}   11: {3, 11, 9}
 12: {4, 2, 0}    13: {5, 0, 2}   14: {6, 1, 3}    15: {7, 3, 1}
 16: {8, 6, 4}    17: {9, 4, 6}   18: {10, 5, 7}   19: {11, 7, 5}

Projection directions d (np = 0..NP-1, d = [x y z]'):
  0: ( 0.577350,  0.577350,  0.577350)
  1: ( 0.577350,  0.577350, -0.577350)
  2: ( 0.577350, -0.577350,  0.577350)
  3: ( 0.577350, -0.577350, -0.577350)
  4: ( 0.356822,  0.934172,  0.000000)
  5: (-0.356822,  0.934172,  0.000000)
  6: ( 0.934172,  0.000000,  0.356822)
  7: ( 0.934172,  0.000000, -0.356822)
  8: ( 0.000000,  0.356822,  0.934172)
  9: ( 0.000000, -0.356822,  0.934172)

Table C4 Icosahedron's vertex positions, faces and projection directions derived from the icosahedron.

C5 Dodecahedron

Fig C5 Dodecahedron


Edge/Inter-radius = 3 - √5, Dihedral angle = 116°34'
Number of vertices (NV) = 20
Number of faces (NF) = 12
Number of projection directions (NP) = 6

Vertices V (nv = 0..NV-1, V = [x y z]'):
  0: ( 1,  1,  1)      1: ( 1,  1, -1)      2: ( 1, -1,  1)      3: ( 1, -1, -1)
  4: (-1,  1,  1)      5: (-1,  1, -1)      6: (-1, -1,  1)      7: (-1, -1, -1)
  8: ( 0.6,  1.6,  0)  9: (-0.6,  1.6,  0) 10: ( 0.6, -1.6,  0) 11: (-0.6, -1.6,  0)
 12: ( 1.6,  0,  0.6) 13: ( 1.6,  0, -0.6) 14: (-1.6,  0,  0.6) 15: (-1.6,  0, -0.6)
 16: ( 0,  0.6,  1.6) 17: ( 0, -0.6,  1.6) 18: ( 0,  0.6, -1.6) 19: ( 0, -0.6, -1.6)

Faces F (nf = 0..NF-1, F = {nv, ...}):
  0: {1, 8, 0, 12, 13}     1: {4, 9, 5, 15, 14}     2: {2, 10, 3, 13, 12}
  3: {7, 11, 6, 14, 15}    4: {2, 12, 0, 16, 17}    5: {1, 13, 3, 19, 18}
  6: {4, 14, 6, 17, 16}    7: {7, 15, 5, 18, 19}    8: {4, 16, 0, 8, 9}
  9: {2, 17, 6, 11, 10}   10: {1, 18, 5, 9, 8}     11: {7, 19, 3, 10, 11}

Projection directions d (np = 0..NP-1, d = [x y z]'):
  0: ( 0.850651,  0.525731,  0.000000)
  1: (-0.850651,  0.525731,  0.000000)
  2: ( 0.525731,  0.000000,  0.850651)
  3: ( 0.525731,  0.000000, -0.850651)
  4: ( 0.000000,  0.850651,  0.525731)
  5: ( 0.000000, -0.850651,  0.525731)

Table C5 Dodecahedron's vertex positions, faces and projection directions derived from the dodecahedron.


APPENDIX D: GEOMETRICAL TOOLS

In this section we present some essential geometrical tools used in this project. All of these tools are taken from [1].

D1 2D geometrical tools

The intersection of two lines

Suppose there are two lines p + μq and r + λs, where p = [x1 y1]', q = [x2 y2]', r = [x3 y3]' and s = [x4 y4]', for -∞ < μ, λ < ∞. If the two lines intersect, there are unique values of μ and λ such that

p + μq = r + λs

that is, a point which is common to both lines. This vector equation can be written as two separate equations

μx2 - λx4 = x3 - x1    (D1)

μy2 - λy4 = y3 - y1    (D2)

These two equations are solved by multiplying Eq D1 by y4 and Eq D2 by x4, and subtracting. If (x2y4 - y2x4) = 0 then the lines are parallel and there is no point of intersection; otherwise

μ = ((x3 - x1)y4 - (y3 - y1)x4) / (x2y4 - y2x4)    (D3)
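Eqs D1 to D3 translate directly into code (a minimal sketch; the function name is illustrative):

```python
def intersect_2d_lines(p, q, r, s):
    """Intersection of the lines p + mu*q and r + lambda*s (Eqs D1 to D3).
    Returns None when x2*y4 - y2*x4 = 0, i.e. the lines are parallel."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p, q, r, s
    det = x2 * y4 - y2 * x4
    if det == 0:
        return None
    mu = ((x3 - x1) * y4 - (y3 - y1) * x4) / det
    return (x1 + mu * x2, y1 + mu * y2)
```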

Translation of origin

In this case the co-ordinate axes of the old and new systems are in the same direction and are of the same scale; however, the new origin is a point t = [tx ty]' relative to the old axes. Hence in the new system the old origin has co-ordinates [-tx -ty]'. Therefore the transformation matrix is

[ 1  0  -tx ]
[ 0  1  -ty ]
[ 0  0   1  ]

Rotation of axes

If the new axes are derived by rotating the old ones through an angle θ radians anticlockwise about the origin (this is the usual mathematical way of measuring angles), then the transformation matrix is

[  cosθ  sinθ  0 ]
[ -sinθ  cosθ  0 ]
[   0     0    1 ]
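The two axis transformations above can be sketched as 3x3 homogeneous matrices acting on column vectors [x y 1]'. This assumes the column-vector convention reconstructed here, and the function names are illustrative:

```python
import math

def translate_axes(tx, ty):
    """Change of axes to a new origin t = [tx ty]'."""
    return [[1.0, 0.0, -tx],
            [0.0, 1.0, -ty],
            [0.0, 0.0, 1.0]]

def rotate_axes(theta):
    """Change to axes rotated anticlockwise by theta radians about the origin."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s, 0.0],
            [-s, c, 0.0],
            [0.0, 0.0, 1.0]]

def to_new_axes(M, point):
    """Express a 2D point in the new co-ordinate system."""
    x, y = point
    h = (x, y, 1.0)
    return tuple(sum(M[r][k] * h[k] for k in range(3)) for r in range(2))
```

For example, the point t itself becomes the origin of the translated system.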

D2 3D geometrical tools

Definition of a plane

We now consider a plane in 3D space. The general point v = [x y z]' on the plane is given by the vector equation

n · v = k    (D4)

where k is a scalar and n is a direction vector which represents the set of lines perpendicular to the plane. These lines are said to be normal to the plane. If a is any point on the plane then naturally n · a = k, and so by replacing k in the above equation we may rewrite it as

n · (v - a) = 0    (D5)

The point of intersection of a line and a plane

Suppose the line is given by b + μd and the plane by n · v = k. The two either do not intersect at all (if they are parallel), intersect at an infinite number of points (if the line lies in the plane), or have a unique point of intersection which lies on both the line and the plane. We have to find the unique value of μ (if one exists) for which

n · (b + μd) = k    (D6)

that is

μ = (k - n · b) / (n · d), provided n · d ≠ 0    (D7)

n · d = 0 if the line and plane are parallel, and so there is no unique point of intersection.
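Eq D7 can be sketched in code (the function name is illustrative):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def line_plane_intersection(b, d, n, k):
    """Point where the line b + mu*d meets the plane n.v = k (Eq D7);
    returns None when n.d = 0, i.e. the line and plane are parallel."""
    nd = dot(n, d)
    if nd == 0:
        return None
    mu = (k - dot(n, b)) / nd
    return tuple(bi + mu * di for bi, di in zip(b, d))
```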

The point of intersection of two lines

Suppose we have two lines b1 + μd1 and b2 + λd2. Their point of intersection, if it exists (if the lines are non-coplanar or parallel then they will not intersect), is identified by finding unique values for μ and λ which satisfy the vector equation (three separate co-ordinate equations)

b1 + μd1 = b2 + λd2    (D8)

Three equations in two unknowns means that, for the equations to be meaningful, there must be at least one pair of equations which are independent, and the remaining equation must be a combination of these two. Two lines are parallel if one direction vector is a scalar multiple of the other. So we take two independent equations, find the values of μ and λ, and put them into the third equation to see if they are consistent. Note that if the two independent equations are

a11μ + a12λ = k1
a21μ + a22λ = k2

then the determinant of this pair of equations, Δ = a11a22 - a12a21, will be non-zero (because the equations are independent), and we have the solutions

μ = (a22k1 - a12k2) / Δ    (D9)

λ = (a11k2 - a21k1) / Δ    (D10)


The minimum distance between two lines

It was mentioned above that if two lines are either parallel or non-coplanar then they do not intersect. There is therefore a minimum distance between two such lines which is greater than zero. We shall now calculate this distance. The cases where the lines are parallel and non-parallel are different; we consider first the non-parallel case. Suppose the two lines are a + μc and b + λd. The minimum distance between these two lines must be measured along a line perpendicular to both. This line must, therefore, be parallel to the direction l = c × d. Now, since both a + μc and b + λd are perpendicular to l, they both lie in planes with l as normal. Also, since we know points on both lines, we may uniquely identify these planes: l · (v - a) = 0 and l · (v - b) = 0. These planes are, of course, parallel, and so the required minimum distance is simply the distance from a point on one plane, say b, to the other plane. We have derived a formula for this, giving the required answer

|(c × d) · a - (c × d) · b| / |c × d| = |(c × d) · (a - b)| / |c × d|    (D11)

If the lines are coplanar then this expression yields the result zero, since the lines must intersect as they are not parallel. Now suppose the two lines are parallel. In this case d = ηc for some η ≠ 0, and consequently |c × d| = 0 and the above expression is undefined. However, the two lines are normal to the same planes. Take the plane containing a with normal c (parallel to d):

c · (v - a) = 0    (D12)

We simply find the point of intersection, e say, of the line b + λd with this plane, and the required distance is |a - e|, where

e = b + λd    (D13)

and λ = c · (a - b) / (c · d).

Let m and n be the two closest points on these two lines, i.e. the distance between m and n is the shortest distance between the two lines. By using the method presented in [3], the points m and n can be obtained as follows:

m = a + γc    (D14)

and

n = b + δd    (D15)

where

γ = (C2C5 - C3C4) / (C1C4 - C2²)

δ = (C1C5 - C2C3) / (C1C4 - C2²)

and C1 = |c|², C2 = c · d, C3 = c · (a - b), C4 = |d|², C5 = d · (a - b).
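Eqs D14 and D15 give the closest points directly; a sketch for non-parallel lines (the function name is illustrative):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def closest_points(a, c, b, d):
    """Closest points m = a + gamma*c and n = b + delta*d on the lines
    a + gamma*c and b + delta*d (Eqs D14 and D15); assumes the lines
    are not parallel, so that C1*C4 - C2^2 is non-zero."""
    ab = sub(a, b)
    C1, C2, C3 = dot(c, c), dot(c, d), dot(c, ab)
    C4, C5 = dot(d, d), dot(d, ab)
    det = C1 * C4 - C2 * C2
    gamma = (C2 * C5 - C3 * C4) / det
    delta = (C1 * C5 - C2 * C3) / det
    m = tuple(ai + gamma * ci for ai, ci in zip(a, c))
    n = tuple(bi + delta * di for bi, di in zip(b, d))
    return m, n
```

For example, the x-axis and the vertical line through (0, 1, 1) have closest points (0, 0, 0) and (0, 1, 0), one unit apart.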


The plane through three given non-collinear points

Suppose we are given three non-collinear points p1, p2 and p3. Then the two vectors p2 - p1 and p3 - p2 represent the directions of two lines coincident at p2, both of which lie in the plane containing the three points. We know that the normal to the plane is perpendicular to every line in the plane, and in particular to the two lines mentioned above. Also, because the points are not collinear, p2 - p1 is not parallel to p3 - p2, so the normal to the plane is n = (p2 - p1) × (p3 - p2). We know that p1 lies in the plane, so the equation may be written

((p2 - p1) × (p3 - p2)) · (v - p1) = 0    (D16)
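Eq D16 in code: the pair (n, k) returned below defines the plane n · v = k through the three points (a minimal sketch; the function name is illustrative):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def plane_through_points(p1, p2, p3):
    """Plane n.v = k through three non-collinear points (Eq D16)."""
    n = cross(sub(p2, p1), sub(p3, p2))
    return n, dot(n, p1)
```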

The line of intersection of two planes

Let the two planes be p · v = k1, where p = [p1 p2 p3]', and q · v = k2, where q = [q1 q2 q3]'. We assume that the planes are not parallel. The line common to the two planes naturally lies in each plane, and so it must be perpendicular to the normals of both planes. Thus the direction of this line must be d = p × q, and the line can be written in the form b + μd, where b can be any point on the line. In order to completely classify the line we have to find one such b. We find the point which is the intersection of the two planes together with a third plane which is neither parallel to them nor cuts them in a common line. Choosing a plane with normal d will satisfy these conditions. We still need a value for k3, but any will do, so we take k3 = 0, assuming that this third plane goes through the origin. Thus b is given by the column vector

    [ p1           p2           p3          ]-1 [ k1 ]
b = [ q1           q2           q3          ]   [ k2 ]    (D17)
    [ p2q3 - p3q2  p3q1 - p1q3  p1q2 - p2q1 ]   [ 0  ]
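Eq D17 amounts to solving a 3x3 linear system whose rows are p, q and p × q, with right-hand side (k1, k2, 0); a sketch using Cramer's rule (function names are illustrative):

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def plane_plane_line(p, k1, q, k2):
    """Line b + mu*d common to the planes p.v = k1 and q.v = k2 (Eq D17);
    assumes the planes are not parallel. Returns (b, d)."""
    d = cross(p, q)
    M = [p, q, d]
    rhs = (k1, k2, 0.0)
    det = det3(M)
    b = []
    for col in range(3):                     # Cramer's rule, column by column
        Mc = [list(row) for row in M]
        for r in range(3):
            Mc[r][col] = rhs[r]
        b.append(det3(Mc) / det)
    return tuple(b), d
```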


D3 References

[1] I. Angell, High-resolution computer graphics using C, Macmillan, London, 1990.

[2] S. Prakoonwit, Object-oriented X-ray tomography, MSc dissertation, Imperial College of Science, Technology & Medicine, London, 1990.

[3] R. J. T. Bell, Coordinate geometry of three dimensions, Macmillan, 1931.

APPENDIX E: OVERVIEW OF COMPUTER SYSTEMS

All the computer systems used in this project are shown in Fig E1. A digital X-ray machine (1) was used to produce the X-ray images of the real physical phantom. Since the format of these images was unknown to us, and we had tried very hard to obtain details of the format but without success, we decided to tap the video signal from this machine. A video frame grabber on a Silicon Graphics Indy workstation (2) was then used to grab the video signal. The computer-generated objects were created with the 3D Studio modelling package on a 486-based PC (5). The X-ray images of these objects were then produced by a program on a DEC 3520 (4). All the X-ray images, from both the real physical phantoms and the computer-generated objects, were processed on a Sun Sparc 2 workstation (3) using software developed by I. Matalas (for contrast curve detection and representation), which runs on the MIDAS package written by P. Freeborough of Imperial College. The analytically represented contrast curves were transferred to the DEC 3520 to be used in the 3D reconstruction process. The reconstructed objects were then converted into DXF file format and transferred into the 3D Studio package for manipulation and visualisation on the PC. Note that the main software in this research was developed on the DEC machine and was written in ANSI C with X-window graphic functions.


[Fig E1 shows the interconnections between the digital X-ray machine, the Silicon Graphics Indy workstation, the Sun Sparc 2 workstation, the DEC 3520 workstation and the 486-based PC.]

Fig E1 Computer systems used in this project.
