
Feature Recognition From Scanned Data Points

DISSERTATION

Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the Graduate School of The Ohio State University

By

Nien-Lung Lee, B.S., M.S.

*****

The Ohio State University

1995

Dissertation Committee:
Dr. Chia-Hsiang Menq
Dr. H. Busby
Dr. G. Kinzel

Approved by
Adviser
Department of Mechanical Engineering

© Copyright by

Nien-Lung Lee

1995

This is dedicated to my parents.

Acknowledgements

I would like to express my sincere appreciation to my adviser, Prof. Chia-Hsiang Menq, for his patient guidance and instructive suggestions throughout my Ph.D. study. Without his continuous support and encouragement, this work would not have been possible.

Thanks go to the other members of my Ph.D. advisory committee, Dr. Gary L. Kinzel and Dr. Henry R. Busby, for their suggestions and comments.

My appreciation and thanks are extended to my colleagues in Dr. Menq's research group for their friendship. In particular, I have enjoyed the discussions with Dr. K.C. Hsia, Dr. B.S. Chen, Dr. Z.C. Yan, Dr. E.M. Lim, Mr. B.D. Yang, Mr. J.L. Yang, and Mr. Y.M. Hong during the course of this research.

I want to share this joyful achievement with my parents and sisters. Without their constant moral and spiritual support, this dissertation would not have been completed. I want to thank my girlfriend Li-Hui for her love, support, and understanding during my long period of study.

The work reported here was primarily supported by the National Science Foundation under grant No. DDM-9215600. This support is gratefully acknowledged.

Vita

August 1, 1964: Born, Taipei, Taiwan, R.O.C.
1986: B.S., National Cheng-Kung University, Tainan, Taiwan
1990: M.S., Tennessee Technological University, Cookeville, Tennessee
1990-present: Research Associate, Department of Mechanical Engineering, The Ohio State University

Publications

Lee, N.-L., Menq, C.-H., "Automatic Recognition of Geometric Forms from B-rep Models," ASME Int. Computers in Engineering Conf., 1995.

Yang, J.L., Lee, N.-L., Menq, C.-H., "Application of Computer Vision in Reverse Engineering for 3D Coordinate Acquisition," ASME IMECE (formerly WAM), 1995.

Fields of Study

Major Field: Mechanical Engineering

Studies in:
Topic 1: Reverse Engineering
Topic 2: Feature Representation and Feature Extraction
Topic 3: CAD/CAM Solid Modeling Technology
Topic 4: Computer Integrated Dimensional Inspection

Table of Contents

DEDICATION
ACKNOWLEDGEMENTS
VITA
LIST OF TABLES
LIST OF FIGURES

I Introduction
  1.1 Background and Motivation
  1.2 Research Objectives
  1.3 Literature Review
      1.3.1 Curvature and Surface Characterization
      1.3.2 Feature Technology
  1.4 Organization of Dissertation

II Segmentation of Scanned Data Points
  2.1 Definition of Character Lines/Points
      2.1.1 Classification of Character Lines
      2.1.2 Identification of Character Lines/Points
  2.2 Curvature Calculation From Scanned Data Points
      2.2.1 2D Curvature and Error Analysis
      2.2.2 Surface Curvature Analysis
      2.2.3 Curvature Calculation From Orthogonal Uniform Grid
      2.2.4 Curvature Calculation From Non-Orthogonal Uniform Grid
      2.2.5 Curvature Calculation From Non-Uniformly Distributed Grid
  2.3 Segmentation: Discrete Approach
      2.3.1 A Five-Step Process
      2.3.2 Hypothesis Tests
      2.3.3 2D Examples
      2.3.4 3D Examples
  2.4 Summary

III Identification of Surface Types and Parameters From Scanned Data Points
  3.1 Identification of Surface Types: Curvature Approach
  3.2 Quadratic Fit
      3.2.1 Best Fit Coefficients
  3.3 Identification of Surface Types: Quadratic-Fit Approach
      3.3.1 Determining Locations
      3.3.2 Determining Orientation
      3.3.3 Recognizing Surface Type
  3.4 Summary

IV Curves and Surfaces Approximation
  4.1 Nonuniform Rational B-splines (NURBS)
      4.1.1 Analytic and Geometric Properties
  4.2 Least-Squares Fitting of NURBS Curves
  4.3 Least-Squares Fitting of NURBS Surfaces
      4.3.1 2½D Data Points
      4.3.2 3D Data Points
  4.4 Summary

V Automatic Recognition of Geometric Forms from B-rep Models
  5.1 Introduction
  5.2 Geometric Forms
      5.2.1 Homeomorphism
      5.2.2 Gauss-Bonnet Theorem
      5.2.3 Form Features vs. Curvature
      5.2.4 Two Attributes
      5.2.5 Transition Feature
  5.3 Neutral Basis (I): Two Attributes
      5.3.1 Characterization of Surfaces
      5.3.2 Characterization of Edges
      5.3.3 Characterization of Vertices
  5.4 Identification of Character Lines
      5.4.1 Identification and Characterization of Feature Loops
      5.4.2 Identification and Characterization of Character Loops
      5.4.3 Construct Entity-Loop-Attribute Graph
      5.4.4 Recognition of Basic Features
  5.5 Implementation and Results
      5.5.1 Example 1
      5.5.2 Example 2
      5.5.3 Example 3
      5.5.4 Example 4
      5.5.5 Discussion
  5.6 Summary

VI Recognition of Detailed Features and Feature Interactions
  6.1 Features in Manufacturing Processes
      6.1.1 Machining
      6.1.2 Die Casting & Injection Molding
      6.1.3 Sheet Metal Forming
  6.2 Classification and Recognition of Detailed Features from Basic Features
  6.3 Feature Interactions
      6.3.1 Recursive Interactions
      6.3.2 Relative-Position Interactions
  6.4 Summary

VII Conclusions
  7.1 On-going Research
  7.2 Recommendation

LIST OF REFERENCES

List of Tables

1 k_a for Normal, Uniform, and Maxwell Distributions at Different Confidence Intervals
2 Identification of Surface Types From Curvature
3 Characterization of Surfaces

List of Figures

1 The Role of Reverse Engineering in a CIM Environment
2 Flow Chart for Reverse Engineering
3 The Role of Proposed Neutral Basis for Feature Recognition in Concurrent Engineering Design
4 Types of Edges
5 Character Lines of Discontinuity from Points on Two Intersecting Planes
6 Character Lines of Discontinuity from Points on Two Intersecting Surfaces
7 Character Lines of Discontinuity
8 Character Lines of Zero Gaussian Curvature
9 Character Line of D⁰ Continuity
10 Fitting Approach for Identifying Character Points from 2D Data Points
11 NURBS Surface Fitting from Points on Two Intersecting Planes
12 Curvature for a 2D Curve
13 Curvature of a Circular Arc from Two Calculating Algorithms
14 Curvature of a Circular Arc from Different Sizes of Sampling Interval
15 Curvature of a Circular Arc from Different Sizes of Sampling Interval
16 Meusnier Theorem: C and Cn Have the Same Normal Curvature at P
17 Using 7 Discrete Data Points to Calculate Principal Curvature
18 Curvature Behavior of Character Points
19 Applying Gaussian Operator to D⁰, D¹, and D² Character Lines With Different Variances
20 A Five-Step Process to Identify Character Lines
21 Three Types of Distribution Curves
22 Example Part Model: A Wheel Hub
23 Example Curve Segment
24 Hypothesis Tests At Two Locations On The Curve
25 Example 1: 2D Segmentation Result
26 3D Example 1: Character Lines
27 3D Example 1: Character Lines
28 Curvature Graph
29 Parameter Assignment by Using Linear Interpolation for Surface Approximation from 2½D Data Points
30 Lofting: Surface Interpolation Through Cross-Sectional Curves
31 Orientable Closed Surfaces
32 A Form with Positive Total Curvature Grown on a Block
33 A Form with Positive Total Curvature Grown on a Block
34 Character Line Between Two Planes
35 Character Line of Continuity
36 Character Line of Continuity
37 Character Line of Zero Gaussian Curvature
38 The Definition of Adjacent Geometric Elements
39 A Part Model and the Corresponding Entity-Loop-Attribute Graph
40 Block Diagram of Feature Recognition Module
41 Example 1: A Block with 6 Features
42 Example 2: Rectangular and Cylindrical Slots
43 Example 3: Torus Surface On A Block
44 Example 4: CAM-I Example Part
45 Typical Features for Machining Process
46 Typical Features for Die Casting and Injection Molding
47 Typical Features for Sheet Metal Forming
48 Simple Recursive Features
49 Features Sharing Part of Loops: Open Character Loop
50 Features Sharing Part of Loops: Overlapping Character Loops
51 Open Character Loop Connected by an Edge
52 Open Character Loop Connected by a Planar Face
53 Open Character Loop Connected by an Edge
54 Two Positive Features Interact Without Character Loop
55 One Positive and One Negative Features Interact Without Character Loop
56 One Positive and One Negative Features Interact Without Character Loop
57 Feature Crossing the Boundary of Two Planar Faces
58 Two Positive Features Sharing Character Loops
59 Features Crossing Character Loops
60 Two Positive Features Sharing an Edge
61 One Positive and One Negative Features Sharing an Edge

CHAPTER I

Introduction

1.1 Background and Motivation

In recent years, digitized scanned data have become available from various scanning devices. Unlike gray-scale intensity images, which record brightness on a regularly spaced grid, scanned data obtained from scanning sensors measure the distance from a reference point (usually on the sensor itself) to objects in the field of operation of the sensors. In computer vision, they are primarily used for 3D object recognition. In medical applications, they can be used for automatic structure identification from clinical magnetic resonance imagery. For robots equipped with scanning devices, scanned data can be used for robot navigation and obstacle avoidance. They can also help factory automation in a production line. In reverse engineering, scanned data can be used to construct part models. In these applications, scanned data provide not only explicit depth relationships between sensors and object surfaces but also a three-dimensional shape which approximates the corresponding object surfaces in the field of view. Thus, they can be applied to other fields with similar applications.

There are a number of ways to obtain scanned data. Based on the measuring method, scanning devices can be classified as contact and non-contact types. The coordinate measuring machine (CMM) is an example of a contact scanning device, and the laser digitizer is an example of a non-contact scanning device. A currently available laser digitizing device, such as a planar scanner, can read 15,000 3D points with a resolution of approximately 0.005 in.

In a computer-integrated manufacturing (CIM) environment, automation in product development is achieved by computer-aided design (CAD), computer-numerical control (CNC) machining, and automated inspection by a CMM, as shown in Fig. 1.

Cutter paths for machining and inspection paths for a CMM are generated from the CAD model of a part.

[Figure 1: The Role of Reverse Engineering in a CIM Environment]

In fact, all the downstream activities rely on the existence of a CAD database. However, in a case where a part exists independently of a CAD model, because the CAD model either has been lost or was never created in the first place, engineers have to use nondestructive scanning technologies to produce the CAD model. For example, designers in an automotive company wanted to redesign a side-view mirror prototype because the mirror assembly was producing severe vibrations during wind tunnel tests. The designers experimented with various alternative contours in clay by testing each in the wind tunnel. Eventually, a non-vibrating design was developed that met the functional and aesthetic requirements. No design documentation existed apart from the clay model. A scanning system had to be used to extract part geometry directly from the clay model in order to create a CAD model for mass production. Another example is the design of a handle for a knife. In order to have an ambidextrous knife, a number of prototypes were created in clay. The feel of the handles in both hands could be sampled in a way that no CAD model could ever do. When a model with the desired feel was found, the scanning system could be used to extract the part geometry in order to develop a corresponding CAD model.

One could use a number of triangular facets whose vertices pass through all the digitized scanned data and construct a model which visually resembles the original part. However, a lot of useful information is missing from such a model, and it is difficult to use the model in the downstream activities. Thus, a useful representation needs to be the foundation which can support current CAD/CAM modeling. There are two major types of object representations: constructive solid geometry (CSG) and boundary representation (B-rep). In a CSG representation, a solid is defined by a number of volumetric primitives and a set of Boolean operators. These volumetric primitives in a CSG model may not always be recoverable from the surface information, thus making it difficult to use in a recognition paradigm. On the other hand, B-rep uses the bounding surfaces to define a solid object. B-rep models represent solids by segmenting the object's boundary into a finite number of bounded subsets of surface patches. Each surface patch can accurately describe the local geometry of a part, and other surface properties such as surface curvature and surface normal can be derived easily. Since information such as bounding surfaces can be obtained from scanned data by applying surface fitting techniques, B-rep is better suited to constructing a part model than the CSG approach. Therefore, this research focuses on how to construct a B-rep model from the data points scanned from a mechanical part.
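
To make the boundary-representation idea concrete, the following minimal Python sketch shows one way a B-rep solid could be organized as faces bounded by loops of edges and vertices. The class names, the surface_type field, and the square-face example are hypothetical illustrations introduced here; they are not the data structures of this research or of any particular CAD kernel.

    # A minimal, hypothetical sketch (not the dissertation's data structure and not any
    # particular CAD kernel's API) of how a B-rep solid can be organized: a solid owns
    # bounding faces, each face owns a loop of edges, and each edge joins two vertices.
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Vertex:
        x: float
        y: float
        z: float

    @dataclass(frozen=True)
    class Edge:
        start: Vertex
        end: Vertex

    @dataclass
    class Face:
        surface_type: str      # e.g. "plane", "cylinder", "NURBS patch"
        loop: list             # ordered bounding edges of the surface patch

    @dataclass
    class Solid:
        faces: list = field(default_factory=list)

        def edges(self):
            """Collect the distinct edges bounding all faces of the solid."""
            return {e for face in self.faces for e in face.loop}

    # Example: one square planar patch bounded by four edges.
    v = [Vertex(0, 0, 0), Vertex(1, 0, 0), Vertex(1, 1, 0), Vertex(0, 1, 0)]
    bottom = Face("plane", [Edge(v[i], v[(i + 1) % 4]) for i in range(4)])
    part = Solid([bottom])
    print(len(part.edges()))   # 4

Storing explicit faces, edges, and vertices in this way is what makes surface-level queries (adjacency, per-patch curvature, surface normals) straightforward, which is the property relied on in the discussion above.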

The process of creating CAD models from scanned data is called reverse engineering, and its role in a CIM environment is shown in Fig. 1. The objective of reverse engineering in a CIM environment is to create CAD models from scanned data points. Accurate descriptions of surfaces are needed in many aspects of design, analysis, and manufacturing planning. Four procedures are normally required to accomplish this objective: segmentation of the scanned data points, classification of the data points, creation of curves and surfaces, and interfacing to CAD/CAM solid modeling systems. In computer vision, several approaches have been proposed to segment scanned data points. However, most of these approaches can only identify jump and roof types of boundaries from the scanned data points. These types of boundaries may not be sufficient for producing accurate B-rep models, because the objective of these approaches is object recognition or pattern recognition.

Therefore, for the purpose of constructing B-rep models from scanned data points, it is highly desirable to divide the scanned data points into a number of subsets such that each subset of data points can be used to construct a surface patch for producing the B-rep model. The concept of "character lines" is proposed in this research to accomplish this objective. A "character line" is defined as a curve or a line consisting of data points which divide the scanned data points into a number of subsets. If the character lines can be identified successfully, surface characterization and surface fitting can be performed to identify surface types and surface parameters for each subset of data points, and a CAD model can then be created based on the fitted surfaces and character lines.

Once a CAD model is created from the scanned data through the reverse engineering process, it can be used to support downstream activities, such as process planning, manufacturability evaluation, and cost estimation. High competition in modern industry has brought more pressure to cut cost, to improve product quality, and to minimize the time from concept to production. These requirements, coupled with the ever-growing complexity of the functional requirements of products and of production systems, have contributed to a rising interest in life-cycle design optimization. The concepts of simultaneous engineering, concurrent engineering, design for manufacturability, and design for assemblability have been proposed to shorten the design cycle time, to increase productivity, and, more importantly, to come up with optimum or near-optimum design and production plans. They all refer to the practice of coordinating various life-cycle values of products into the early stages of design. Thus, besides the creation of parts which meet their functional requirements, the selection of a proper manufacturing process, the estimation of cost, and the assessment of manufacturability and assemblability are all incorporated in the design of the product and the manufacturing process to achieve full desired function, higher quality, and lower cost of products. Most design and manufacturing functions, such as process selection, manufacturability evaluation, process planning, and cost estimation, require decision making using information suitable for reasoning about the part geometry. For example, when designing dies for die casting or molds for injection molding, a process planner has to examine the part model, determine whether the shape is suitable for die casting or injection molding, and modify the part geometry if necessary. Reasoning is necessary and is done by querying information about the overall shape features of the part and their spatial relationships to produce the evaluation result. This trend results in a major change of the functional role of a geometric modeler from being merely a drafting tool to being a common information model within the product cycle.

Current CAD systems have been developed for a wide variety of applications. Among these applications, one common purpose is to construct a design based on geometric constraints. However, it has become recognized that the low-level geometric abstraction alone is not sufficient to support the various activities in modern design and production. For example, functional features need to be identified so as to evaluate possible design alterations and tolerance specifications. Production-related geometric characteristics need to be extracted from the geometric data so that manufacturability evaluation of design work can be performed and automatic process planning can become possible. In other words, current CAD systems are not really capable of addressing the life-cycle design issues in a concurrent manner. Therefore, it would be highly desirable to have a means of capturing the natural and/or logical links between the geometric characteristics of a design and various product cycle activities and to develop an information framework that would support the complex reasoning incurred in the early stages of design and planning.

In order to support the above-mentioned objectives and achieve concurrent engineering design, features have been proposed as a means of providing high-level semantic information for life-cycle design concerns [Gu, 1994; Chen and Miller, 1992; Gomes and Teixeira, 1991; Requicha and Vandenbrande, 1989; Shah and Rogers, 1988]. Research work can be generally categorized into two distinct approaches: design by features and feature extraction.

In the design-by-features approach, designers are encouraged to construct parts based on explicit feature elements which are predefined in a feature library or are supported by the system. Manufacturing information can be tagged as related attributes along with the feature definition. Therefore, the constructed features make the pertinent data available for downstream applications and ensure that reasoning looks directly at the relevant regions. Taking advantage of this, several systems for manufacturability assessment have been reported [Ovtcharova, et al., 1994; Henderson and Anderson, 1994; Lion, 1991; Chung, et al., 1990; Shah, et al., 1990; You, et al., 1989; Chang, et al., 1988; Luby, et al., 1986]. However, these systems can only evaluate a design in very limited regions which are constructed by using predefined features and tagged attributes. For a part which comprises complex shapes and multiple features, interactions among the constructed features are likely to occur, and consequently new features can be generated. These features may serve important functions and most likely need to be addressed in various life-cycle activities. The other drawback is that the feature model is not interchangeable among different applications. For example, when designing a die-cast part by using die-cast features, the constructed features can be assessed for the die-cast process. However, a die model, which can be produced by subtracting the part model from a die block, may not convey sufficient feature information for machining process assessment. Nevertheless, design by features has its merit and will continue to be an active research area.

The feature-extraction approach removes the burden of restricting designers' freedom to the limited modeling methods of predefined feature elements. A significant amount of research has been performed for the purpose of automatic feature recognition. In these studies, features are defined in terms of character strings, volumes, specific patterns, or graphs consisting of faces, edges, and vertices. The recognition is normally performed in two steps: first, faces, edges, or vertices are characterized; second, rules, grammars, or graphs are applied to identify features by matching feature patterns in the database of the part model. However, most of these approaches can only deal with prismatic parts which contain only planar faces and straight-line edges in the model. Some of them may include cylindrical surfaces. This limitation results from the fact that features are characterized by the attributes of edges or vertices only. Thus, the identifiable features are normally simple in geometry. These approaches cannot process parts having complex shapes, such as rounds, fillets, sculptured surfaces, etc. Moreover, if two or more features intersect each other, feature recognition using current approaches may become very difficult. On the other hand, Lentz and Sowerby[1992] proposed a feature extraction method for sheet-metal parts by using surface curvature properties. Attributes are defined for surfaces only, and features can be identified from concave and convex regions and their intersections. This approach has the advantage of identifying features from rounds, fillets, and sculptured surfaces, but it has very limited capabilities to recognize features which are formed by edges and vertices, such as holes [Lentz and Sowerby, 1994], because surface curvature properties cannot be applied to edges and vertices. In addition, recognizing features by dealing with low-level topologic entities such as vertices and edges may produce combinatorial problems [Gadh and Prinz, 1992]. The variations in geometry and topology of features can result in an overwhelmingly large number of patterns and create a substantial barrier to feature recognition. Most current approaches are able to handle only a few sources of difficulty; however, practical problems often have many sources of difficulty occurring simultaneously. Thus, without a generic approach which can characterize various geometric entities and their variations by using a set of common attributes, it would be difficult to recognize the highly complex features that often exist in real-world designs. Therefore, it is highly desirable to better characterize the surface geometry and topology and to extract common attributes from low-level geometric data so as to identify a higher-level abstraction of a design and to support various life-cycle design activities.

1.2 Research Objectives

The goals of this research are to shorten the time from an existing prototype to production, and thus to increase productivity, to improve dimensional accuracy, and to bridge some of the missing links in the computer-integrated manufacturing environment. In order to achieve these goals, two specific objectives are identified for the proposed dissertation research. One objective is to develop a framework to automate the process of reverse engineering, as shown in Fig. 2.

[Figure 2: Flow Chart for Reverse Engineering. Modules: data acquisition (CMM sampling, structured light), data segmentation, data grouping, surface identification, surface representation (quadric and NURBS fitting, surface boundary evaluation), B-rep model construction (topologic check), and feature extraction.]

Several key modules in the framework will be investigated intensively, including the data segmentation module and the surface identification module. The faster scan data can be converted into an accurate surface model, the more quickly products can be manufactured and brought to market. The other objective is to develop an information framework that facilitates geometric reasoning in mechanical CAD/CAM functions for concurrent engineering design. The specific purpose of the proposed project is to develop a neutral basis that bridges the gap between the geometric data of a design work and the higher-level geometric abstraction that supports the complex reasoning incurred in life-cycle design optimization. In order to accomplish this goal, several key issues are identified as follows:


1. Characterize surface differential geometry and object topology so as to develop a neutral basis that bridges the gap between the geometric data of a design work and the higher-level geometric abstraction. The proposed neutral basis can be applied to different types of geometric data, and the resulting higher-level geometric abstraction can support various life-cycle design activities.

2. Develop a framework for a feature recognizer so as to characterize feature interactions. Based on the proposed neutral basis, the feature recognizer is designed to transform the geometric data of a design into high-level semantic information for various applications. Among many applications, design for manufacturability for various manufacturing processes, including machining, die casting, injection molding, and sheet metal forming, will be used to illustrate the proposed concept and to demonstrate the implementation of the proposed approaches.

As shown in Fig. 3, the geometric data of a design work can originate from a traditional CAD modeling system, design by features, a sophisticated parametric design system, or a reverse engineering process, and the developed neutral basis will be used to facilitate feature extraction so as to support various life-cycle design activities. In the figure, only two major activities are given, namely design for functions and design for manufacturability. Depending on the specified downstream application, the neutral basis can be converted and transformed to the appropriate feature representations based on rules from the domain knowledge of the application. In the proposed research, the application to design for manufacturability for various manufacturing processes, including machining, die casting, injection molding, and sheet metal forming, is used to illustrate the proposed concepts.

various manufacturing processes, including machining, die casting, injection molding. 13

Models Created From Traditional CAD Design by Parametric Design Reverse Engineering Modeling Systems Features Systems Process

Automatic Recognition of Neutral Basis

Neutral Basis : . (OTwoAttribotes (II) Basic Features

Automatic iFeatureiiReco^izer

Design for Design for Functions Manufacturability

■ * - Tolerance Machining

Material Die Casting

Surface Property Injection Molding

Sheet-Metal Forming

Figure 3: The Role of Proposed Neutral Basis for Feature Recognition in Concurrent Engineering Design 14 and sheet metal forming. In order to develop the neutral basis for feature recognition, the surface differential geometry and topology of various geometric data will be char­ acterized and the invariant properties of various geometric entities wiU be identified.

B-rep models will be adopted in this research since they have the advantage of providing explicit geometric and topologic information. Two common attributes are proposed to characterize three basic geometric elements: surfaces, edges, and vertices. These attributes can also be used to characterize the forms and shapes of a design model. Based on these attributes, primitive forms and shapes of an object can be extracted and represented by basic feature categories. The links between features defined in manufacturing processes, including machining, die casting, injection molding, and sheet metal forming, and those primitive forms and shapes will be investigated. Features for various manufacturing processes can then be recognized based on those basic feature categories along with attributed B-rep information. Software programs will be implemented to demonstrate the proposed approaches.

1.3 Literature Review

The literature related to this research work can be categorized into two groups: curvature and surface characterization, and feature technology. They are discussed and reviewed as follows.

1.3.1 Curvature and Surface Characterization

Several methods have been developed to calculate curvature from 2D curves and surface curvature from 3D surfaces. Besl and Jain[1986] provided an extensive review of surface curvature and curvature calculation techniques. The calculation of surface curvature can be found in their work and in other standard texts [Carmo, 1976; Spivak, 1975]. To calculate surface curvature from scanned data points, two methods are available: direct calculation and indirect calculation. Direct calculation applies the finite difference method to compute the first and second derivatives of the surface data points along the x and y axes for curvature calculation. Note that most of the literature applies this method to uniform grids only. On the other hand, when the input data are noisy, they are usually convolved with a Gaussian function or smoothed by a polynomial fit to reduce the noise, and the curvature is then calculated from the smoothed data points or surface function. Based on the earlier work by Sander and Zucker[1987], Stokely and Wu[1992] compared five different methods of calculating curvature from discrete data points, including the coordinate transformation method, the orthogonal walking method, the cross patch method, the surface triangulation method, and the turtle walking method. The results show that all five methods converge to the correct curvature value as the sampling step decreases. But when the sampling step is small, the digitization errors have more effect and cause larger fluctuations in the calculated curvature. The cross patch method is the method of choice of the authors since it has lower computational complexity. However, there is no quantitative measure to describe the errors of the calculated curvature in their work. Fan, et al. [1987] applied curvature in four different directions 45° apart to calculate principal curvatures and their orientations. This method still requires four specific orientations for the sampling grid lines and thus cannot be applied to nonuniformly distributed scanned data.
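
As a concrete illustration of the direct-calculation idea just reviewed (finite differences on sampled points, optionally preceded by Gaussian smoothing when the data are noisy), the short sketch below estimates the curvature of a planar curve sampled as (x_i, y_i). It is only an assumed illustration: the function names, the smoothing parameter, and the circle test data are invented here and do not reproduce the methods of the cited works or of this dissertation.

    # Sketch only: planar curvature k = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2) estimated
    # by central differences, with optional Gaussian pre-smoothing of noisy samples.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def curve_curvature(x, y, sigma=0.0):
        """Signed curvature at each sample of a planar curve given as arrays x, y."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        if sigma > 0.0:                               # smooth noisy measurements first
            x = gaussian_filter1d(x, sigma)
            y = gaussian_filter1d(y, sigma)
        dx, dy = np.gradient(x), np.gradient(y)       # first derivatives
        ddx, ddy = np.gradient(dx), np.gradient(dy)   # second derivatives
        return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

    # Points sampled from a circle of radius 2 should give curvature close to 0.5.
    t = np.linspace(0.0, np.pi, 200)
    k = curve_curvature(2.0 * np.cos(t), 2.0 * np.sin(t), sigma=1.0)
    print(round(float(k[100]), 3))   # approximately 0.5

The same finite-difference pattern extends to surface curvature on a sampling grid, which is where the uniform-grid restriction noted above comes from.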

In computer vision, several approaches have been proposed to describe the surfaces of 3D objects. Since these approaches also divide scanned data into subsets for recognition, they are worth mentioning here. Besl and Jain[1985] have done an intensive review on this subject. For surface segmentation, various methods have been proposed in the past, and they can be classified into the following two categories:

1. Edge-Based Approach: In the edge-based approach, the step edges in scanned data can be located easily by rather simple edge operators [Duda, et al., 1979]. The more difficult problem is the detection of smooth edges, such as folds. Successful methods have been developed for detecting edges when they can be modeled as the intersection of two planar patches [Duda, et al., 1979; Mitiche and Aggarwal, 1983]. Langridge[1984] gave a preliminary investigation of detecting and locating discontinuities in the first derivatives of surfaces. Ponce and Brady[1985] used surface curvature properties to infer significant boundaries. Their work involves smoothing the scanned image with Gaussian filters of varying scales and detecting and localizing four types of discontinuities: steps (discontinuous surface depth), roofs (discontinuous surface normal), shoulders (consisting of two roofs), and smooth joins (discontinuous curvature). Their output is a surface primal sketch. However, as pointed out by Fan, et al. [1987], this approach is computationally complex. Fan, et al. [1987] proposed to segment surfaces by three types of edges: jump boundaries, folds, and ridge lines, which correspond to smooth local extrema of curvature. The detection is done by examining the zero-crossings and extremal values of surface curvature measures. In their work, surface curvature is calculated in four different directions 45° apart. The objective of this work is similar to that of Ponce and Brady [1985]. However, this method fails to identify smooth joins between surfaces.

2. Surface-Based Approach: For the surface-based approach, Nackman[1984] discussed the use of a critical point configuration graph for surface characterization. Medioni and Nevatia[1984] used zero crossings of the Gaussian curvature and the maximum principal curvature, and the maxima of the maximum principal curvature, to describe a surface. Vemuri, et al. [1986] applied the signs of Gaussian and mean curvature to classify scanned image regions into five types: parabolic, umbilic, hyperbolic, planar, and elliptic. Jump boundaries are also computed and marked by a simple thresholding procedure. Besl and Jain[1986] applied the signs of surface curvature to classify scanned image regions into one of eight basic surface types. Their output consists of images displaying various characteristics of the surface, and no edges are identified. Besl and Jain[1988] developed an algorithm that simultaneously segments the scanned data into regions of arbitrary shape and approximates the scanned data with bivariate functions. Surface curvature sign labeling provides an initial coarse image segmentation, which is refined by an iterative region growing method based on variable-order surface fitting. However, this approach has a limitation due to the selection of the fitting function. Different fitting functions will give different fitting results. In addition, the fitting result is sensitive to adjustments of the merging parameters, and the method requires a rather complex control structure.

Note that most of the above approaches are used to generate surface descriptions for object recognition in computer vision. The edges, along with surfaces, are used for both segmentation and matching/recognition purposes.

1.3.2 Feature Technology

Features can be generally classified into the following categories [Shah, 1991]:

• Form Features: elements related to nominal geometry.

• Precision Features: acceptable deviations from nominal form/size.

• Material Features: material composition, treatment, condition, etc.

• Assembly Features: part relative orientation, interaction surfaces, fits, kinematic relationships.

• Technological Features: performance parameters, etc.

Among these feature categories, form features have been studied much more intensively than the other features. From the geometric point of view, they can be further categorized as volumetric features, which are solids, and surface features, which are collections of faces in a workpiece [Requicha and Vandenbrande, 1989]. In some applications, they are categorized as Primary Features and Secondary Features [Chen, et al., 1991]. Both classifications include two subcategories: positive features and negative features, which represent protrusion and depression, respectively. The Product Data Exchange Specification [PDES, 1988] classifies features into six categories: passages, depressions, protrusions, transitions, area features, and deformations. In these classifications, the positive and negative features represent the major geometric forms of interest.

There are a number of approaches to feature creation. Most of them can be classified into the following three broad groups:

• Design by Features

• Interactive Feature Recognition

• Automatic Feature Recognition

The design-by-features approach normally consists of several modules including feature modeling, a feature library, validity checking, and manufacturability evaluation. Luby, et al. [1986] developed a feature-based design system for aluminum castings. Chung, et al. [1990] introduced two prototype systems for investment casting and sheet metal. Chang, et al. [1988] developed a Quick Turnaround Cell (QTC) system which integrates design, manufacturing, and inspection and generates the process plan, part program, and inspection plan automatically after the user finishes the design. Requicha and Vandenbrande[1989] proposed definitions for machinable features and described system architectures for feature-based design and manufacturing. Four types of validation rules are discussed, including presence rules, non-intrusion rules, accessibility rules, and dimensional rules. Shah, et al. [1990] developed the ASU Features Testbed, which consists of two shells: the Feature Modeling Shell for design and the Feature Mapping Shell for mapping and applications. Chen, et al. [1991] proposed a framework for a feature-based design environment, procedures for feature-based design, and the construction of high-level (semantic) part models suitable for geometric reasoning in a knowledge-based environment. Most of these approaches provide a feature-based design environment for a specific application.

For the interactive-feature-recognition approach, a geometric model of a part is created first. Then, features are defined by human users picking geometric entities on the part. This approach has been used for process planning and NC tool-path generation [Chang, et al., 1988; Nau and Gray, 1986; Nitschke, et al., 1991]. The limitation of this approach is that it is time-consuming and the recognized features could be user-dependent.

A significant amount of research has been performed on automatic feature recognition, and the approaches can be classified into three subcategories: the subgraph matching approach, the grammar-based approach, and the volumetric approach, as follows.

Subgraph Matching

In the subgraph-matching approach, features are defined in terms of a graph of faces, edges, and vertices (called the F-E-V graph) in the B-rep model of a part. Attributes such as concave or convex are identified and assigned to edges or vertices. Features can then be recognized by a matching process [Kyprianou, 1980]. Joshi and Chang[1988] proposed the attributed adjacency graph (AAG) for the recognition of machined features from a 3D boundary representation of a solid. In the AAG, each face is represented as a node, and the arc between two nodes stores the value 0 for a concave edge and the value 1 for a convex edge. Each feature corresponds to an AAG pattern which belongs to a face-edge graph. Feature extraction can be performed by searching for the AAG pattern in the database of the part. This technique is limited to polyhedral parts with polyhedral features. The recognition of holes requires other algorithms. For a well-defined single feature, the use of AAGs is very effective. However, when features intersect, there can be an enormous number of different cases, each requiring a different predefined AAG to match.
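
The following is a small, hypothetical sketch of the AAG idea described above: faces become graph nodes, and each arc between adjacent faces carries 0 for a concave shared edge or 1 for a convex one. The face names, the build_aag helper, and the crude slot-bottom test are invented for illustration and are not taken from Joshi and Chang's system.

    # Hypothetical illustration of an attributed adjacency graph (AAG): nodes are faces,
    # arcs carry 0 (concave shared edge) or 1 (convex shared edge).
    def build_aag(adjacencies):
        """adjacencies: iterable of ((face_a, face_b), convexity) with convexity 0 or 1."""
        aag = {}
        for (a, b), convexity in adjacencies:
            aag.setdefault(a, {})[b] = convexity
            aag.setdefault(b, {})[a] = convexity
        return aag

    def looks_like_slot_bottom(aag, face):
        """Crude pattern hint: a slot bottom meets all of its neighbors through concave edges."""
        neighbors = aag.get(face, {})
        return bool(neighbors) and all(c == 0 for c in neighbors.values())

    # Example: a rectangular slot cut into a block; the bottom face meets both walls concavely.
    aag = build_aag([
        (("bottom", "wall_1"), 0),
        (("bottom", "wall_2"), 0),
        (("wall_1", "top"), 1),
        (("wall_2", "top"), 1),
    ])
    print([f for f in aag if looks_like_slot_bottom(aag, f)])   # ['bottom']

Real recognizers match whole predefined subgraphs rather than a single-node test like this, which is exactly why intersecting features multiply the number of patterns that must be stored.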

Henderson and Anderson[1984] developed a regional shape pattern recognizer. Patterns are described as rules in Prolog. A 3D solid model in a B-rep format is converted to facts in Prolog. Then patterns are recognized with Prolog's matching mechanisms. The concept of an entrance face is used to find accessibility to a recognized shape pattern and to establish the machining pattern graph for the correct machining order. Gavankar and Henderson[1990] extracted protrusions and blind holes from a boundary model. The boundary model is represented by a face-edge graph, where each face is represented by a node and two nodes (faces) sharing one common edge are connected by an arc in the graph. The use of the cut node in the graph was introduced in order to separate nested features. A heuristic algorithm was developed to detect cut nodes and isolate the biconnected components in the graph. The feature extraction was based on the topology of solid objects; hence, the final classification of shape may require specific rules to identify. However, this approach does not apply to blind holes and pockets which open up into more than one face. Chuang and Henderson[1990] applied vertex-edge graphs to recognize features from the B-rep of a part model. Seventeen types of vertices were classified in their research based on the convexity properties of the edges, including convex, concave, and smooth. Only the cylinder and the plane are considered as face types. Features are represented by shape graphs which are defined by specific patterns of V-E graphs. The recognition is implemented in Prolog from the B-rep of Romulus. The approach applies only to topological information. Thus, some pathological problems do exist; i.e., two parts with different shapes may have the same shape graphs. Nitschke, et al. [1991] presented a framework for a facility which would enable part models from any type of CAD system to be converted to a format which could be analyzed using a knowledge-based system. This facility relies on the user to recognize and isolate the individual features of the model and to extract the feature information within the model. A feature graph is then constructed. The feature extraction was implemented on a CATIA CAD system.

Gadh and Prinz[1989] proposed a shape feature recognition method based on the Differential Depth Perception Filter, which reduces the number of topological entities. In addition to F-E-V entities in a B-rep model, a higher abstraction level called loops was applied in the feature recognition. Defining features in terms of loops and connecting non-looped entities (as opposed to defining them in terms of F-E-V graphs) allows features of the same class with different topologies and geometries to be grouped together. This method has the advantage of identifying a general class of features with different topological descriptions. Recursive interacting features, features sharing edges, and features sharing faces are addressed. However, this approach cannot identify features from free-form surfaces.

Since the convex and concave properties are only applicable to surface regions, neither edges nor vertices are considered in the feature recognition. Lentz and Sowerby[1994] improved their previous appoach by developing a hole extraction methodology for sheet metal components.

Most of the above approaches adopt the F-E-V graph and assign attributes to edges or vertices. Due to the definition of the attributes, only limited categories of part models are applicable for the recognition algorithm, such as prismatic parts containing planar and cylindrical faces only. Moreover, part models consisting of rounds or fillets are not recognizable.

Grammar-Based Recognition

In the grammar-based approach, objects are described by a string grammar such that they can be processed by a rule-based system. Choi, et al. [1984] developed algorithmic procedures to recognize and classify various types of features, including holes, slots, and pockets, and to generate process plans based on syntactic pattern recognition methods. The definitions of regional shape patterns and the recognizer were written as rules in Pascal. Karinthi and Nau[1989; 1990; 1991] applied a feature algebra to generate multiple sets of features from a given set of features. Syntactic pattern recognition deals only with an object that can be described by a string grammar for its 2D cross-section. Rule-based systems can deal with general 3D shape patterns, but writing a rule for every specific pattern is difficult and time-consuming [Jakubowski, 1982; Staley, et al., 1983].

Volumetric Approach

In the volumetric approach, Woo[1982] developed the alternating sum of volumes (ASV) technique to obtain a series expansion of the solid model of an object in terms of convex components with alternating addition and subtraction of volumes. This method is useful for converting a B-rep model into a CSG representation. However, it suffers from several problems, including nonconvergence and features not necessarily being all convex. Tang and Woo[1991] addressed some problems in the ASV method. They improved the efficiency of the difference operation between an object and its convex hull. An algorithm to detect nonconvergence was also developed. The modified ASV, strictly speaking, is not feature recognition since it does not provide a definition of features. Based on Woo's ASV, Kim[1991] proposed a convergent convex decomposition called Alternating Sum of Volumes with Partitioning (ASVP). Three remedial partitioning methods are discussed to partition an ASV-irreducible polyhedron. ASVP decomposition can be viewed as a hierarchical volumetric representation of form features. Feature definitions are also given based on the topological description of the simple components in the ASVP decomposition. Dual ASVP decomposition is also addressed, since there are usually several feature descriptions of the same part.

Lee and Fu[1987] presented an algorithm for feature extraction and unification from a CSG representation of a part. This approach manipulates CSG trees to obtain a certain format which represents the raw stock minus a collection of machining features. Lee and Jea[1988] proposed a new tree reconstruction algorithm based on a well-defined single-step move-up operation. However, no features are defined in this approach.

The recognizer is implemented in a rapid prototyping test bed consisting of the

KnowledgeCraft^^ AI environment tightly coupled with the PADL-2 solid modeler.

It is capable of finding features with interacting volumes. The feature finder searches initially for feature hints. A generate-and-test strategy is used in production rules which generate hints or clues for the existence of features.

Shiptialni and Fisher[1991] proposed a method to extract machining features 6 om a CSG representation of a part. The method involves feature extraction from the delta volume which is defined as the subtraction of workpiece from the stock volume. Each isolated delta volume is rearranged to the compact CSC trees. Perng, et of.[1990] proposed a method for automatically extracting the machining features from 3D CSC solid model. The method involves converting a part’s CSC tree representation into its equivalent DSG tree representation and then identifying the types of machinable features from the DSG tree. This approach is limited to prismatic parts and only

18 predefined machining features can be recognized. The algorithm proposed cannot handle intersecting features, e.g., two through holes intersecting at an arbitrary angle in a part.

In the volumetric approach, features are represented by solid primitives or their combinations and features can be organized as a tree structure based on the CSG tree. However, it may have difficulties to recognize features from a model containing 27 free form surfaces since CSG solid modeling system can provide implicit information only.

1.4 Organization of Dissertation

This dissertation consists of seven chapters. Chapter I is the introductory part of the dissertation. In Chapter I, the background and motivation of the research work is stated and the research objectives are formulated. Based on the proposed research, a survey of previous research work related to curvature calculation, surface characteri­ zation, and feature technology is conducted.

Chapter II presents the proposed systematic approach to segment the scanned data points for the purpose of reverse engineering. It begins with the definition and classification of character lines, which divide the scanned data points into a number of subsets. Following the derivation of curvature calculation for scanned data points in various formats, the proposed five-step process to segment the scanned data points is presented. Examples for points scanned hrom 2D curves and 3D surfaces are used to illustrate the proposed approach.

Since different curve and surface types may require different functions and fitting techniques to create curves and surfaces from data points, creation of curves and surfaces from data points requires prior knowledge of curve and surface types and

parameters. Chapter III presents two approaches to characterize the scanned-data-

points set, including curvature approach and quadratic-fit approach. Thus, curves and

surfaces can be created according to the identified types and parameters and these

are discussed in Chapter IV. In Chapter IV, several techniques to approximate free 28 foim curves and surfaces from various formats of scanned data points are presented.

In Chapter V, a new approach is proposed to recognize geometric forms from B-rep models of objects. Two attributes are proposed to characterize the geometric entities, including surfaces, edges, and vertices, by applying the Gauss-Bonnet theorem and the concept of homeomorphism. By associating the entities with the proposed attributes and applying the adjacency relationship information among topologic entities, three basic feature categories, including positive features, negative features, and transition features, are first identified. Several examples are given to demonstrate the proposed approach.

Chapter VI begins by introducing feature categories in typical manufacturing processes, including machining, die casting, injection molding, and sheet metal forming. Detailed geometric features can be classified and extracted from the basic features recognized in Chapter V and can be used to support various manufacturing processes. A systematic analysis of feature interactions is then presented. Chapter VI concludes with a feature-based CIM environment.

Chapter VII summarizes the results obtained from this research and addresses issues for future research.

CHAPTER II

Segmentation of Scanned Data Points

The objective of the segmentation of scanned data points is to divide the scanned data points into a number of subsets of data points. In the field of computer vision, a considerable effort has been devoted to data segmentation from gray-scaled intensity images, which is similar to the segmentation of scanned data points. A number of operators can be applied to extract edges from a scene. Edges considered here include step edges (jumps), roof edges (folds), and edge-effect edges, as shown in Fig. 4.

[Figure 4: Types of Edges (step, roof, and edge-effect)]

For example, Roberts' operator[1963], Sobel's operator, and their variations[Shirai, 1987] belong to gradient operators which are used to detect step edges. In order to detect roof and edge-effect types of edges, Laplacian operators can be applied.

Marr and Hildreth[1980] suggested the Laplacian of Gaussian operator, ∇²G, which convolves the Laplacian operator with a Gaussian smoothing operator; this has advantages for noisy image data. Sarkar[1990] successfully applied Laplacian of Gaussian operators to extract "character lines" from measurement data points and segment the data points into a number of patches, where the "character lines" refer to sharp changes in the part shape. However, these operators have several drawbacks when used to extract edges from scanned data. First, these operators are all calculated based on square windows, which are equivalent to a uniform sampling grid of scanned data. This restricts the format of the input scanned data, and a wide range of parts may not obtain complete surface descriptions if they are scanned under this constraint. Secondly, the edges identifiable with these operators include only step, roof, and edge-effect edges. There are some other types of edges which are important when constructing surfaces for B-rep models and which cannot be identified, such as the edges around fillets or rounds of a part.

Several approaches to segment the scanned data have been developed in the field of computer vision. They can be classified into two categories, edge-based approaches and surface-based approaches, and they have been reviewed in Chapter I. Most of these approaches are used to generate surface descriptions for object recognition in computer vision. The edges, along with surfaces, are used for both segmentation and matching or recognition purposes.

Since the objective of reverse engineering in this research is to create a B-rep model from scanned data, the segmentation of scanned data is best based on the segmentation requirements for creating B-rep models. The concept of "character lines" is proposed to segment the scanned data into a number of subsets of data points, where each subset corresponds to a surface patch in the B-rep model. The proposed "character lines" are defined as the curves of points having curvature discontinuities in the scanned data points. Since the surface curvature can be calculated from scanned data by various approaches [Besl and Jain, 1986; Fan, et al., 1987], the calculated curvature at the data points near or on the character lines will exhibit curvature jumps which can be identified easily. Thus, the identification of character lines can be accomplished by identifying the curvature jumps in the surface curvatures calculated from the scanned data.

There are several methods to calculate curvature from 2D curves and surface curvature from 3D surfaces. From the literature review in Chapter I, most of the calculation techniques are performed on a uniform sampling grid of scanned data. Using a uniform grid of scanned data has several advantages because it is easier for planning, measuring, and calculation. However, the uniform sampling grid also has a serious drawback in that the scanned object must be completely "visible" to the scanning sensor. This type of object is usually called a 2½D object. For a 3D object with undercuts, which is not completely "visible" to the scanning sensor, some of the surface information is missing if a uniform sampling grid is used to scan the object. Moreover, in order to obtain an accurate description of an object, the sampling step may need adjustment depending on the shape of the scanned object. For example, a flat face may be sampled with a coarse grid, while a surface with a high-curvature region may need smaller sampling steps in order to capture the shape variations in detail. Furthermore, the total number of sampling points from a mechanical part may exceed ten thousand, which leads to an enormous amount of computer memory and computational time for processing these data points. Therefore, it is necessary to develop a fast and flexible technique to calculate curvature from scanned data points.

2.1 Definition of Character Lines/Points

In the B-rep model of a part, a surface patch is usually represented by a continuous function, such as a quadric, a Bézier function, a B-spline function, or a NURBS function. In most applications, all the points inside a surface patch have at least C² continuity. The intersecting boundaries between surface patches may have C⁰ or higher order continuities. The notations C⁰, C¹, C², ..., etc. are used to denote the zeroth-, first-, second-, and higher-order continuities at the boundaries of curve or surface intersections. Using curves as examples, C⁰ continuity is the simplest kind of continuity, which can be accomplished by joining two piecewise continuous curves at a common end point. C¹ continuity between two curve segments requires a common tangent line at their joining point; this implies that C¹ continuity also requires C⁰ continuity. C² continuity requires that the two curves possess equal curvature and coincident osculating planes at their joint, in addition to C⁰ and C¹ continuities.

Higher order continuities can be derived in a similar manner. However, the usefulness of higher order continuities is very limited because the shape of a space curve can be characterized by speed, torsion, and curvature [O'Neill, 1966]. In other words, given two space curves with identical functions of speed, torsion, and curvature, the shapes of the two space curves will be identical. Note that torsion is zero for all plane curves; thus, the shape of a plane curve is governed by speed and curvature only. On the other hand, C² continuity is invariant under any transformation of the parameters. Two piecewise continuous curves joined with C² continuity can thus be reparameterized as one piecewise continuous curve without changing the shape. Since the form and shape of a mechanical part is the major concern in creating its representation, a curve or surface will be assumed to be governed by a unique representation if the C² continuity condition is satisfied in the considered domain.

For the data points scanned from a visually smooth surface patch of a part model, the points may be divided into subsets by identifying the points with C⁰, C¹, or C² discontinuities. The curves generated by connecting these points are called "character lines".

2.1.1 Classification of Character Lines

In this chapter, a "character line" represents a line or curve formed by connecting data points corresponding to curvature discontinuities in the data points scanned from object surfaces. Since character lines stand for discontinuities between data points, the notation Dᵏ is used to denote that Cᵏ⁻¹ continuity is satisfied but Cᵏ continuity is not. Character lines can be further classified into the following categories.

1. Character Line of D¹ Discontinuity: This type of character line exists when the tangent planes along the intersecting boundary of two adjacent subsets of data points have different orientations. It is equivalent to a fold in the image processing literature. Typical examples include the intersecting line between points on two non-parallel planes in Fig. 5 and the 3D intersecting curve resulting from two subsets of points scanned from intersecting 3D surfaces in Fig. 6.

Figure 5: Character Lines of D¹ Discontinuity from Points on Two Intersecting Planes (cases θ > π and θ < π)

Figure 6: Character Lines of D¹ Discontinuity from Points on Two Intersecting Surfaces (cases θ > π and θ < π)

Figure 7: Character Lines of D² Discontinuity

Figure 8: Character Lines of Zero Gaussian Curvature

2. Character Line of D² Discontinuity: This type of character line exists when two adjacent subsets of points have different principal curvatures (in either direction or magnitude) but identical tangent planes along the intersecting line or curve. Typical examples include points scanned from fillets or rounds in a part model, as shown in Fig. 7.

3. Character Line of D⁰ Discontinuity: This type of character line corresponds to the step edges in computer vision. Ideally, data points scanned from all the surface regions of a solid object would not have any D⁰ character line. However, in practice, the scanning plan may not cover all the surface regions; there may be undercuts or locations that the scanning sensor cannot reach. Thus, the scanned data may contain voids where the surface information is incomplete, which introduces D⁰ character lines in the scanned data. Fig. 9 gives some examples of D⁰ character lines.

Figure 9: Character Lines of D⁰ Discontinuity (areas invisible along the scanning sensor approaching direction)

4. Character Line of Zero Gaussian Curvature: This type of character line is defined as the intersecting curve between two adjacent subsets of points that not only have common tangent planes but also have one zero normal curvature in the direction perpendicular to the tangent of the character line (Fig. 8). Based on the Gaussian curvatures in the vicinity of the intersecting curve, three sub-classes can be distinguished.

(a) The Gaussian curvatures on both sides of the intersecting curve have opposite signs.

(b) One of the subsets of points is on a plane.

(c) The Gaussian curvatures on both sides of the intersecting curve have the same sign.

The first two sub-classes of character lines denote the segmentations where two adjacent subsets of points have different curvature properties.

Generally speaking, a character line can be a continuous curve or a straight line consisting of scanned data points. Note that a character line could form a closed curve by itself. Based on the definition of the character line, the subset of points enclosed by a closed chain of character lines must have an identical sign and property of Gaussian curvature at every point. Since character lines include all the points with curvature discontinuities or zero curvature in the data points scanned from a part model, points on surface intersection boundaries with third- or higher-order discontinuities will be considered part of a single surface representation because of the continuous curvature property.

2.1.2 Identification of Character Lines/Points

Generally, two approaches can be used to identify character points from scanned data points: the fitting approach and the discrete approach. In the fitting approach, NURBS curves and surfaces are used to approximate the scanned data points by optimizing parameters and knot vectors. The character points and lines can then be identified from the locations of multiple knots or the locations near multiple knots. Chapter IV gives a brief review of NURBS functions. Fig. 10 shows two examples of applying NURBS curves to approximate input 2D data points. The first example shows points sampled from two circular arcs containing a character point in Fig. 10(a); the curvature curves of the fitted NURBS curves and the knot vectors are shown in Fig. 10(c) and (e), respectively. The second example shows points sampled from three curve segments containing two character points, including a D⁰ character point, in Fig. 10(b); the curvature curves of the fitted NURBS curves and the knot vectors are shown in Fig. 10(d) and (f), respectively. From the fitting results, the character points can be identified given a sufficient number of knots and good initial knot vectors. However, this approach cannot be used to obtain character lines from general 3D scanned data points. Consider the example in Fig. 11(a), where data points are sampled from two planar faces intersecting at an angle along a line that is neither parallel nor perpendicular to the edges of the planar faces; a NURBS surface with two parameters u and v, degree 3, and 10 segments is used to fit the discrete data points. Multiple knots in the u or v parameter can lead to character lines along the parametric curves. However, there is no way to represent the character line in this example, since it does not lie on the parametric curves. Thus, the NURBS surface fitting approach is not applicable for surface segmentation. However, if the scanned data points can be decomposed into cross-sectional sets of points, the NURBS curve fitting approach can still be applied to each set of points. Surface segmentation can then be accomplished by linking the character points identified in each cross-sectional set of points.

Figure 10: Fitting Approach for Identifying Character Points from 2D Data Points. (a) 2 circular arcs, (b) 3 curve segments containing character points, (c)-(d) curvature of the fitted curves, (e)-(f) knot vectors.

Figure 11: NURBS Surface Fitting from Points on Two Intersecting Planes. (a) data points sampled from two intersecting planes, (b) NURBS surface approximated from the data points, (c) knot values for the u, v parameters.

In the discrete approach, character points are identified from curvature jumps, where the curvature at each data point is calculated directly from the neighboring discrete data points. This approach has the advantage of computational efficiency since no curve or surface fitting is needed. However, it is sensitive to noise in the data points. In the following sections, a new curvature calculation technique and the proposed segmentation approach for extracting character lines from noisy data points are investigated.

2.2 Curvature Calculation From Scanned Data Points

In this section, the Taylor expansion and the central difference operator are applied to calculate approximate curvature from scanned data. The error of the calculated curvature is analyzed thoroughly, and the optimum approach for 2D curvature calculation can be selected according to a specified criterion. The 2D curvature calculation technique can then be applied to calculate surface curvature from 3D scanned data points. In the proposed method, using only 6 neighboring points of the point considered in space, which need not lie on uniform sampling grid lines, the complete surface curvature information at the point can be obtained.

2.2.1 2D Curvature and Error Analysis

Let α: R → R² be a plane curve parametrized by arc length s, and let {e₁, e₂} be the natural basis of R². The curvature κ of α at s is defined by

\frac{dt}{ds} = \kappa n    (2.1)

where t = dα/ds, n(s) is the normal vector, and the basis {t(s), n(s)} has the same orientation as the basis {e₁, e₂} (Fig. 12). Therefore, the curvature κ gives a measure of how rapidly the curve pulls away from the tangent line at α(s) in a neighborhood of s. Note that under a translation and rotation of the curve the tangent vector changes its direction; however, α''(s) and the curvature remain invariant.

In practice, curves are normally parametrized by the Cartesian coordinates x, y rather than the arc length s. The curvature formula for a plane curve y = f(x) in Cartesian coordinates is given by

\kappa = \frac{f''}{(1 + f'^2)^{3/2}}    (2.2)

where f' and f'' are the first and second derivatives of f with respect to x. If f is a given continuous function, the curvature κ can be obtained from f' and f''. On the other hand, if a set of discrete data points on the curve y = f(x) is given, the curvature κ can still be obtained approximately by applying proper numerical methods to calculate f' and f''. Several methods are available, including Taylor expansion and difference operators. The central difference approximation is adopted here because of its simplicity and accuracy [Nakamura, 1991].

Figure 12: Curvature for a 2D Curve (cases κ > 0 and κ < 0)

Let f_i = f(x_i) and h_i = x_{i+1} - x_i, the interval between two consecutive points on the x-axis. Then the first and second derivatives of f can be expressed, including the truncation error effect, as

f'_i = \frac{f_{i+1} - f_{i-1}}{h_{i-1} + h_i} + e'_i    (2.3)

f''_i = \frac{2\left[h_{i-1} f_{i+1} - (h_{i-1} + h_i) f_i + h_i f_{i-1}\right]}{h_{i-1} h_i (h_{i-1} + h_i)} + e''_i    (2.4)

where

e'_i = -\left[\frac{h_i - h_{i-1}}{2!} f''_i + \frac{(h_i - h_{i-1})^2 + h_i h_{i-1}}{3!} f'''_i + \cdots\right]    (2.5)

and

e''_i = -2\left[\frac{h_i - h_{i-1}}{3!} f'''_i + \frac{(h_i - h_{i-1})^2 + h_i h_{i-1}}{4!} f''''_i + \cdots\right]    (2.6)

are the truncation errors. Substituting the above equations into Eq. (2.2) and dropping higher order error terms, the approximate curvature from discrete data points can be expressed as

\kappa_i \approx \frac{f''_i}{(1 + f'^2_i)^{3/2}} + \left[\frac{e''_i}{(1 + f'^2_i)^{3/2}} - \frac{3 f'_i f''_i e'_i}{(1 + f'^2_i)^{5/2}}\right]    (2.7)

Ideally, the curvature κ should remain invariant under rotation. However, the curvature calculated from discrete data points varies with the orientation of the reference coordinate system because of the truncation error in Eq. (2.7).

Since κ is a constant at a given point on the curve, f'' varies with f' according to Eq. (2.2) when the reference coordinate axes are rotated. Thus, |f''| attains its minimum value when f'² is minimized. The minimum value |f'| = 0 is obtained when the reference x-axis is tangent to the curve at the point considered. On the other hand, when |f''| is minimized, f''' = 0 is obtained. Therefore, the error terms in Eq. (2.7) can be minimized when f'_i = 0, because e'_i and e''_i are minimized when f'_i = 0. In other words, the curvature with minimum truncation error in f'_i and f''_i can be obtained by rotating the reference coordinate axes such that f'_i = 0. Using three discrete data points P_{i-1}, P_i, and P_{i+1} as an example, minimum error can be obtained by aligning the reference x-axis with the line passing through P_{i-1} and P_{i+1}. If the curvature is calculated in this new coordinate system, the curvature with the least truncation error is obtained by substituting f'_i = 0 into Eq. (2.7):

\kappa_i = \frac{2\left[h_{i-1} f_{i+1} - (h_{i-1} + h_i) f_i + h_i f_{i-1}\right]}{h_{i-1} h_i (h_{i-1} + h_i)} + e''_i    (2.8)

\approx f''_i + O(h^2) + O(\Delta h)    (2.9)

where h² = h_{i-1} h_i and Δh = h_i - h_{i-1}. This result implies that the truncation error of the approximate curvature κ is composed of two parts. The first part of the error terms, O(h²), shows that the truncation error is proportional to the square of the data interval on the reference x-axis; increasing the density of the sampling grid reduces this part of the truncation error significantly. The second part of the error terms, O(Δh), shows that the truncation error is proportional to the difference between two consecutive intervals on the reference x-axis. Obviously, a uniform interval makes this part of the error zero. That is, a uniform grid always has better accuracy than a non-uniform grid with a similar average grid interval. Thus, a uniform grid is always preferred from the standpoint of accuracy.
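The rotated-frame calculation above can be made concrete with a short sketch. The following Python function is a minimal illustration, assuming only three consecutive sample points; it aligns the local x-axis with the chord through P_{i-1} and P_{i+1} (so f' is approximately zero at P_i) and evaluates the non-uniform three-point second difference of Eq. (2.4). The function name and test values are illustrative, not part of the original derivation.

```python
import numpy as np

def curvature_3pt(p_prev, p, p_next):
    """Approximate 2D curvature at p from its two neighbours.

    The local x-axis is aligned with the chord p_prev -> p_next, so the first
    derivative of the local height function is ~0 at p and the curvature
    reduces to the central-difference second derivative (Eqs. 2.4, 2.8).
    The sign follows the orientation of the local frame.
    """
    p_prev, p, p_next = map(np.asarray, (p_prev, p, p_next))
    chord = p_next - p_prev
    t = chord / np.linalg.norm(chord)        # local x-axis (chord direction)
    n = np.array([-t[1], t[0]])              # local y-axis

    # Local coordinates: x along the chord, y = height above the chord.
    x = np.array([0.0, np.dot(p - p_prev, t), np.dot(chord, t)])
    y = np.array([0.0, np.dot(p - p_prev, n), 0.0])

    h0, h1 = x[1] - x[0], x[2] - x[1]        # possibly non-uniform intervals
    # Three-point second derivative on a non-uniform grid.
    fpp = 2.0 * (h1 * y[0] - (h0 + h1) * y[1] + h0 * y[2]) / (h0 * h1 * (h0 + h1))
    return fpp                               # kappa = f'' when f' = 0

# Example: three points on a circle of radius 50; |kappa| should be ~1/50 = 0.02.
theta = np.radians([80.0, 90.0, 100.0])
pts = 50.0 * np.column_stack((np.cos(theta), np.sin(theta)))
print(curvature_3pt(*pts))
```

Running the example prints a value close to 0.02 in magnitude, the curvature of a 50 mm radius arc, which is the behavior the f' = 0 alignment is meant to deliver.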

Besides the two truncation error terms in Eq. (2.8), there is another source of error, from the calculated f''_i, which affects the accuracy of the calculated curvature κ. From Eq. (2.4), f''_i is calculated from the input data point coordinates; thus, the accuracy of the discrete data points determines the accuracy of f''_i. Most of the time, the input discrete data points come from digitizing devices, such as laser scanning devices or CMMs, which provide limited accuracy of the data point coordinates. The typical accuracy of laser scanning devices is on the order of 10⁻⁴ m, and a CMM can achieve a better accuracy of about 10⁻⁶ m. This constrains the accuracy of the calculated f''_i. Let the accuracy of the input discrete data points be ξ; it should satisfy

|(f_i)_{ideal} - f_i| \le \xi    (2.10)

where f_i represents the digitized coordinate value and (f_i)_{ideal} is the ideal value of f_i. Substituting the above equation into Eq. (2.4), excluding the term e''_i, the error of the calculated f''_i introduced by the device accuracy can be quantified as

\zeta''_i = O\!\left(\frac{\xi}{h_{i-1} h_i}\right)    (2.11)

Thus, the error ζ''_i introduced into f''_i is proportional to the inverse square of the grid interval. Compared with the truncation errors O(h²) + O(Δh) in Eq. (2.9), these two kinds of errors have different effects and interpretations. The errors from O(h²) + O(Δh) determine the deviation of the calculated curvature from the ideal curvature. On the other hand, the error from ζ''_i adds a variation to the calculated curvature in addition to O(h²) + O(Δh). As h approaches zero, the deviation of the calculated curvature from the ideal curvature approaches zero; however, the variation of the calculated curvature becomes larger because of the increasing ζ''_i. Due to this contradictory effect, a very large or a very small h may not yield good results. Thus, the optimum h depends on the selection of the objective function. For example, if the sum of the two errors is to be minimized, the objective function can be written as

F_{obj} = \min\left( O\!\left(\frac{\xi}{h^2}\right) + O(h^2) + O(\Delta h) \right)    (2.12)

If a uniform sampling grid is assumed, the optimum grid interval, h_{opt}, can be obtained by solving the following equation:

\frac{d}{dh}\left( O\!\left(\frac{\xi}{h^2}\right) + O(h^2) \right) = 0    (2.13)

where O(ξ/h²) and O(h²) are defined in Eq. (2.11) and Eq. (2.9), respectively. Therefore, an optimum grid interval, h_{opt}, can be defined; it represents the grid interval that gives the most accurate result from the discrete measurement data points. For a circular arc, h_{opt} can be written as

h_{opt} = \sqrt[4]{8\,\xi\, r^3}    (2.14)

where r is the radius of the circular arc and f'_i = 0 is assumed. Given the device accuracy ξ, the best sampling step to minimize the sum of the errors for a circular arc with radius r can be obtained from the above equation. For example, to measure a circular arc with radius 50 mm by a CMM with 10⁻³ mm accuracy, the optimum sampling interval h_{opt} is 5.623 mm. To measure a complete circle, it takes 56 measurement points. Note that the above results are derived based on the central difference method. If other numerical techniques are used, the resulting equations may be different, but the curvature error can be analyzed in a similar manner.
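A small numerical sketch of this trade-off is given below. It assumes a specific error model, a noise term of 2ξ/h² and a truncation term of h²/(4r³) for a circular arc sampled with f' = 0; the constants are an assumption chosen so that the closed-form minimizer reproduces the 5.623 mm and 56-point figures quoted above, and the function names are illustrative.

```python
import numpy as np

def curvature_error(h, r, xi):
    """Assumed error model: measurement-noise term ~ 2*xi/h**2 plus
    truncation term ~ h**2/(4*r**3) for a circular arc with f' = 0."""
    return 2.0 * xi / h**2 + h**2 / (4.0 * r**3)

def h_opt(r, xi):
    """Grid interval minimizing the assumed error model (closed form)."""
    return (8.0 * xi * r**3) ** 0.25

r, xi = 50.0, 1e-3      # radius 50 mm, device accuracy 1e-3 mm (1 micron)
h = h_opt(r, xi)
print(f"h_opt = {h:.3f} mm")                                   # ~5.623 mm
print(f"points for a full circle = {round(2 * np.pi * r / h)}")  # ~56
print(f"error at h_opt = {curvature_error(h, r, xi):.2e} 1/mm")
```

The sketch only illustrates the shape of the objective in Eq. (2.12); different bounds on the noise term would shift the constant in front of the fourth root.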

In summary, the truncation errors of the plane curvature are affected by three parameters: the reference axis orientation, Δh, and h². One example is given below to illustrate the effects of these factors. A circular arc with radius 50 mm is sampled along the x-axis of a fixed coordinate system with 30 points. In Fig. 13, two different approaches are applied to calculate the curvature on uniform and non-uniform grids. The ideal curvature is a constant -0.02. The curvature calculated after rotating the reference axes such that f' = 0 gives better results than the one based on the fixed coordinate system, especially on the portion of the curve with high slope. For the non-uniform grid, which is generated randomly along the x-axis, the curvature calculated with the f' = 0 axis still keeps similar accuracy, but the curvature calculated with the fixed x-axis has larger variations. This result shows that curvature calculated with f' = 0 as the x-axis provides consistent accuracy and is robust to varying grid intervals.

Figure 13: Curvature of a Circular Arc from Two Calculating Algorithms (fixed x-axis vs. f' = 0 axis, on uniform and non-uniform grids)

In Fig. 14, the curvature is calculated for the same circular arc as in the above example, with a uniform grid interval along the f' = 0 axis, using different numbers of sampling points: 10, 20, 30, and 50. Fig. 15 shows the mean, maximum, and minimum of the curvature from each set of sampling points. The effects of the two different error terms, ζ''_i and O(h²), can be observed from Fig. 14 and Fig. 15. The average error of the calculated curvature values is approximately proportional to h², but the variation of the calculated curvature grows as the number of sampling points increases. A sampling number of 28 points, corresponding to the optimum grid interval in Eq. (2.14), closely matches the minimum of the sum of the two error terms in Fig. 15.

Figure 14: Curvature of a Circular Arc for Different Sampling Intervals (10, 20, 30, and 50 points)

Figure 15: Curvature of a Circular Arc for Different Sampling Intervals (mean, maximum, and minimum vs. number of sampling points)

In Eq. (2.10), ξ is defined as the device accuracy. In practice, the sampled data points from the workpiece may include form errors, which may be larger than the accuracy of the measuring device. Form errors normally consist of two parts: waviness and roughness. Normally, the roughness is modeled as a random error. In order to measure the curvature of the form, a grid interval h greater than the width of the roughness is normally used to avoid measuring the curvature of the roughness. If the bound of the roughness is known and it is larger than the measuring device accuracy, it can be substituted for the device accuracy ξ in Eq. (2.10) and Eq. (2.11) to calculate the optimum grid interval.

Note that the central difference approximation for the pth derivative needs at least p + 1 data points. If more data points are used, a more accurate difference approximation may be obtained. With a given set of data points, a difference equation of the highest accuracy is one whose error term is of the highest order possible [Nakamura, 1991].

2.2.2 Surface Curvature Analysis

Differential geometry plays an important role in the analysis of surfaces. Some elementary results of surface differential geometry that will be used later can be found in the texts by Carmo [1976] and Spivak [1975].

The parametric surface form x = x(u, v) is used in the following analysis. The unit surface normal at each point of x(u, v) is defined in terms of the parametric derivatives x_u and x_v by

N = \frac{x_u \times x_v}{|x_u \times x_v|}    (2.15)

At each point of a non-singular parametric surface x(u, v), a unique unit surface normal can be defined by the above equation.

On the surface, let C be a curve passing through a point P, and let t be the unit tangent vector of C at P. The curvature vector of C at P, k, is equal to dt/ds. k can be decomposed into two components, k_n along the surface normal N and k_g tangential to the surface, i.e.,

k = \frac{dt}{ds} = k_n + k_g = \kappa_n N + k_g    (2.16)

where κ_n is called the normal curvature and the vector k_g is called the geodesic curvature vector. By differentiating the equation N · t = 0, κ_n can be expressed as

\kappa_n = \frac{dt}{ds}\cdot N = -t\cdot\frac{dN}{ds} = -\frac{dx\cdot dN}{dx\cdot dx} = \frac{e\,du^2 + 2f\,du\,dv + g\,dv^2}{E\,du^2 + 2F\,du\,dv + G\,dv^2}    (2.17)

where e = x_uu · N, f = x_uv · N, g = x_vv · N, and E = x_u · x_u, F = x_u · x_v, G = x_v · x_v. The denominator of the above equation is the first fundamental form and the numerator is the second fundamental form. The coefficients e, f, g, E, F, and G are constants at point P, so that κ_n is fully determined at P by the direction dv/du. Thus, all curves through P tangent to the same direction have the same normal curvature. This is known as Meusnier's theorem [Carmo, 1976]. The theorem can be expressed in the form

\kappa \cos\varphi = \kappa_n    (2.18)

where φ (0 ≤ φ ≤ π/2) is the angle between N and n, and κ is the curvature of C at P. Here N is the surface normal vector and n is the vector perpendicular to the tangent vector of curve C at P in the plane containing C, as shown in Fig. 16.

Figure 16: Meusnier's theorem: C and C_N have the same normal curvature at P

The normal curvature varies with the tangent direction at P on the surface. From Eq. (2.17), the directions in which κ_n attains extreme values occur when dκ_n/dλ = 0, where λ = dv/du. The two extreme values of the normal curvature can be obtained from

\kappa^2 - 2H\kappa + K = 0    (2.19)

where the Gaussian curvature K and the mean curvature H are defined by

K = \frac{eg - f^2}{EG - F^2}    (2.20)

H = \frac{eG - 2fF + gE}{2(EG - F^2)}    (2.21)

The two solutions, κ₁ and κ₂, of Eq. (2.19) are the extremum values of the normal curvature. They are normally called the principal curvatures and represent the upper and lower bounds of the normal curvature at a given point. The corresponding principal directions can be obtained by substituting κ₁ or κ₂ for κ in one of the following equations:

(e - \kappa E)\,du + (f - \kappa F)\,dv = 0, \qquad (f - \kappa F)\,du + (g - \kappa G)\,dv = 0    (2.22)

If the angle α is defined as the angle between the principal direction of κ₁ and the tangent vector of the curve under consideration, the normal curvature takes the form

\kappa_n = \kappa_1 \cos^2\alpha + \kappa_2 \sin^2\alpha    (2.23)

This relation is known as Euler's theorem [Carmo, 1976], which expresses the normal curvature in an arbitrary direction in terms of κ₁ and κ₂. By applying both Meusnier's and Euler's theorems, the full information concerning the curvature of any curve passing through P on the surface can be obtained.

2.2.3 Curvature Calculation From Orthogonal Uniform Grid

When measuring a set of discrete data points on a surface, uniform sampling along the x-axis and y-axis has the advantage of easy calculation and sampling planning. Generally, the uniformly sampled data points can be parametrized as (x, y, f(x, y)), i.e., a map function z = f(x, y). Substituting z = f(x, y) into Eq. (2.20) and Eq. (2.21), the Gaussian and mean curvature can be expressed as

K = \frac{f_{xx} f_{yy} - f_{xy}^2}{(1 + f_x^2 + f_y^2)^2}    (2.24)

H = \frac{(1 + f_y^2) f_{xx} - 2 f_x f_y f_{xy} + (1 + f_x^2) f_{yy}}{2 (1 + f_x^2 + f_y^2)^{3/2}}    (2.25)

The first and second derivatives of f can be calculated based on Eq. (2.3) and Eq. (2.4). Δh is equal to zero along both the x and y axes since a uniform grid is used. The formulas to calculate the derivatives of f based on the central difference approximation are given below:

f_x = \frac{f_{i+1,j} - f_{i-1,j}}{2\Delta x}, \qquad f_y = \frac{f_{i,j+1} - f_{i,j-1}}{2\Delta y}    (2.26)

f_{xx} = \frac{f_{i+1,j} - 2 f_{i,j} + f_{i-1,j}}{\Delta x^2}, \qquad f_{yy} = \frac{f_{i,j+1} - 2 f_{i,j} + f_{i,j-1}}{\Delta y^2}    (2.27)

f_{xy} = \frac{f_{i+1,j+1} - f_{i+1,j-1} - f_{i-1,j+1} + f_{i-1,j-1}}{4\,\Delta x\,\Delta y}    (2.28)

where Δx and Δy are the grid intervals along the x and y axes, respectively.
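A minimal sketch of Eqs. (2.24)-(2.28) for a height field on a uniform orthogonal grid is given below; it uses NumPy's gradient routine for the central differences, and the sphere-cap test case is only an illustration.

```python
import numpy as np

def gaussian_mean_curvature(Z, dx, dy):
    """Gaussian and mean curvature of z = f(x, y) sampled on a uniform
    orthogonal grid, using central differences (Eqs. 2.24-2.28).
    Boundary values rely on np.gradient's one-sided differences."""
    fy, fx = np.gradient(Z, dy, dx)          # axis 0 is y, axis 1 is x
    fxy, fxx = np.gradient(fx, dy, dx)
    fyy, _ = np.gradient(fy, dy, dx)
    denom = 1.0 + fx**2 + fy**2
    K = (fxx * fyy - fxy**2) / denom**2
    H = ((1 + fy**2) * fxx - 2 * fx * fy * fxy + (1 + fx**2) * fyy) / (2 * denom**1.5)
    return K, H

# Example: a spherical cap z = sqrt(R^2 - x^2 - y^2); at the centre K ~ 1/R^2
# and H ~ -1/R for this parametrization.
R = 50.0
x = np.linspace(-10, 10, 41)
y = np.linspace(-10, 10, 41)
X, Y = np.meshgrid(x, y)
Z = np.sqrt(R**2 - X**2 - Y**2)
K, H = gaussian_mean_curvature(Z, x[1] - x[0], y[1] - y[0])
print(K[20, 20], H[20, 20])   # ~4e-4 and ~-0.02 at the centre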

2.2.4 Curvature Calculation From Non-Orthogonal Uniform Grid

If the sampled discrete data points are distributed on a non-orthogonal grid with uniform intervals Δx' and Δy' along the respective x' and y' axes, the calculation of the derivatives of f with respect to the x'-y' axes can follow the formulas given in the previous section. However, a coordinate transformation is required to transform the derivatives of f with respect to x' and y' into the ones with respect to x and y. Assume the angle between the x' and y' axes is θ, with the x-axis coincident with the x'-axis. The transformation of the first and second derivatives of f between the two coordinate systems can be expressed as

\begin{pmatrix} f_x \\ f_y \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ -\cot\theta & \dfrac{1}{\sin\theta} \end{pmatrix} \begin{pmatrix} f_{x'} \\ f_{y'} \end{pmatrix}    (2.29)

\begin{pmatrix} f_{xx} \\ f_{xy} \\ f_{yy} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ -\cot\theta & \dfrac{1}{\sin\theta} & 0 \\ \cot^2\theta & -\dfrac{2\cot\theta}{\sin\theta} & \dfrac{1}{\sin^2\theta} \end{pmatrix} \begin{pmatrix} f_{x'x'} \\ f_{x'y'} \\ f_{y'y'} \end{pmatrix}    (2.30)

After calculating f_{x'}, f_{y'}, f_{x'x'}, f_{x'y'}, and f_{y'y'} in the x'-y' coordinate system, the curvature can be obtained by substituting the above equations into Eq. (2.24) and Eq. (2.25).
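The transformation in Eqs. (2.29)-(2.30) can be checked numerically. The sketch below is an illustration only; the polynomial test surface and the function name are assumptions, and the primed derivatives are supplied analytically via the chain rule rather than from a measured grid.

```python
import numpy as np

def transform_derivatives(fx_p, fy_p, fxx_p, fxy_p, fyy_p, theta):
    """Map derivatives taken along non-orthogonal axes x', y' (angle theta
    between them, x' coincident with x) to derivatives w.r.t. orthogonal x, y,
    following Eqs. (2.29)-(2.30)."""
    c, s = 1.0 / np.tan(theta), 1.0 / np.sin(theta)
    fx = fx_p
    fy = -c * fx_p + s * fy_p
    fxx = fxx_p
    fxy = -c * fxx_p + s * fxy_p
    fyy = c**2 * fxx_p - 2 * c * s * fxy_p + s**2 * fyy_p
    return fx, fy, fxx, fxy, fyy

# Check against f(x, y) = x**2 + x*y + y**2 with theta = 60 degrees,
# where x = x' + y'*cos(theta) and y = y'*sin(theta).
theta = np.radians(60.0)
xp, yp = 1.0, 2.0
x, y = xp + yp * np.cos(theta), yp * np.sin(theta)
# Derivatives of g(x', y') = f(x(x', y'), y(x', y')) by the chain rule.
fx_p = 2 * x + y
fy_p = (2 * x + y) * np.cos(theta) + (x + 2 * y) * np.sin(theta)
fxx_p = 2.0
fxy_p = 2 * np.cos(theta) + np.sin(theta)
fyy_p = 2 * np.cos(theta)**2 + 2 * np.cos(theta) * np.sin(theta) + 2 * np.sin(theta)**2
print(transform_derivatives(fx_p, fy_p, fxx_p, fxy_p, fyy_p, theta))
# expected: (2x + y, x + 2y, 2, 1, 2)
```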

2.2.5 Curvature Calculation From Non-Uniformly Distributed Grid

In the previous two sections, the discrete data points are sampled along two straight lines with a uniform interval on each line. The calculation can be performed because the first and second derivatives, which require 3 consecutive data points on a straight line, can be calculated along the x and y axes. However, this technique cannot be applied to points that are not sampled along straight lines. For non-uniformly distributed data points, another approach to calculate the normal curvature is developed as follows.

Given a set of non-uniformly distributed discrete data points on a surface, assume the normal curvature at a data point P needs to be calculated. Since three points determine a plane in space, we can find three planes containing point P, where each plane also passes through two other data points adjacent to P. Thus, a total of 7 data points, including P, determine 3 planes, and each plane contains 3 data points on the surface. The three planes intersect the surface in 3 curves, and each curve passes through the 3 discrete data points on its plane. Since the plane curvature can be calculated from 3 discrete data points, the curvature can be obtained for the three curves on the surface. The surface normal at P can be approximated from the cross product of any two tangent vectors of the three curves. Meusnier's formula can then be applied to calculate the components of the plane curvatures of the three curves along the surface normal. In other words, three normal curvatures can be obtained along three directions at P on the surface.

Let κ₁ and κ₂ denote the principal curvatures at P; let t₁, t₂, and t₃ be the tangent directions of the three curves; let κ_n1, κ_n2, and κ_n3 be the normal curvatures along the three tangent directions; and let θ₁, θ₂, and θ₃ be the counterclockwise angles of the three tangent directions from the principal direction of κ₁ (Fig. 17). From Eq. (2.23) of Euler's formula, the three normal curvatures at point P along the three directions on the surface can be expressed as

\kappa_{n1} = \kappa_1 \cos^2\theta_1 + \kappa_2 \sin^2\theta_1    (2.31)

\kappa_{n2} = \kappa_1 \cos^2(\theta_1 + \Delta\theta_{12}) + \kappa_2 \sin^2(\theta_1 + \Delta\theta_{12})    (2.32)

\kappa_{n3} = \kappa_1 \cos^2(\theta_1 + \Delta\theta_{13}) + \kappa_2 \sin^2(\theta_1 + \Delta\theta_{13})    (2.33)

where Δθ₁₂ = θ₂ - θ₁ and Δθ₁₃ = θ₃ - θ₁. Note that κ_{ni}, i = 1, 2, 3, Δθ₁₂, and Δθ₁₃ can be calculated from the 7 discrete data points. There are three unknowns, κ₁, κ₂, and θ₁, in the above three equations. Rearranging the above equations and expressing them in matrix form,

\begin{pmatrix} \kappa_{n1} \\ \kappa_{n2} \\ \kappa_{n3} \end{pmatrix} = \begin{pmatrix} 1 & 1 & 0 \\ 1 & \cos 2\Delta\theta_{12} & \sin 2\Delta\theta_{12} \\ 1 & \cos 2\Delta\theta_{13} & \sin 2\Delta\theta_{13} \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix}    (2.34)

Figure 17: Using 7 Discrete Data Points to Calculate the Principal Curvatures

where

a = \frac{1}{2}(\kappa_1 + \kappa_2)    (2.35)

b = -\frac{1}{2}\cos 2\theta_1 (\kappa_2 - \kappa_1)    (2.36)

c = \frac{1}{2}\sin 2\theta_1 (\kappa_2 - \kappa_1)    (2.37)

The coefficients a, b, and c can be solved simultaneously from the above equations with the calculated κ_{ni}, i = 1, 2, 3, Δθ₁₂, and Δθ₁₃. θ₁ can then be obtained from the ratio of Eq. (2.37) to Eq. (2.36). Thus,

\theta_1 = -\frac{1}{2}\tan^{-1}\frac{c}{b}    (2.38)

Substituting θ₁ into Eq. (2.36) or Eq. (2.37), κ₁ and κ₂ can be solved together with Eq. (2.35) in the form

\kappa_1 = a + \frac{b}{\cos 2\theta_1}    (2.39)

\kappa_2 = a - \frac{b}{\cos 2\theta_1}    (2.40)

From the above derivation, the full curvature information at a data point on a surface can be obtained using a total of 7 points: the point considered and its 6 neighboring points. The normal curvatures κ_{ni} can be calculated by applying Eq. (2.7) for the least truncation error. The truncation error in the principal curvatures will be larger than that in κ_{ni}, since the surface normal vector is obtained approximately and introduces another source of truncation error. The calculation of the optimum grid interval h_{opt} can also be used for surface curvature analysis along two sampling directions on a surface.
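The inversion of Eqs. (2.34)-(2.40) is short enough to sketch directly. The function below assumes the three normal curvatures and the two direction offsets are already available from the 7-point construction; the synthetic check values are illustrative, and arctan2 is used instead of a plain arctangent only for quadrant robustness.

```python
import numpy as np

def principal_curvatures(kn, d12, d13):
    """Recover the principal curvatures (k1, k2) and the angle theta1 of the
    first sampled direction from three normal curvatures kn = (kn1, kn2, kn3)
    measured along directions offset by d12 and d13 radians from the first
    one; a sketch of Eqs. (2.31)-(2.40) based on Euler's theorem."""
    A = np.array([[1.0, 1.0,            0.0],
                  [1.0, np.cos(2*d12),  np.sin(2*d12)],
                  [1.0, np.cos(2*d13),  np.sin(2*d13)]])
    a, b, c = np.linalg.solve(A, np.asarray(kn, dtype=float))
    theta1 = -0.5 * np.arctan2(c, b)          # Eq. (2.38), quadrant-safe
    half_diff = b / np.cos(2 * theta1)        # (k1 - k2) / 2
    return a + half_diff, a - half_diff, theta1

# Synthetic check: k1 = 0.05, k2 = -0.02, first direction at 20 deg from the
# k1 principal direction, the other two at +40 and +75 deg from the first.
k1, k2, t1 = 0.05, -0.02, np.radians(20.0)
d12, d13 = np.radians(40.0), np.radians(75.0)
kn = [k1*np.cos(t)**2 + k2*np.sin(t)**2 for t in (t1, t1 + d12, t1 + d13)]
print(principal_curvatures(kn, d12, d13))   # ~ (0.05, -0.02, ~0.349 rad)
```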

2.3 Segmentation: Discrete Approach

We can apply the methods from the previous sections to calculate the curvature of the data points scanned from surfaces and to study the curvature behavior so as to identify character lines. Since the curvatures are calculated directly from the discrete data points, we call this the "discrete approach". The curvatures for four different types of character points on 2D curves can be characterized and are illustrated in Fig. 18. The following observations apply to this figure.

Figure 18: Curvature Behavior of Character Points (range data and corresponding curvature for cases (a) through (d))

1. Fig. 18(a) represents a D⁰ character point. The curvature exhibits two consecutive peaks with opposite signs and a zero-crossing.

2. Fig. 18(b) represents a D¹ character point. The curvature exhibits a peak. The magnitude of the peak depends on the angle between the two tangent lines and the curvature of the two curves at the character point.

3. Fig. 18(c) represents a D² character point. The curvature exhibits a shift. The magnitude of the shift depends on the curvature difference between the two curves at the character point.

4. Fig. 18(d) represents a zero-curvature character point. The curvature has a zero-crossing at the character point.

This analysis can be applied to 3D surface curvature with caution. From differential geometry, the surface curvature can be characterized by the two principal curvatures. Two curvature maps can be obtained by using the maximum and minimum curvatures at each point on the surface. These two maps can be examined for jumps and zero-crossings. The character lines can then be identified by merging the results from the two maps using an operator similar to a logical "OR".

Note that the digitized data points from scanning devices may contain various types of noise, such as form errors on the object surfaces, uncertainties from the scanning devices, etc. When the noise-to-signal ratio is relatively small, the above approach can still be used to identify character lines from scanned data. However, when the noise-to-signal ratio is large and the curvature maps become very noisy, the character lines cannot be identified directly from the curvature jumps and zero-crossings. The most common way to reduce the noise level is to apply a smoothing filter to the input scanned data. For example, Ponce and Brady [1985] and Fan, et al. [1987] applied the Gaussian smoothing operator to the original scanned data at different scales by using the scale-space approach [Witkin, 1983]. In their approaches, edges equivalent to D⁰ and D¹ character lines can be identified, but D² character lines cannot. Since scale-space tracking methods have been applied successfully to identify step edges, we can apply them to the curvature map to identify curvature jumps. In Fig. 19, Gaussian operators with different variances are applied to the curvature signals of D⁰, D¹, and D² character points. For the D² character point in Fig. 19, the curvature jump after convolution with the Gaussian operator with a large σᵢ can be identified by the inflection point (i.e., the zero-crossing of the second derivative); in fact, it also corresponds to the zero-crossing in the Laplacian-of-Gaussian mask. Thus, all three types of character points correspond to zero-crossings, extrema, and inflection points in the curvature maps of the scale space. However, the curvature map calculated from the original signal may have a smoothly varying curvature function and contain zero-crossings, extrema, or inflection points which do not correspond to any character point with a curvature jump. Therefore, we propose to perform hypothesis tests to distinguish the character points with curvature jumps from those zero-crossings, extrema, and inflection points.

Figure 19: Applying the Gaussian Operator to D⁰, D¹, and D² Character Lines with Different Variances
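A minimal sketch of the smoothing and candidate-detection step on a 1D curvature signal is given below. The two scales, the toy signal, and the function name are illustrative assumptions; the candidates it returns are exactly the points that the hypothesis test of Section 2.3.2 is meant to accept or reject.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def candidate_character_points(curvature, sigmas=(1.0, 4.0)):
    """Smooth a 1D curvature signal at several scales and flag candidate
    character points (zero-crossings, extrema, inflections) on the coarsest
    scale. These candidates still have to pass the hypothesis test."""
    k = np.asarray(curvature, dtype=float)
    smoothed = {s: gaussian_filter1d(k, s) for s in sigmas}
    coarse = smoothed[max(sigmas)]
    d1 = np.gradient(coarse)
    d2 = np.gradient(d1)
    zero_cross = np.where(np.sign(coarse[:-1]) * np.sign(coarse[1:]) < 0)[0]
    extrema = np.where(np.sign(d1[:-1]) * np.sign(d1[1:]) < 0)[0]
    inflections = np.where(np.sign(d2[:-1]) * np.sign(d2[1:]) < 0)[0]
    return smoothed, np.union1d(np.union1d(zero_cross, extrema), inflections)

# Toy signal: two constant-curvature segments (a D2-type shift) plus noise.
rng = np.random.default_rng(0)
k = np.r_[np.full(50, 0.02), np.full(50, 0.08)] + 0.005 * rng.standard_normal(100)
_, candidates = candidate_character_points(k)
print(candidates)   # indices clustered near the shift at i = 50, plus spurious ones
```

The spurious candidates produced by the residual noise are expected; filtering them out is precisely the role of the hypothesis test described later in this chapter.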

2.3.1 A Five-Step Process

The following five-step process, which is also shown in Fig. 20, is proposed to identify character lines from the calculated curvatures, based on the concepts of scale-space tracking and hypothesis testing.

1. Compute the principal curvatures at every scanned data point: Different formats of data points may require different techniques to calculate curvature. Based on the curvature calculation methods developed in the previous section, curvature can be obtained from data points in various formats, including 2D points, an orthogonal uniform grid of points, a non-orthogonal uniform grid of points, and non-uniformly distributed points.

2. Smooth the principal curvatures with Gaussian operators at a set of scales σᵢ to yield curvature maps Sᵢ: Gaussian smoothing operators at different scales are applied to smooth the curvature maps. Two scales are sufficient in most applications with moderate noise. The first scale is small, in order to remove random noise with very high frequency. The second scale is large enough that most of the random noise is removed. Intermediate scales can be used to increase confidence during scale-space tracking.

3. Find points lying on extrema, inflection points, and zero-crossings in the curvature maps: Based on the curvature map at the largest scale, the extrema, inflection points, and zero-crossings are identified along at least two directions on the surface, and the results are merged using the logical "OR" operator.

Figure 20: A Five-Step Process to Identify Character Lines (Scanned Data Points → Compute Curvature → Smooth Curvature at Different Scales → Identify Zero-Crossings, Extrema, and Inflections → Scale-Space Tracking → Hypothesis Test → Character Points/Lines)

4. Apply the scale-space tracking technique to track the corresponding character points at different scales: The corresponding curvature features in the curvature map with the smallest scale can then be tracked. The tracking between different scales conforms to the following rules. When tracking an extremum, if there is a fork, that is, a choice between two extrema at the next finer scale, the extremum with the larger absolute curvature value is chosen. If there is a fork for an inflection point, the inflection point with the larger curvature shift at the next finer scale is chosen. If there is a fork for a zero-crossing, the zero-crossing with the larger curvature shift at the next finer scale is chosen. (A sketch of this fork rule is given after this list.)

5. Perform the hypothesis test to identify D⁰, D¹, D², and zero-curvature character points and lines: The hypothesis test is performed on the points in the vicinity of the identified zero-crossings, extrema, and inflection points in the calculated curvature map, or in the curvature map with the smallest scale if the noise-to-signal ratio is large. The objective of the hypothesis test is to determine whether the jump corresponding to a "possible" character point in the noisy curvature map is abnormal compared to the jumps at the neighboring points. When the curvature jump is abnormal compared to the jumps caused by noise, it is identified as a "true" character point.
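The fork rule of step 4 can be sketched as follows for the extremum case; the window size, variable names, and toy values are assumptions used only for illustration.

```python
import numpy as np

def track_to_finer_scale(coarse_idx, fine_idx, fine_curvature, window=5):
    """Scale-space tracking rule of step 4 (extremum case): for each candidate
    found at the coarser scale, look at candidates within a small window at
    the next finer scale and, if there is a fork, keep the one with the larger
    absolute curvature value. The window size is an assumption."""
    fine_idx = np.asarray(fine_idx)
    fine_curvature = np.asarray(fine_curvature, dtype=float)
    tracked = []
    for i in coarse_idx:
        near = fine_idx[np.abs(fine_idx - i) <= window]
        if near.size:
            tracked.append(int(near[np.argmax(np.abs(fine_curvature[near]))]))
    return tracked

# A coarse-scale extremum at index 50 forks into candidates 48 and 52 at the
# finer scale; the one with larger |curvature| is kept.
fine_curv = np.zeros(100)
fine_curv[48], fine_curv[52] = 0.05, 0.09
print(track_to_finer_scale([50], [48, 52], fine_curv))   # -> [52]
```

For inflection points and zero-crossings the same structure applies, with the curvature shift across the candidate used in place of the absolute curvature value.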

The procedure of the hypothesis test is described in the following section.

2.3.2 Hypothesis Tests

The objective of the hypothesis test is to determine whether the jump corresponding to a "possible" character point in the noisy curvature map is abnormal compared to the jumps in the rest of the window. When the curvature jump is abnormal compared to the jumps caused by noise, the point is identified as a character point. The procedure of the hypothesis test is described as follows.

1. Calculate the curvature difference between every two adjacent points and store all the results in a list X, using n points surrounding the character point to be identified.

2. Depending on the type of character point to be identified, remove one, two, or three curvature differences from the list X. A D⁰ character point, as shown in Fig. 18(a), consists of three consecutive curvature jumps; however, the zero-crossing in the scale space with the largest scale should correspond to the center curvature jump, which has the largest curvature difference value among the three. Thus, three values are removed from X and the largest one is stored in J_max. Similarly, a D¹ character point consists of two consecutive curvature jumps which are adjacent to the maximum or minimum point, as shown in Fig. 18(b). Thus, two values are removed from X and the larger of the two values is stored in J_max. A D² character point, as shown in Fig. 18(c), may consist of only one curvature jump or two consecutive curvature jumps with the same sign. Therefore, one or two values are removed from X, and the one value or the sum of the two values is stored in J_max.

3. Calculate the mean μ and standard deviation σ of X from the following equations:

\mu = \frac{1}{n}\sum_{i=1}^{n} x_i    (2.41)

\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n} (x_i - \mu)^2}    (2.42)

where xᵢ is a value in X.

4. Apply the χ²-test (Pearson, 1900, 1901) to identify the type of distribution for X. Pearson's χ²-test is based on measuring the deviation of experimental data from a hypothetical distribution by the same quantity that serves for designing a confidence region for an unknown density. Suppose that all the values in X belong to the region of possible values of a random variable and that the region is partitioned into r equi-distant intervals. Let P̂₁, ..., P̂ᵣ be the frequencies of occurrence of the random variable in these intervals obtained from the n values in X, and let P₁, P₂, ..., Pᵣ be the probabilities of occurrence of the random variable in the same intervals evaluated using the hypothetical distribution. The random variable

Z = n \sum_{v=1}^{r} \frac{(\hat{P}_v - P_v)^2}{P_v}    (2.43)

measures the deviation of the experimental data from the hypothetical distribution. When Z = 0, it represents a perfect fit between the experimental data and the hypothetical distribution. The larger the Z value, the larger the deviation of the experimental data from the hypothetical distribution. Neyman and Pearson [1928] showed that the random variable Z determined by the above formula has a χ²-distribution in the limit as n → ∞. Thus, this theorem can be used to determine the divergence of experimental data from a hypothetical distribution by means of χ²-distribution tables.

In this research, three hypothetical distributions are used to test the conformance between the experimental data X and the hypothetical distribution: the normal distribution, the uniform distribution, and the Maxwell distribution [Pugachev, 1984]. The distribution curves for these three distributions are shown in Fig. 21. After applying the χ²-test (Eq. (2.43)), the hypothetical distribution with the smallest Z value is chosen, since it has the best fit among the three distributions.

After determining the hypothetical distribution, the user has to specify a confidence level α. For a specified confidence level α, the confidence interval for the curvature jump can be estimated as k_α σ, where k_α is a coefficient determined from the statistical model for the specified confidence level α and the number of points in X, and σ is the standard deviation from Eq. (2.42). If the largest curvature difference J_max corresponding to the character point is larger than k_α σ, it is identified as a character point with the specified confidence. Otherwise, if the curvature difference J_max is less than or equal to k_α σ, it is not a character point.

Note that the coefficient k_α for the normal distribution can be obtained by solving the following equation:

\alpha = \int_{-k_\alpha\sigma}^{\,k_\alpha\sigma} \frac{1}{\sqrt{2\pi}\,\sigma}\, \exp\!\left(-\frac{x^2}{2\sigma^2}\right) dx    (2.44)

Figure 21: Three Types of Distribution Curves. (a) normal distribution, (b) uniform distribution, (c) Maxwell distribution.

The coefficient k_α for the uniform distribution is \sqrt{3}\,\alpha. The coefficient k_α for the Maxwell distribution can be obtained by solving the following equation:

\alpha = \int_{-k_\alpha\sigma}^{\,k_\alpha\sigma} f_{\mathrm{Maxwell}}(x;\mu,\sigma)\, dx    (2.45)

Some frequently used values of k_α for the three distributions are listed in Table 1.

Table 1: k_α for Normal, Uniform, and Maxwell Distributions at Different Confidence Intervals

α                      0.900   0.950   0.975   0.980   0.990   0.995   0.999
Normal Distribution    1.282   1.645   1.960   2.054   2.326   2.474   3.090
Uniform Distribution   1.559   1.645   1.689   1.697   1.715   1.723   1.730
Maxwell Distribution   1.282   1.645   1.960   2.054   2.326   2.474   3.090
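The hypothesis test procedure can be sketched end to end as follows. This is only an illustration under stated assumptions: the bin count, the fixed 99% confidence level, the way the uniform and Maxwell distributions are parameterized from μ and σ, and the removal of the largest jumps (rather than the type-specific removal described in step 2) are all simplifications, and the function name and toy window are hypothetical.

```python
import numpy as np
from scipy import stats

# k_alpha at 99% confidence, taken from Table 1 (normal / uniform / Maxwell).
K_ALPHA_99 = {"norm": 2.326, "uniform": 1.715, "maxwell": 2.326}

def is_character_point(window_curvature, n_remove=1, n_bins=8):
    """Sketch of the hypothesis test: form the jump list X, drop the candidate
    jump(s), pick the best-fitting hypothetical distribution by the Pearson
    chi-square statistic, and compare |J_max - mu| with k_alpha * sigma."""
    diffs = np.diff(np.asarray(window_curvature, dtype=float))
    order = np.argsort(np.abs(diffs))[::-1]
    j_max = diffs[order[0]]                     # candidate jump
    x = np.delete(diffs, order[:n_remove])      # remaining "noise" jumps
    mu, sigma = x.mean(), x.std()

    # Pearson chi-square statistic against each hypothetical distribution.
    hist, edges = np.histogram(x, bins=n_bins)
    p_hat = hist / len(x)
    z = {}
    for name, dist in {"norm": stats.norm(mu, sigma),
                       "uniform": stats.uniform(mu - np.sqrt(3) * sigma,
                                                2 * np.sqrt(3) * sigma),
                       "maxwell": stats.maxwell(loc=x.min(), scale=sigma)}.items():
        p = np.clip(np.diff(dist.cdf(edges)), 1e-12, None)
        z[name] = len(x) * np.sum((p_hat - p) ** 2 / p)
    best = min(z, key=z.get)

    return abs(j_max - mu) > K_ALPHA_99[best] * sigma, best, j_max

# Toy window: small noisy jumps plus one large D2-type curvature shift.
rng = np.random.default_rng(1)
curv = np.cumsum(0.01 * rng.standard_normal(30))
curv[15:] += 0.7
print(is_character_point(curv, n_remove=1))   # expected: (True, ..., ~0.7)
```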

2.3.3 2D Examples

Fig. 22(a) shows the part model of a wheel hub. It can be constructed by revolving a 2D cross-sectional curve about an axis. Thus, the 2D cross-sectional curve shown in Fig. 22(b) can be used to describe the shape of the 3D model. In this example, a CMM equipped with a touch-trigger probe has been used to digitize the wheel hub and obtain measurement data points. The following four steps are used to convert the 3D measurement data points to 2D cross-sectional points.

1. Find the axisymmetric axis n_y and assign a point on n_y as the origin. In this case, the axisymmetric axis can be defined by the axis of the hole at the center of the wheel hub. The origin is assigned at a point on the axis about 10 mm away from the hole, as shown in Fig. 22(a).

Figure 22: Example Part Model: A Wheel Hub. (a) the shaded picture of the wheel hub, (b) the cross-sectional curve from the measurement data points.

2. Define n_x and n_z, which together with n_y are mutually perpendicular to one another and form a part coordinate system. n_x can be defined at the user's convenience. For example, a vector parallel to the working table can be defined as n_x if n_y is also parallel to the working table.

3. Suppose the 2D cross-sectional curve is defined on the n_x-n_y plane. Take measurements on the surface of the part model and store the coordinates in Q_i. Q_i can be represented as

Q_i = V(s_i)\,n_y + H(s_i)\cos\theta\,n_x + H(s_i)\sin\theta\,n_z    (2.46)

where V(s_i) is the vertical component of the 2D curve, H(s_i) is the horizontal component of the 2D curve, and θ is the angle between two planes: one defined by n_y and the measurement point Q_i, and the other defined by n_y and n_x.

4. H(s_i) and V(s_i) are the coordinates of the cross-sectional curve on the n_x-n_y plane corresponding to the point Q_i. They can be written as

V(s_i) = Q_i \cdot n_y    (2.47)

H(s_i) = |Q_i - V(s_i)\,n_y|    (2.48)

Note that H(s_i) and V(s_i) can be obtained without calculating θ in advance, because of axisymmetry. (A sketch of this conversion is given after this list.)
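The projection of Eqs. (2.47)-(2.48) is compact enough to sketch directly; the cylinder test data and function name below are illustrative assumptions.

```python
import numpy as np

def to_cross_section(points, origin, n_y):
    """Project 3D CMM points measured on an axisymmetric part onto the 2D
    (H, V) cross-section plane (Eqs. 2.47-2.48). `origin` is a point on the
    symmetry axis and `n_y` the axis direction; axisymmetry means the rotation
    angle theta never has to be computed."""
    q = np.asarray(points, dtype=float) - np.asarray(origin, dtype=float)
    n_y = np.asarray(n_y, dtype=float) / np.linalg.norm(n_y)
    v = q @ n_y                                        # V(s_i) = Q_i . n_y
    h = np.linalg.norm(q - np.outer(v, n_y), axis=1)   # H(s_i) = |Q_i - V n_y|
    return h, v

# Example: points on a cylinder of radius 30 about the z-axis, at random angles.
rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 5)
z = rng.uniform(0, 40, 5)
pts = np.column_stack((30 * np.cos(theta), 30 * np.sin(theta), z))
H, V = to_cross_section(pts, origin=[0, 0, 0], n_y=[0, 0, 1])
print(H)   # all ~30 (the cylinder radius)
print(V)   # the axial heights z
```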

Fig. 23(a) shows a portion of the 2D cross-sectional curve cut from the top surface of the wheel hub in Fig. 22(a). The curvature at each point is calculated using the discrete approach described previously and is shown as the dashed line in Fig. 23(b). Since there is some noise in the 2D cross-sectional curve, the character points cannot be recognized directly from the calculated curvature curve. After applying the Gaussian smoothing filter, the smoothed curvature is shown as the solid line in Fig. 23(b).

Figure 23: Example Curve Segment. (a) an example curve segment from the top surface of the wheel hub, (b) the calculated curvature and the smoothed curvature.

Since the curvature curve obtained after applying the Gaussian smoothing operator with a large scale has a couple of extrema and inflection points, it indicates that the 2D cross-sectional curve in Fig. 24(a) may contain character points. Here we use the inflection point at x = 266 and the minimum point at x = 291 as two examples to illustrate the hypothesis test process.

Select a window around the inflection point at æ=266 and the curvature curve is shown as the dashed line in Fig.24(a). After scale-space tracking for the inflec­ tion point at 2=266, the largest curvature difference Jmax=—0.6985 can be obtained and the curvature differences between every two adjacent points are stored in the list X. They are connected by a solid line in Fig.24(a). After removing the Jmax from the %, the mean and the standard deviation of the X are 0.028 and 0.0535, respectively. Applying Eq.(2.43) to three hypothetical distributions, the Z values are evaluated as 4.598, oo, and 85.712 for the respective normal, uniform, and Maxwell distributions. Thus, the normal distribution is chosen because it has the smallest Z value, which indicates that it may provide the best fit among the three hypothetical distributions(Fig.24(b)). Assuming the confidence interval o:=99%, the confidence interval for curvature jump will be fcatr=0.124. Since \ Jmax — /x|=0.727 is larger than 73

2«4 2*4) 2() 2*M 2«* 2*4) 2(7 2*7.) 2*1 •0.1) -0.1 AO) 0 (b) The Three Distribution Curves vs. (a) An Inflection Point at z = 266 the Distribution Curve from the Mea­ surement Data Points

DUUtaikn<(«29l - NoraiÉl D ttiM ta n - Uniform D tsrtetlan - Mn«*S Dm twm -

44 4.3 42 41 at 02 03 04 (d) The Three Distribution Curves vs. (c) An Minimum Point at z = 291 the Distribution Curve from the Mea­ surement Data Points

Figure 24: Hypothesis Tests At Two Locations On The Curve 74 the curvature jump with the specified confidence interval, it can be identified as a character point.

Similarly, select a window around the minimum point at x = 291; the curvature curve is shown as the dashed line in Fig. 24(c). After scale-space tracking for the minimum point at x = 291, the largest curvature difference J_max = -0.242 is obtained, and the curvature differences between every two adjacent points are stored in the list X; they are connected by a solid line in Fig. 24(c). After removing J_max and the value of the point next to J_max from X, the mean and the standard deviation of X are 0.011 and 0.120, respectively. Applying Eq. (2.43) to the three hypothetical distributions, the Z values are evaluated as 15.318, ∞, and 116.161 for the normal, uniform, and Maxwell distributions, respectively. Thus, the uniform distribution is chosen because it has the smallest Z value, which indicates that it may provide the best fit among the three hypothetical distributions (Fig. 24(d)). Assuming the confidence level α = 99%, the confidence interval for the curvature jump is k_α σ = 0.260. Since |J_max - μ| = 0.253 is smaller than the curvature jump bound at the specified confidence level, this point is not a character point.

Fig. 25 shows the segmentation result for the measurement data points sampled from the top surface of the wheel hub in Fig. 23. After applying the five-step process, 19 character points are identified, which divide the data points into 20 subsets. The identified character points are drawn as small diamonds in Fig. 25.

Figure 25: Example 1: 2D Segmentation Result

2.3.4 3D Examples

Two examples are given below to illustrate the segmentation of data points scanned from 3D models.

Example 1: D¹ Character Lines

Fig. 26(a) shows the data points sampled from a 3D model which consists of six planar faces. In this example, all the character lines should be D¹ character lines. 101 × 101 data points are sampled from the CAD model; note that the sampling points may not pass through the edges in the model. Fig. 26(b) and (c) show the principal curvatures κ_max and κ_min at each data point, respectively. After applying the five-step process, the character points are identified from the points close to the edges in the CAD model. The character lines, which can be formed by connecting the character points, divide the data points into six subsets, and each subset corresponds to a planar face in the CAD model.

Since the character points may not be sampled exactly on the edges, the character lines may not be used directly to create the edges for reverse engineering purposes. Instead, a planar face can be best-fitted to each subset of data points, and the intersecting line between two adjacent planar faces can be used to construct the edges in the model. Therefore, the CAD model can be created using the proposed five-step process.

Figure 26: 3D Example 1: D¹ Character Lines. (a) points sampled from a part model, (b) calculated mean curvature, (c) smoothed mean curvature, (d) segmentation result.

Example 2: D² Character Lines

Fig. 27(a) shows the data points sampled from a 3D model which contains a spherical surface on a planar face and a blending surface between them, with a tangent continuity condition at the intersecting boundaries. In this example, all the character lines in the model should be D² character lines. 101 × 101 data points are sampled from the CAD model; note that the sampling points may not pass through the edges in the model. Fig. 27(b) and (c) show the calculated mean curvature and the smoothed mean curvature at each data point, respectively. After applying the five-step process, the character points are identified from the points close to the edges in the CAD model. The character lines, which can be formed by connecting the character points, divide the data points into three subsets, and each subset corresponds to a surface patch in the CAD model. The planar face, spherical surface, and blending surface can be created by using fitting techniques which will be discussed in the following two chapters. The edges can then be created by intersecting these surfaces, and a CAD model can be constructed.

Figure 27: 3D Example 2: D² Character Lines. (a) points sampled from a part model, (b) calculated mean curvature, (c) smoothed mean curvature, (d) segmentation result.

2.4 Summary

In this chapter, a five-step process to automatically segment scanned data points is proposed, and several techniques to calculate curvature from various formats of scanned data points are derived. The pros and cons of the proposed five-step process are discussed as follows:

• Curvature is an intrinsic property, so it is invariant regardless of the coordinate system. Therefore, applying the Gaussian filter to curvature maps does not generate orientation-dependent problems [Ponce and Brady, 1987]. This makes the results robust to the position and orientation of the coordinate system.

• The proposed five-step process applies the scale-space tracking technique to the curvature map instead of the original curves or surfaces. Since scale-space tracking has been applied successfully to detect jump edges, we apply it to the curvature map to identify curvature jumps. Thus, D⁰, D¹, and D² character lines and points can all be identified. The use of the Gaussian operator at multiple scales increases the confidence in detecting character lines or points without loss of localization.

• Using hypothesis tests can justify the results from the smoothing filter. Note that in our approach not all the inflection points and extrema in the curvature maps calculated from scanned data correspond to character lines or points. The use of hypothesis tests justifies the results and generates character lines without producing spurious ones.

• Applying hypothesis tests can eliminate the need to specify threshold for de­

tection of curvature jumps. Threshold is a simple method to detect the jump 80

boundaries; however, it also involves a lot of difficulties when the signals are

noisy and the noise-signal ratios are different in the scanned data. In the pro­

posed 5-step process, the user only needs to specify a confidence interval which

can be applied to the signals with a much broader range of the noise-signal lev­

els. Consequently, the results are much more accurate, consistent, and stable

than the threshold selection technique.

The limitations of the proposed discrete approach include:

• For a character point, when the curvature shift is so small that the curvature

map with large Vi cannot generate an inflection point, the character point

cannot be identified.

• When the input scanned data contain a large noise-signal ratio, the curvature

jumps of character points do not have significant differences from the curvature

differences between other scanned data points. In this case, the character points

cannot be identified by using the hypothesis tests. CHAPTER III

Identification of Surface Types and Parameters From Scanned Data Points

After segmenting the scanned data points into a number of subsets, each subset can be used to create a surface patch in a B-rep model because of continuous curvature property. In order to obtain an accurate surface representation, surface types and parameters need to be identified. Surface types can usually be identified first and, thus, the identified surface type can be used to determine the surface function and the necessary parameters to perform the surface fitting procedure. In this chapter, two approaches are investigated including curvature approach and quadratic-fit approach in the following sections.

3.1 Identification of Surface Types: Curvature Approach

Since the surface curvature can be obtained at each discrete data point, we can apply this information to determine the surface type. If only the signs of the principal curvatures are used to determine the basic surface types, six surface types result: peak, pit, ridge, valley, flat, and saddle. The signs of mean and Gaussian curvature yield eight basic surface types because saddle surfaces can be resolved into saddle ridge, saddle valley, and minimal surfaces [Besl and Jain, 1986]. These classifications

8 1 82 also can be described by the curvature graph in Fig.28 where the minimuTn and maximum principal curvature are x and y axes, respectively[Bhann and Nuttall, 1989].

/ ridges H>0 spheres K<0 Minimal saddle Surfaces ridges Peaks

H<0 K<0

H<0 K=0 valleys

H<0 K>0 Pits

dishes

Figure 28: Curvature Graph

The eight basic surface types provide a good indication of the surface property; however, a more specific description of surfaces is required for calculating the param­ eters of a surface function. Therefore, all the identifiable features should be separated from the basic surface types. For example, spheres and ellipsoids can be separated from peak surfaces. Cylinders and cones can be separated from ridge surfaces. In fact, it is proven that all the ridge surfaces can be represented by developable surfaces in the form(Carmo, M., 1976)

x(t,u ) = a{t) 4- uw(t) (3.1) 83

where (w x w') • a'= 0.

Based on the specific curvature properties, some practical features including spheres, cylinders, and cones can be identified easily from the principal curvatures which are summarized in Table 2. Table 2 also includes most of the quadric surfaces for com­ parisons. Some of the quadric surfaces may not be easily identified directly from curvature values, but they can be identified by using the quadratic fit technique which is discussed in the following section.

3.2 Quadratic Fit

Besides the direct curvature calculation from discrete data points, polynomial fit is another approach to obtain surface information. A general surface in three dimensions can be written as a polynomial function of x, y, z in Cartesian coordinate, i.e.,

S = {(z,y, z) : F(x,i/,z) = 0} ( 3.2) which is referred to as an implicit representation. Among all kinds of polynomial functions, quadratic function is one of the most popular fitting functions used by most researchers because of its simplicity and versatile surface shapes. A quadric surface can be classified into 5 types of surfaces based on the properties of the charac­ teristic roots [Adrian, 1949]. Each type may still contain several sub classes of surface shapes. Higher degrees of polynomials may have more types of surface properties.

In this research, only the quadratic-fit technique is discussed. Other polynomial-fit

techniques can be analyzed in a similar manner. 84

Table 2: Identification of Surface Types From Curvature

Surface Type HK Kl K2 Feature

sphere (ki=K 2= — 1/R) ellipsoid Peak < 0 > 0 < 0 < 0 elliptic paraboloid hyperboloid of two sheets

cylinder (k 2= —1/R)

cone (k 2= —1/r)

Ridge < 0 = 0 = 0 < 0 elliptic cylinder, parabolic cylinder, and hyperboloid cylinder developable

> 0 < 0 hyperboloid of one sheet Saddle < 0 < 0 |k i|< |k 2| hyperbolic paraboloid

Plane = 0 = 0 = 0 = 0 plane

> 0 < 0 hyperboloid of one sheet Minimal = 0 < 0 Kl = |k 2| hyperbolic paraboloid

dishes (ki=K 2= - 1/R) int. ellipsoid Pit > 0 > 0 > 0 > 0 int. elliptic paraboloid int. hyperboloid of two sheets trough (ki =1/R) int. cone (Ki=l/r)

Valley > 0 = 0 > 0 = 0 int. elliptic cylinder, int. parabolic cylin­ der, and int. hyperboloid cylinder developable

> 0 < 0 hyperboloid of one sheet Saddle Valley > 0 < 0 1 |k i|> |k 2| hyperbolic paraboloid R is a constant which represents radius, t varies from 0 to R. 85

A general quadratic equation can be represented by

S\\X^ + + 3 3 3 2 ^ + 3i2®ÿ +S 13XZ + a23ÿ^ + 2341® + 2342ÿ +2 ^ 4 3 2 + S4 4 = 0 (3.3) or

x‘b = 0 (3.4) where x denotes the vector

X = (1, ®, ÿ, 2, ®^, y^,z^, xy, xz, yzY (3.5) and b denotes the vector of coefficients

b = ( 344,2341,2342,2 3 4 3 ,311,322} 3 3 3 , 312,313, 3 2 3 )*. (3.6)

In order to recognize surface type of the quadratic equation for n data points, the process can be decomposed into the following two tasks:

# Best Fit Coefficients: apply least-squares approach to obtain best fit coefficients

of the quadratic equation

• Identification of Surface Type: By properly translating and rotating the surface

principal axes to the global coordinate axes, the surface type can be determined

from the signs of the coefficients.

They are further discussed as follows.

3.2.1 Best Fit Coefficients

In Eq.(3.3), there are 10 coefficients and 9 independent variables. One simple ap­ proach to solve the simultaneous equations is to divide 1 0 coefficients by a nonzero 86 coefficient in order to reduce the coefficient number to 9 and apply pseudo inverse technique to calculate the best fit coefficients. However, this approach requires the prior knowledge of the nonzero coefficient and it cannot be obtained in this case.

Thus, another approach[Cernuschi-Frias, 1984], which is more complex but do not need the prior knowledge of nonzero coefficient, is adopted in this research and is discussed as follows.

Given n measurement data points (œ,-,ÿi,Zi), the error vector can be written as

e = X ‘b (3.7) where r 1 »! yi zi y{ ® 12/1 ®l2l 2/l2l 1 ®2 2/2 Z2 vl ® 22/2 ®222 2/222 X = 1 ®3 2/3 23 ®3 2/3 A ®32/3 ®323 2 / 3 2 3 ( 3 .8 )

. 1 3 5 „ y „ z „ xl ®n2/n ®n2n 2/n2n The square of the error ER can be represented by

E R = e‘e = b'X'Xb = b'Rb (3.9) where R=X*X is a 10x10 symmetric matrix. The matrix R can be further ma­ nipulated to reduce the dimension of the problem to six and it can be partitioned as

H-11 H-12 (3.10) 21 H-22 J where R n is a 4x4 symmetric matrix, Rig^Rg^ is a 4x6 matrix, and R 2 2 is a 6x6 symmetric matrix. Similarly, b can be correspondingly partitioned as

b = [bi b2]‘ (3.11) 87

where b i= [ 6 i 6 2 6 3 6 4 ]* and b 2 =[6 g be 6 7 6 3 6 9 6 io]'- The sum square of the error in

Eq.(3.9) can then be written as

ER = bjRiibi + 2b|Ri2b2 + bgR 2 2 b 2 (3.12)

For quadric surfaces, the vector b 2 represents coefficents of quadric terms in the equa­ tion of the surface. While fitting implicit surfaces using the least-squares approach, one always ends up with a system that has a trivial solution. To avoid a trivial solution, we can put a constraint on b 2 [Cernuschi-Frias, 1984]:

bgDb2 = 2 (3.13) where 1 0 0 0 0 O' 0 1/2 0 0 0 0 0 0 1/2 0 0 0 D = (3.14) 0 0 0 1 0 0 0 0 0 0 1/2 0 .0 0 0 0 0 1. where the constraint is shown to be view-invariant; i.e., the left-hand side of Eq.(3.13) is a constant irrespective of the translation and rotation of the object. The value of

2 for the expression is chosen to scale the parameters obtained for the surface. Since the constraint does not involve bi, the value of bi that minimizes the error for a given value of b 2 is

bi = -R[^Ri 2 bz (3.15)

Now the problem is reduced to that of determining the b 2 that minimizes subject to the constraint in Eq.(3.13). The unknown parameters b 2 satisfy the eigenequation

[Cernuschi-Frias, 1984]

H b 2 = ADb 2 (3.16) 88 where

H = R.22 — R-2iR'11^R'12 (3.17)

It can be shown[Cernuschi-Frias, 1984] that the error in the least-squares approxima­ tion using an eigenvector of the matrix is equal to twice the corresponding eigenvalue.

Thus, the best-fit coefficient vector b 2 is the eigenvector corresponding to the smallest eigenvalue.

Thus, the surface fitting module uses scanned data points to form the real sym­ metric matrix R, and uses least-squares algorithm based on eigenvalues to compute the best-fit values of coefficients of quadric surfaces fitted to the given data points.

Note that H cannot be obtained if R n is singular. There are three cases to yield singular matrix Rn.

• ran^Rii)=3 : points scanned from a plane

• rum&(Rii)=2 : points scanned from a straight line

• ranX(Rii)=l : single point or points overlapping at the same point

Thus, before calculating matrix H, the rank of R n has to be checked. If R u has fuU

rank 4, the least-sqaures algorithm can be applied. Otherwise, a plane, line, or point

fitting approach needs to be used depending on the rank of R n.

3.3 Identification of Surface Types: Quadratic-Fit Approach

Every quadric surface has at least two mutually perpendicular principal planes, and

a central quadric has at least three mutually perpendicular principal planes. The line 89 of intersection between two principal planes is a principal axis or an axis of symmetry.

Every quadric surface has at least one principal axis and, if it has more than one, at least one other principal axis exists that is perpendicular to each[Spain, I960].

When the principal axes of a quadric surface coincides with the axis of the global coordinate system, the quadric surface equation can be reduced to the most compact form and the surface type can be identified from the sign of the nonzero coefficients.

Since the coefficients obtained in previous section may represent a general quadric surface in 3D space, it is necessary to identify the transformation matrix to properly rotate and translate the principal axes of the quadric surface to the global coordinate system. In other words, the location and orientation of the quadric surface need to be calculated first and surface type can then be identified &om the sign of the coefficients.

3.3.1 Determining Locations

In order to determine the location of the surface origin relative to the global coordi­ nate system axes, the surface origin need to be translated to the coordinate system origin. A surface whose origin is located at the global coordinate system origin can be described by

+ -I- cz' + dxÿ + ëxz -f /ÿ z j = 0 (3.18)

If the surface origin is translated to a point (t,u,v) in the global coordinate system, the surface equation will become

-\-hy^ C7? -f- dxy -f exz -f jyz -f 5 ® -f hÿ zz -f-j i = 0 (3.19) 90

Substituting x = x — t, y = y — u, and z = z — v into Eq.(3.18), the relationship between the coefficients in Eq.(3.18) and Eq.(3.19) is

0 = 0 b = b c = c (3.20)

d = d ë = e f — f (3.21)

g = —(2o< + du + ev) (3.22)

h = —(d< + 26u + fv) (3.23)

1 = —{et + fu + 2cv) (3.24)

j = j + at^ + bu^ + + dtu + etv + fuv (3.25)

Given a quadratic equation of the form of Eq.(3.19), the quadric surface can be translated to the origin of the global coordinate system by a vector (t,u, v) which can be solved from the simultaneous equations Eq.(3.22), Eq.(3.23), and Eq.(3.24) in the form of , F t! 2o d e l —gl r —g u = d 2b f —h = —h (3.26) V e f 2c —i —i Coefficient j can then be obtained by Eq.(3.25). Note that vector {t,u,v) cannot be obtained if matrix G is singular. In this case, it represents that the principal axes of the quadric surface are aligned with the global coordinate axes. Normally, there would be only one nonzero coefficient among g, h, and i.

3.3.2 Determining Orientation

The quadric surface described by Eq.(3.19) can be represented in a matrix form as

a d/2 e/21 X X [aj y z] d/2 b //2 y + [ 5 h z] y + j = 0 (3.27) .e/2 y/2 c z z 91 or

Yrt ‘DY + EY + i = 0 (3.28)

The process of rotating the coordinate system involves changing the basis vectors for which the coordinate system is defined. The transformation hrom one orthonormal basis to another can be performed by

Y = P Y or Y = P-^Y (3.29) where P is a linear transformation matrix and P =P^ since P is an orthonormal matrix. We need to find a matrix P that will diagonalize matrix D to form a diagonal matrix D. The square matrix D is diagonalizable if an invertible matrix P exists such that

P -^D P = D (3.30) is diagonal. If P is an orthonormal matrix that satisfies Eq.(3.30), then D is orthog­ onally diagonalizable. By subsituting Eq.(3.30) into Eq.(3.28), we get

(p -iy )tD (p -iŸ ) -h E P-^Y + i = 0 (3.31)

Since P~^=P*, the above equation can be rearranged as

Y ‘(P D P ‘)Y -h E P ‘Y -f i = 0 (3.32)

Then P D P ‘=D gives D P*=P‘D or D P ‘=AP‘, which can be expressed as

DPi = AfP{ where i = 1,2,3 (3.33)

Then, we obtain the eigenvector, eigenvalue relation

DP* = P ‘D (3.34) 92 and

PDP* = A = D (3.35) so P is the eigenvector matrix. Since the directions of a quadric principal axes are directed along the eigenvectors associated with matrix D, the eigenvector matrix P can be used to represent the orientation of the quadric surface and the eigenvalues become the coefficients of the quadric surface in the standard position for defining the shape of the surface[Thomas, 1972; Hall, et al., 1982].

In Eq.(3.19), ifc = e = / = i=0, the quadric surface will become a 2D quadric

curve. The location and orientation of a 2D quadric curve can be derived similarly

except that P becomes a 2x2 transformation matrix.

3.3.3 Recognizing Surface Type

From the above two sections, the location and orientation of a quadric surface can be

identified and, thus, the coefficients of the quadric surface in the standard position

can be obatined. Based on the nonzero coefficients, the following categories can be

classified [Cernuschi-Frias, 1984]

1. ront(Rii)=l : Point

2. ranl(R ii)=2 : Straight Line

3. ranÀ(Rii)=3 : Features including plane, 2D quadric curve, and free form curve.

Since all the points are on a plane, let all the data points translate and rotate to

x —y plane. Thus, any coefficient associated with z will be zero in Eq.(3.19). The

following categories can be classified based on the signs of nonzero coefficients. 93

• Case i: a (rant(D)=2)

if 0 6 > 0

if la/fe - 1 | < C

then Circle

else Ellipse

else # ( ob < 0 )

if 0

then Hyperbola

else Two Intersecting Straight Lines

• Case ii: o = 0 or 6 = 0 (ranfc(D)=l)

if ( o ^ 0 and h ÿ ^0 )or( 6 ÿ^ 0 and g ^ 0 )

then Parabola

if ^ = h = 0

then Two Parallel Straight Lines

• Case iii: Even though one can always classify the data points into one

of the above categories, the fitting error might be large and, thus, it is

inappropriate to use these categories to represent the data points. If this

is the case, plane or free form curve should be used if the fitting error is

larger than a user specified maximum limit. 94

4. ranA:(Rii)=4 : This category includes all types of the 3D general quadric sur­

faces. They can be divided into subcategories based on the rank of diagonal

matrix D and each subcategory can be further classified based on the sign of

nonzero coefficients. Let max = maximum of ]a|, |6|, and |c|, we have

• Case i: a 6 ^ c ^ 0 (runt(D)=3)

if ( a, 6, c > 0 ) or ( o, 6, c < 0 ) a if < C and -1 < ( and -1 max max max < c then Sphere

else Ellipsoid

else if ( ahc < 0 and j > 0 ) or ( ahc > 0 and j < 0 )

then Hyperboloid of Two Sheet

else if ( ahc < 0 and j < 0 ) or ( ahc > 0 and j > 0 )

then Hyperboloid of One Sheet

else if J = 0

if two nozero coefficients are differed by less than f

then Circular(Right) Gone

else Elliptic Gone

Case ii: a = 0or6 = 0orc = 0 (ranfc(D)=2)

if ( ab > 0 or 6c > 0 or ttc > 0 )

ifg^Oorh^Oorzÿ^O 95

then Hyperbolic Paraboloid

else Elliptical Cybnder

else (flf, h,i = 0)

iîj^O

if two nozero coefficients are differed by less than ^

then Circnlar(Rigbt) Cylinder

else Elliptical Cylinder

else (j = 0)

Straight Line(Tbis corresponds to a cylinder of radius zero.) else if ( a6 < 0 or 6c < 0 or ac < 0 )

ifi/0

then Hyperbolic Cylinder

else Two Intersecting Planes

Case iii: a^Oorb^Ootc^O (ron6(D)=l) ifjf^Oorfc^Oori^O

then Parabolic Cylinder else (g, = 0)

if 0

then Two Parallel Planes

else One Plane(whicb corresponds to two coincident parallel planes) 96

Note that ^ usually represents a number between 0 and 1 which is used to identify the maximum allowable ratio between coefficents for identifying circles, spheres, cylinders, and cones from respective ellipses, ellipsoids, elliptic cylinders, and elliptic cones.

3.4 Summary

In this chapter, two approaches are presented for identifying surface types from

scanned data points. The first approach applies the signs of curvature informa­ tion to characterize surfaces into eight subcategories. The second approach applies

quadratic-fit technique to characterize surfaces into subcategories of quadrics. The

first approach may have the advantage of using curvature values calculated from the

segmentation module and, thus, the computation is more efficient than the second

approach. However, the quadratic-fit approach has the advantage of providing richer

subcategories of quadrics and quantitative least-squares fitting error information com­

pared to the curvature approach. Thus, the curvature approach can be used to pro­

vide a quick classification of cloud point sets. On the other hand, the quadractic-ht

approach is recommended for practical applications which requires detailed surface

information and is adopted in this research for the software implementation. CHAPTER IV

Curves and Surfaces Approximation

After segmenting the scanned data points into a number of subsets and identifying curve or surfac type, each subset of data points can be approximated by a curve or surface in order to create the B-rep model. Based on the results from the clas­ sification module, curves and surfaces can be classified into various quadric surface types and free form curves and surfaces. Since the quadric curves/surfaces fitting have been investigated intensively, the techniques to best fit quadric curves/surfaces can be found in literatures[Cernuschi-Prias, 1982; Cernuschi-Frias, 1984; Hall, et al.,

1982; Chivate and Jablokow, 1993; Yan and Menq, 1994]. Thus, in this chapter, the techniques to approximate free form curves and surfaces from scanned data points will be investigated.

Free form curves and surfaces can usually be represented by Hermite, Coons,

Splines, Bezier, B-splines, and Nonuniform Rational B-splines(NURBS) functions.

Among them, NURBS function have the most versatility and can be used to represent the other functions. Thus, it will be used as the basis function to represent free form curves and surfaces in this work. In this chapter, the definition and the properties of the NURBS functions are briefly reviewed in the first. Then, techniques to create free form curves and surfaces from scanned data points will be presented.

97 98

4.1 Nonuniform Rational B-splines(NURBS)

The mathematical definition of a NURBS curve is a vector-valued piecewise rational polynomial function of the form(Peigl and Tiller, 1987)

C(u) = ^ ------(4.1) Y , WiNi,p{u) i=0 where the w, are the weights, the P,- are the control points, and Ni^p{u) are the

normalized B-spline basis functions of degree p defined recursively as

«■ •W = {J

where u, are the knots forming a knot vector

U = {u0 ; 1^1 ) • • • 5 (4.4)

The degree p, number of knots (m -f 1), and number of control points (n -f 1) are

related by the formula

m = n + p + 1 (4.5)

For nonuniform and nonperiodic B-splines, the knot vector takes the form

U — ■{q:, (X,. . . ,

where the end knots a and are repeated with multiplicity (p-t-1). In most practical

applications, a=0 and (3=1 are assumed and they are also assumed throughout this 99 section. The NURBS curve with the above knot vector is a Bezier-like curve. It interpolates the endpoints and is tangential at the endpoints to the first and last line segments of the control polygon.

4.1.1 Analytic and Geometric Properties:

The NURBS curve form in Eq.(4.1) can be rewritten into the following equivalent form

CM = ÊPift» (4.7)

% ,( .) = (4.8)

i=o where are rational basis functions. Their analytic properties determine the geometric behavior of curves. The most significant properties are discussed as fol- lows(Peigl and Tiller, 1987):

• Generalization: If all the weights are set to 1, then

* I otherwise ' ' '

where the O’s and I’s are repeated with multiplicity p-t-1, and Bi^p{u) denote the

Bernstein polynomials of degree p. This simply demonstrates that the Bézier

and nonrational B-spline curves are special cases of NURBS curves.

• Locality: Ri,p(u)=0 if u 0 [ui,u^+p+i). Thus, if a control point is moved or a

weight is changed, it will affect the curve only in p 1 knot spans. 100

*+p • Partition of Unity: Since ^ iZi,p(u)=l, it has convex hull property. The NURBS t curve segment C(u) is enclosed by the convex huh of Pi_p,. • •, Pf where u G

[uj, Ui+i) and p < j

• Invariance under Affine and Perspective Transformations: A general affine trans­

formation is a linear transformation including scaling, rotation, shearing, etc., »+p foUowed by a translation. Prom ^ iîi,p('u)=l, it can be proved that a NURBS t curve is invariant under Affine and Perspective Transformations(Peigl and TiUer,

1987; Lee, 1987 ). Note that nonrational Bézier and nonrational B-spline curves

are also invariant under affine transformation; however, they are not invariant

under perspective transformation.

• Differentiability: In the interior of a knot span, the rational basis functions are

infinitely continuously differentiable if the denominator is bounded away Rom

zero. On the other hand, they are (p — k) times continuously differentiable at a

knot where k is the multiplicity of the knot.

• Ri^p{u]Wi = 0) = 0. If a particular weight is set to zero, the corresponding

control point has no effect at ah on the curve.

In other words, if a particular weight, Wj, is close to infinity, the curve segment

in the span (■Ui,7 ij+p+i) wih degenerate to the point P^. 101

• Expression of Conic Sections: Conic sections are among the most important

curves in CAD/CAM and graphics. The circle, ellipse, parabola, hyperbola,

and straight line can be precisely represented by using NURBS curves. The

quadrics including the plane, cylinder, cone, sphere, and etc can also be pre­

cisely represented by using NURBS surfaces. Note that nonrational Bezier and

nonrational B-spline curves cannot represent quadric sections precisely.

• Extrema: Except for the case p=0, Ri,p{u) attains exactly one maximum value.

• Variation Diminishing Property: No plane has more intersections with the curve

than with the control polygon.

Note that a NURBS curve is a piecewise continuous curve. A NURBS curve with degree p has automatically continuity at every point of the curve if there do not exist any multiple knots except the two endpoints. However, if a NURBS curve has multiple knots with multiplicity of two or more in the interior of the knot span, then the continuity cannot be sustained at the multiple knots. If the multiplicity is

k, the continuity of the NURBS curve becomes at the multiple knots. Using a

NURBS curve with degree 3 as an example, we can have the following three cases:

• If the NURBS curve has two multiple knots, such as U = { a ,a ,a ,a ,... , 7 , 7 ,

. . . , /3,/3,/9,/3 }, it win have continuity at -u = 7 which is equivalent to a

character point.

• If the NURBS curve has three multiple knots, such as U={ ce, a, o:, a ,..., 7 , 7 , 7 ,

. . . , /3,/3,/9,/3 }, it will have C® continuity at u = 7 which is equivalent to a 102

character point.

• If the NURBS curve has four multiple knots, such as U ={ a, a, a, a , . .. , 7 , 7 , 7 , 7 ,

. . . , /3,/3,;0,/3 }, it win have C~^ continuity at u = 7 which is equivalent to a

character point.

From the above discussions, one can see that a NURBS curve not only has the flexi­ bility to represent a large variety of shapes but also allows character points between curve segments embedded in the NURBS representation. The technique to closely approximate the range data is investigated as follows.

4.2 Least-Squares Fitting of NURBS Curves

Suppose a set of 2D measurement data points j=i,...,r, are given. From Eq.(4.1), the control points of a NURBS curve can be obtained by using a least-squares fit if the degree, knot vector, weights, and parameters for the NURBS basis function are specified. Let the knot vector U = { Aq, A i,. . . , A,,, } be the non-decreasing sequence of numbers, the weights , be set to I ’s initially, and the parameter value U{ corresponding to Qj be initially assigned. The control points of a NURBS curve can be solved from a set of equations given by

Qj = ^PiRi,p(uj,u;i) (4.10) i=0 where j = 1,... ,r. The above equation can be written in matrix form as

[Q1 = 1Æ11P] (4.11) 103

where [H] is a matrix with r rows and (m 1 ) columns containing the NURBS basis functions, and [Q] and [P] are r* X 2 and (m + 1) x 2 matrices respectively. Matrix

[Q] contains the coordinates of the data points whereas [P] contains the unknown coordinates of the control points. The solution of Eq.(4.11) is obtained by finding the pseudo-inverse for [J 2 ] as

(P| = W IQ ] (4.12)

Although this solution is computationally efficient, a much more stable solution can be obtained by Householder reductions, exploiting the band structure of the matrix

[i2]. Substituting Eq.(4.12) into Eq.(4.11) and subtracting the result from [Q] gives the error vector [e] as(Sarkar, 1991)

IQl

= m w i (4.13) where [T] = f[/| — [Æ] is an orthogonal matrbc. The matrix [e] is an r X 2 matrix and the matrix [T] is a r x r matrix. The above equation can be written in component form as

[ex] = [T][QJ (4.14) where [e*] is a vector of length r having x-components of error vectors as its elements and [Qœ] is another verctor of length r having the x-coordinates of the data points as its elements. The y-component can be written in a similar form. The sum of the 104 squares of the «-component of error vectors is given by

É 4 = W N = [Q.FlTrirllQ J = lQ,f (TllQ J (4.15) * = 1 where [T]^ = [T] and [r]^[T] = [T] since [T] is an orthogonal matrix. Similar expression for the sum of squares of y-component of the error vectors can be obtained.

Therefore, the expression for the squares of all the error values becomes

B = = (4.16) •=1 i=l

Thus, the average error is defined as

Cm. — (4.17)

The quantity 6 ^ is crucial in approximation procedures, as it provides an indication of how close the data points are from the corresponding fitted points.

Since the degree, knot vector, weights, and parameters for the NURBS basis function are specified initially, the change of any of these values will result in different least-squares fitting results. An optimization method can be applied to search for an optimum solution to reduce the average error and achieve the best fit. However, because of the number of variables, it usually requires very expensive computation time to achieve globally best fit and make it impractical. Therefore, the alternative

approach is to provide a good inital guess of all these parameters, knots, and weights

and, then, iterate if it is necessary by changing the parameter values. This process is

described as follows:

1. Assign inital parameters to the data points, a knot vector, and weights 105

2 . Generate a least-squares fit by using Eq.(4.12).

3. If the fit is not acceptable, then compute new parameter values and go to step

2.

This algorithm, combined with the results from segmentation algorithm, usually gives reasonable curve fits.

Generally speaking, the initial weights for curve fitting are usually assigned to ones. There are a number of methods to specify initial parameters including uni­ form parameterization, chord length parameterization, centripetal parameterization,

. . . , etc.(Lee, 1989). Among these methods, the uniform method usually cannot give satisfatory results. The cumulative chord length method is better than the uniform method, but some disturbing results could be obtained when the data points are not evenly spaced. The centripetal method invariably gives much better results than the two previous methods. The one exception is when aU the data points are distributed

along a straight line, in which case the chord length method gives a uniform sweep­

ing of line segments. Thus, the centripetal method is applied to specify the initial

parameters for data points. The parameter Ui can be obtained from

iPf - P.--11^/^ no = 0 and n* — Ui_i — —------1 < i < n (4.18)

i=o Depending on the selection of knot sequence, two types of free form curves can be

classified, including open curves and closed curves. The assignment of knot vectors

for both types of curves are discussed as follows. 106

1. Open Free Form Curves; For open free form curves, the knot vector takes the

form

U •— {o!, cx,. .. , 0!, Ap^x, • • •, Am—p—1, • • • 5 (4.19)

where the end knots a and ^ are repeated with multiplicity (p + 1). Two

methods to assign the knot vector work well in practice.

• Uniform Knot Interval: a = 0, = 1, and A; = m — 2 p — 1 2 J+P“ i • Averaging Method: a = 0, /3 = 1 , and Aj = - ^ «i, j = 1 ,... ,tc — p P i=i 2. Closed Free Form Curves: For closed free form curves, the knot vector takes

the form

U — ^(* 0 ) ... 5 Up, Ap^x > •. • J Am—p—X ) ^Oj • • • J Pp\ (4.20)

where Op = 0 , /3q = 1 , and

oti = Op - (jSo - A m -p - i+ x ) 0 < i < p (4.21)

A = A + (A p + i+ i — Up) 0 < i < p (4.22)

Ai can be obtained by using either uniform knot interval or averaging method

similar to open free form curves.

Note that if insufficient number of data points are defined in a knot span interval, singular matrix may be yielded. To avoid this singularity, sufficient number of data points must be supplied or the number of NURBS segments needs be reduced. 107

4.3 Least-Squares Fitting of NURBS Surfaces

The curve least-squares htting technique above can be extended generalized easily to surfaces to yield

[P] = {UV'^UVy^UV'^ P (4.23) where UV = The iterative process for curve fitting can also be applied to surface fitting similarly. However, the parameter assignment for each data point will be more complex than that of curve fitting. In curve fitting, each data point has one corresponding parameter and the parameters can be assigned based on the sequence of the data points. In surface fitting, each data point has two corresponding parameters and the parameters also need be determined based on the sequence of the data points. However, the sequence of the data points in 2D parameter space corrsponding to data points in 3D space may not be as straightforward as the case in curve fitting. The two parameters for a continuous surface patch usually form a rectangular region in the u-v parameter space. The corresponding surface patch may have various geometry shape including a surface patch with both closed u and

V parameters such as a sphere, a surface patch with closed in u parameter and open in V parameter or vice versa such as a cylinder, a surface patch with both open u and V parameters such as a planar face or a sculpture surface. Even with the prior information of open or closed u and v parameters, specifying (u,u) parameters to the corresponding data points may be still considered difficult depending on the complexity of the geometric shapes of object surfaces. Thus, the parameter for each data point needs be specified not only by the sequence of the data points but also 108 based on the overall shape of the original surface. Therefore, two categories of data points are classified first based on the distribution of data points belonging to 2 |D or 3D. For 2|D data points, it represents the data points on the original surface can all be viewed without surface occlusion from at least one direction. On the other hand, for 3D data points, it represents not all the data points are visible from any view angle because of surface occlusion. Different approaches will be used to specify parameters for these two categories and they are discussed in the following sections.

4.3.1 2|D Data Points

For 2|D data points, all the data points can be one-to-one projected to a plane perpendicular to the view dirction without occlusion. Thus, linear interpolation tech­ nique is adopted to provide initial parameters for surface fitting because it is simple, computation efficient and it works well in general. Depending on the geometric shape of the projected data points, three subcategories can be further classified as follows and the linear interpolation schemes may differ among these subcategories.

1. Surface Approximation From Rectangular Cloud Point Set;

If all the data points can be projected to form a rectangle, the simplest linear

interpolation method can be applied in this case. Let Po and Pg represent the

respective lower left point and upper right point of the projected rectangle as

shown in Fig.29(a), the parameter (uj, Vi) for each data point Q, can be obtained

from

u, 109

V

Pc P. (a) Rectangular Cloud Point Set

----- f '

(b) 4-Side Convex Polygon Cloud Point Set

P o p. (c) 4 Boundary Curves Cloud Point Set

Figure 29: Parameter Assignment by using Linear Interpolation for Surface Approx- Vimation T n a from 2|D O 1 DataI ^ X Points " D —.T X _ 110

= Q \ / { P Ï - n ) (4.25)

where parameters ( 0 ,0 ), ( 1 ,0 ), ( 1 ,1 ), and ( 0 ,1 ) are assigned to points Pq, Pi,

Pg, and P 3 , respectively.

2 . Surface Approximation From 4-Side Convex Polygon Cloud Point Set:

If all the data points can be projected to form a 4-side convex polygon, a linear

interpolation between four vertices can be applied to calculate the parameters.

Let Pi,i=i,2 ,3 , 4 represent the four counterclockwise sequential vertices of the con­

vex polygon, each data point Qi corresponding to the parameter {ui,Vi) can be

represented as(Fig.29(b))

Qi = Po + «i(Pi - Po) + î^i(Qr - Q.-) (4.26)

where Q*=Po+‘Ui( Pi Po ) aud Q**=P 3 -l-Ui( P 2 - P 3 ). The above equation

can be decomposed into x and y components as

Qf = PS + «i(P?-PS) + «i[(P3-PS) + «.(PS+Pf-P?-PI)(14.27)

Qf = Pg + « i(P ? -P g ) + i;i[ ( P |- P ? ) + Ui(Pg + P 2 -P ? -P ^ )r4 .2 8 )

After rearranging Eq.(4.27), Vi can be represented by

(Pf-PS) + «i(PS +P f - P î - P g ) ^ ' '

Thus, Ui can be solved from a quadratic equation by substituing Eq.(4.29) into

Eq.(4.28). Vi can then be obtained by substituing Ui into Eq.(4.29). If the data

point is inside the convex polygon, only one set of solutions is vabd. That is. Ill

both Ui and Vi are within the interval between 0 and 1 . On the other hand, if the

data point is outside of the convex polygon, no valid solution can be obtained.

In other words, either u; or Vi will be outside the interval between 0 and 1.

3. Surface Approximation From 4 Boundary Curves Cloud Point Set:

If the projected data points cannot form a 4-side convex polygon, four boundary

curves have to be specified in advance. As shown in Fig.29(c), a number of

isoparametric curves along both u and v directions are produced in advance

and the hnear interpolation technique derived above for 4-side convex polygon

can be applied for each polygon formed from the four vertices of the two adjacent

isoparametric curves in both u and v directions. Note that if there exists any

concave polygon, it has to be converted to convex polygon by merging with

adjacent polygons.

Note that the above surface approximation approaches are based on 4-side NURBS surface patches. If the 3-side NURBS surface patch is desired, the analysis and linear interpolation approaches can be performed analogously based on the 3-side polygons or curves.

4.3.2 3D Data Points

For 3D data points, not all the data points can be one-to-one projected to a plane.

Thus, two approaches are common to construct surfaces from 3D data points. For the first approach, user has to specify four boundary curves or apply curve fitting approach to create four boundary curves in advance, and uses these boundary curves 112 to contruct a reference surface. {u,v) parameters can then be identified by projecting the data points onto the reference surface and these parameters can be used for the inital surface fitting parameters. This process can be iterated until a satisfactory fit­ ting surface is obtained. This approach is normally time consuming and tedious. The alternative approach is to create cross-sectional curves and apply “lofting” technique to create a surface. The lofting process is described as foUows(Fig.30).

QlZ P 12 Qz2

,32 .02

,31

,20

,30 ,00

Figure 30: Lofting: Surface Interpolation Through Cross-Sectional Curves

The objective of the lofting process is to create a NURBS surface which will pass

through a set of given NURBS curves. In practice, these curves are usually planar

curves positioned in 3D space with a so-called spine curve. The lofting surface can

be obtained in three steps(Piegl, 1991): 113

1 . Make all the cross-sectional curves compatible. That is, all the curves should

have the same degree and number of control points and be defined over the

same knot vector. Assume this has been done; then

Ck{u) = Ê Q i,kNiM , ft = 0 ,..., ÜT (4.30) >=0

are u-directional curves lying on the surface(isoparametric lines in the u direc­

tion) and defined over the same knot vector U.

2. Compute v values and a knot vector V for interpolation with degree-g NURBS

curves. The v values are needed as the curves(Eq.(4.30)) are assumed to be at

a certain fixed v.

3. Using the knot vector V and the v values computed in Step 2, interpolate curves

through the control points of Eq.(4.30). More precisely, for each i, i = 0,... ,n,

obtain m = (4.31) j=o so that Eq.(4.31) interpolated Q,-,*. at certain v values(note that if the u-direction

curves of Eq.(4.30) are rational, then rational interpolations are to be used). The

control points of Eq.(4.31) are then the control points of the lofting surface.

n m S M = (4.32) t=o i=o

defined over the knot vectors U and V.

Generally speaking, surfaces created by lofting process can produce fairly good results in practice if a sufficient number of cross-sectional curves are used. However, 114 the more number of cross-sectional curves, the more number of surface patches will be created and this may produce an oscillation effect which may not be desired.

Moreover, the input data points should be arranged as a series of scanned lines in order to create cross-sectional curves. Therefore, the selection of cross-sectional curves is very important in the lofting process. Note that not all the data points are used to create the cross-sectional curves during lofting process. The points which are not used for cross-sectional curves can be applied to iteratively improve the fitting surface by projecting the data points onto the lofting surface in order to calculate the parameters for the least-squares surface fitting. This process can be iterated until a satisfactory result is obtained.

4.4 Summary

In this chapter, NURBS curve and surface approximation techniques are introduced to create free form curves and surfaces from scanned data points. The definition and the properties of the NURBS functions are briefly introduced in the beginning.

An iterative process to create NURBS curves from scanned data points by using least-squares technique is presented. Depending on the selection of the knot vectors, open or closed curves can be created bsaed on the need of the application. Two methods including uniform knot interval and averaging method work well in practice for specifying knot vectors. For surface approximation, an iterative approach and a lofting approach are discussed. Lofting process is generally fast for creating the first approximating surface from scanned data points. To obtain best result, it may be necessary to apply iterative processes. Techniques to improve the efficiency in the 115 iterative processes and the selection of optimization techniques have been investigated in Sarkar’s Ph.D. disertation[Sarkar, 1991]. CHAPTER V

Automatic Recognition of Geometric Forms from B-rep Models

The purpose of this chapter is to develop a neutral basis that bridges the gap between the geometric data of a design work and the higher-level geometric abstraction that supports complex reasoning incurred in feature-based systems. In order to accomplish this goal, a new approach is proposed to characterize surface differential geometry and object topology so as to develop a neutral basis that can be applied to different types of geometric data and the resulting higher-level geometric abstraction can support various life cycle design activities. In this approach, two attributes are proposed to characterize the geometric entities including surfaces, edges, and vertices by apply­ ing the Gauss-Bonnet theorem and the concept of homeomorphisms. By associating the entities with the proposed attributes and applying the adjacent relationship in­ formation among geometric entities, three basic feature categories including positive features, negative features, and transition features are first identified. The proposed approach is not limited to prismatic parts. Part models consisting of fillets, rounds,

cylindrical surfaces, spherical surfaces, torus, and sculpture surfaces can all be rec­

ognized. The recognized features from the proposed approach can include a much

broader categories of part models compared to those in previous studies and can sup­

116 117 port the subsequent product activities in design and manufacturing. Based on the proposed approaches, a software program is implemented and examples are given to demonstrate the proposed approach.

5.1 Introduction

Features have been proposed as a means of providing high-level semantic informa­ tion for the life-cycle design concerns[Gu, 1994; Chen and Miller, 1992; Gomes and

Teixeira, 1991; Requicha and Vandenbrande, 1989; Shah and Rogers 1988]. Research work can be generally categorized as two distinct approaches: design by features and feature extraction.

In the design-by-features approach, designers are encouraged to construct parts based on explicit feature elements which are predefined in a feature library or are sup­ ported by the system. Manufacturing information can be tagged as related attributes along with the feature definition. Therefore, the constructed features make the per­

tinent data available for down stream applications and ensure the reasoning directly looking at the relevant regions. By taking this advantage, several systems proposed

for manufacturability assessment have been reported[Ovtcharova, et al, 1994; Hen­

derson and Anderson, 1994; Lion, 1991; Chung, et al, 1990; Shah, et al, 1990; You,

et al, 1989; Chang, et al, 1988; Luby, et al, 1986]. However, these systems can only

evaluate design in very limited regions which are contracted by using predefined fea­

tures and tagged attributes. For a part which comprises complex shape and multiple

features, interactions among the constructed features are likely to occur, and conse­

quently new features can be generated. These features may serve important functions 118 and most likely need be addressed in various life-cycle activities. The other drawback is that the feature model is not interchangeable among different applications. For example, when designing a die-cast part by using die-cast features, the constructed features can be assessed for the die-cast process. However, a die model, which can be produced by subtracting the part model from a die block, may not convey suffi­ cient feature information for machining process assessment. Nevertheless, design by features has its merit and will continue to be an active research area.

Feature-extraction approach releases the burden of restricting designers’ freedom to the limited modeling method within predefined feature elements. It can be classi­ fied into two subcategories: volumetric approach and subgraph matching approach.

The volumetric approach[Woo, 1982; Tang and Woo, 1991; Kim, 1991; Perng, et al, 1990; Vanderbrande and Requicha, 1993; Sakurai, 1994; Shen and Shah, 1994;

Menon and Kim, 1994] has the advantage of extracting machining features since each machining feature can be represented by a volume.

For the subgraph matching approach, a significant amount of research has been performed[Kyprianou, 1980; Joshi and Chang, 1988; Henderson and Anderson, 1984;

Chuang and Henderson, 1990, 1994; Marefat and Kashyap, 1990; Vandenbrande,

1990; Ferreira and Hinduja, 1990; Corney and Clark, 1991; Fields and Anderson,

1994; Sakura and Gossard, 1990; Chamberlain, et al, 1993; Dong, et al, 1993; Dong,

1994; Regli, et al, 1993, 1994; Narayan and Ling, 1994; Rosen, et al, 1994; Tseng and Sanjay, 1994]. In these studies, features are defined in terms of string charac­ ters, volumes, specific patterns or graphs consisting of faces, edges, and vertices. The 119

recognition is normally performed by two procedures: first, characterize faces, edges, or vertices and; second, rules, grammar, or graphs are applied to identify features

by matching feature pattern in database of the part model. However, most of these

approaches can only deal with prismatic parts which contain only planar faces and

straight-line edges in the model. Some of them may include cylindrical surfaces. This

limitation is resulted from the fact that features are characterized by the attributes

of edges or vertices only. Thus, the identifiable features are normally simple in geom­

etry. These approaches cannot process parts having complex shapes, such as rounds,

fillets, sculpture surfaces, etc. On the other hand, Lentz and Sowerby[1992] proposed

a feature extraction method for sheet-metal parts by using surface curvature prop­

erties. Attributes are defined for surfaces only and features can be identified from

concave and convex regions and their intersections. This approach has the advantage

of identifying features from rounds, fillets, sculpture surfaces, but it has very lim­

ited capabilities to recognize features which are formed by edges and vertices, such

as holes [Lentz and Sowerby, 1994], because surface curvature properties cannot be

applied to edges and vertices. Moreover, if two or more features intersect each other,

feature recognition using current approaches may become very difficult. In addition,

recognizing features by dealing with low level geometric entities such as vertices,

edges may produce combinatorial problems[Gadh and Prinz, 1992]. The variations

in geometry and topology of features can result in an overwhelmingly large number

of patterns and create a substantial barrier to feature recognition. Most of current

approaches are able to handle few sources of difficulty; however, practical problems of­ 120 ten have many sources of difficulty occurring simultaneously. Thus, without a generic approach which can characterize various geometric entities and their variations by using a set of common attributes, it would be difficult to recognize highly complex features that often exist in real world designs. Therefore, it is highly desirable to bet­ ter characterize the surface geometry and topology and to extract common attributes from low-level geometric data so as to identify higher-level abstraction of a design and to support various life cycle design activities.

As shown in Fig.3, the geometric data of a design work can be originated from a traditional CAD modeling system, design by features, a sophisticated parametric design system, or from a reverse engineering process, and the developed neutral basis will be used to facilitate feature extraction so as to support various life cycle de­ sign activities. In the figure, only two major activities are given, namely design for functions and design for manufacturability. Depending on the specified downstream application, the neutral basis can be converted and transformed to the appropriate feature representations based on the rules from the domain knowledge of the appli­ cation. The application to design for manufacturability for various manufacturing processes includes machining, die casting, injection molding, sheet metal forming,

..., etc.. In order to develop the neutral basis for feature recognition, the surface differential geometry and topology of various geometric data will be characterized and the invariant properties of various geometric entities will be identified. B-rep models wiU be adopted in this research since it has the advantage of providing ex­ plicit geometric and topologic information. Two common attributes are proposed to 121 characterize three basic geometric elements including surfaces, edges, and vertices.

These attributes can also be used to characterize the forms and shapes of a design model. Based on these attributes, primitive forms and shapes of an object can be extracted and represented by basic feature categories. Features for various manufac­ turing processes can then be recognized by the automatic feature recognizer in Fig.3 based on those basic feature categories along with attributed B-rep information. This dissertation focuses on the recognition of basic forms and shapes, i.e., the neutral ba­ sis as shown in Fig.3. Software programs will be implemented and examples are given to demonstrate the proposed approaches.

5.2 Geometric Forms

In this section, several invariant surface properties derived from diiferential geometry and their applications to feature recognition are investigated. The concept of home- omorphism and Gauss-Bonnet theory are introduced and their applications to the derivation of two attributes that characterize basic geometric entities are discussed as follows.

5.2.1 Homeomorphism

For solid objects such as baseballs, chalk, donut, cups, etc., their shapes can be rep­

resented by the surfaces enclosing the objects. Ordinarily, such surfaces are “closed,”

that is, compact and without boundary. Unlike these surfaces, a plane is not closed

and thus it cannot be used to represent a solid object. There are two kinds of closed

surfaces, orientable and nonorientable. The sphere, the torus, the double torus, the 122

vCÜ' ) ( 'C ü ' 'C ü ' ) ( 'Q' 'C ü ' \C 3 /

sphere torus double torus triple torus

Figure 31: Orientable Closed Surfaces

triple torus, and so on, are orientable surfaces as shown in Fig.31. They can also be described by adding handles to a sphere. For example, adding one handle to a sphere yields a torus, and adding two handles to a sphere yields a double torus, and so on.

It can be proven that every closed connected orientable surface is homeomorphic to one of the above mentioned orientable surfaces [Carmo, 1976]. Two surfaces are con­ sidered to be homeomorphic if one of the surfaces can be continuously distorted to look like the other. Continuous distortion can be bending, stretching, and squashing without tearing or “gluing” points together. According to these criteria, the surfaces of spheres, baseballs, chalk, and bolts are homeomorphic. Similarly, the surfaces of nuts, the teacup with one handle, and the torus are homeomorphic.

Typical examples of nonorientable surfaces include the Mobius band which is not closed but compact, the Klein bottle and the projective plane which are closed. It can be proven that a closed surface is nonorientable if and only if it has a subspace homeomorphic to a Mobius band. Since most of these surfaces do not exist in 3D space, they will not be considered in this dissertation. 123

5.2.2 Gauss-Bonnet Theorem

For any compact orientable surface, a triangulation can be performed to divide the surface into a number of triangular patches. Let F, E, V denote the number of triangles, edges, and vertices, respectively, in the surface after triangulation. The

number

X = F —E + V (5.1) is called the Euler characteristic of the triangulation[Carmo, 1976]. The Euler char­

acteristic can also be expressed as % = 2 — 2 g where g is called genus, representing the

number of handles added to a sphere. This concludes that % is a topologic invariant

of compact orientable surfaces in 9%^.

Given a triangulation of a compact orientable surface R, and let Ci, ..., Cn be

the closed, simple, piecewise regular curves which form the boundary of each triangle

and 0 1 ,..., 9p be the set of external angles of the curves C\, ..., the Gauss-Bonnet

theorem[Carmo, 1976] can be expressed as

f f Kda + L + 12^! = 2irx(E) (5.2) J JR i= i JCi (=1

where s denotes the arc length of Q, Kg is the geodesic curvature of Ci, K is the

Gaussian curvature of R, n is the number of piecewise regular curves, and p is the

number of vertices after triangulations. The Gauss-Bonnet theorem is important in

the geometry of surfaces since it produces a relationship between the Euler charac­

teristic which is defined in terms of topology and the total curvature which is defined

in terms of distances and angles. 124

(a) A Hemisphere Adding on a Block (b) A Hemisphere with fillet Adding on a Block

Figure 32: A Form with Positive Total Curvature Grown on a Block

5.2.3 Form Features vs. Curvature

Using a sphere and a rectangular block as an example, it should be observed that both the sphere and the rectangular block have the identical Euler characteristic(two), and the total curvature are both dw’s since they are homeomorphic to each other.

However, the distributions of the two curvature functions are very different. The

Gaussian curvature of a sphere is a constant which is equal to the inverse square of the radius. On the other hand, the total curvature in the planar faces and straight-line edges of a block is zero and it is concentrated at the vertices.

If a bump is made in the top face of the block as shown in Fig.32(a), the total curvature remains invariant since this deformation does not affect the Euler charac­ teristic. It can be observed that even though the bump has positive total curvature, the intersecting edge between the bump and the top face of the block has concen­ 125 trated negative total curvature which offsets the positive total curvature generated by the bump. If the sharp edge between the bump and the face is replaced by a smooth fillet as shown in Fig.32(b), the total curvature still remains invariant. The negative total curvature to offset the positive total curvature of the bump is distributed on the fUlet which can be calculated by JJKd(r. This example demonstrates that once a feature having positive total curvature is “grown” on an object, there must exist a form which could be surfaces, edges, or vertices and has an opposite sign of total curvature to offset it.

From the above discussion, one may infer that adding one form feature to an exist­ ing object win lead to variations of curvature distributions. Curvature distributions can be characterized as feature properties and can be applied to feature recognition.

A further investigation in the classification of basic geometric elements for feature recognition is performed as follows.

Fig.33 shows six different features on a planar face of a rectangular block. Applying the Gauss-Bonnet theorem in Eq.(5.2), we find that each feature has positive total curvature, except for the boundaries intersecting the planar face, which have negative total curvature. On the other hand, there is only one non-zero term in Eq.(5.2) for each feature in Fig.33. In other words, the non-zero total curvature can be contributed by surfaces, as in features (a) and (d), by edges, as in features (b) and (e), or by vertices, as in features (c) and (f) of Fig.33. However, features (d), (e), and (f), which represent a form growing on the base surface, and features (a), (b), and (c), which represent a form subtracted from the base surface, cannot be distinguished by the sign of the total curvature alone.

Figure 33: Six Features with Positive Total Curvature on a Block

Therefore, additional criteria are required to identify the differences between these two classes of features.

5.2.4 Two Attributes

For the cases where the total curvature can be evaluated by surface curvatures, i.e., by ∬ K dσ, some properties of Gaussian curvature can be applied to classify these surfaces. It is well known that surfaces with positive Gaussian curvature can be classified into either concave or convex surfaces. A convex surface has two negative principal curvatures, while a concave surface has two positive principal curvatures. Surfaces with negative Gaussian curvature contain mixed concave and convex properties since one of the principal curvatures is positive and the other is negative. These classifications result in three basic categories of form features: the positive feature referring to convex surfaces, the negative feature referring to concave surfaces, and the transition feature referring to mixed concave and convex surfaces. It is worth noting that the transition features here often represent the intersecting boundaries, such as blending surfaces, between features.

Therefore, according to the above criteria, feature (d) in Fig.33 represents a positive feature because of its convex surface property and, on the other hand, feature (a) in Fig.33 represents a negative feature because of its concave surface property. The features (b), (c), (e), and (f) in Fig.33, however, cannot be evaluated by Gaussian curvature since the total curvature is concentrated at surface discontinuities, namely, edges and vertices. Nevertheless, the concept of concave and convex properties can be extended to characterize edges and vertices. For example, a point on a surface is said to have the convex property when the two principal curvatures are both negative.

This indicates that all the curves generated by intersecting the surface with all the planes containing the surface normal at the selected point are convex. Similarly, when a plane is used to intersect an edge or a vertex, the resulting curve may contain a curvature discontinuity at the intersecting point. If all the resulting curves can be identified as convex (or concave) curves at the selected point, we may say that the selected point has the convex (or concave) property. On the other hand, if some curves are concave and the others are convex, the selected point has both concave and convex properties. Thus, we propose to extend the concept of concave and convex properties from describing a continuous curve to describing a curve containing curvature discontinuities. The "character line" is proposed to describe curvature discontinuities in a feature. A character line can be a straight line segment or a curve, and it carries two attributes which could be concave, convex, or straight. The use of two attributes is similar to that of the two principal curvatures which are used to characterize a point on a surface. These two attributes describe the two extreme curvature properties using the qualitative labels concave, convex, and straight instead of numerical values like principal curvatures. Similar to a character line, a vertex, which is defined as the intersection of character lines,

can also have two attributes. The attributes of a vertex can be determined similarly to those of a point on a character line. For example, if all the character lines connecting to

a vertex have concave properties, two attributes of the vertex will be concave. On

the other hand, if the character lines connecting to a vertex have both concave and

convex properties, the vertex will have a concave and a convex attribute.

5.2.5 Transition Feature

Referring to Fig.33, it is worth noting that the transition features of the six features

have significant meaning in feature recognition since they represent the segmentation

of features. Since all six features in Fig.33 have positive total curvature, the segmen­

tation of features can be performed by simply identifying the surfaces, character lines,

or vertices with negative total curvature. Thus, the "character loop" is proposed to represent the segmentation of features. A character loop can be composed of surfaces, edges, or vertices which correspond to the segmentation of features. More precisely, a character loop is a chain of surfaces, edges, or vertices which surrounds a positive or negative feature. Normally, a character loop should form a closed chain of vertices, edges, or stripes of surfaces; however, there are some exceptions where the closed loop cannot be formed, especially when features interact; these will be discussed in the next chapter. After identifying all the character loops, we can then construct the "entity-loop-attribute graph," which is an abstraction of the data structure describing the relationships among loops, surfaces, edges, and vertices. Feature extraction can then be performed based on this graph.

As mentioned earlier, prior research in feature recognition has been mainly limited to models containing planar faces and straight-line edges. The proposed approach attempts to overcome the difficulty of recognizing features from non-planar surfaces and extends feature recognition to free-form surfaces by applying character lines, character loops, and curvature properties.

5.3 Neutral Basis (I): Two Attributes

In this section, three geometric entities, including surfaces, edges, and vertices, are considered since they are the basic entities to form the geometric shape of a part model. Two attributes can be used to characterize these three geometric entities as follows. 130

5.3.1 Characterization of Surfaces

As we discussed earlier, surfaces can be characterized by using the signs of the principal curvatures. Thus, two attributes are proposed to characterize the surface property, and they can be determined by the signs of the two principal curvatures. If the principal curvature is greater than zero, the attribute is defined as concave. If the principal curvature is less than zero, the attribute is defined as convex. Otherwise, when the principal curvature is zero, the attribute is defined as straight. Therefore, six categories of surfaces can be identified from Table 3 by using the signs of the principal curvatures or the Gaussian curvature K and mean curvature m.

Table 3: Characterization of Surfaces

Surface Property        κ1     κ2     K      m      Notation
convex and convex       < 0    < 0    > 0    < 0    (-,-)
concave and concave     > 0    > 0    > 0    > 0    (+,+)
concave and convex      > 0    < 0    < 0    any    (+,-)
convex and straight     = 0    < 0    = 0    < 0    (0,-)
concave and straight    > 0    = 0    = 0    > 0    (+,0)
planar face             = 0    = 0    = 0    = 0    (0,0)

Note that the planar face does not belong to the concave or convex categories. However, it can be included in both positive and negative features.
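As an illustration of how the classification in Table 3 might be carried out in software, the following C++ sketch maps the signs of two given principal curvatures to the notation of the table. It assumes the principal curvatures have already been computed elsewhere; the tolerance eps and the function names are choices made only for this example.

    #include <iostream>
    #include <string>

    // Sign convention of Table 3: a convex principal direction has negative
    // principal curvature, a concave one has positive curvature, and zero is
    // "straight".
    enum class Attr { Convex, Concave, Straight };

    Attr sign_to_attr(double k, double eps = 1e-9) {
        if (k > eps)  return Attr::Concave;   // positive curvature -> concave
        if (k < -eps) return Attr::Convex;    // negative curvature -> convex
        return Attr::Straight;                // approximately zero -> straight
    }

    // Returns the attribute pair of Table 3 as a printable string,
    // e.g. "(-,-)" for a convex-and-convex surface point.
    std::string classify_surface_point(double k1, double k2) {
        auto symbol = [](Attr a) {
            return a == Attr::Convex ? "-" : (a == Attr::Concave ? "+" : "0");
        };
        return std::string("(") + symbol(sign_to_attr(k1)) + "," +
               symbol(sign_to_attr(k2)) + ")";
    }

    int main() {
        std::cout << classify_surface_point(-0.5, -0.5) << "\n"; // convex and convex:   (-,-)
        std::cout << classify_surface_point( 2.0,  0.0) << "\n"; // concave and straight: (+,0)
        std::cout << classify_surface_point( 0.0,  0.0) << "\n"; // planar face:          (0,0)
    }

A surface patch can then be classified by sampling a few points and confirming that they all fall into the same category.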

5.3.2 Characterization of Edges

Edges with C⁰ or C¹ continuity have curvature discontinuities and thus the total curvature cannot be evaluated by Gaussian curvature. However, from the Gauss-

Bonnet theorem in Eq.(5.2), the total curvature can still be evaluated by the geodesic curvature along the character line. By applying the concept of a homeomorphism, the geometric form of an edge can be altered to a surface, such as a fillet or round, while maintaining the same total curvature. This implies that we may also define two attributes to characterize the total curvature of an edge, similar to the two attributes describing the curvature properties of a point on a continuous surface. Thus, we propose to define two attributes for an edge. When tracing edges in a B-rep model, two attributes, which could be concave, convex, or straight, also need to be determined.

These two attributes will describe the two extreme curvature properties like principal curvatures.

From Eq.(5.2), the sign of the total curvature from an edge can be determined by the sign of the geodesic curvature, κ_g. Since the geodesic curvature can be calculated from the edge and the surface normal at the selected point, the sign of the total curvature can then be determined. Similar to the characterization of a surface, the two attributes for negative total curvature will be concave and convex. For positive total curvature, the two attributes will be either concave and concave or convex and convex, which need to be determined by additional criteria. As we mentioned earlier, at a selected point with positive total curvature, all the curves passing through the point are concave or all are convex. Thus, any one curve can be used to identify the type of the two attributes for the selected point on the edge. In practice, two methods are found to be convenient for calculation.

The first method uses the edge itself, and the attribute can be determined from the sign of the normal curvature of the edge. The second method utilizes the angle between the two tangent planes of the two surfaces at the selected point. If the angle is greater than π, the attribute is convex. If the angle is less than π, the attribute is concave. Otherwise, the attribute is straight. Once one attribute is determined from either method, the other attribute can be determined automatically by using the sign of the total curvature. When the total curvature is zero, the two attributes could both be straight, or one of the two attributes is straight. The non-straight attribute in the latter case can be determined by using the above two methods.
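The second method can be sketched in code as follows. This fragment is only an illustrative calculation under a stated convention: n1 and n2 are the outward unit normals of the two faces at the selected edge point, and into2 is a unit vector tangent to the second face, perpendicular to the edge, pointing from the edge into that face; all three are assumed to be supplied by the B-rep modeler.

    #include <algorithm>
    #include <array>
    #include <cmath>
    #include <iostream>

    using Vec3 = std::array<double, 3>;
    static const double PI = 3.14159265358979323846;

    double dot(const Vec3& a, const Vec3& b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    // Attribute of an edge point from the angle between the two tangent planes,
    // following the rule in the text: angle > pi -> convex, angle < pi -> concave,
    // angle = pi -> straight.
    const char* edge_attribute(const Vec3& n1, const Vec3& n2, const Vec3& into2,
                               double eps = 1e-9) {
        double c     = std::max(-1.0, std::min(1.0, dot(n1, n2)));
        double phi   = std::acos(c);        // unsigned angle between the face normals
        double s     = dot(n1, into2);      // < 0: faces bend away from each other
        double theta = (s < 0.0) ? PI + phi : PI - phi;   // tangent-plane angle, exterior side
        if (theta > PI + eps) return "convex";
        if (theta < PI - eps) return "concave";
        return "straight";
    }

    int main() {
        // Outer edge of a block: top face (+z) meets side face (+x); walking from
        // the edge into the side face means going in -z.
        std::cout << edge_attribute({0,0,1}, {1,0,0}, {0,0,-1}) << "\n"; // convex
        // Bottom edge inside a slot: bottom face (+z) meets slot wall (+x); going
        // into the wall from the edge means going in +z.
        std::cout << edge_attribute({0,0,1}, {1,0,0}, {0,0,1})  << "\n"; // concave
    }

The sign test on into2 simply distinguishes whether the two faces bend away from or toward each other at the edge, which is what selects the side on which the tangent-plane angle is measured.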

For edges with C⁰ continuity, one of the most typical examples is the intersecting line between two non-parallel planes. The total curvature is zero. Referring to Fig.34, let θ be the angle between the two planes shown. When θ < π, the edge is defined as concave-and-straight. When θ > π, the edge is defined as convex-and-straight, since all the curves on the two planar faces passing through a point on the edge have convex or straight properties. It is noted that when θ = π, the edge is neither concave nor convex and its attributes can be defined as straight and straight, since the two planes can be merged into one plane.

For edges of C⁰ continuity as shown in Fig.35(a), the total curvature is positive and the two attributes are identified as convex and convex. For edges of C⁰ continuity as in Fig.35(b), the total curvature is negative and the two attributes are identified as concave and convex. It is interesting to see that the attribute obtained from the edge is concave while the attribute obtained from the two tangent planes is convex. This demonstrates that the edge has both concave and convex properties.

When the angle between the two tangent planes is equal to zero, the C⁰ continuity in this case becomes C¹ continuity.

(a) convex-and-straight

Figure 34: Character Line Between Two Planes

(a) convex-and-convex  (b) convex-and-concave

Figure 35: Character Line of C⁰ Continuity

Figure 36: Character Line of C¹ Continuity

Figure 37: Character Line of Zero Gaussian Curvature

For edges of C¹ continuity, the two tangent planes at the selected point on the intersecting curve are coincident and have the same normal vector. Thus, the geodesic curvature of the intersecting curve has the same magnitude but different signs for the two intersecting surfaces. Therefore, the total curvature is zero along an edge with C¹ continuity. This can be illustrated by the example in Fig.32. A positive hemisphere feature is added on a block with a concentrated negative-total-curvature edge, as shown in Fig.32(a). If the edge between the hemisphere and the block is replaced by a fillet, as shown in Fig.32(b), the negative total curvature is distributed over the fillet and the edges surrounding the fillet have

C¹ continuity with zero total curvature. Some other examples of this type of edge, surrounding fillets and rounds, are shown in Fig.36. Thus, these edges normally do not describe any characteristics of features during the feature extraction process. However, they are still important since they represent the segmentation of surfaces or features. Similarly, edges with zero Gaussian curvature, as in Fig.37, have zero total curvature and thus are not significant in feature recognition, but they too are important since they represent the segmentation of surfaces or features.

5.3.3 Characterization of Vertices

Similar to a character line, a vertex, which is defined as the intersection of edges, can also be characterized by two attributes by applying the concept of a homeomorphism. Ideally, the two attributes of a vertex should be determined based on all the curves passing through the vertex on the adjacent surfaces. However, there is an easier way to determine the two attributes by using all the edges intersecting at the vertex. The attributes of a vertex can be determined from the attributes of the adjacent edges by using an operator similar to a logical "OR" operator. For example, if all the edges connecting to a vertex have concave properties, both attributes of the vertex will be concave. If the edges connecting to a vertex have both concave and convex properties, the vertex will have one concave and one convex attribute. On the other hand, if all the edges have common tangent planes, the vertex will have zero total curvature and thus the attributes can be defined as straight and straight. In this case, the vertex is not significant in feature recognition, but it represents the segmentation of edges and surfaces.

5.4 Identification of Character Lines

In the B-rep model of a part, edges may not describe all the curvature discontinuities in the part model. For example, a free-form surface represented by a NURBS function may contain C⁰ or C¹ curves without having any edge entity. Thus, in addition to the existing edges, there may exist other curves containing curvature discontinuities in a part model. Those curves are called "character lines" and they need to be identified and converted to edges in the part model. This process is especially important when part models contain sculptured surfaces. Similar to the classification of edges, character lines can be classified as character lines with C⁰ or C¹ continuity and character lines with zero Gaussian curvature. They can be identified by connecting points corresponding to curvature discontinuities and zero Gaussian curvature on a surface. For example, a torus surface has two character lines, defined by the two circles with zero Gaussian curvature at the top and bottom of the torus. For B-spline or NURBS surfaces, multiple knots or multiple control points may create curvature discontinuities, and zero-Gaussian-curvature character lines can be identified by calculating the Gaussian curvature on a surface and linking the zero crossings.
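For a concrete illustration of locating zero-Gaussian-curvature character lines by linking zero crossings, the sketch below samples the closed-form Gaussian curvature of a torus (center-circle radius R, tube radius r) around the tube angle and reports the sign changes; the two crossings it finds, near u = π/2 and u = 3π/2, are exactly the top and bottom circles mentioned above. The sampling density and parameter values are chosen only for the example.

    #include <cmath>
    #include <iostream>
    #include <vector>

    // Gaussian curvature of a torus as a function of the tube angle u
    // (u = 0 on the outer equator): K(u) = cos(u) / (r * (R + r*cos(u))).
    double torus_K(double u, double R, double r) {
        return std::cos(u) / (r * (R + r * std::cos(u)));
    }

    int main() {
        const double PI = 3.14159265358979323846;
        const double R = 3.0, r = 1.0;
        const int    n = 720;                       // angular samples around the tube
        std::vector<double> zeros;
        for (int i = 0; i < n; ++i) {
            double u0 = 2.0 * PI * i / n;
            double u1 = 2.0 * PI * (i + 1) / n;
            double k0 = torus_K(u0, R, r), k1 = torus_K(u1, R, r);
            if (k0 == 0.0 || k0 * k1 < 0.0)         // sign change: a zero crossing
                zeros.push_back(0.5 * (u0 + u1));   // midpoint as an approximation
        }
        // Expect crossings near u = pi/2 and u = 3*pi/2, i.e. the top and bottom
        // circles of the torus, which are the character lines described in the text.
        for (double u : zeros) std::cout << "zero crossing near u = " << u << "\n";
    }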

In a B-rep model, vertices are defined as the intersection points among edges. Similarly, character vertices are defined as the intersection points among character lines and edges. Once all the character lines and character vertices are identified,

they can be merged into the edges and vertices in the part model. Two attributes

also need to be identified and assigned to the character lines and character vertices.

Neutral Basis (II): Positive Features, Negative Features, and Transition Features

The proposed two attributes can be used to characterize basic geometric entities.

Thus, the geometric entities with similar attributes can be associated to form a higher-

level geometric form. According to the definition of the two attributes, positive

features, negative features, and transition features can be extracted easily from the

geometric entities using the two attributes and they can be applied to extract high

level feature information.

As discussed earlier, surfaces can be classified into six categories. Since edges and

vertices can also be characterized by the two attributes, they can also be classified into

six categories. Thus, positive forms can be identified by connecting geometric entities

with convex properties, negative forms can be identified by connecting geometric

entities with concave properties, and transition forms can be identified by connecting

geometric entities with concave and convex properties. In the following, "-" will be used to represent the attribute of convex, "+" will be used to represent concave, and "0" will be used to represent straight. Positive, negative, and transition forms, which comprise surfaces, edges, and vertices, are proposed as the neutral basis and can be identified as follows.

5.4.1 Identification and Characterization of Feature Loops

The feature loop is defined as connected geometric entities with common attributes.

Despite its name, a feature loop is not restricted to comprising only edges and vertices.

It may comprise adjacent vertices, edges, or surfaces. A single surface can be defined as a feature loop. A tree-searching technique, such as breadth-first search or depth-first search, can be applied to identify forms by searching the adjacent geometric elements. Two categories of feature loops can be identified as follows:

Positive Feature Loop: Starting with any surface, edge, or vertex with the (-,-) attribute, connect the adjacent surfaces, edges, or vertices with attributes including (-,-), (-,0), (0,-), or (0,0).

Negative Feature Loop: Starting with any surface, edge, or vertex with the (+,+) attribute, connect the adjacent surfaces, edges, or vertices with attributes including (+,+), (+,0), (0,+), or (0,0).

The adjacency relationship is defined based on the connectivities among the basic geometric elements. Three primitive geometric elements are defined: vertices, edges, and surfaces. The adjacency relationship is illustrated in Fig.38, in which s_i denotes a surface, e_i denotes an edge, and v_i denotes a vertex. Using Fig.38 as

Figure 38: The Definition of Adjacent Geometric Elements

an example, the adjacent geometric elements can be searched based on the following rules.

surface: s1 is adjacent to v1, v2, e1, e2, and e3; the search then continues from v1, v2, e1, e2, and e3.

edge: e1 is adjacent to v1, v2, s1, and s4; the search then continues from v1, v2, s1, and s4.

vertex: v1 is adjacent to all the s_i and e_i, where i = 1, 2, 3, 4; the search then continues from all the s_i and e_i, where i = 1, 2, 3, 4.

Thus, the positive and negative feature loops can be identified by applying the searching technique above. Note that vertices and edges with (0,0) attributes, as well as planar faces, can be included in both types of forms.

5.4.2 Identification and Characterization of Character Loops

After performing the identification and characterization of feature loops, character loops can then be identified. A character loop can be simply defined as the chain of adjacent surfaces, edges, or vertices, with opposite attributes and negative or zero total curvature, that surrounds a feature loop with positive total curvature. Since there are two types of features, two categories of character loops can be identified as follows:

Positive Character Loop: Starting with any surface, edge, or vertex with (+, —)

attribute which is adjacent to a positive feature, connect the adjacent surface,

edge, or vertex with attributes including (+, —), (+, 0), (+, +) to surround the

positive feature.

Negative Character Loop: Starting with any surface, edge, or vertex with (+, —)

attribute which is adjacent to a negative feature, connect the adjacent surface,

edge, or vertex with attributes including (—, +), (—, 0), (—, —) to surround the

negative feature.

Note that a character loop normally comprises a closed chain of basic geometric elements for simple isolated features. However, a character loop need not be closed, especially when features interact with each other. Character loops may overlap among features, i.e., a segment of one character loop may be shared by another character loop. Thus, it is more efficient to search for feature loops prior to searching for character loops. Since a feature may interact with a number of other features, a feature may have one or more character loops.

The form features mentioned so far in this research have positive total curvature only. Thus, the adjacent surfaces, edges, or vertices with negative total curvature are identified as character loops for the segmentation among features. However, it is possible that some applications may define form features having negative total curvature, such as saddle surfaces. The character loop, in this case, will have the positive total curvature property. This might create complexities in recognizing form features. However, the feature recognition procedures can be performed in a manner similar to that for form features with positive total curvature. Nevertheless, we consider only form features with positive total curvature in this dissertation.

5.4.3 Construct Entity-Loop-Attribute Graph

After performing the identification and characterization of character lines, vertices, surfaces, feature loops, and character loops, a complete description of the adjacency relationships among these elements and their properties can be illustrated by a graph called an entity-loop-attribute graph. Fig.39 shows one part model and its corresponding entity-loop-attribute graph.

An entity-loop-attribute graph is an abstraction of the data structure which contains feature loops, character loops, vertices, character lines, surfaces, and their attributes in addition to the B-rep data structure. In an entity-loop-attribute graph, an elliptic element is used to represent a feature loop or a character loop and contains four fields. The two top fields indicate the label of the loop and the type of the loop, which could be a positive or negative feature or character loop. The loop type is

(a) A Part Model  (b) Entity-Loop-Attribute Graph of the Part

Figure 39: A Part Model and the Corresponding Entity-Loop-Attribute Graph

usually abbreviated as PFL, PCL, NFL, and NCL, where P is positive, N is negative,

F is feature, C is character, and L is loop. The center field indicates the pointers to the ID of the surfaces, edges, and vertices which belong to the loop. The type of each primitive topological element is included in the B-rep data structure and can be referenced from the element’s ID. The bottom field indicates the pointers to the ID of other adjacent loops which are enclosed by the loop. For a feature loop, it may include the pointer of the character loops, if any, enclosed by the feature loop. For a character loop, it usually contains the pointer of the enclosed feature loop.
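A minimal data-structure sketch of one node of the entity-loop-attribute graph is given below; the field names are illustrative and simply mirror the four fields of the elliptic element described above, with an extra parent index standing in for the arrows of the graph.

    #include <string>
    #include <vector>

    // Positive/negative feature/character loop, abbreviated as in the text.
    enum class LoopType { PFL, PCL, NFL, NCL };

    struct LoopNode {
        std::string label;                 // e.g. "L2"              (top-left field)
        LoopType    type;                  // e.g. LoopType::NFL     (top-right field)
        std::vector<int> entity_ids;       // surfaces, edges, vertices in the loop (center field)
        std::vector<int> adjacent_loops;   // enclosed adjacent loops               (bottom field)
        int parent;                        // parent loop in the tree, -1 for the root (base feature)
    };

    // A part is then an array of LoopNode plus the underlying B-rep; feature
    // extraction walks this graph from the root (base feature) toward the leaves.
    using EntityLoopAttributeGraph = std::vector<LoopNode>;

    int main() {
        EntityLoopAttributeGraph g;
        g.push_back({"L0", LoopType::PFL, {1, 2, 3}, {1}, -1});   // base feature loop
        g.push_back({"L1", LoopType::NCL, {10, 11}, {},   0});    // a character loop under it
        return 0;
    }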

An arrow, which is only used between two loops, represents the parent-child relationship between the two loops. When one of the two loops is completely enclosed by the other, the parent-child relationship can be determined from the fact that the parent loop is closer to the root node than the child loop, while the child loop is closer to a terminal leaf node than the parent loop. In other words, the parent-child relationship is created based on the sequence relative to the root node, i.e., the base feature. In Fig.39(b), the loop to which the arrow points is the parent loop and the other one is the child loop.

From Fig.39, a child feature is normally composed of two loops: one character loop and one feature loop. The connections among loops in the entity-loop-attribute graph are similar to a tree structure. Using Fig.39 as an example, the base feature is the root node in the tree structure and it contains several leaf nodes which correspond to the features added on the base feature. The process of adding a feature onto another feature is equivalent to adding new leaves to the existing nodes. Each terminal node may represent a feature or part of a feature. Therefore, the recognition of basic features can be performed by tracing the tree structure.

5.4.4 Recognition of Basic Features

Basic features are represented in terms of surfaces, edges, vertices, feature loops, and character loops. In this research, we propose four categories of basic features, which are defined as follows:

Base Feature: A base feature is defined as a basic form or volume to start with in design or manufacturing. It normally consists of a simple geometry, such as a rectangular block, a cylinder, or an L-bracket. In the feature-based representation of a workpiece model for machining, a base feature may correspond to the raw material. A base feature usually corresponds to the root node of the tree data structure in the entity-loop-attribute graph. It can be determined as the feature loop connected to the largest number of character loops, or by some other criterion depending on the application.

Positive Feature: A positive feature is defined as a form with convex property

added on a part where the intersecting boundaries between the feature and the

part have negative total curvature property. The convex form is defined as the

adjacent vertices, edges, and surfaces having convex-and-convex or convex-and-

straight attributes. A positive feature containing planar faces is allowed. In the

entity-loop-attribute graph, a positive feature usually corresponds to a positive

feature loop. 144

Negative Feature: Contrary to the positive feature, a negative feature is defined

as a concave form by removing a form with convex property from a part, where

the intersecting boundaries between the feature and the part have negative total

curvature property. Similar to positive features, planar faces are also allowed in

negative features. In the entity-loop-attribute graph, a negative feature usually

corresponds to a negative feature loop.

Transition Feature: A transition feature is defined as a form representing segmen-

tation among features. It can comprise a connected portion of vertices, edges,

or surfaces with negative total curvature. Thus, transition features will contain

both concave and convex properties. Note that edges or surfaces with zero total

curvature may be identified as transition features when they are adjacent to a

feature loop where the non-straight attribute is different from the attribute of

the feature loop. In the entity-loop-attribute graph, a transition feature usually

corresponds to a positive or negative character loop.

Based on the definitions of the proposed basic features, positive and negative features can be distinguished by the positive or negative feature loop of the feature. In order to recognize features for manufacturing processes, positive and negative features can be further manipulated and classified into sub-categories according to the distinct characteristics of their edges, loops, and surface properties. These will be discussed in the next chapter.

All the above feature categories can be recognized from the entity-loop-attribute graph when each feature is singly isolated, without interacting with other features. For example, a simple feature has a single positive or negative feature loop which is also a terminal node in the graph. A translational sweeping feature normally consists of a feature loop and a character loop, where the feature loop can be identified as a translational sweep. A bridge or through depression feature consists of two character loops which connect to a feature loop. However, when a feature interacts with other features, the feature properties may be altered and character loops may become broken or split. This usually results from two or more features interacting with each other, so that the character loops of these features interact. Therefore, some segments of the character loops may become nonexistent or be merged together. In order to recognize features in such situations, additional criteria have to be identified, and these are investigated in the next chapter.
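The loop-counting part of these rules can be summarized in a small decision routine; the sketch below only distinguishes the coarse cases discussed in this paragraph and leaves the finer sub-types, as well as interacting features, to the criteria of the next chapter.

    #include <iostream>
    #include <string>

    // Decision sketch for isolated features based on the number of character
    // loops attached to a feature loop; the labels are illustrative only.
    std::string classify_isolated_feature(bool positive, int num_character_loops) {
        if (num_character_loops >= 2)
            return positive ? "bridge feature" : "through depression feature";
        if (num_character_loops == 1)
            return positive ? "positive feature (simple or sweeping)"
                            : "negative feature (simple or sweeping)";
        return "base feature candidate";   // no surrounding character loop
    }

    int main() {
        std::cout << classify_isolated_feature(false, 2) << "\n";  // through depression feature
        std::cout << classify_isolated_feature(true, 1)  << "\n";  // positive feature
    }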

5.5 Implementation and Results

A block diagram in Fig.40 shows the proposed approach for recognizing basic features from a B-rep model. Based on the proposed approaches, a software program has been implemented using the ACIS geometric modeling kernel on an IBM RISC-6000 workstation. ACIS is a new-generation, three-dimensional, boundary-representation geometric modeler from Spatial Technology, Inc. The modeling kernel is written in C++ and all the topologic and geometric entities are defined by a base class or derivatives of the base class. For example, the topology of the model data structure is based on the C++ base class ENTITY. Classes derived from ENTITY, including BODY, LUMP, SHELL, FACE, LOOP, COEDGE, EDGE, and VERTEX, represent the topology of a solid object. User-defined attributes are also defined as a subclass of an

B-Rep Model of an Object from a CAD System → Identification of Character Lines → Characterization of Surfaces, Edges, and Vertices → Identification and Characterization of Feature Loops and Character Loops → Construct Entity-Loop-Attribute Graph → Recognition of Basic Features → Basic Features

Figure 40: Block Diagram of the Feature Recognition Module

ENTITY-derived class called ATTRIB. The proposed two attributes, feature loops, and character loops can be defined in the ATTRIB class.
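For readers not working inside ACIS, the same bookkeeping can be emulated generically; the following sketch simply keys attribute records by an entity identifier and is not the ATTRIB-derived class of the actual implementation.

    #include <unordered_map>

    // Generic stand-in for the per-entity attribute record described above; the
    // field names are illustrative choices, not the implemented class layout.
    struct FeatureAttrib {
        char attr1;           // '-' convex, '+' concave, '0' straight
        char attr2;
        int  feature_loop;    // index of the owning feature loop, -1 if none
        int  character_loop;  // index of the owning character loop, -1 if none
    };

    using AttribTable = std::unordered_map<long, FeatureAttrib>;  // entity id -> attributes

    int main() {
        AttribTable table;
        table[42] = {'-', '-', 0, -1};   // a convex-and-convex surface in feature loop 0
        return 0;
    }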

The developed program consists of five modules: identification of character lines; characterization of surfaces, edges, and vertices; identification and characterization of feature loops; identification and characterization of character loops; and recognition of basic features. Four examples are given below to demonstrate the proposed approach. Note that in the shaded images shown after feature recognition in the following examples, each vertex is replaced by a sphere and each edge is replaced by a solid formed by sweeping a circle along the edge, for the purpose of illustration.

5.5.1 Example 1:

Fig.41(a) shows six typical features created on a rectangular block. These features represent three distinct feature categories containing spherical, cylindrical, and planar surfaces. The total curvature for each of these three feature categories corresponds to only one nonzero term in the Gauss-Bonnet theorem (Eq.(5.2)). Fig.41(b) shows the identified three positive features, which are shaded light gray, the three negative features, which are shaded dark gray, and the transition features, which are shaded black. Fig.41(c) shows six features similar to the ones in Fig.41(a); however, the edges between the six features and the base feature are replaced by blending surfaces. The output from the developed program shows that the three positive features and three negative features can still be recognized, as shown in Fig.41(d). Comparing Fig.41(b) with Fig.41(d), it is interesting to compare the attributes of the edges and vertices in Fig.41(b)

(a) Original Model (b) Model After Feature Recognition

(c) Original Model (d) Model After Feature Recognition

Figure 41: Example 1: A Block with 6 Features

to those of the corresponding blending surfaces in Fig.41(d). This demonstrates that even though the geometric forms are different, the two attributes can be used to identify the feature information. The edges around the blending surfaces in Fig.41(d) have zero total curvature and are included in the transition features in the feature recognition process. The CPU time for processing the part models in Fig.41(a) and (c) is 0.5 and 2.0 seconds, respectively.

5.5.2 Example 2:

Fig.42(a), (c), and (e) show a typical rectangular slot, a rectangular slot with rounds and fillets created on a block, and a cylindrical slot created on a sphere, respectively. By carefully studying the slot features in Fig.42(a), (c), and (e), one should notice that the negative feature loops identified in the three cases have zero total curvature, because the geometric forms involved, including edges and cylindrical surfaces, have zero total curvature. As a result, the character loops may have different combinations of total curvature depending on the form on which the slot feature is created. In Fig.42(e), the cylindrical slot removes part of a spherical surface which has positive total curvature. Since adding the slot feature does not affect the Euler characteristic, the character loop, in this case, will have positive total curvature to offset the loss of total curvature from the cutout spherical surface. In Fig.42(a) and (c), the rectangular slot removes parts of planar faces and straight edges which have zero total curvature. The character loop, in this case, will have both positive and negative total curvatures, and their sum will be zero, too. Thus, the four vertices on the top of the two side faces of the slot have convex-and-convex attributes and positive total curvature to

(a) Original Model (b) Model After Feature Recognition

(c) Original Model (d) Model After Feature Recognition

(e) Original Model (f) Model After Feature Recognition

Figure 42: Example 2: Rectangular and Cylindrical Slots

offset the four vertices on the bottom face of the slot, which have concave-and-convex attributes. As a result, the character loops may contain geometric entities which may also be included in positive feature loops. The decision may vary depending on the application and the software implementation. In Fig.42(b) and (d), all the boundary edges, vertices, and surfaces of the slot with positive total curvature are included in the negative character loop. On the other hand, the edges surrounding the cylindrical slot are included in the positive feature loop of the sphere in Fig.42(f) and, thus, there is no character loop between the two feature loops in this case. The CPU time for processing the part models in Fig.42(a), (c), and (e) is 0.2, 1.1, and 0.1 seconds, respectively.

5.5.3 Example 3:

Fig.43(a) shows a half torus surface on a block with fillets at two intersections. This example is used to illustrate how to recognize features from a free form surface. The torus surface in the original model as shown in Fig.43(a) has a continuous surface function with edges around the surface patch. However, the Gaussian curvature is positive for the outside portion of the torus surface and is negative for the inside

portion of the torus surface. Thus, the character line, which is a circle on the top of

the torus surface, is identified and divides the original torus surface into two surface

patches. This process is automatically performed by the software program and the

output from the program shows that one positive feature and one negative feature

are identified in addition to the base block feature in Fig.43(b). The CPU time for

processing this part model is 1.1 seconds.

(a) Original Model

(b) Model After Feature Recognition

Figure 43: Example 3: Torus Surface On A Block 153

5.5.4 Example 4:

Fig.44(a) and (b) show the top and bottom views of an example part from CAM-I. After applying the software program to the part, four positive features and five negative features are identified, as shown in Fig.44(c) and (d). The positive features include a base block, a bridge feature, a cylindrical feature attached to the bridge feature, and a protrusion feature on the base block. The negative features include one through hole, two slots which open beside the through hole, one round pocket, and a slot at the back side of the part. This example demonstrates that the proposed approach can recognize features from a practical mechanical part with moderate feature interactions. The CPU time for processing this part model is 1.1 seconds.

5.5.5 Discussion

From the previous four examples and the example in Fig.39, one can see that the proposed two attributes can be used to categorize various types of geometric entities in a B-rep model. In fact, the two attributes can be applied to any geometric entity as long as the Gauss-Bonnet theorem can be applied. This indicates that the two attributes can provide abstract geometric information for every entity in most solid models used in practical design and manufacturing. As mentioned in the introduction, there are two diverse approaches to automatic feature recognition based on the F-E-V graph. The first approach, which characterizes edges and vertices, can recognize features from the models in Fig.41(a) and Fig.42(a), but it cannot

(a) Original Model (top view)  (b) Original Model (bottom view)

(c) Model After Feature Recognition (top view)  (d) Model After Feature Recognition (bottom view)

Figure 44: Example 4: CAM-I Example Part

process the features in Fig.41(c), Fig.42(c) and (e), and Fig.43. On the other hand, the second approach, which characterizes surfaces, can recognize features from the models in

Fig.41(c), Fig.42(c) and (e), and Fig.43 (the character line needs to be identified in advance), but it cannot process the features in Fig.41(a) and Fig.42(a). Therefore, the proposed approach combines the merits of the two diverse approaches into one generic representation and thus has the capability to recognize features from a practical part model.

5.6 Summary

A systematic approach for feature recognition from B-rep models is presented. By using the concept of homeomorphisms and the Gauss-Bonnet theorem, surfaces, edges, and vertices can be characterized by two attributes. Recognition of basic features

can be performed based on the two attributes and the adjacent information among

surfaces, edges, and vertices. This approach can be used as a neutral basis for feature-

based representations in various applications. Based on the knowledge of a specific

application, rules can be developed to perform feature extraction for the specific

application. With moderate modifications, many previously developed methods that extract features from the F-E-V graph or similar representations can also be performed based on the neutral basis, which enhances their capability to recognize part models having complex geometry rather than only prismatic parts. The neutral basis for feature-based

representation can serve as links among design, process planning, manufacturing,

assembly, and inspection.

CHAPTER VI

Recognition of Detailed Features and Feature Interactions

In the previous chapter, basic features including positive, negative, and transition features were recognized. The basic feature categories recognized from the entity-loop-attribute graph can be further classified using additional criteria. A feature recognizer is proposed to transform the geometric data of a design into high-level semantic information for various applications. Among many applications, design for manufacturability for machining, die casting, injection molding, and sheet metal forming will be used to illustrate the proposed concept. In the beginning of this chapter, typical feature categories in various manufacturing processes are presented. Based on the distinct topologic and geometric properties of these manufacturing features, detailed features can be classified and recognized from the basic features recognized in the previous chapter. The recognition of detailed features can be performed based on the topologic patterns and characterizations of the involved surfaces, edges, and vertices. This procedure usually needs a database which stores the patterns of all the recognizable features. A searching process then compares each identified basic feature with the patterns in the database to find a match. For example, a simple feature can be classified, based on its surface type, as a sphere, ellipsoid, cone, or B-spline surface. In this chapter, a number of specific types of features are recognized by using such a matching process. However, when a feature interacts with other features, the feature properties may be altered and character loops may become broken or split. This usually results from two or more features interacting with each other, so that the character loops of these features interact. Therefore, some segments of the character loops may become nonexistent or be merged together. In order to recognize features in such situations, additional criteria have to be identified, and they are investigated in this chapter.
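The matching step can be pictured with the following C++ sketch, in which each pattern is reduced to a few coarse properties (sign of the feature, number of attached character loops, and dominant surface type). The descriptor fields and the example database entries are assumptions made for illustration, not the schema used by the implemented recognizer.

    #include <iostream>
    #include <string>
    #include <vector>

    struct FeatureDescriptor {
        bool        positive;          // positive or negative basic feature
        int         character_loops;   // number of attached character loops
        std::string surface_type;      // "plane", "cylinder", "sphere", "bspline", ...
    };

    struct Pattern {
        std::string       name;        // e.g. "through hole", "rib"
        FeatureDescriptor match;
    };

    // Compare an identified basic feature against the stored patterns and return
    // the first match, as a stand-in for the database search described above.
    std::string recognize(const FeatureDescriptor& f, const std::vector<Pattern>& db) {
        for (const Pattern& p : db)
            if (p.match.positive == f.positive &&
                p.match.character_loops == f.character_loops &&
                p.match.surface_type == f.surface_type)
                return p.name;
        return "unclassified basic feature";
    }

    int main() {
        std::vector<Pattern> db = {
            {"through hole", {false, 2, "cylinder"}},
            {"blind hole",   {false, 1, "cylinder"}},
            {"rib",          {true,  1, "plane"}} };
        std::cout << recognize({false, 2, "cylinder"}, db) << "\n";  // through hole
    }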

6.1 Features in Manufacturing Processes

In this section, the features frequently used in manufacturing processes including machining, die casting, injection molding, and sheet metal forming are presented.

The essential features that characterize these manufacturing processes and their links to the basic features are investigated as follows.

6.1.1 Machining

Machining is a process that removes material. A feature in this domain normally corresponds to one or a number of specific machining operations. The geometric shape of a feature may determine the needed cutting tools and/or machines. Typical machining features are analyzed as follows [Joshi and Chang, 1988]:

• Slot: A slot feature is normally created by a milling process. As shown in Fig.45(a), specific types of milling cutters are used to machine specific types of slot features with distinct cross-section curves. Thus, a rectangular, triangular,

Rectangular Slot, Triangular Slot, Circular Slot, T Slot, Dovetail Slot
(a) Various Types of Slot Features

Blind Hole, Through Hole, Counterbored Hole, Countersunk Hole
(b) Cross Sections of Various Types of Hole Features

Blind Slot, Through Slot, Rectangular Pocket, Step, Blind Step
(c) Through and Blind Slot, Step Features

Figure 45: Typical Features for Machining Process

or circular slot can be identified from a negative feature with a specific geometric shape. Note that fillets or rounds can be added to the slots and our approach can still recognize the needed information. Moreover, in Fig.45(a), the T slot and dovetail slot can be identified as two recursive negative features with specific geometric shapes.

• Hole: Hole features are normally classified into two categories, through holes and blind holes, as shown in Fig.45(b). Both categories belong to negative features; however, they can be differentiated by the number of character loops. Each

category can be further classified based on the shape of the cross-section curve.

A circular hole with constant cross-section curve is normally created by a drilling

process. Countersunk and counterbored holes, which have varying cross-section

curves, are created by the respective countersinking and counterboring pro­

cess following the drilling process. They can be recognized by two recursively

interacting negative features with specific geometric shapes. For non-circular

cross-section curves, depending on the shapes, milling is probably the most pop­

ular process and they can be recognized as negative features or the interacting

features.

• Step: A step feature is similar to a slot except it is created along the boundaries

of part surfaces as shown in Fig.45(c). Thus, one side face of the slot does not

exist. The recognition process is similar to that of a slot.

• Pocket: A pocket feature is equivalent to a blind hole except that a pocket normally has a non-circular cross-section curve. The feature recognition process is similar to that of a blind hole.

6.1.2 Die Casting & Injection Molding

Die casting and injection molding use temperature-dependent changes in material properties to obtain the final shapes of discrete parts, to finish or near-finish dimensions, through the use of dies or molds. Features defined in this domain are related to manufacturing concerns, such as filling, solidification, and ejectability, for manufacturing assessment. Typical features for these two processes are shown in Fig.46 and are discussed as follows [Peng, 1990]:

• Rib: The primary function of rib features is to provide functional and structural

services. They can be recognized from positive features which have rectangular

cross section with or without draft.

• Flange: The functionality of flange features is similar to that of rib features, except that a flange feature is created at the boundary of the part surfaces. They can be recognized from positive features. However, the recognition is more complex than that of rib features due to the overlapping edges and loops caused by feature interaction.

• Boss: Boss features are used to support metal threaded inserts or serve as mounting or fastening points during assembly. They can be recognized from positive features which have a cylindrical surface with or without draft. A hole

Draft, Boss, Flange, Radii, Gusset, Hole
Figure 46: Typical Features for Die Casting and Injection Molding

Straight Flange, Shrink Flange, Stretch Flange, Hole Flange, Bend, Notch, Bead, V Bead, Flat V Bead, Round Bead
Figure 47: Typical Features for Sheet Metal Forming

feature may be created on a boss which leads to recursively interacting features.

In some cases, the boss feature may overlap with a rib or flange feature and

lead to relative-position-interacting features.

• Gusset: Gusset features are very efficient reinforcing elements which provide added stiffness while using only a small amount of material. Thus, they are nothing more than very short reinforcing ribs. On the other hand, their shapes are normally variations of wedges. Therefore, gusset features can be recognized from positive features. However, a gusset feature normally has a relative-position interaction with another positive feature.

• Radii: In die casting and injection molding, sharp corners result in difficulties in filling the cavity and in high brittleness. Radii features should always be specified on the corners of parts. They are equivalent to rounds or fillets, depending on whether the replaced edge is convex or concave. Thus, radii features can be found in both feature loops and character loops. The recognition of radii features may require specific surface information and spatial relationships among surfaces.

• Draft: For any surface perpendicular to the parting line, it is desirable to have a sufficient draft angle to permit easy ejection from a die or a mold. The recognition of the draft angle requires surface orientations and additional information such as the parting line or die-open direction.

Some other features, such as holes, can be processed similarly to those in the machining process.

6.1.3 Sheet Metal Forming

Sheet metal forming comprises operations that impart a permanent shape change to the sheet by plastic deformation. Forming operations can be classified into four basic categories depending on the nature of the deformation involved: straight-line bending, curved-line bending, stretching, and drawing. Since each type of operation produces a distinct shape of the sheet, the features corresponding to these operations are analyzed as follows (Fig.47) [Mantripragada, 1994]:

• Bend: Bending is the simplest operation of sheet metal forming. It simply produces a curved segment from the plain sheet. Thus, bend features can be recognized from the two attributes of the proposed neutral basis. More complex bending operations, such as multiple-curvature bending and contour forming, can be analyzed similarly depending on the geometric shapes.

• Flange: Flanging operations are essentially bending contoured parts with com­

pound curves. The basic differences between bending and flanging operations

are that during flanging, the bent down metal is shorter compared to the overall

part size, and the flange features have different functions from the bend features.

Depending on the geometric shapes, they can be further classified as straight,

shrink, stretch, and hole flanges.

• Hole, , Notch: These features can all be recognized from negative

features. The differences are in the location and the geometric shape of the

negative features. 164

• Deep Drawn: Deep-drawn features normally refer to deep, cup-like products. They can be recognized from positive or negative features depending on which side of the sheet is referenced.

• Bead: Bead features represent localized deflection of a flat sheet for adding

stiffness to thin sheets. The cross section of bead features can take different

forms and they are produced by different types of dies and molds. They can

be recognized from positive or negative features with specific geometric shapes

and sizes.

6.2 Classification and Recognition of Detailed Features from Basic Features

From the above analysis, most of the features in the manufacturing processes can be further classified from the proposed neutral basis. Based on the definitions of the proposed basic features, positive and negative features can be distinguished by the positive or negative feature loop of the feature. In order to recognize features in a

more detailed manner, positive and negative features need to be further classified. For

example, using the classification criteria based on the combinations of loops similar

to the approach reported by Gadh and Prinz [1992], positive features can be classified

into two subcategories: simple feature, consisting of one positive feature loop and

one character loop, and bridge feature, consisting of one positive feature loop and

two character loops; negative features can be classified into two subcategories: simple

feature, consisting of one negative feature loop and one character loop, and bridge

feature, consisting of one negative feature loop and two character loops. However, this classification is still rather vague. Thus, positive and negative features need to be further classified into sub-categories according to the distinct properties of their geometric and topologic entities. The detailed features of a part can then be recognized according to the defined sub-categories. Positive features can be further classified into the following categories.

• Simple Feature: The feature loop contains a single surface with convex-and-

convex or convex-and-straight attributes. The hemisphere in Fig.39 is one typical

example.

• Sweeping Feature: When a volume is generated from sweeping a 2D section

along a trajectory, a positive feature can be obtained if the volume is added

onto another feature. Normally, a positive feature loop along with an adjacent

character loop can be obtained. If the size and shape of the 2D section is fixed

during the sweeping process, three sub-categories of sweeping features can be

classified based on the type of the trajectory as follows.

(1) Translational Sweeping Feature: A translational sweeping feature can

be obtained by sweeping a 2D section along a direction. Thus, the surfaces

and edges swept by the 2D section have at least one straight attribute. Also

the direction of the edges and the principal direction with zero curvature

on the surface should be identical to the sweeping direction. These can

be used as the criteria to identify translational sweeping features from the

feature loop. Typical examples include ribs, straight flanges in Fig.46 and

straight flanges and straight beads in Fig.47 (a sketch of such a test is given after the sweeping-feature categories below).

(2) Rotational Sweeping Feature: A rotational sweeping feature can be ob­

tained by sweeping a 2D section about an axis. Thus, one of the principal

directions for each point on the sweeping surfaces is perpendicular to the

axis. The other principal direction will be along the tangent direction of

the 2D section. These can be used as the criteria to identify rotational

sweeping features from the feature loop. Typical examples include bosses

in Fig.46 and shrink flanges, round beads in Fig.47.

(3) General Sweeping Feature: A general sweeping feature can be obtained

by sweeping a 2D section along a general trajectory. The recognition of this

type of features may be difficult unless some information of the trajectory

or 2D section is given in advance.

If the 2D section is varying its size or shape during the sweeping process, this

is usually called blending. The recognition of this type of features may re­

quire some information on the trajectory and the variation of the 2D section in

advance. Thus, it is normally difficult to recognize this type of features.
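As mentioned in item (1), a translational sweep can be detected from the curvature data of the feature-loop surfaces. The following sketch tests sampled surface points for a vanishing principal curvature whose principal direction is parallel to a common sweep direction; the sample structure and tolerance are assumptions made for the example, and the corresponding test on the edge directions is omitted for brevity.

    #include <array>
    #include <cmath>
    #include <iostream>
    #include <vector>

    using Vec3 = std::array<double, 3>;

    double dot(const Vec3& a, const Vec3& b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    // One sampled point of a candidate swept surface: its two principal
    // curvatures and the unit principal direction associated with k1.  These
    // samples are assumed to come from the curvature analysis of the feature loop.
    struct SurfaceSample {
        double k1, k2;
        Vec3   dir1;
    };

    // Translational-sweep test: every sample must have k1 = 0 and dir1 parallel
    // to the common sweep direction.
    bool is_translational_sweep(const std::vector<SurfaceSample>& samples,
                                const Vec3& sweep_dir, double eps = 1e-6) {
        for (const SurfaceSample& s : samples) {
            if (std::fabs(s.k1) > eps) return false;                    // k1 must vanish
            if (std::fabs(std::fabs(dot(s.dir1, sweep_dir)) - 1.0) > eps)
                return false;                                           // dir1 parallel to sweep
        }
        return !samples.empty();
    }

    int main() {
        // A cylindrical rib surface swept along z: k1 = 0 with direction (0,0,1).
        std::vector<SurfaceSample> cyl = { {0.0, 1.0, {0,0,1}}, {0.0, 1.0, {0,0,1}} };
        std::cout << is_translational_sweep(cyl, {0,0,1}) << "\n";      // 1
    }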

• Bridge: A bridge feature normally represents the connection feature between

two features or isolated parts. Thus, the bridge feature comprises one feature

loop adjacent to two character loops which connect to two different features or

base features.

• Polyhedra Feature: A polyhedra feature consists of planar polygonal faces and edges with the convex-and-straight attribute. A part model generated from the stereolithography format is one typical example. The block and wedge in Fig.39 and the gussets in Fig.46 are other examples of polyhedra features.

• Transition Positive Feature: Rounds and chamfers are typical examples of

transition positive features. For a chamfer, it can be characterized by a pair of

parallel edges with convex-and-straight attributes enclosing a narrow stripe of a

planar face. For a round created along a straight edge, it is normally represented

by a cylindrical surface.

Similar to the above classifications, negative features can be further classified into the following categories.

• Simple Feature: The feature loop contains a single surface with concave-and-concave or concave-and-straight attributes. The trough in Fig.39 and the circu-

lar slots in Fig.45(a) are typical examples of the simple feature.

• Sweeping F eature: When a volume is generated from sweeping a 2D section

along a trajectory, a negative feature can be obtained if the volume is removed

from a feature. Normally, a negative feature loop along with an adjacent char­

acter loop can be obtained. If the size and shape of the 2D section is fixed

during the sweeping process, three sub-categories of sweeping features can be

classified based on the type of the trajectory similar to the positive sweeping

feature. Thus, translational, rotational, and general sweeping negative features

can also be recognized by applying the similar process mentioned in the positive 168

sweeping feature. Typical examples include slots and steps in Fig.45(a) and (c),

and stretch flanges in Fig.46.

• Through Depression Feature: A through depression feature normally rep­

resents a negative feature with two opening ends on a part. Thus, a through

depression feature can be characterized by a negative feature loop adjacent to

two character loops which are on the same part in the entity-loop-attribute

graph. The through hole in Fig.45(b) is one typical example.

• Polyhedra Feature: Contrary to a positive polyhedra feature, a negative

polyhedra feature consists of planar polygonal faces and edges with concave-

and-straight attribute. The blind step and blind slot in Fig.45(c) are typical

examples.

• Transition Negative Feature: Fillets and chamfers are typical examples of

transition negative features. For a chamfer, it can be characterized by a pair of

parallel edges with concave-and-straight attributes enclosing a narrow stripe of

a planar face. For a fillet created along a straight edge, it is normally represented by

an inside cylindrical surface.

All the above feature categories can be easily recognized from the proposed neutral basis and the entity-loop-attribute graph when each feature is singly isolated, without interacting with other features. For example, a simple feature has a single positive or negative feature loop which is also a terminal node in the graph. A translational sweeping feature normally consists of a feature loop and a character loop, where the feature loop can be identified as a translational sweep. A bridge or through depression feature consists of one feature loop which connects to two character loops.

6.3 Feature Interactions

In practice, a part might not be simply formed from isolated positive features, neg­ ative features and a base feature. Features may interact with each other by sharing common vertices, edges, surfaces, feature loops, or character loops. When a feature is interacting with other features, the feature properties may be altered and char­ acter loops may be broken or split. This usually results from two or more features interacting with each other and, thus, the character loops of these features interact.

Therefore, some segments of the character loops may not exist or be merged together.

These interactions make feature recognition much more complex. Based on different concerns, feature interactions may have different interpretations and require different reasoning strategies. There are several different views concerning feature interactions, including:

• Geometric Interaction: The geometric forms of two or more features interact with each other by sharing some common primitive topological elements.

• Process Interaction: Due to process constraints, features may need to be split or merged for processing convenience.

• Precedent Interaction: The forms of the features may be affected if the precedent relationships among them are changed.

• Application Interaction: A part model may have different feature representations in different applications.

Most of these interactions may lead to different results in different applications, and it is necessary to write specific rules for each application. However, all of these interactions are more or less based on geometric interactions among features. Thus, in order to make the proposed approaches useful in practice, a thorough classification of, and solutions to, geometric interactions need to be investigated.

From the point of view of geometric relations among features, feature interactions can be classified into two categories: recursive interaction and relative-position interaction. A recursive interaction results from the fact that one feature, which can be referred to as a child feature, is located on another feature, which can be referred to as a parent feature; it is therefore also called a parent-child relationship between two features. A relative-position interaction results from the fact that at least two features interact with each other because of their relative positions.
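Continuing the hypothetical ELAGraph sketch above, the fragment below illustrates one way the two categories might be told apart: if the character loop of one feature lies entirely on entities owned by another feature, the pair is treated as parent and child (recursive); if the features merely touch one another's loops, the interaction is treated as relative-position. The containment test and the `region_of` mapping are assumptions for illustration only, not the reasoning procedure developed in this work.

```python
# A minimal sketch (hypothetical region_of mapping) distinguishing recursive
# (parent-child) interactions from relative-position interactions.
def interaction_type(graph, feat_a, feat_b, region_of):
    """region_of: feature-loop name -> set of entity labels (faces, edges,
    vertices) making up that feature's surface region."""
    def character_entities(feat):
        loops = adjacent_character_loops(graph, feat)
        return set().union(*(graph.loops[c].entities for c in loops)) if loops else set()

    char_a, char_b = character_entities(feat_a), character_entities(feat_b)
    if char_a and char_a <= region_of[feat_b]:
        return f"recursive: {feat_a} is a child of {feat_b}"
    if char_b and char_b <= region_of[feat_a]:
        return f"recursive: {feat_b} is a child of {feat_a}"
    shared = (char_a & char_b) | (graph.loops[feat_a].entities & graph.loops[feat_b].entities)
    return "relative-position interaction" if shared else "no interaction"
```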

6.3.1 Recursive Interactions

Based on the contact conditions among feature loops and character loops of the interacting features, the recursive interactions can be classified into the following sub-categories.

• Simple Recursive Interaction: The character loop between the parent and child features is a closed loop and is completely enclosed by the feature loop of the parent feature. This interaction leads to a series of loops in the entity-loop-attribute graph, and each feature can be recognized by a pair of feature and character loops without difficulty. Typical examples are shown in Fig.39 and Fig.43. Fig.48 shows another example, in which a pocket is located on a rib and the rib is placed on a base feature.

Figure 48: Simple Recursive Features (edges are numbered 1, 2, 3, ...; vertices are labeled a, b, c, ...; faces are labeled F1, F2, F3, ...)

• Features Sharing Part of Feature Loops and Character Loops: Two features with a recursive interaction may share some common segments of feature loops and character loops. As a result, part of the character loop between the two interacting features may change its property and merge into the feature loops of the two features, and the character loop will become broken. Assuming the parent feature to be a positive feature, the recursive interaction can be classified into the following cases based on the type of intersecting loops and features.

1. Positive Child Feature: When the child feature is a positive feature, two cases can be observed as follows.

(a) Two positive features recursively interact with each other by sharing a segment of an edge, so that the character loop between the two features becomes an open loop. The original entity-loop-attribute graph then contains a compound positive feature, that is, a positive feature with a broken character loop inside. Depending on the topological elements between the open ends of the character loop, three cases can be observed, referring to Fig.49. Note that the dashed loop in the entity-loop-attribute graph represents an open character loop. By tracing the character loop L3 in Fig.49(a), edge #10 can bridge the broken character loop between vertices i and j. A rule can be set up to convert edge #10 into the character loop and divide the compound feature into two recursive positive features when this type of entity-loop-attribute graph is identified. In Fig.49(b) and (c), the broken character loop terminates at vertices i and j, which are on the same surface. This provides a hint that if a "pseudo" edge or a zero-Gaussian-curvature character line can be created between vertices i and j on surface F1, the compound positive feature can be divided into two recursive positive features.

Figure 49: Features Sharing Part of Loops: Open Character Loop. (a) Open character loop connected by an edge; (b) open character loop connected by a surface; (c) open character loop connected by a planar face; (d) ELA graph. (@ denotes an added character line.)

Even though criteria can be set up to separate the interacting features, note that it may sometimes be more appropriate to group them together as a compound feature with a transition form or negative form, depending on the application. (A code sketch of this character-loop bridging rule is given at the end of this subsection.)

(b) The character loop of the child feature overlaps part of the character loop of the parent feature. Since character loops are identified based on the entities adjacent to the feature loop, the common edges 19, 20, and 21 in Fig.50 can be shared by both character loops. Note that there are three open character loops in Fig.50; they are connected by dashed lines, which represent the connectivity relationships among them. On the other hand, edge 18 can also be merged into planar face F7 of positive feature-1. This allows edge 18 to be included in the character loop between the two features. Therefore, both features can be identified and have a parent-child relationship.

2. Negative Child Feature: When the character loop of a negative child feature is enclosed by a positive parent feature, a closed character loop can be identified. Thus, it can be identified as a simple recursive feature without any problem. However, when the character loop of the negative child feature overlaps the character loop of the positive parent feature, three cases can be observed, as shown in Fig.51, Fig.52, and Fig.53. In fact, it is not difficult to find similarities by comparing these three examples to the cases in Fig.49 and Fig.50.

Figure 50: Features Sharing Part of Loops: Overlapping Character Loops

Figure 51: Open Character Loop Connected by an Edge

Figure 52: Open Character Loop Connected by a Planar Face

Figure 53: Open Character Loop Connected by an Edge

In Fig.51 and Fig.52, the character loop between the two features is an open loop. Thus, in Fig.51 the edge connecting the open ends of the character loop can be included in the character loop and the two features can be separated. In Fig.52, a "pseudo" edge or a character line with a straight-and-straight attribute can be added on the planar face between the two open ends of the character loop, similar to the case in Fig.49(c); the two recursively interacting features can then be identified individually. In Fig.53, the two character loops overlap by sharing a common edge 14, which is similar to the case of Fig.50. Thus, the two features can be identified and have a parent-child relationship.

The above discussions assume that the parent feature is a positive feature. On the other hand, when the parent feature is a negative feature, similar procedures of analysis can be performed and similar results can be obtained except that feature-loop properties are reversed.

• Features Sharing an Entire Surface or Loop: When a new feature is added onto an existing feature, it may occupy an entire surface region in the feature loop. When the two features are both positive features or both negative features, the character loops may become nonexistent if the intersecting loops also have corresponding concave or convex properties, as shown in Fig.54. The separation of this kind of feature interaction usually requires specific knowledge in advance. Otherwise, the child features merge into the parent features and, thus, they are identified as one single feature.

Figure 54: Two Positive Features Interact Without Character Loop

However, when two features have different types of feature loops, the character loop may also become nonexistent, as shown in Fig.55. Since the two feature loops can still be identified in Fig.55, the character loop between the two features can be assigned to the vertices and edges of the positive feature loop which are adjacent to the negative feature loop.

In Fig.56, the character loop of the child feature also overlaps the character loop of the parent feature, similar to the case in Fig.53. Thus, edge 13 is shared by both character loops.

• Features Crossing Boundaries Between Two Planar Faces: A new child feature added on a parent feature may cross the boundary edge or blending surface between two planar faces of the parent feature. When the two features are both positive features or both negative features, feature recognition can be performed without any problem. However, when one feature is a positive feature and the other is a negative feature, the character loop may not form a closed loop, because the character loop on the boundary between the two planar faces may alter its properties.

Figure 55: One Positive and One Negative Feature Interact Without Character Loop

Figure 56: One Positive and One Negative Feature Interact Without Character Loop

Most of the time, those regions will turn out to have the same type of attributes as the parent feature. One example is shown in Fig.57.

Figure 57: Feature Crossing the Boundary of Two Planar Faces

Two broken character loops are identified; they can be merged by adjoining the two vertices with convex-and-convex attributes. In other words, for feature recognition purposes, a rule can be set up to include the intersecting vertices in the character loop when this type of entity-loop-attribute graph is identified.
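As noted in the discussion of Fig.49, Fig.51, and Fig.52, several of these recursive cases reduce to the same repair: close a broken character loop either by absorbing an existing edge that joins its open ends or by creating a "pseudo" edge on the shared planar face between them. The sketch below, promised earlier in this subsection, phrases that repair as a single routine; the `brep` query methods are hypothetical stand-ins for operations on a real boundary representation, not an existing API.

```python
# A minimal sketch (hypothetical brep helpers) of the character-loop bridging
# rules discussed in this subsection.
def close_open_character_loop(graph, loop_name, brep):
    loop = graph.loops[loop_name]
    v1, v2 = brep.open_ends(loop_name)              # the two terminating vertices

    # Fig.49(a), Fig.51: an existing edge already joins the open ends,
    # so convert that edge into the character loop.
    bridge = brep.edge_between(v1, v2)
    if bridge is not None:
        loop.entities.add(bridge)
        return f"closed by existing edge {bridge}"

    # Fig.49(b)/(c), Fig.52: the open ends lie on one planar face, so insert a
    # "pseudo" edge (straight-and-straight attribute) between them.
    face = brep.common_planar_face(v1, v2)
    if face is not None:
        pseudo = brep.add_pseudo_edge(v1, v2, on_face=face,
                                      attribute="straight-and-straight")
        loop.entities.add(pseudo)
        return f"closed by pseudo edge {pseudo} on face {face}"

    # Fig.57-type cases adjoin the loops at shared convex-and-convex vertices
    # instead; that repair is not sketched here.
    return "loop left open: features kept as one compound feature"
```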

6.3.2 Relative-Position Interactions

Based on the contact conditions among feature loops and character loops of the interacting features, the relative-position interactions can be classified as follows.

• Features Crossing Character Loops: In some cases, two features do not have a parent-child relationship, but they interact with each other on another feature. Thus, at least three features interact with one another by coupling their character loops together. Based on the types of feature loops, three cases can be classified as follows.

Figure 58: Two Positive Features Sharing Character Loops

In Fig.58, when the two interacting features are positive features, the intersecting vertices of the two character loops have a concave-and-concave attribute instead of the usual concave-and-convex attribute for transition features.

This leads to two possible interpretations. First, the two concave-and-concave vertices can be included in the character loops; these two vertices, along with edges 29, 32, and 36 and vertices u and x, are shared by three character loops, and the two features are thus identified separately. Second, the concave-and-concave vertices are considered to be a negative feature; in this example, the resulting features may then include two positive features and two negative features on a parent feature (which is not shown in Fig.58).

When the two interacting features are negative features, as shown in Fig.59(a), two open character loops corresponding to the two negative feature loops are identified. Three open character loops can be identified, and all of them are connected to one another at the vertices of their open ends. The intersecting vertices of the three character loops have convex-and-convex attributes; on the other hand, the intersecting vertices of the feature loop have concave-and-convex attributes. This also leads to two possible interpretations: first, a compound negative feature can be obtained if the two negative features are combined into one; second, the two negative features are identified separately.

In Fig.59(b), two different types of features interact with each other. Two character loops can be identified; however, the edges between the two feature loops need to be shared by both character loops. Thus, the two features with relative-position interaction can then be identified separately.

Figure 59: Features Crossing Character Loops. (a) Two negative features sharing character loops; (b) one positive and one negative feature sharing character loops.

• Features Sharing Part of a Character Loop: When two features of the same type intersect each other by sharing some common segments of character loops, the two features can still be identified without any problem, as shown in Fig.60. However, if one feature is a positive feature and the other is a negative feature, the intersecting character loop may change its attribute or even become nonexistent. Fig.61 gives one example where the intersecting character loop becomes nonexistent because two planes merge into one plane. The identified entity-loop-attribute graph shows that the two feature loops share one character loop and one planar face. By tracing the boundary entities of the planar face, the top half of the boundary edges and vertices belongs to the positive form, the bottom half belongs to the negative form, and these two sets are separated by two vertices which belong to a transition feature. This provides a hint that a "pseudo" edge can be created between vertices i and j to divide the planar face into two halves. As a result, the positive feature and the negative feature can be separated in the entity-loop-attribute graph in Fig.61.

Features involved in interactions may alter some properties of the feature or character loops which are used to identify features. This not only makes feature recognition more complicated but also may yield several different interpretations of the recognized features. Based on the above analysis, feature interactions can be further analyzed in an enumerating fashion. The above analysis does not contain a complete treatment of all types of feature interactions; nevertheless, the purpose is to demonstrate that the proposed approaches can be applied to recognize basic features with feature interactions.

Figure 60: Two Positive Features Sharing an Edge

Figure 61: One Positive and One Negative Feature Sharing an Edge
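Several of the configurations above admit more than one valid reading (for example, the two readings of Fig.58 and of Fig.59(a)). Rather than forcing a single answer, a recognizer can collect every interpretation produced by an applicable rule, as in the minimal sketch below; the rule names shown in the comment are illustrative only and are not functions defined in this work.

```python
# A minimal sketch of collecting alternate interpretations of an interacting
# configuration instead of committing to one of them.
def enumerate_interpretations(graph, rules):
    """rules: callables mapping an ELA graph to a list of recognized features,
    or to None when the rule does not apply to the given configuration."""
    interpretations = []
    for rule in rules:
        result = rule(graph)
        if result is not None:
            interpretations.append(result)
    return interpretations

# Illustrative registry for a Fig.58-style configuration:
#   rules = [split_at_concave_concave_vertices,   # two separate positive features
#            treat_shared_vertices_as_negative]   # two positive + two negative features
```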

6.4 Summary

In this chapter, the features involved in manufacturing processes including machining, die casting, injection molding, and sheet metal forming, and the recognition of detailed features from the proposed neutral basis, were presented. Feature types including simple features, sweeping features, bridge features, polyhedra features, transition positive features, and through depression features were investigated. Based on the entity-loop-attribute graph and the proposed neutral basis consisting of two attributes and basic features, feature interactions were analyzed and possible interpretations were discussed.

CHAPTER VII

Conclusions

High competition in modern industry has brought more pressure for cutting cost, improving product quality, and minimizing the time from concept to production. These requirements, coupled with the ever-growing complexity of the functional requirements of products and of production systems, have contributed to a rising interest in concurrent engineering and automation of the processes in a CIM environment. The goals of this research are to shorten the time from an existing prototype to production, and thus to increase productivity, to improve dimensional accuracy, and to bridge some of the missing links in the CIM environment. In order to achieve these goals, this dissertation focuses on two important and interrelated processes in the CIM environment: reverse engineering and feature recognition. The objective of reverse engineering is to create B-rep models from scanned data points; accurate descriptions of the prototype model are needed in many aspects of design, analysis, and manufacturing planning. The objective of feature recognition is to extract feature information from a B-rep model in order to produce a feature-based model so as to support various design, process planning, and manufacturing activities. By linking these two processes, the feature recognition process can be applied to extract features from the resulting B-rep models of the reverse engineering process. Therefore, the goals of this research can be achieved.

In this study, four modules for reverse engineering are identified: segmentation of the scanned data points, classification of the data points, creation of curves and surfaces, and interface to CAD/CAM solid modeling systems. Several key issues in these modules are studied intensively. For the segmentation module, a five-step process, including curvature calculation from discrete data points, a Gaussian smoothing operator, identification of character points and lines, a scale-space tracking technique, and a hypothesis test, is developed to divide the scanned data points into a number of subsets. Several techniques to calculate curvature from various formats of scanned data points are derived. The use of the Gaussian smoothing operator at multiple scales, together with the scale-space tracking technique, increases confidence in detecting character lines or points without loss of localization. The hypothesis test can justify the results from the smoothing operator and eliminate the need to specify a threshold.
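A minimal sketch of the smoothing-and-tracking idea is given below: discrete curvature is convolved with Gaussians of increasing width, and only extrema that survive at every scale are kept as candidate character points. Requiring the extremum to persist at the same index is a crude stand-in for the scale-space tracking used in this work, which follows extrema as they drift across scales; the scale values are arbitrary and the input is assumed to be a 1D curvature profile.

```python
# A minimal sketch (illustrative scales, 1D profile input) of multi-scale
# Gaussian smoothing of discrete curvature for character-point detection.
import numpy as np

def gaussian_kernel(sigma):
    half = int(3 * sigma)
    x = np.arange(-half, half + 1, dtype=float)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def smooth(curvature, sigma):
    return np.convolve(curvature, gaussian_kernel(sigma), mode="same")

def candidate_character_points(curvature, sigmas=(1.0, 2.0, 4.0)):
    """Indices whose smoothed |curvature| is a local maximum at every scale."""
    persistent = None
    for sigma in sigmas:
        k = np.abs(smooth(np.asarray(curvature, dtype=float), sigma))
        peaks = {i for i in range(1, len(k) - 1)
                 if k[i] > k[i - 1] and k[i] >= k[i + 1]}
        persistent = peaks if persistent is None else persistent & peaks
    return sorted(persistent)
```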

After segmenting the scanned data points into a number of subsets, two approaches are proposed to identify curve and surface types from each subset: a curvature approach and a quadratic-fit approach. The curvature approach applies the signs of the curvature information to characterize surfaces into eight sub-categories. The quadratic-fit approach applies a quadratic-fit technique to characterize surfaces into sub-categories of quadrics; techniques to perform the quadratic fit without encountering a singular matrix are presented. After curve and surface types are identified from each subset of data points, the surface approximation module describes the operations involved in generating curves and surfaces for each subset identified by the segmentation module. A least-squares fitting technique is applied to approximate curves and surfaces from the scanned data points. This study does not focus on optimization techniques for obtaining the optimum results; instead, several methods that yield practically satisfactory results are discussed because they require much less computational time. Thus, several techniques to approximate free-form curves and surfaces from various formats of scanned data points are discussed.
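A minimal sketch of the quadratic-fit idea is shown below: the ten coefficients of a general quadric are estimated from one subset of points by linear least squares, with the unit-norm constraint used here as one assumed way to avoid the trivial zero solution; the dissertation's own safeguards against a singular system are not reproduced. The signs and rank of the recovered coefficient matrix can then be inspected to place the subset into a sub-category of quadrics.

```python
# A minimal sketch (assumed unit-norm normalization) of fitting a general
# quadric  a x^2 + b y^2 + c z^2 + d xy + e yz + f zx + g x + h y + i z + j = 0
# to one subset of scanned points by linear least squares.
import numpy as np

def fit_quadric(points):
    """points: (n, 3) array with n >= 10. Returns the 10 quadric coefficients."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    design = np.column_stack([x*x, y*y, z*z, x*y, y*z, z*x,
                              x, y, z, np.ones_like(x)])
    # The right singular vector of the smallest singular value minimizes
    # ||design @ coeffs|| subject to ||coeffs|| = 1.
    _, _, vt = np.linalg.svd(design, full_matrices=False)
    return vt[-1]
```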

For feature recognition, the current study focuses on the development of a neutral basis consisting of two attributes and basic features. The purpose of the neutral basis is to bridge the gap between the geometric data of a design and the higher-level geometric abstraction that supports the complex reasoning incurred in feature-based systems. A systematic approach for feature recognition from B-rep models is presented. Two attributes are proposed to characterize the geometric entities, including surfaces, edges, and vertices, by applying the Gauss-Bonnet theorem and the concept of homeomorphisms. By associating the entities with the proposed attributes and applying the adjacency relationships among topological entities, three basic feature categories, namely positive features, negative features, and transition features, are first identified. The proposed approach is not limited to prismatic parts; part models containing fillets, rounds, cylindrical surfaces, spherical surfaces, tori, and sculptured surfaces can all be recognized. The features recognized by the proposed approach cover a much broader range of part models than those in previous studies and can support subsequent product activities in design and manufacturing. Feature interactions were analyzed and possible interpretations were discussed by considering two or three features interacting with one another. A feature recognizer is proposed to transform the geometric data of a design into high-level semantic information for various applications. Detailed features are recognized from the basic feature categories and can be applied to several manufacturing applications including machining, die casting, injection molding, and sheet metal forming.

It is believed that the contributions and the applications of the present study can be listed as follows:

• Identify the needs for developing a modeling scheme for complex parts with no existing CAD database.

• Develop relevant knowledge on the basic procedures involved in the segmentation of scanned data points, including curvature calculation from discrete data points, the Gaussian smoothing operator, identification of character points and lines, the scale-space tracking technique, and the hypothesis test.

• Develop pertinent knowledge required for identifying curve and surface types and for approximating curves and surfaces from scanned data points.

• Develop a systematic approach for recognition of basic features from B-rep models.

• Develop pertinent knowledge required for characterizing various geometric entities, including surfaces, edges, and vertices, by means of two attributes, using the concept of homeomorphisms and the Gauss-Bonnet theorem. Moreover, the two-attribute approach combines the merits of the two diverse approaches in previous studies into one generic representation.

• Develop pertinent knowledge required for handling feature interactions during the feature recognition process.

• Develop relevant knowledge on the processes involved in recognizing detailed features from the basic feature categories for various manufacturing processes.

7.1 On-going Research

Based on the present study, several topics for on-going research are identified as follows.

• In this research, character lines and character vertices need to be identified before characterizing geometric entities. If the part model is created from the proposed reverse engineering process, the character lines and vertices can be identified during the reverse engineering process. However, for part models created in other CAD systems, there may exist character lines and vertices which need to be identified. Even though character lines are classified in Chapter 5 and some examples of identifying character lines are given, there is no complete and thorough approach available. An approach similar to the segmentation of scanned data points used in Chapter 2 can be applied; note, however, that there is no noise in the curvature calculated from a CAD model. Therefore, a much faster and more accurate approach can be developed to identify character lines and character vertices from a B-rep model.

• In the present work, feature interactions are analyzed based on two or three interacting features, and they are used to demonstrate that the proposed approaches can be applied to recognize basic features with feature interactions. In practice, however, a larger number of features may interact with one another. Thus, as a long-term goal, a recursive algorithm needs to be developed to perform automatic reasoning for any number of interacting features (a minimal sketch of such a driver is given after this list).

• After identifying the neutral basis of a part model, a feature recognizer is proposed to transform the geometric data of a design into high-level semantic information for various applications. In this research, detailed features are analyzed based on several manufacturing processes, including machining, die casting, injection molding, and sheet metal forming. In order to practice the objective of design for manufacturability, a specific application needs to be investigated in more detail and software programs need to be developed.

• As discussed earlier, one of the distinct characteristics of the proposed two attributes is that they can provide an information linkage in feature interactions. In Chapter 6, we have demonstrated that two or more possible sets of alternate interpretations of interacting features can be generated and reasoned about in a part model. Some applications may require all the possible sets of alternate interpretations for process planning or manufacturability evaluation. Therefore, methodologies to generate all the possible sets of alternate interpretations for a part model need to be developed.

• Even though the basic feature categories include positive features and negative features, some applications may require only one of the two basic categories. For example, the machining process involves only material removal and, thus, all machining features belong to negative features. Therefore, it is necessary to develop an approach for converting positive features into negative features in order to extract features for the machining process.

• In this research, several issues in reverse engineering have been studied intensively, including segmentation of scanned data points, curve and surface type identification, and curve and surface approximation. In the present work, the curves and surfaces are produced from the subsets of data points given by the segmentation result. The relationships between subsets of data points are also important and may impose additional constraints for approximating curves and surfaces. For example, for a cylindrical face adjacent to a planar face with a tangent continuity condition at the intersecting boundary, the surface approximation results may not satisfy the tangent continuity condition because of a small offset between the two surfaces caused by form errors or measurement errors in the measurement data. Thus, a modified surface approximation technique may be required which combines the geometric constraints of the surface functions so as to apply the best-fit technique to the compound surface functions from a number of associated subsets of data points.
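The recursive driver mentioned in the second item above might, under the assumptions of the earlier sketches, look like the fragment below: interaction-resolving rules (such as the character-loop bridging and face-splitting repairs of Chapter 6) are applied repeatedly until the graph reaches a fixed point, so that any number of interacting features can be peeled apart one step at a time. The rule callables are hypothetical.

```python
# A minimal sketch (hypothetical rule callables) of an iterative driver for
# resolving arbitrarily many feature interactions.
def resolve_interactions(graph, rules, max_passes=100):
    """rules: callables that try to simplify one interaction in the ELA graph,
    returning True if they modified it and False otherwise."""
    for _ in range(max_passes):
        changed = False
        for rule in rules:
            if rule(graph):
                changed = True
                break            # the graph changed; rescan from the first rule
        if not changed:
            return graph         # fixed point: no interacting configuration left
    raise RuntimeError("interaction resolution did not converge")
```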

7.2 Recommendation

In addition to the on-going research, there are several topics which are related to the present work and can be pursued in the future. They are listed as follows.

• The proposed neutral basis consists of two attributes and basic feature categories, and these are proposed based on purely geometric concerns. However, in order to provide a higher-level reasoning environment and mechanism for more sophisticated design and manufacturing evaluation, such as the determination of parting lines and draw directions for die casting, machining sequence planning, or manufacturability evaluation of die cast parts, additional design- and manufacturing-related information, attributes, and properties can be stored in the neutral basis so that they can be accessed and reasoned about during the recognition process.

• In this research, the proposed neutral basis contains abstract geometric information, and features are represented in terms of adjacent geometric entities. In order to provide a link to and support the design-by-features approach, parametric information such as feature dimensions needs to be extracted; it can then be used to modify and reconstruct the part model so as to support other design activities. This capability is especially important in supporting a feature-based design environment.

• One of the most attractive properties of a feature-based representation is that it can support manufacturability evaluation for a specified manufacturing process in order to practice concurrent engineering. Based on the proposed neutral basis, richer abstract geometric information can be obtained and reasoned about. This may help the reasoning mechanism during the manufacturability evaluation process and also provide a basis for geometric reasoning. Specific manufacturing processes, such as machining, die casting, injection molding, and sheet metal forming, can be investigated further.
