
Applications of Cognitive Robotics in Disassembly of Products

By

Supachai Vongbunyong

B. Eng., M. Eng.

A thesis in fulfilment of the requirements for the degree of

Doctor of Philosophy

School of Mechanical and Manufacturing Engineering

The University of New South Wales

July 2013

ORIGINALITY STATEMENT

'I hereby declare that this submission is my own work and to the best of my knowledge it contains no materials previously published or written by another person, or substantial proportions of material which have been accepted for the award of any other degree or diploma at UNSW or any other educational institution, except where due acknowledgement is made in the thesis. Any contribution made to the research by others, with whom I have worked at UNSW or elsewhere, is explicitly acknowledged in the thesis. I also declare that the intellectual content of this thesis is the product of my own work, except to the extent that assistance from others in the project's design and conception or in style, presentation and linguistic expression is acknowledged.'

COPYRIGHT STATEMENT

'I hereby grant the University of New South Wales or its agents the right to archive and to make available my thesis or dissertation in whole or part in the University libraries in all forms of media, now or here after known, subject to the provisions of the Copyright Act 1968. I retain all proprietary rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation. I also authorise University Microfilms to use the 350 word abstract of my thesis in Dissertation Abstracts International (this is applicable to doctoral theses only). I have either used no substantial portions of copyright material in my thesis or I have obtained permission to use copyright material; where permission has not been granted I have applied/will apply for a partial restriction of the digital copy of my thesis or dissertation.'


AUTHENTICITY STATEMENT

'I certify that the Library deposit digital copy is a direct equivalent of the final officially approved version of my thesis. No emendation of content has occurred and if there are any minor variations in formatting, they are the result of the conversion to digital format.'


ABSTRACT ______

Disassembly automation has encountered difficulties due to the variability at the planning and operational levels that results from uncertainties in the quality and quantity of the products returned. Thus, the concept of cognitive robotics is implemented in vision-based, (semi-)destructive disassembly automation for end-of-life electronic products to handle this variability. The system consists of three operating modules, i.e. the cognitive robotic module, the vision system module, and the disassembly operation module. First, the cognitive robotic module controls the system according to its behaviour, which is influenced by four cognitive functions: reasoning, execution monitoring, learning, and revision. The cognitive robotic agent uses rule-based reasoning to schedule actions according to the existing knowledge and the information sensed from the physical world with regard to the disassembly state. Execution monitoring is used to determine the accomplishment of the process. Significant information from the current process is learned and implemented in subsequent processes. Learning also occurs in the form of demonstration conducted by the expert user via the graphic user interface to overcome unresolved complex problems. The system is able to learn and revise its knowledge, resulting in increased disassembly performance as more processes are performed. Second, the vision system module performs recognition and localisation of the components using common features as detection rules. It also supplies other information regarding the operations. Third, the disassembly operation unit module performs the cutting operations. Physical collisions can also be detected and resolved by this module. Consequently, the integrated system is flexible enough to successfully disassemble any model of a given product type without specific process plans and parameters being supplied. LCD screens are used as the case-study product in this research.



ACKNOWLEDGEMENT ______

Firstly, I would like to thank my supervisor A/Prof. Sami Kara and co-supervisor A/Prof. Maurice Pagnucco for the great opportunity given to me to work on this exciting research topic. They have always given me the best support in research direction, theory, and technical perspectives, which has been very important in producing this work.

Next, I would like to thank the School of Mechanical and Manufacturing Engineering for funding support in the form of a PhD scholarship and also for the research funding and facilities. In addition, I would like to thank the workshop and technical staff members, i.e. Martyn, Seetha, Russell, Alfred, Ian, Andy, Subash, Radha, and Steve, for the great technical support and the production of hardware parts. I would like to thank Dr. Voorthuysen and Dr. Rajaratnam (CSE) for valuable suggestions and comments in the early stage of the disassembly cell set-up with regard to robotics and programming. In addition, I would like to thank TAD NSW for supplying and donating LCD screens for testing.

I would also like to thank the members of the Sustainable Manufacturing and Life Cycle Engineering Research Group (SMLCE@UNSW), i.e. Dr. Ibbotson, Dr. Li, Seung Jin, Bernard, Kanda, Rachata, Wei-Hua, Pouya, Hamed, Smaeil, Wei, Samira, and Scott, for idea sharing, valuable comments, warm welcomes, and all other assistance. Moreover, I would like to thank our German colleagues, i.e. Prof. Dr.-Ing. Herrmann, Dr. Luger, Gerrit, and the other researchers from JGARG, for all the information regarding the disassembly of LCD screens and other knowledge in LCA and manufacturing.

Last but most importantly, I would like to thank my family, including my beloved father, mother, and wife, for the lifelong support they have given me at all times.



LIST OF PUBLICATIONS ______

x Vongbunyong, S., Kara, S. and Pagnucco, M. (2013). "Basic behaviour control of the vision-based cognitive robotic disassembly automation." Assembly Automation 33(1): 38-56.

x Vongbunyong, S., Kara, S. and Pagnucco, M. (2013). "Application of cognitive robotics in disassembly of products." CIRP Annals - Manufacturing Technology 62(1): 31-34.

x Vongbunyong, S., Kara, S. and Pagnucco, M. "Learning and revision in cognitive robotics disassembly automation." Robotics and Computer-Integrated Manufacturing (under review).

x Vongbunyong, S., Kara, S. and Pagnucco, M. (2012). "A framework for using cognitive robotics in disassembly of products." Leveraging Technology for a Sustainable World - Proceedings of the 19th CIRP Conference on Life Cycle Engineering: 173-178.

Selected experiment videos are available online at http://www.lceresearch.unsw.edu.au/articles/2012/CognitiveRob/CogRob.html



CONTENTS ______

Abstract ...... i
Acknowledgement ...... ii
List of publications ...... iii
Contents ...... iv
List of Figures ...... v
List of Tables ...... vi
Nomenclature ...... vii

1 Introduction ...... 1
1.1 Introduction ...... 1
1.2 Scope of the research ...... 2
1.3 Thesis structure ...... 3

2 Literature review ...... 6
2.1 An overview of product disassembly ...... 6
2.1.1 End-of-Life product treatment ...... 6
2.1.2 Disassembly of products ...... 8
2.2 Disassembly Process Planning (DPP) ...... 9
2.2.1 Representation of product structure ...... 11
2.2.2 Disassembly process representation ...... 12
2.2.3 Disassembly Sequence Planning (DSP) ...... 15
2.2.4 Completeness of disassembly ...... 18
2.2.5 Disassembly operation (dismantling techniques) ...... 19
2.2.6 Conclusion ...... 20
2.3 Automatic disassembly cell ...... 21
2.3.1 Modular systems and flexible disassembly cell configurations ...... 21
2.3.2 Semi-automatic disassembly ...... 22
2.3.3 Fully-automatic disassembly ...... 24
2.3.4 Conclusion ...... 26



2.4 Vision system in disassembly ...... 27
2.4.1 Recognition ...... 27
2.4.2 Localisation ...... 29
2.4.3 Configuration of cameras and coordinate system ...... 29
2.4.4 Model representation ...... 30
2.4.5 Computer vision library and relevant algorithms ...... 31
2.4.6 Conclusion ...... 31
2.5 Cognitive Robotics ...... 31
2.5.1 Overview of Artificial Intelligence and autonomous ...... 32
2.5.2 Cognitive robotics overview ...... 33
2.5.3 Action programming language ...... 35
2.5.4 Applications of cognitive robotics ...... 38
2.5.5 Conclusion ...... 40
2.6 Product case-study: LCD screens ...... 40
2.6.1 End-of-Life treatment of LCD screen monitors ...... 40
2.6.2 Disassembly of LCD Screens ...... 43
2.6.3 Hybrid-system disassembly cell for LCD screens (case-study) ...... 44
2.6.4 Conclusion ...... 45
2.7 Conclusion ...... 45
2.7.1 Significant issues and research gaps ...... 46
2.7.2 Research direction - Economically feasible automated disassembly cell ...... 48

3 Methodology overview and system architecture ...... 49
3.1 Methodology overview ...... 49
3.1.1 Human-driven disassembly process ...... 49
3.1.2 Framework of the disassembly automation ...... 51
3.1.3 Uncertainty handling and functionality of the modules ...... 54
3.1.4 Simplification of the system based on case-study product ...... 57
3.2 Control architecture ...... 58
3.2.1 Levels of control ...... 58
3.2.2 Operating modules ...... 60
3.2.3 Communication among the modules ...... 64
3.3 Conclusion ...... 65



4 Disassembly operation unit ...... 67
4.1 Case-study product: LCD screen ...... 68
4.1.1 Selection of the samples ...... 68
4.1.2 Structure analysis ...... 69
4.1.3 Implementation of the system ...... 72
4.2 Disassembly operation units in hardware perspective ...... 72
4.2.1 Conceptual design ...... 73
4.2.2 Operation units ...... 74
4.2.3 Operation routine ...... 77
4.2.4 General disassembly operation procedure ...... 82
4.3 Disassembly operation plans ...... 85
4.3.1 Conceptual overview of the operation plans ...... 86
4.3.2 Disassembly operation plan for the components in LCD screens ...... 88
4.4 Conceptual testing ...... 106
4.4.1 Testing procedure and operating cycle ...... 107
4.4.2 Testing result ...... 107
4.4.3 Conclusion of the conceptual test ...... 114
4.5 Conclusion ...... 116

5 Vision system module ...... 119
5.1 Overview of the vision system module ...... 120
5.1.1 Structure of the module in software perspective ...... 120
5.1.2 Hardware ...... 121
5.1.3 Interaction with other modules ...... 124
5.2 Computer vision functionality ...... 125
5.2.1 Optical problem and image quality ...... 125
5.2.2 Camera configuration and mapping of coordinate frames ...... 128
5.2.3 Recognition ...... 132
5.2.4 Localisation ...... 133
5.3 Detection algorithms for disassembly of LCD screens ...... 137
5.3.1 Common features ...... 138
5.3.2 Detection of back cover ...... 143
5.3.3 Detection of PCB cover ...... 145



5.3.4 Detection of PCB ...... 147
5.3.5 Detection of carrier ...... 149
5.3.6 Detection of LCD Module ...... 150
5.3.7 Detection of screws ...... 152
5.3.8 Detection of state change ...... 154
5.3.9 Detection of model of LCD screen ...... 157
5.3.10 Other utility functions ...... 160
5.4 Experiment ...... 162
5.5 Conclusion ...... 165

6 Cognitive robotics ...... 167
6.1 Overview of cognitive robotics ...... 168
6.1.1 Methodology ...... 168
6.1.2 Cognitive robotic module architecture ...... 170
6.1.3 Action programming language IndiGolog ...... 174
6.1.4 Summary of LCD screen ...... 177
6.2 Disassembly domain for cognitive robotics ...... 178
6.3 Behaviour control in disassembly of LCD screens ...... 183
6.3.1 Basic behaviour control ...... 183
6.3.2 Advanced behaviour control ...... 196
6.3.3 Summary of Actions and Fluents ...... 211
6.4 Conceptual test of process flow ...... 213
6.4.1 Unknown model ...... 213
6.4.2 Known model ...... 217
6.4.3 Learning and revision ...... 218
6.5 Conclusion ...... 219

7 Performance testing ...... 221
7.1 Experiment overview ...... 221
7.1.1 Key performance index ...... 221
7.1.2 Experiment setup and procedures ...... 223
7.2 Flexibility testing ...... 224
7.2.1 Vision system performance ...... 225



7.2.2 General disassembly plan performance ...... 226
7.2.3 Key performance index ...... 227
7.2.4 Summary ...... 232
7.3 Learning and revision testing ...... 234
7.3.1 Experimental method ...... 234
7.3.2 Key Performance Index ...... 235
7.3.3 Uncertainties in process ...... 237
7.3.4 Summary ...... 238
7.4 Life cycle assessment perspective (LCA) ...... 239
7.4.1 Disassembly cost ...... 239
7.4.2 Toxicity ...... 240
7.4.3 Disassembly for recycling ...... 240
7.5 Conclusion ...... 241

8 Conclusion ...... 242
8.1 Summary and findings of each module ...... 242
8.1.1 Disassembly operation module ...... 242
8.1.2 Vision system module ...... 245
8.1.3 Cognitive robotics module ...... 247
8.2 Conclusion and discussion ...... 250
8.2.1 Flexibility to deal with uncertainties ...... 251
8.2.2 Performance improvement by learning and revision ...... 252
8.2.3 Toxicity ...... 253
8.2.4 Economic feasibility ...... 254
8.3 Future works ...... 255
8.3.1 Advanced learning and revision strategy ...... 255
8.3.2 Hardware improvement and non-destructive disassembly ...... 255

References ...... 257



Appendix A - LCD screen samples ...... A-1
Appendix B - Hardware ...... B-1
Appendix C - Vision system experiment ...... C-1
Appendix D - Graphic User Interface ...... D-1
Appendix E - Performance testing ...... E-1



LIST OF FIGURES ______

Figure 1.1: Chapter overview ...... 4

Figure 2.1: Scenario of End-of-Life products ...... 6
Figure 2.2: Determination of optimal disassembly strategy ...... 9
Figure 2.3: Connection diagram ...... 12
Figure 2.4: Disassembly precedence ...... 12
Figure 2.5: Example product Bourjault's ballpoint ...... 13
Figure 2.6: Disassembly tree of the Bourjault's ballpoint ...... 13
Figure 2.7: State diagram of the Bourjault's ballpoint ...... 14
Figure 2.8: Disassembly-sequence diagram ...... 14
Figure 2.9: AND/OR graph of the Bourjault's ballpoint ...... 15
Figure 2.10: Hybrid system for LCD screen disassembly ...... 23
Figure 2.11: Robotic system for disassembly of computers ...... 25
Figure 2.12: Complexity space for intelligent agent ...... 33
Figure 2.13: An architecture of cognitive robotics ...... 34
Figure 2.14: Cognitive system architecture with closed perception-action loop ...... 35
Figure 2.15: Classification of the type of manufacturing ...... 39
Figure 2.16: Predicted sales of types of monitor ...... 41
Figure 2.17: Weight contribution of the components in LCD screen ...... 41
Figure 2.18: Distribution of the material in LCD screen ...... 42
Figure 2.19: Structure of LCD screen ...... 43
Figure 2.20: A sequence of disassembly of LCD monitors in automated workplace ...... 44

Figure 3.1: Behaviour of the human operators in disassembly process ...... 50
Figure 3.2: Framework of the system ...... 52
Figure 3.3: An overview of the common operation routine ...... 53
Figure 3.4: Specification summary of the robotic disassembly system ...... 54
Figure 3.5: Schematic diagram of the physical connection ...... 59
Figure 3.6: System architecture – levels of control and operating modules ...... 59
Figure 3.7: Schematic diagram of communication network structure ...... 64



Figure 4.1: System architecture in the perspective of disassembly operation unit module ...... 67
Figure 4.2: Product structure of LCD screens ...... 70
Figure 4.3: Liaison diagrams of typical LCD screens ...... 71
Figure 4.4: Example of a complex structure of an LCD screen ...... 71
Figure 4.5: Module's components ...... 74
Figure 4.6: Disassembly operation units ...... 75
Figure 4.7: Operation cycle of the FlippingTable ...... 76
Figure 4.8: Operation routine of the arm ...... 78
Figure 4.9: Simplified coordinate system with respect to the robot arm ...... 79
Figure 4.10: Notation of tool orientation ...... 81
Figure 4.11: Structure of disassembly operation plans and the operation procedure ...... 83
Figure 4.12: Cutting paths of operation procedures ...... 84
Figure 4.13: Semi-destructive approach ...... 87
Figure 4.14: Destructive approach ...... 88
Figure 4.15: Plan execution order for removing a main component ...... 89
Figure 4.16: Example of back cover ...... 90
Figure 4.17: Location of the screws relative to the nearest border of back cover ...... 90
Figure 4.18: Operation plans of the back cover ...... 90
Figure 4.19: Location and disestablishment of the press-fits ...... 91
Figure 4.20: Example images of PCB cover ...... 93
Figure 4.21: Misclassification of structure between Type-I and Type-II ...... 93
Figure 4.22: Cutting options for PCB cover Type-I ...... 94
Figure 4.23: Cutting options for PCB cover Type-II ...... 95
Figure 4.24: Hanging detached PCB cover part in Type-II structure ...... 96
Figure 4.25: Operation plan for PCB cover ...... 96
Figure 4.26: Classification strategy for PCB cover based on the execution result ...... 97
Figure 4.27: Common location of the connectors on PCBs ...... 99
Figure 4.28: Location of the screws relative to the nearest border of PCBs ...... 100
Figure 4.29: Operation plans regarding the common location of the connectors on PCBs ...... 100
Figure 4.30: Position of PCBs to be disassembled ...... 101
Figure 4.31: Common location of the connections belonging to the carrier ...... 103



Figure 4.32: Operation plan for carrier ...... 103
Figure 4.33: Front cover and LCD module ...... 105
Figure 4.34: Operation plan for LCD module and front cover ...... 105
Figure 4.35: Disassembly states and detached main components ...... 108
Figure 4.36: Disassembly states and expected operation plans ...... 109
Figure 4.37: Removal of back cover ...... 110
Figure 4.38: Removal of PCB cover ...... 111
Figure 4.39: Removal of PCBs ...... 112
Figure 4.40: Comparison of the disassembly outcome of PCBs ...... 112
Figure 4.41: Disestablishment of the connections of the PCB ...... 113
Figure 4.42: Removal of carrier and LCD module ...... 114

Figure 5.1: System architecture in the perspective of the vision system module ...... 119
Figure 5.2: Class diagram of the vision system ...... 121
Figure 5.3: Images from the top-view ...... 122
Figure 5.4: Raw images and distortion field of Kinect ...... 123
Figure 5.5: Depth image represented in 2.5D map ...... 123
Figure 5.6: Depth accuracy and resolution in z-axis within the operation range ...... 124
Figure 5.7: Configuration of the cameras over the fixture plate and distance calibration ...... 127
Figure 5.8: Configuration of the disassembly cell ...... 130
Figure 5.9: Perspective transformation in the camera ...... 130
Figure 5.10: Frames coordinate and image space observed from top-view ...... 131
Figure 5.11: ROI and VOI according to the Product coordinate ...... 136
Figure 5.12: Assignation and implementation of ROI and VOI ...... 136
Figure 5.13: Histogram of the base colour in S-channel collected from the samples ...... 140
Figure 5.14: Histogram of the base colour in H-channel collected from the samples ...... 140
Figure 5.15: General process flowchart for component detection ...... 143
Figure 5.16: Samples of the back cover seen from top-view ...... 144
Figure 5.17: Edge refining process ...... 145
Figure 5.18: PCB cover under different condition of IR ...... 146
Figure 5.19: Histogram and centroids obtained from k-means ...... 147
Figure 5.20: Blob detection on disconnected and different colour regions ...... 149
Figure 5.21: Partitioning of the oversize region containing multiple PCBs ...... 149



Figure 5.22: Blob detection on disconnected and different colour regions ...... 150
Figure 5.23: Captured images of LCD module ...... 152
Figure 5.24: Sample images of the screws ...... 153
Figure 5.25: Detection of screws ...... 154
Figure 5.26: State change – original condition ...... 156
Figure 5.27: State change – the component is removed ...... 156
Figure 5.28: Interest points of SURF in a sample and a candidate model in KB ...... 158
Figure 5.29: Process flowchart of the model detection ...... 160
Figure 5.30: Checking size of grinder disc ...... 161
Figure 5.31: Measurement direction of the distance error ...... 163

Figure 6.1: System architecture in the perspective of the cognitive robotic module ...... 167
Figure 6.2: Behaviour of the cognitive robotic agent in disassembly process ...... 169
Figure 6.3: System architecture in cognitive robotics perspective ...... 171
Figure 6.4: Interaction with actions and fluents ...... 172
Figure 6.5: Analysis process for formulating the code in the programming ...... 177
Figure 6.6: Product structure of LCD screens and the main components ...... 178
Figure 6.7: Choice points in disassembly domain ...... 179
Figure 6.8: Representation of a product structure in a disassembly state ...... 180
Figure 6.9: Behaviour control in regard to the disassembly domain ...... 184
Figure 6.10: Disassembly state diagram ...... 185
Figure 6.11: Disassembly state diagram ...... 194
Figure 6.12: Example of the KB for a sample model ...... 199
Figure 6.13: Cutting operations in learning and implementation ...... 201
Figure 6.14: User's demonstrated primitive cutting operation in GUI ...... 206
Figure 6.15: Learning cutting operation for add-on plan ...... 207
Figure 6.16: Example process flow of unknown model ...... 215
Figure 6.17: Example process flow of the second run for Type-II unknown model ...... 216
Figure 6.18: Strategy to detach screws from the back of carrier ...... 216
Figure 6.19: Example process flow of known model ...... 217
Figure 6.20: Example knowledge base in two revisions ...... 218



Figure 7.1: Detached components classified by type of material ...... 222
Figure 7.2: Snapshots of the disassembly process captured from the performance test ...... 224
Figure 7.3: Time consumption of the disassembly process ...... 230
Figure 7.4: Time consumption of the disassembly process by each operation ...... 231
Figure 7.5: Human assistance count in the disassembly process ...... 232
Figure 7.6: Experiment order due to the revision ...... 234
Figure 7.7: Disassembly performance with respect to multiple revisions ...... 235
Figure 7.8: Incompletely detached carrier and PCBs and the second run ...... 236
Figure 7.9: Uncertainties due to the variation of the starting level for cutting ...... 238



LIST OF TABLES ______

Table 2.1: Destination of output of a disassembly facility ...... 7
Table 3.1: Uncertainties in disassembly process ...... 55
Table 4.1: Order of the CuttingMethod according to the times that robot crashes ...... 82
Table 4.2: Summary of the operation procedure ...... 85
Table 4.3: Outcome of the destructive disassembly in component perspective ...... 115
Table 4.4: Outcome of the destructive disassembly in material type perspective ...... 115
Table 4.5: Summary of operation plans for removing the main components ...... 118
Table 5.1: Summary of coordinate frames ...... 129
Table 5.2: Summary of parameters and variables for calibration ...... 132
Table 5.3: Feature representation ...... 137
Table 5.4: A satisfied colour range of the component in LCD screens ...... 141
Table 5.5: Common features for the detection of the components in LCD screens ...... 142
Table 5.6: Performance of the main component detector ...... 162
Table 6.1: Uncertainties addressed by the cognitive robotics module ...... 168
Table 6.2: Command in domain specification ...... 175
Table 6.3: Commands in behaviour specification ...... 176
Table 6.4: Facts in knowledge base ...... 197
Table 6.5: Unresolvable conditions and demonstrated actions ...... 203
Table 6.6: Sensing actions and corresponding fluents ...... 211
Table 6.7: Primitive actions and corresponding fluents ...... 212
Table 6.8: Fluent as constant parameters ...... 212
Table 7.1: Performance of the detector in destructive disassembly (actual case) ...... 225
Table 7.2: Success rate of the plans for removing main components ...... 226
Table 7.3: Classification of the main structure ...... 228
Table 7.4: Outcome of the detached components ...... 229



NOMENCLATURE ______

D   Dimension
m_cut   Cutting method (variable)
M_cut   Cutting method (fixed parameter)
s_feed   Feed speed of the cutting tool
T_tool   Orientation of the cutting tool
(x, y, z)   Coordinate in operational space
(c, r)   Coordinate in image space
(x_S, y_S)   Coordinate in spatial sampling
M_Affine   Affine transformation matrix
z_F   Vertical level z relative to the fixture plate
f   Focal length
α_x, α_y   Scale factors in the x and y directions
X_0, Y_0   Offset of the image coordinate with respect to the optical axis
^j L_i   Offset in direction i with respect to frame j
{B}   Robot base coordinate frame
{F}   Fixture plate base coordinate frame
{T}   Tooltip coordinate frame
{L}   Lens centre coordinate frame
{P}   Product coordinate frame
{S}   Spatial sampling coordinate frame
{I}   Image plane coordinate frame
H, S, V   Hue-Saturation-Value colour space
R, G, B   Red-Green-Blue colour space
θ_i   Threshold level of subject i
Ø   Diameter
L   Offset distance
δ   Distance error
a   Actions in IndiGolog syntax



Abbreviations

AI   Artificial Intelligence
CRA   Cognitive Robotic Agent
CRM   Cognitive Robotic Module
CCFL   Cold-Cathode Fluorescent Lamp
GUI   Graphic User Interface
DPP   Disassembly Process Planning
DOF   Degree of Freedom
DOM   Disassembly Operation Module
DSP   Disassembly Sequence Planning
EOL   End-of-Life
LCA   Life Cycle Assessment
LCE   Life Cycle Engineering
MAD   Median Absolute Deviation
MAS   Multi-Agent System
MBR   Minimum Bounding Rectangle
PCB   Printed Circuit Board
ROI   Region of Interest
VOI   Volume of Interest
VSM   Vision System Module
WEEE   Waste Electrical and Electronic Equipment


1 INTRODUCTION ______

1.1 Introduction

The number of End-of-Life (EOL) products has dramatically increased as a result of shorter product life cycles and increased market demand. Disposed EOL products have turned into waste and created environmental and economic problems. To deal with this problem, EOL treatment, i.e. recycling, reusing, and remanufacturing, is one of the effective strategies for recovering valuable materials or useful parts from this waste.

Product disassembly is one of the key steps of efficient EOL treatment. This process aims to separate the desired parts or components from the returned products. It is traditionally performed by human operators because of the uncertainties and variations in the returned EOL products. These uncertainties and variations result in complex problems at both the planning and operational levels, which make the process difficult and time consuming. As a result, disassembly becomes economically infeasible, especially in developed countries where labour costs are high.

Automation is a potentially cost-effective option for replacing high-cost labour and making the disassembly process economically feasible. However, the aforementioned uncertainties are more problematic for automation than for human operators due to automation's limited flexibility in sensing and decision-making. A number of attempts have been made to develop full disassembly automation (Büker et al. 1999, Tonko and Nagel 2000, Büker et al. 2001, Torres et al. 2004, Merdan et al. 2010, ElSayed et al. 2012). However, their ability to deal with various models of products is still limited since information regarding particular models needs to be supplied a priori. The flexibility and robustness of these systems in dealing with uncertainties at the planning and operational levels are also questionable. In this research, the concept of cognitive robotics is used to emulate human behaviour in order to handle these uncertainties. Cognitive robotics equips an autonomous agent with high-level cognitive functions that allow the system to reason, revise, perceive change in a dynamic world, and respond in a robust and adaptive way in order to complete goals (Moreno 2007).

In the manual disassembly process, skilful operators are expected to be flexible enough to carry out the disassembly of previously unseen models of products. Appropriate decisions can be made based on their prior knowledge and the information about the product's condition perceived during the operation. Therefore, the process is expected to be achievable under a number of unknown conditions, e.g. unknown product structure and geometry, which are regarded as uncertainties. The first disassembly of a previously unseen model may be carried out awkwardly, by trying a number of possible ways to achieve the goal. Afterwards, the operator can learn from this experience. As more specific knowledge of the model is learned, the process becomes more efficient when this model is disassembled again. In this research, the proposed disassembly automation exhibits similar behaviours. Together with learning by demonstration conducted by the human operator to resolve certain unresolvable cases, the system is flexible enough to disassemble various models of a case-study product without specific information being supplied.

1.2 Scope of the research

To achieve economic feasibility in the disassembly process for recycling purposes, this research aims to develop a low-cost disassembly automation platform which is flexible and robust enough to deal with any model of product within one product family. The concept of cognitive robotics is implemented to emulate human behaviour in order to address the uncertainties in the disassembly process. As a result, the system is expected to carry out the process autonomously without prior specific details of product structure and geometry being supplied. It is expected to be able to learn from previous disassembly processes and improve its performance when repeatedly disassembling previously seen models. In addition, human assistance in the form of learning by demonstration is incorporated in specific cases where the uncertainties cannot be resolved by the system itself. Through the learning process, model-specific knowledge associated with the product properties and the disassembly process is generated and retained. Eventually, the system becomes fully autonomous and carries out the process without further human interaction.



The disassembly process is conducted with (semi-)destructive approaches in order to disassemble the product to the component level using the selective disassembly method (Lambert and Gupta 2005). LCD (Liquid Crystal Display) screens are used as a case study and the disassembly system is designed around this product. In summary, to achieve this goal, the following three modules must be developed:

- Cognitive robotic module: autonomously controls the system throughout the disassembly process at the planning and operational levels. The cognitive robotic agent controls the behaviour of the system in accordance with its cognitive abilities. Human assistance in the form of learning by demonstration is also implemented.
- Vision system module: facilitates the cognitive robotic module in detecting the product, its components, and the condition of the disassembly state. It needs to be flexible and robust to handle the physical variations and uncertainties of the products with regard to their appearance.
- Disassembly operation unit module: consists of a robot arm and other mechanical units that facilitate the automated disassembly. The system is designed to handle variations in the case-study product and to conduct the (semi-)destructive approach.

1.3 Thesis structure

This thesis is organised into eight chapters which can be considered as three main parts. Firstly, the introduction, including the literature review, is found in Chapters 1 and 2. Secondly, regarding the research methodology, the system is composed of three main modules as stated previously. An overview of their integration is presented in Chapter 3. The methodology used in these modules is described in Chapters 4 to 6, covering the disassembly operation unit module, the vision system module, and the cognitive robotic module, respectively. Finally, the performance testing results are presented in Chapter 7 and the conclusion is presented in Chapter 8. A chapter overview is illustrated in Figure 1.1.



Figure 1.1: Chapter overview

Chapter 1, Introduction, presents an overall introduction to this research, including the research motivation, scope, and the structure of this thesis.

Chapter 2, Literature review, provides an overview of the existing research in the relevant areas, namely disassembly sequence planning, disassembly automation, vision system, and cognitive robotics. In addition, significant background knowledge with respect to the development of the disassembly cell is provided.

Chapter 3, Methodology overview and system architecture, explains the methodology of the automated disassembly system in relation to human-driven disassembly and the handling of uncertainties. The architecture of the entire system in terms of module integration, levels of control, and communication protocols is also explained.

Chapter 4, Disassembly operation unit module, explains the design and functionality of the disassembly operation units, especially the robot arm and the Flipping Table. The operation of the robot module is emphasised due to its complicated functions. This chapter also provides information and structural analysis of LCD screens. Finally, the disassembly operation plans specifically developed for this case-study product are described.



Chapter 5, Vision system module, describes the functionality of the vision system used as a sensing module that supplies information to the cognitive robotic agent. Coordinate mapping, i.e. the geometric transformations between the vision system module and the mechanical unit, is illustrated. Moreover, the detection processes, covering 1) the components in the products and 2) transitions in the disassembly state for execution monitoring, are also explained.

Chapter 6, Cognitive robotic module, explains the disassembly domain, including the product structure and its representation, which is suitable for use with cognitive robotics. In addition, the cognitive functionality underlying the behaviour control of the automated disassembly process, namely 1) reasoning, 2) execution monitoring, 3) learning, and 4) revision, is explained. Human assistance in the form of learning by demonstration is also described.

Chapter 7, Performance testing, presents the experimental design and the results of the performance testing of the entire system. The experiments were conducted from two perspectives: 1) a flexibility test and 2) a learning and revision test. The key performance indices and results are explained and discussed.

Chapter 8, Conclusion, gives a summary, conclusion, and discussion of the proposed system. Prospective future work is also outlined.



2 LITERATURE REVIEW ______

This chapter presents a review of the related literature and the background knowledge required for developing cognitive robotic disassembly automation. Following the methodology overview presented in the scope of this research, an overview of the disassembly process is given in Section 2.1, followed by the main methodologies: disassembly process planning (Section 2.2), the automatic disassembly cell (Section 2.3), the vision system (Section 2.4), and cognitive robotics (Section 2.5). The literature related to the case-study product, LCD screens, is reviewed in Section 2.6. Lastly, the conclusions and research directions are given in the final section.

2.1 An overview of product disassembly

2.1.1 End-of-Life product treatment

Figure 2.1: Scenario of End-of-Life products (Duflou et al. 2008). The figure shows the EOL options 1. Maintenance, 2. Repair, 3. Product reuse, 4. Upgrading/downgrading/remanufacturing, 5. Material recycling, 6. Incineration, and 7. Landfill, reached via two disintegration routes, A. Disassembly and B. Shredding, with supporting processes of sorting, cleaning, and inspection.

After End-of-Life (EOL) products have been collected by the reverse logistics process, recycling, reusing, and remanufacturing need to be considered as EOL options (see Figure 2.1). Life Cycle Assessment (LCA) is taken into account in order to identify the proper treatment options from environmental and economic perspectives. In order to carry out these EOL options, the products must first be disintegrated into individual components, parts, or materials according to the requirements of each treatment process. The disintegration can be done with two approaches: 1) shredding and 2) disassembly. Details and a comparison of the two approaches are as follows.

Shredding is a destructive process that roughly breaks the products into small pieces or particles that are supplied to the recycling process. The outcome of the shredding process is a low-quality blend of materials which requires a sorting process to separate the valuable material from the scrap. The shredded pieces or particles are physically sorted by density and magnetic properties using a number of techniques, i.e. magnetic, electrostatic, eddy-current, etc. Shredding is commonly implemented in industrial practice due to its low operating cost. However, a major disadvantage is the loss of value of the parts and components that end up as shredded pieces. In addition, hazardous components that can potentially contaminate the workplace and other materials after being shredded are also problematic (Lambert and Gupta 2005).

Disassembly systematically separates the product into its constituent parts, sub-assemblies, or other groupings (Kaebernick et al. 2007). The detached components can be supplied to a wide range of treatment processes according to the desired purpose and downstream conditions (see Table 2.1). It serves not only EOL product treatment but also repair and maintenance purposes if the proper techniques are applied. However, a major problem is the high operating cost, which usually exceeds the value recovered from the EOL products. The process therefore becomes economically infeasible and is usually avoided in industrial practice. Possible solutions for making the disassembly process economically feasible are discussed in the following sections.

Table 2.1: Destination of output of a disassembly facility (Lambert and Gupta 2005). The table maps the outputs of a disassembly facility (unprocessed products, modules, components, damaged components, and waste) to their possible destinations, i.e. reuse (with or without refurbishing), remanufacturing, recycling, and incineration/landfill; the more degraded the output, the fewer destinations remain available.



2.1.2 Disassembly of products

The EOL treatment process has attracted increasing concern due to the large number of products disposed of. An overview of the disposals is given as follows. In the field of Life Cycle Engineering, much research focuses on products with a high material-return rate, a short life cycle, and a high volume of waste, such as electrical and electronic waste and EOL vehicles. According to the European Directive 2002/96/EC (Parliament 2003), Waste Electrical and Electronic Equipment (WEEE) covers a wide range of electrical and electronic products which are categorised into 10 groups, e.g. household appliances, IT & telecommunication, consumer equipment, and electrical and electronic tools. In 2005, the amount of WEEE in the EU-27 was around 8.3 - 9.1 million tons per year, consisting of 40% large household appliances and 25% medium household appliances. This figure is expected to grow by 2.5 - 2.7% every year (Jaco et al. 2008). Moreover, WEEE accounts for 8% of all municipal solid waste (MSW) worldwide (Babu et al. 2007). Regarding vehicle waste, in 2006, around 230 million cars were in use in Europe (EU-15) and 10.5 million tons of them are disposed of every year. According to the European Directive 2000/53/EC, at least 85% by mass of EOL vehicles must be reused or recovered, while at least 80% must be reused or recycled (Viganò et al. 2010).

Since the disassembly of products is one of the key steps of efficient EOL treatment, a number of investigations have been conducted with regard to its environmental and economic aspects (Li et al. 1995, Gungor and Gupta 1999, Chen 2001). The disassembly approach has also been compared with conventional waste treatment, i.e. disposal and landfill (Ewers et al. 2001). These investigations concluded that the disassembly approach greatly benefits the environment but is not economical, because excess disassembly costs result from both direct costs (i.e. labour and machines) and indirect costs (i.e. stock and logistics). However, the disassembly process can be economically feasible if an optimal disassembly strategy with respect to cost and benefit is implemented, as illustrated in Figure 2.2 (Desai and Mital 2003). Therefore, current research focuses on developing strategies for economically feasible disassembly. Gupta and McLean (1996) provide an overview of the research directions, which can be categorised into four relevant areas: 1) Design for Disassembly (DfD), 2) disassembly process planning, 3) design and implementation of disassembly systems, and 4) operations planning issues in the disassembly environment.



Figure 2.2: Determination of optimal disassembly strategy (Desai and Mital 2003)

Duflou et al. (2008) identify the factors influencing the profitability of the disassembly process by applying the Principal Component Analysis (PCA) method to case studies of disassembly activities. Profitability is related to three factors: 1) completeness of disassembly, 2) the EOL facility, and 3) the use of automation. In short, the depth of disassembly increases if the process is performed by the relevant manufacturers; a high degree of automation positively affects profitability; and a relatively high investment in an end-of-life facility is not in conflict with profitability. These factors must be taken into account in order to perform economically feasible disassembly.

According to the scope of this research, the developed disassembly system will be used in the treatment of disposals, where DfD is irrelevant. Therefore, the literature review is limited to the following issues:

- disassembly process planning and operation (Section 2.2); and,
- design and implementation of the automatic disassembly system (Section 2.3).

2.2 Disassembly Process Planning (DPP)

The disassembly process deals with unpredictable characteristics in both the quality and quantity of EOL products. It is more difficult than the assembly process in the following respects.

- Physical uncertainties of the product at EOL condition: Gungor and Gupta (1998) summarise the physical uncertainties found in EOL products. The uncertainties result from component defects, upgrading or downgrading during usage, and damage during the disassembly operation.



- Variety of supply of EOL products: Variations in characteristics, e.g. model, size, internal configuration, material, and brand, are present even within one product group. This information may not be revealed until some parts are separated from the products or during the disassembly process. The challenging problem is to develop a disassembly plan that is general and effective enough to deal with these uncertainties (Lambert 2003).
- Complexity in process planning and operation: The challenge in process planning is to find the optimal sequence of disassembly operations. Due to the complicated connections of the components in a product, finding the proper disassembly sequence is considered an NP-complete optimisation problem. Kroll et al. (1996) define the term disassemblability, a metric for quantifying the ease of product disassembly by analysing the disassembly-related data of each product. The difficulties in the disassembly operation are determined with respect to five major criteria: 1) component accessibility, 2) precision in locating the component, 3) force required to perform tasks, 4) additional time, and 5) special problems that cannot be categorised in the other areas. In addition, Mok et al. (1997) summarise the characteristics that make products easy to disassemble as follows:

- Minimal exertion force: quick operation without excessive manual labour;
- Mechanism of disassembly: the mechanism should be simple;
- Minimal use of tools: ideal disassembly should be performed without tools;
- Minimal repetition of parts: easy to identify at each state of disassembly;
- Easy recognition of disassembly joints;
- Simpler product structure; and,
- Avoidance of toxic materials.

Gupta and McLean (1996) state that the development of optimal disassembly plans relies on four key phases: 1) product analysis, 2) assembly analysis, 3) usage mode and effect analysis, and 4) dismantling strategy. These key phases are taken into account in order to achieve an economically feasible process. Firstly, the product must be analysed and represented systematically. The disassembly process options can then be generated and represented from the product structure. The process can be divided into two levels: the sequence plan and the operation. The completeness of disassembly is considered a part of the sequence plan. In this section, the background knowledge and related literature are presented as follows (Sections 2.2.1 - 2.2.5):

- Representation of product structure;
- Representation of disassembly process;
- Disassembly sequence planning;
- Completeness of disassembly; and,
- Disassembly operation.

2.2.1 Representation of product structure

Lambert and Gupta (2005) describe the structure of products as consisting of 1) components and 2) connections. First, a component is an element that can be detached from the product while keeping its extrinsic properties, i.e. functionality and material properties. It cannot be further dismantled unless destructive disassembly methods are used. Second, a connection or liaison is a relation that physically connects two components. The disassembly task is to disestablish these relations by means of semi-destructive or non-destructive methods. Fasteners are connective components that join the main components. They can be divided into two groups: 1) quasi-components (e.g. screws) and 2) virtual components (e.g. welds and solder).

The structure of products can be represented in two ways. First, the connection diagram (liaison diagram) graphically represents the complete product structure as an undirected graph. The components are represented by nodes and the connections are represented by arcs. Depending on the level of detail, the graph can be shown in three different forms: 1) extended form, 2) reduced form, and 3) minimal form (see Figure 2.3). The extended form shows the full details of the product with every component and fastener. The reduced form represents the structure more concisely by hiding the virtual components and using dashed lines for quasi-components. The minimal form shows the structure of the product in the most compact way by hiding both virtual and quasi-components. Second, the product structure can be represented by a disassembly matrix, with which the problem can be solved by computational approaches, e.g. Linear Programming (LP) and Integer Programming (IP). A small sketch of both representations follows Figure 2.3.



Figure 2.3: Connection diagram (Lambert and Gupta 2005) – (a) assembly of product, (b) extended form, (c) reduced form, (d) minimal form
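To make these two representations concrete, the sketch below encodes a liaison diagram as an undirected graph and derives the corresponding disassembly (adjacency) matrix from it. The component names and connections are hypothetical examples, not taken from any particular product in this thesis.

```python
# Sketch: a liaison diagram held as an undirected graph, plus the
# disassembly (adjacency) matrix derived from it. Component names and
# connections are hypothetical, not taken from a specific product.

components = ["cover", "pcb", "carrier", "screw_1"]

# Each liaison links two components; a quasi-component such as a screw
# appears as an ordinary node in the extended form of the diagram.
connections = [
    ("cover", "screw_1"),
    ("screw_1", "carrier"),
    ("pcb", "carrier"),
]

# Disassembly matrix: entry [i][j] = 1 when components i and j share a
# connection that must be disestablished before they can be separated.
index = {name: i for i, name in enumerate(components)}
n = len(components)
matrix = [[0] * n for _ in range(n)]
for a, b in connections:
    matrix[index[a]][index[b]] = 1
    matrix[index[b]][index[a]] = 1

for name, row in zip(components, matrix):
    print(f"{name:>8}: {row}")
```

The graph form suits visual reasoning about the structure, while the matrix form feeds directly into LP/IP solvers.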

2.2.2 Disassembly process representation

The process or sequence of product disassembly can be schematically represented in many ways. Lambert and Gupta (2005) summarise these approaches as follows:

2.2.2.1 Disassembly precedence graph

The process is expressed as an order of sub-tasks connected and constrained by precedence relationships. It can be represented in two forms: 1) a component-oriented graph and 2) a task-oriented graph (see Figure 2.4). The constraints determine the order in which the tasks are to be performed. This technique was originally used for assembly process representation and assembly line-balancing problems. Gungor and Gupta (2002) introduced its use in the disassembly process due to its simplicity. However, a major disadvantage is that the disassembly sequence of a complete product cannot be described in one graph (Tumkor and Senol 2007).

Figure 2.4: Disassembly precedence (Lambert and Gupta 2005) – (a) assembly, (b) component-oriented, (c) task-oriented
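As an illustration of the component-oriented form, the sketch below states precedence constraints as a mapping from each component to the set of components that must be removed first, and extracts one feasible removal order by topological sorting. The constraints are an assumed, simplified version of the LCD screen structure used later in this thesis.

```python
# Sketch: a component-oriented disassembly precedence graph and one
# feasible removal order obtained by topological sorting (Python 3.9+).
# The constraints are a simplified, assumed LCD-screen structure.
from graphlib import TopologicalSorter

# precedence[x] = the set of components that must be removed before x
precedence = {
    "back_cover": set(),
    "pcb_cover": {"back_cover"},
    "pcb": {"pcb_cover"},
    "carrier": {"pcb"},
    "lcd_module": {"carrier"},
}

order = list(TopologicalSorter(precedence).static_order())
print(order)  # ['back_cover', 'pcb_cover', 'pcb', 'carrier', 'lcd_module']
```

Any linear extension of the precedence relation is a feasible sequence; which one is optimal is the subject of Section 2.2.3.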

2.2.2.2 Disassembly tree

The disassembly tree represents all possible choices, derived from a primitive table containing all possible sequences sorted by level and type of operation. The Bourjault tree is one of the most widely used methods. Two major drawbacks are 1) the complexity that arises with complex products and 2) the difficulty of representing parallel operations. Figure 2.6 shows the Bourjault disassembly tree that represents the disassembly process of a sample product, Bourjault's ballpoint, shown in Figure 2.5. This product will be used as an example for the other representation methods in the following sections.

Figure 2.5: Example product Bourjault's ballpoint (Lambert and Gupta 2005) – (a) assembly, (b) connection diagram

Figure 2.6: Disassembly tree of the Bourjault’s ballpoint (Lambert and Gupta 2005)

2.2.2.3 State diagram

The disassembly sequence is represented as a diagram consisting of nodes and undirected edges. It can be categorised into two approaches: 1) connection-oriented (Fazio and Whitney 1987) and 2) component-oriented (Homem De Mello and Sanderson 1990, Woller 1992) (see Figure 2.7). All possible combinations of connections are represented by nodes. Each node represents a state which indicates the establishment or disestablishment of the connective components. This method provides two major advantages: 1) the disassembly sequence of the complete product can be demonstrated in one diagram and 2) the diagram is compact even for complex products containing many components. However, a major disadvantage is that the disestablishment of some connections cannot be performed individually, but only together with other combinations of related connections.



Figure 2.7: State diagram of the Bourjault's ballpoint (Lambert and Gupta 2005) – (a) connection-oriented, (b) component-oriented
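The connection-oriented variant can be sketched directly in code: every subset of the connections is a state, and a transition disestablishes exactly one connection. The connection labels below are hypothetical; the sketch only shows how the state space and its transitions are enumerated.

```python
# Sketch: a connection-oriented state diagram. Each state is the subset of
# connections still established; a transition disestablishes exactly one
# connection. The connection labels are hypothetical.
from itertools import combinations

connections = ("c1", "c2", "c3")

# Enumerate all 2^n states, from the complete product down to full separation.
states = [frozenset(c)
          for k in range(len(connections), -1, -1)
          for c in combinations(connections, k)]

# Transitions remove one established connection at a time.
transitions = [(s, s - {c}) for s in states for c in s]

print(len(states), "states,", len(transitions), "transitions")  # 8 states, 12 transitions
```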

Kara et al. (2006) used the component-oriented state diagram to develop a graphical representation, named the disassembly-sequence diagram, for representing the disassembly sequence at different stages of the process for selective disassembly. The diagram can be automatically generated from the liaison and precedence relations. An example is shown in Figure 2.8.

Figure 2.8: Disassembly-sequence diagram (Kara et al. 2006) – (a) liaison diagram of a washing machine, (b) disassembly-sequence diagram

2.2.2.4 AND/OR graph (Hypergraph)

This graph represents disassembly sequences based on subassemblies. A process is represented by multi-arcs (hyper-arcs) pointing from a parent to its child subassemblies (see Figure 2.9). This overcomes the drawback of the state diagram.



However, a major drawback is the complexity of this graph, which grows as the number of components increases. Lambert (1999) proposes a simplified version named the concise AND/OR graph. More specific representations have been developed, e.g. arborescence with hypergraph (Martinez et al. 1997), Petri nets (Zussman et al. 1998), and hybrid graphs (Wang et al. 2006), which are capable of representing the product model and its constraints more accurately.

Figure 2.9: AND/OR graph of the Bourjault’s ballpoint (Lambert and Gupta 2005)
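A minimal sketch of the idea follows, using an assumed four-component product rather than the actual ballpoint: each hyper-arc is recorded as an (assembly, left child, right child) split, alternative arcs from the same parent are the OR choices, and a complete plan must resolve both children of every chosen arc (the AND).

```python
# Sketch: an AND/OR graph for disassembly of an assumed four-component
# product "ABCD" (not the actual ballpoint). Each hyper-arc splits a parent
# into two child subassemblies; alternative arcs from the same parent are
# the OR choices, and both children of a chosen arc must be resolved (AND).
and_or = {
    "ABCD": [("A", "BCD"), ("AB", "CD")],  # two alternative first cuts
    "BCD":  [("B", "CD")],
    "AB":   [("A", "B")],
    "CD":   [("C", "D")],
}

def enumerate_plans(assembly):
    """List every complete disassembly plan as a sequence of splits."""
    if assembly not in and_or:          # a single component: nothing to do
        return [[]]
    plans = []
    for left, right in and_or[assembly]:
        for p_left in enumerate_plans(left):
            for p_right in enumerate_plans(right):
                plans.append([(assembly, left, right)] + p_left + p_right)
    return plans

for plan in enumerate_plans("ABCD"):
    print(plan)
```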

2.2.3 Disassembly Sequence Planning (DSP)

A disassembly sequence is a procedure for the disassembly operation. The initial state is defined as the complete product and the final state is the separation of the desired parts and components. The main purpose is to find the optimal sequence for disassembling products with respect to certain factors, e.g. cost-effectiveness, material return, component recovery, and duration of operations. Technically, the number of possible sequences increases exponentially with the number of components. Therefore, finding the optimal solution is considered an NP-complete optimisation problem. Lambert (2003) summarises effective methodologies based on a product-oriented approach as follows:

2.2.3.1 Mathematical programming (MP) method

This method tries to make the internal variables converge to their optimum values without considering the complete search space. The problem model is derived from the graph (i.e. the hypergraph). Costs are assigned to each action (arc) with respect to the subassembly components (i.e. parent and child) and stored in a transition matrix. As a result, the problem can be solved effectively by mathematical solvers, e.g. Linear Programming (LP), Mixed Integer Programming (MIP), or Dynamic Linear Programming (DLP). Moreover, Petri nets are used in the case of a dynamic approach.
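As a rough dynamic-programming illustration over such per-action costs (the hyper-arcs and cost values below are hypothetical, and this stands in for, rather than reproduces, the LP/MIP/DLP solvers named above):

from functools import lru_cache

A, B, C = frozenset('a'), frozenset('b'), frozenset('c')

# Hypothetical transition data: parent -> [(action_cost, child1, child2)]
arcs = {A | B | C: [(3.0, A | B, C), (5.0, A | C, B)],
        A | B: [(2.0, A, B)],
        A | C: [(1.0, A, C)]}

@lru_cache(maxsize=None)
def min_cost(node):
    # Minimal total cost to fully disassemble the subassembly `node`.
    if node not in arcs:                 # single component: nothing left to do
        return 0.0
    return min(c + min_cost(l) + min_cost(r) for c, l, r in arcs[node])

print(min_cost(A | B | C))               # -> 5.0: split off C first, then A from B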


2.2.3.2 Heuristic methods

Gungor and Gupta (1997) present a heuristic algorithm for finding near-optimal solutions, which are considered instead of optimal solutions that are sometimes difficult to find due to the size of the search space. This method requires information on the precedence relationships among the components and on the difficulty of removing each component. The efficiency is evaluated by the authors based on the disassembly time. In addition, a case study of the disassembly of a cell phone using the heuristic method with different search algorithms, e.g. greedy k-best and A*, is examined by Lambert and Gupta (2008).
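A minimal greedy sketch in this spirit (the components, removal times, and precedence data are hypothetical, and this is not the cited authors' algorithm):

# Greedily remove the feasible component with the lowest removal time.
removal_time = {'cover': 5, 'pcb': 8, 'screen': 12, 'frame': 6}
predecessors = {'pcb': {'cover'}, 'screen': {'cover'}, 'frame': {'pcb', 'screen'}}

removed, sequence = set(), []
while len(removed) < len(removal_time):
    feasible = [c for c in removal_time
                if c not in removed and predecessors.get(c, set()) <= removed]
    best = min(feasible, key=removal_time.get)   # greedy choice at each step
    removed.add(best)
    sequence.append(best)

print(sequence)   # -> ['cover', 'pcb', 'screen', 'frame']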

2.2.3.3 Artificial intelligence (AI) methods

Artificial intelligence methods prune choice points in the search space in order to find the optimal solution. Various techniques are applied to generate constraints and reduce the size of the search space. As a result, the performance of the optimisation process in terms of stability and time consumption increases. However, this process needs a long execution time, which makes it unsuitable for online applications. Lambert (2003) reviews typical AI techniques which are currently emphasised, e.g. simulated annealing, genetic algorithms (GA), fuzzy sets, neural networks, multi-agent systems, and Bayesian networks. Furthermore, novel efficient algorithms, e.g. ant colony optimisation (Shan et al. 2007), case-based reasoning (Shih et al. 2006), and rule-based sequence generation on clustering graphs (Kaebernick et al. 2000), are continuously proposed.

2.2.3.4 Adaptive planner

A disassembly sequence is generated adaptively with respect to the uncertainties and unexpected circumstances encountered in the disassembly operation. A number of studies are included in this section since their approaches are expected to be applied in this research for handling the uncertainties. According to the literature, the research is conducted at two levels: 1) the process planning level and 2) the sequence planning and operational level.

First, at the process planning level, Tang (2009) proposes using a Fuzzy Petri net to model the dynamics in disassembly regarding uncertainties in the supplied products’ condition and human factors. The system is trained with data and feedback from the actual disassembly; as a result, it can select a disassembly plan for handling the uncertainties based on past experience. The concept is extended with a Fuzzy Coloured Petri Net for balancing a disassembly line (Turowski et al. 2005). Grochowski and Tang (2009) propose a learning approach using a disassembly Petri net (DPN) and a hybrid Bayesian network. Veerakamolmal and Gupta (2002) propose using case-based reasoning (CBR) to generate the disassembly plan for multiple products; the plan for a new product is adapted from an existing plan by deriving it from the base case. Gao et al. (2005) propose using a Fuzzy Reasoning Petri Net to adaptively generate the disassembly sequence during the disassembly process according to the conditions of the product observed in each state. The decision is thus made based on the component’s condition in regard to the value returned, the hazard level, and the disassembly cost.

Second, for the sequence planning and operational level, Salomonski and Zussman (1999) propose using a predictive model with a DPN to adaptively generate the plan according to the components’ conditions retrieved from real-time measurements conducted by a robot arm. Lee and Bailey-Van Kuren (2000) address the uncertainties at the operation level by automatically recovering from errors detected by visual sensing. In addition, Martinez et al. (1997) propose a dynamic sequence generation method that generates an optimal disassembly plan during operations to deal with unpredictable situations, e.g. failure to remove a corroded part, replacement of screws, etc. The system is modelled and controlled by a multi-agent system (MAS). ElSayed et al. (2012) use a GA to generate an optimal disassembly sequence according to the currently detected components and a supplied bill-of-materials (BOM). However, the original BOM must be preserved.

In conclusion, the existing adaptive planners deal with many types of uncertainty during the disassembly process. The uncertainties relate to variations in the component conditions that deviate from the ideal case. The existing knowledge is adapted into a new plan according to the current product’s condition in order to handle those uncertainties. Machine learning is also taken into account to improve the performance of the process from past experience. However, the structure of the product, e.g. the BOM or a computer-aided design (CAD) model, needs to be supplied a priori. No research proposes a methodology to tackle uncertainties of the full product structure in real time. In addition, the learning process has been implemented only at the planning level. Hence, learning at the operation level, such as of process parameters, should be further investigated.


2.2.4 Completeness of disassembly

Lambert and Gupta (2005) categorise disassembly plans considering completeness into two types: 1) complete disassembly and 2) incomplete disassembly. First, complete disassembly, or full disassembly, is the process that separates every single component of the product. It is rarely economical due to high cost and technical constraints, especially from the complexity and the uncertainties of end-of-life products. Second, incomplete disassembly separates only the desired specific components, i.e. disassembly to a desired depth. It is more practical regarding cost-effectiveness. Consequently, the methodology of selective disassembly, which is an incomplete disassembly technique, is taken into account.

Selective disassembly serves certain specific purposes, e.g. to recover components used as spare parts, to remove hazardous modules, and to improve the quality and quantity of shredder residue (Lambert 1999). It is broadly applied to disassembly for maintenance or disassembly of end-of-life products. The disassembly process is performed until the desired goal or depth of disassembly is reached. The outcome of selective disassembly can be one of the three following types (Lambert and Gupta 2005).

• Homogeneous components: parts that cannot be physically disassembled.
• Complex components: components comprised of a number of homogeneous components joined together with connective parts. Destructive disassembly is required to further separate these components but leads to excessive costs.
• Modules: sets of components that perform their own function and can be reused. They can be further disassembled by non-destructive or semi-destructive operations. However, the proper plan is to maintain their original condition and functionality.

Researchers currently focus on developing methodologies to find optimal disassembly sequences. Kara et al. (2005) propose a methodology for developing optimal selective disassembly sequences which is the reverse of the methodology for assembly presented by Nevins and Whitney (1989). The disassembly sequences are generated from the product specifications, namely the list of parts and subassemblies, the precedence rules, the product representation model, and the disassembly sequence diagram. Afterwards, the optimal sequences for removing the selected parts are obtained by removing invalid sequences according to liaison analysis. In regard to this concept, software that automatically generates and visualises optimal sequences of selective disassembly from specified constraints was developed by Kara et al. (2006) and Pornprasitpol (2006).

2.2.5 Disassembly operation (dismantling techniques)

Considering disassembly operations in the physical aspect, Lambert and Gupta (2005) categorise dismantling techniques into three types: 1) non-destructive disassembly, 2) semi-destructive disassembly, and 3) destructive disassembly. Suitable techniques must be chosen in regard to cost-effectiveness and specific purposes of disassembly. These techniques are explained in detail as follows.

2.2.5.1 Non-destructive disassembly

This approach does not damage the components, which is important for the reuse of components. The operations can be reversible or semi-reversible according to the type of connective component. Reversible operations (e.g. unscrewing) are generally easier than semi-reversible operations (e.g. detaching snap-fits). The operation cost is generally high due to the time spent dealing with uncertainties in the EOL condition, such as rust and partial damage. Even though a number of special disassembly tools, e.g. tools specifically for the disassembly of screws (Seliger et al. 2001) and snap-fits (Braunschweig 2004), have been developed to facilitate the operation, the operation cost is still high and makes the non-destructive approach economically infeasible (Duflou et al. 2008).

2.2.5.2 Semi-destructive disassembly

This approach destroys only the connective components by breaking, folding, cutting, etc. No or little damage occurs to the main components. This approach can increase disassembly efficiency and is potentially economically feasible in most cases. Many research works related to automatic disassembly operations use semi-destructive disassembly techniques to overcome the uncertainties in product condition and geometry. For instance, Karlsson and Järrhed (2000) drill out the heads of fixing screws during the disassembly of electric motors. Feldmann et al. (1996) developed the Drilldriver for removing fasteners without a working point, e.g. rivets, corroded screws, and spot-welds. Reap and Bras (2002) use a cut-off wheel and grinding disc to cut the screw heads during disassembly of a battery pack. However, the authors state that this technique should be avoided in the disassembly of complex products for reuse purposes.


2.2.5.3 Destructive disassembly

This approach deals with the partial or complete destruction of components that obstruct the progress of the disassembly operation. The components or irreversible fasteners, e.g. welds, are expected to be destroyed using various destructive tools, e.g. hammer, crowbar, grinder, etc. The operation is flexible, fast, and efficient. As a result, it is economically feasible and generally performed as industry practice. One of the common problems where the destructive operation is implemented is opening the covering parts, i.e. the housing, in order to reach the valuable components inside. For example, Feldmann et al. (1996) developed the SplittingTool for the removal of various forms of housing by breaking along the separating line. Uhlmann et al. (2001) use plasma arc cutting to destroy the product’s metal case in a clean environment.

In summary, the semi-destructive and destructive approaches are economically feasible due to the efficiency of the operation with respect to time consumption. On the contrary, the non-destructive approach is very expensive due to the operation cost but unavoidable for maintenance or the reuse of components.

2.2.6 Conclusion

Disassembly is a key step in an efficient EOL treatment process. However, it is usually economically infeasible due to the high operating cost related to the uncertainties in the products. Three issues need to be taken into account to develop proper disassembly plans, and the following strategy should be applied in order to potentially achieve economic feasibility. First, optimal or near-optimal sequences from DSP and DPP are necessary to optimise the process. Second, performing selective disassembly to a certain depth is more feasible than performing full disassembly. Third, the semi-destructive and destructive approaches are preferable due to their short operation time and effectiveness. These will be taken into consideration in the development of the disassembly system in this research. However, as discussed in the adaptive planner section (Section 2.2.3.4), the following issues should be further developed: 1) a disassembly sequence generation strategy that is capable of dealing with uncertainties in a product’s structure and 2) a learning methodology at the operation level.


2.3 Automatic disassembly cell

Nowadays, automatic systems, i.e. automation and robots, play an important role in the modern manufacturing industry because of three major advantages. First, in long-term usage, automated systems are more cost-effective than human labour, especially in developed countries where labour costs are high. Second, due to the characteristics of machines, they are capable of working repeatedly with high precision and accuracy. Third, they can work in hazardous environments which can be harmful to human operators, e.g. contaminated environments and radiation exposure (Craig 2005). One critically dangerous case is the disassembly of automotive lithium-ion batteries, which carries high-voltage risks (Schmitt et al. 2011). On the other hand, a major drawback is the lack of intelligence to deal with uncertainties and unpredictable circumstances. A significant amount of research in artificial intelligence has been conducted and applied in automatic machines in order to increase the level of intelligence and overcome specific problems. However, humans are still needed to supervise and make proper decisions in many cases. The operation can take the form of humans providing high-level commands to the machines or of human-machine collaboration.

In regard to applications in product disassembly, automation is one of the possible approaches to achieving the main goals of the disassembly process, including high flexibility and cost-effectiveness (Knoth et al. 2002). However, a number of difficulties arise because the process has to deal with many types of uncertainty. Strategies for dealing with uncertainties in disassembly automation have been investigated in many research works and can be presented from the following perspectives.

2.3.1 Modular systems and flexible disassembly cell configurations

One possible approach is modular systems, which are beneficial in terms of cost-effectiveness and technological aspects, as clearly explained by Kopacek and Kopacek (2006). In addition, Knoth et al. (2002) suggest configuring the disassembly cell as a modular system consisting of the following basic modules for flexible disassembly cells:

• Industrial robots or manipulating devices with special features, e.g. force control, path control, and high accuracy;
• Gripping devices operating with various geometries and dimensions of parts;
• Disassembly tools specially designed for robots and tasks;
• Feeding systems for the supplied products;
• Transport systems;
• Fixture systems dealing with various geometries and dimensions of parts;
• Manual disassembly stations;
• Intelligent control units dealing with data from sensor systems;
• Product databases;
• Vision systems for part recognition;
• Sensor systems, e.g. position, distance, force, and moment; and,
• Storage systems for parts and tools.

Kopacek and Kopacek (2001) propose the concept of disassembly families, a product grouping method that makes the process more flexible and economical. A group consists of different products which share certain features, e.g. size, and require the same disassembly operations. Therefore, the operation can be achieved with the same disassembly tools. This concept is applied in many research works, for instance Knoth et al. (2002) dealing with electrical and electronic products and Kopacek and Kopacek (2006) dealing with mobile phones.

2.3.2 Semi-automatic disassembly

Semi-automatic disassembly, or a hybrid system, integrates human operators and an automatic disassembly workstation, e.g. robots with disassembly tools, in order to improve the efficiency of the disassembly process. The majority of the process is carried out by automatic machines that facilitate the operation. The machines are used to improve efficiency and conduct hazardous tasks, e.g. heavy-duty destructive operations. Consequently, human operators can focus on more sophisticated tasks rather than performing the operational work (Knoth et al. 2002). Kim et al. (2007) state that the hybrid system is necessary in a flexible system in order to support various product families; manual operation is involved in case the automatic operations fail. Franke et al. (2006) also state that the hybrid system allows the disassembly process to be achieved economically, but manual operation must be involved because the major drawback of automated systems is their instability due to non-determined disassembly sequences and varying product conditions. A number of research works regarding semi-automatic disassembly have been conducted; examples are as follows.


Kim et al. (2007) developed a hybrid system that is flexible enough to disassemble a wide range of product families. The study focuses on the automatic generation of plans and the control sequence of a system consisting of three robot arms and conveyor belts. The robots are responsible for heavy-duty tasks, e.g. plasma cutting of the sidewall of washing machines. Before the process starts, the system evaluates the degree of autonomy of the overall task from the product information and the availability of the system. Consequently, the tasks can be properly distributed to the manual and the automatic workstations. Kim et al. (2009) extended this concept to develop a disassembly line for LCD screens (see Figure 2.10). Screw removal and object handling operations are performed by Selective Compliance Assembly Robot Arms (SCARA) while the rest of the process is performed manually.

Figure 2.10: Hybrid system for LCD screen disassembly (Kim et al. 2009)

Zebedin et al. (2001) extend the semi-automatic disassembly cell with a modular concept by configuring the cell controller using hierarchical control and information distribution of the disassembly process. The system is designed for extracting embedded components from printed circuit boards (PCBs). Regarding modular systems, the working components (e.g. robot arms, part feeders, fixtures, and the quality control system) are grouped as subsystems. The cell controller is used to supervise the communication and co-ordination tasks in and between each subsystem. Machines can operate automatically, and a human operator can command and monitor the system through the user interface.


In conclusion, a major advantage of semi-automatic disassembly cells is the flexibility to deal with uncertainties and variation in the products. Economic feasibility can be achieved since the automatic workstation can perform the tasks more efficiently. Meanwhile, humans are involved in the top-level control or when the automatic operations fail. However, the operation cost due to human labour still exists in this concept. Economic feasibility may not be achievable, especially in developed countries where labour costs are very high. Therefore, this concept should be developed towards a human-free disassembly environment where a higher degree of autonomy is needed. Fully-automatic disassembly is explained in the following section.

2.3.3 Fully-automatic disassembly

In comparison with semi-automatic disassembly, the degree of autonomy is increased by incorporating sensor modules, prior knowledge of products, and a high-level task planner. Across a number of studies, the system configuration is similar, typically consisting of four components: 1) robot arms, 2) a vision system, 3) disassembly and handling tools, and 4) other optional sensors. A number of selected research works using complete disassembly cells are presented in this section. Moreover, since the vision system is typically used in fully-automatic disassembly, much research has focused on this component; it is explained in the following sections.

Torres et al. (2004) developed one of the most complex and advanced disassembly cells for disassembling computers (see Figure 2.11). This disassembly cell consists of two industrial articulated robots equipped with force/torque sensors and selected interchangeable disassembly tools. Both robots work co-operatively through a task planner that automatically generates paths and trajectories based on a graph model proposed by Torres et al. (2009). Other related works are as follows. Gil et al. (2007) implemented a multi-sensorial system that combines information from a tactile sensor and the vision system in order to perform visual servoing of the robot; the conceptual test was conducted by removing a bolt from a straight slot. Gil et al. (2006) focus on a vision system that detects partial occlusions of the components to simplify the disassembly task; the conceptual test was done on the detection of circuit boards. In conclusion, this system exhibited an ability to solve the uncertainty problem at the operational level by using an integrated sensor system. However, regarding the high-level planning part, the disassembly sequence plan is based on precedence relations among assemblies (Torres et al. 2003). This method can generate the DSP automatically, but the user still needs to indicate the precedence relations and specific information about the product structure a priori. The input from the vision system only indicates the detailed geometry used at the operation level, and no feedback is sent back to the higher-level planner to acknowledge the current situation of the process. Therefore, it can be concluded that the system is able to disassemble only products whose structure is known.

(a) Cooperative disassembly operation (b) Operation performed with multi-sensor

Figure 2.11: Robotic system for disassembly of computers (Gil et al. 2007)

ElSayed et al. (2012) developed a disassembly system for EOL computers for reuse and recycling purposes. The system consists of an articulated robot and a camera system. The system deals with uncertainties via two components: 1) a visual sensor and 2) an online genetic algorithm (GA). First, the visual sensor provides a 2.5D map of the detected component by integrating a 2D camera with a laser range sensor. Template matching is used to recognise and locate the components according to the 2D templates supplied in a BOM. Second, the operation sequence for removing the detected component is generated by an online GA as the process proceeds. As a result, an optimal and/or near-optimal sequence is generated by minimising travel distance and the number of disassembly method changes. In conclusion, the key feature of this system is the ability to adapt the plan according to the current situation obtained by the vision system. However, a precise BOM that represents the product structure needs to be supplied a priori. The online visual input is only used to identify the operation options, e.g. accessibility, for the expected components. The BOM cannot be modified even if the actual product structure is found to be inconsistent with the predefined one. This limits the flexibility to deal with unknown model samples. In addition, the uncertainties at the operational level were not clearly explained in this article.
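As a rough sketch of the template-matching step (OpenCV in Python; the file names and threshold are placeholders, not values from the article):

import cv2

scene = cv2.imread('product_top_view.png', cv2.IMREAD_GRAYSCALE)    # placeholder image
template = cv2.imread('screw_template.png', cv2.IMREAD_GRAYSCALE)   # placeholder BOM template

# Normalised cross-correlation; a higher score means a better match.
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

if max_val > 0.8:                     # empirical acceptance threshold
    h, w = template.shape
    print('component found at', max_loc, 'size', (w, h))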


Büker et al. (2002) developed a disassembly system for wrecked cars as part of Project DEMON funded by the German Ministry for Education and Research (523-4001-01 IN 506 B 2). This project focuses on the disassembly of automotive wheels with variation in the size of the wheels, the number of bolts, and the position of the wheel. Active stereo cameras are used to reconstruct the 3D structure of the product. PCA is used to recognise components that are potentially difficult to recognise due to uncertainties in the EOL condition, e.g. rust. In related work, Büker and Hartmann (1996) propose using a knowledge-based approach with a neural network to address the problem of occlusion in complex scenes. In short, this research focused on increasing the flexibility of the vision system to deal with uncertainties in the EOL condition. The product structure is simple and well defined; no complex disassembly planning issues were involved in this research.

Furthermore, some interesting developments at the operation level are mentioned in the following literature. First, Merdan et al. (2010) propose an ontology-based architecture with a multi-agent system (MAS) for the disassembly of digital cameras. The ontology is used to describe the contribution of each operating module to the current task, so an optimised tool-path can be generated. Second, Bailey-Van Kuren (2006) presented a strategy for real-time tool path generation and error recovery.

2.3.4 Conclusion

In conclusion, a number of automatic disassembly cells have been developed in order to reduce the duties of human operators in the disassembly process. However, human operators are still needed in order to address the unresolved uncertainties. This human involvement is implemented in semi-automatic systems, which have proved to be more economically feasible. Among the existing fully-automatic systems, most research works focus on the detailed operation, and a number of techniques using various sensors, e.g. cameras and force sensors, have been developed to overcome the uncertainties of the products to be disassembled. At the planning level, it is clear that prior knowledge of the product structure needs to be supplied to the system in certain forms, e.g. BOM, precedence relations, etc. The high-level planner needs this model-specific information to generate the sequence plan and the operation plan. ElSayed et al. (2012) proposed an interesting approach that takes the current disassembly condition into account when generating the optimal plan. However, the original predefined structure still cannot be adapted by this input feedback.

2.4 Vision system in disassembly

This section focuses on the application of vision systems in the automatic and semi-automatic disassembly cells described in Section 2.3. In general, machine vision serves a large number of applications in robotics and autonomous machine research, e.g. the assembly of electronic products, process control, and quality control (Lowe 1998).

From a case study of the disassembly of used cars, Tonko et al. (2009) summarise the vision-based constraints that are commonly encountered in the disassembly process: 1) detection of rigid objects, 2) objects located in front of a complex background, 3) partial occlusion, 4) 6-DOF estimation of objects, and 5) low-contrast images due to covering mixtures, e.g. oil and dirt. In this research, the vision system must be able to handle these conditions under controlled conditions, e.g. uniform lighting. However, these constraints might differ slightly in the disassembly of other products. In summary, according to the general machine vision problem, the research is conducted in the following disciplines:

• Object recognition and localisation;
• Optical problems, e.g. distortion, shading, contrast, etc.;
• Object and image quality, e.g. clutter, occlusion, etc.;
• Camera configurations, frame mapping, and coordinate systems; and,
• Modelling and representation of objects.

Disassembly is considered within these disciplines. The literature is classified as follows: 1) recognition, 2) localisation, 3) configuration of cameras and the coordinate system, and 4) model representation. In addition, the computer vision library that is expected to be used for programming is also reviewed.

2.4.1 Recognition

The object recognition process serves two purposes: 1) to distinguish between a product and a component and 2) to detect the desired component to be disassembled (Torres et al. 2004). In general, the recognition process is based on the corresponding information stored in the database, i.e. geometry and relations. This data refers to the characteristics of each object, which are matched to the features of the object detected in the acquired image. The robustness is expected to be improved by machine learning (ML). The recognition parts of two main projects are reviewed as follows.

Büker et al. (2001) present an effective recognition framework which is a combination of four techniques: 1) contour-based recognition, 2) feature grouping, 3) PCA-based recognition, and 4) knowledge-based control. First, contour-based recognition is proposed to deal with the trade-off between tolerance and separability of object detection (the tolerance-separability problem) using region-based and edge-based representations. The authors suggest a simple-complex cortical neuron representation, similar to the human vision system, in order to make the edge-based representation more tolerant; the number of mismatches in background-clutter scenes is reduced. Second, feature grouping increases the rate and efficiency of recognition by grouping contour elements into higher-level features. The entire object is modelled by sub-objects which are grouped according to their arrangement. Third, PCA is used to accurately determine the location of the component. Because the real object may be rusty and may not show enough clear contours, PCA based on gray-value image samples is applied. Fourth, regarding knowledge-based control, the authors refer to the work of Büker and Hartmann (1996). The knowledge database is organised as a hierarchical tree and searched by an artificial neural network technique. Therefore, the recognition process is faster and more robust to partial occlusion of the objects, especially in 2D views. For 3D views, the knowledge database is an integration of many 2D views, so the process is more complicated. These techniques were validated in the detection of bolts on an automotive wheel: bolts in various conditions were detected at a 98% rate with approximately 2 mm position error.

Gil et al. (2007) present methods corresponding to this framework from another viewpoint. Pattern recognition and template matching are used for region detection. A Canny detector is used for contour extraction. The Douglas-Peucker algorithm (DP) and the Progressive Probabilistic Hough Transform (PPHT) are used to fit detected edges with polygonal shapes; they are also widely used to extract primitive geometries. The proposed techniques are used to detect various components in electronic products, namely PCBs, screws, cables, and battery covers. This framework is effective given the model of each component.
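The following minimal sketch strings the algorithms named above together as standard OpenCV (Python) calls; the input file and all thresholds are illustrative assumptions, not the parameters used by Gil et al.:

import cv2
import numpy as np

img = cv2.imread('back_cover.png', cv2.IMREAD_GRAYSCALE)   # placeholder image
edges = cv2.Canny(img, 50, 150)                            # contour extraction

# Fit contours with polygons via the Douglas-Peucker algorithm
# (cv2.findContours returns two values in OpenCV 4).
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
polygons = [cv2.approxPolyDP(c, 0.01 * cv2.arcLength(c, True), True)
            for c in contours]

# Extract straight-line primitives with the PPHT.
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                        minLineLength=40, maxLineGap=5)
print(len(polygons), 'polygons,', 0 if lines is None else len(lines), 'lines')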


2.4.2 Localisation

The localisation process locates the components or significant features on the object. Gengenbach et al. (1996) summarise the objects to be located in the disassembly process: 1) work pieces, 2) connective components, and 3) disassembly tools. Regarding sensing techniques, Büker et al. (2001) summarise two main approaches: 1) reflected light and 2) active sensors. First, 2D images of light reflected from illuminated objects are captured and processed in order to obtain the position. In the case of stereo vision, a 3D model can be reconstructed from multiple 2D images. This technique is accurate and widely used in many research works (Büker et al. 1999, Tonko and Nagel 2000, Gil et al. 2007). The authors presented promising accuracy of 1-2 mm at the effective distance, with the error increasing at farther locations. Second, an active sensor is a sensor that transmits some kind of energy, e.g. ultrasonic, electromagnetic, coded light (Berger and Schmidt 1995), laser, infrared, etc. A major problem occurs when measuring dirty or oxidised surfaces. Moreover, the integration of distance data and image data is complicated in some cases.

Apart from the literature, a low-cost infrared-based depth camera, part of the Microsoft Kinect sensor (Microsoft Corporation 2011), which has recently been commercialised, can be considered as an alternative. The distance data is given by a depth image, which is fast and easy to use. Since this sensor was initially commercialised as a game controller, no technical data is provided by the manufacturer, but its characteristics have been investigated by Khoshelham and Elberink (2012). However, no application in the disassembly context has been investigated yet.
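A depth pixel (u, v) can be back-projected to a 3D point in the camera frame with the standard pinhole model; in the sketch below the intrinsic values are placeholders, not the Kinect's actual calibration:

def pixel_to_point(u, v, depth_mm, fx, fy, cx, cy):
    # Return (x, y, z) in mm in the camera frame for pixel (u, v).
    z = depth_mm
    x = (u - cx) * z / fx     # horizontal offset scaled by depth
    y = (v - cy) * z / fy     # vertical offset scaled by depth
    return x, y, z

print(pixel_to_point(400, 300, 850.0, fx=580.0, fy=580.0, cx=320.0, cy=240.0))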

2.4.3 Configuration of cameras and coordinate system

Regarding calibration, Torres et al. (2004) point out two technical problems: 1) optical problems (intrinsic) and 2) different co-ordinate systems (extrinsic). First, the optical problems involve the internal characteristics of the cameras, e.g. focal length, distortion, and centre point. This problem is resolved by calibrating the internal parameters of the camera, e.g. the model of the lenses. Second, the ambiguity arising from different co-ordinate systems can be resolved by rigid transformations (translation and rotation) between the systems. The extrinsic parameters can be derived from the absolute positions of the working elements, e.g. robot base, robot configuration, camera, and worktable. Subsequently, the homogeneous matrix representing the mapping between each pair of co-ordinate systems can be derived.
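As a minimal illustration (Python with NumPy; the rotation and translation values are hypothetical), a point measured in the camera frame can be mapped into the robot-base frame by composing homogeneous matrices:

import numpy as np

def homogeneous(R, t):
    # Build a 4x4 homogeneous matrix from a 3x3 rotation R and translation t.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical extrinsics: camera pose expressed in the robot-base frame (mm).
T_base_camera = homogeneous(np.eye(3), np.array([500.0, 0.0, 800.0]))

p_camera = np.array([120.0, -40.0, 650.0, 1.0])   # point seen by the camera
p_base = T_base_camera @ p_camera                 # the same point in the base frame
print(p_base[:3])                                 # -> [ 620.  -40. 1450.]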

With respect to the position control of the robot, visual servoing is broadly used to locate the position and orientation of the robot arm in 3D. The information can be presented in two forms: 1) image-based (pixels) and 2) position-based (mm, degrees). Tonko et al. (2009) suggest using the position-based approach because it represents more explicit knowledge for position control. Furthermore, they discuss and suggest camera-robot configurations (eye-hand configurations) that allow multiple objects to be tracked.

2.4.4 Model representation

A model is used to describe the information about an object prior to the disassembly process. It is prior knowledge in a database, where the information is possibly built from the object recognition process. Torres et al. (2004) explain two approaches: 1) the relational model and 2) the geometric model. First, a relational model represents the relationships among components via their connections. These are represented by hierarchical graphs, which remain simple even as the number of components increases. Second, a geometric model represents the product in multiple dimensions. It presents the physical information of each component (e.g. shape and size) and the relations between components (e.g. location and contact surface). Tonko and Nagel (2000) found that most geometric models of rigid and valuable components can be depicted by polyhedra, non-polyhedra, quasi-polyhedra, and other primitive geometries.

Hohm et al. (2000) suggest modelling the environment by grouping objects into two types: 1) active objects (the main parts of the product) and 2) passive objects (the connective components, e.g. screws, cables, and snaps). Moreover, Jorgensen et al. (1996) suggest organising a number of product models as a hierarchical tree to facilitate the object recognition process; each node represents the attributes of a sub-assembly or component. Furthermore, Büker et al. (2001) present an approach to reconstruct the 3D model by active stereo vision. This method benefits the operation by representing a clear scene of the position and orientation of the object, especially in visual servoing, e.g. to prevent collision between the product and the robot arm. However, it introduces considerable computational complexity.


2.4.5 Computer vision library and relevance algorithms

Computer vision libraries are available for various language platforms, e.g. C/C++, Python, Java, Matlab, etc. The libraries commonly used in research and development are: 1) Open Source Computer Vision (OpenCV) (Bradski 2010) for C/C++ and Python, 2) ImageJ for Java (Rasband 2012), and 3) the Image Processing Toolbox for Matlab (MathWorks 2009). OpenCV is selected for this research since it provides most of the algorithms used in the aforementioned literature, e.g. PPHT, template matching, camera calibration, ML, etc. Moreover, it is compatible with C/C++, which is typically used in robotics research due to its fast processing and accessibility at the machine level. Bradski and Kaehler (2008) provide one of the most complete resources on both the theoretical background and the practical implementation of the algorithms in OpenCV.

2.4.6 Conclusion

In conclusion, a vision system is commonly used in automatic disassembly cells. It typically addresses the uncertainties in the products’ conditions, e.g. quantity of components, location, defects, etc. In the literature, a number of standard algorithms are combined and implemented to achieve particular tasks; the detection process is designed case-by-case according to each application and situation. Therefore, the methodology presented in the literature can be used as a guideline. In addition, a number of the presented object recognition techniques are based on a predefined template which contains the properties of the features to be detected. The detection accuracy and flexibility depend on the user-defined properties. Therefore, there is a possibility of applying this methodology in this research in order to deal with the variation in the appearance of the components.

2.5 Cognitive Robotics

A cognitive robot is an autonomous robot with high-level cognitive functions which allow it to reason, revise, perceive changes in unpredictable environments, and respond in a robust and adaptive way in order to complete its goals (Moreno 2007). Cognitive robotics is used to control the main behaviour of the system in this research. In this section, an overview of artificial intelligence (AI) and the concept of cognitive robotics are given. Afterwards, the practical approach for developing the cognitive robotic agent, regarding the language framework and the situation calculus, is explained.


2.5.1 Overview of Artificial Intelligence and autonomous robots

Russell and Norvig (1995) explain artificial intelligence as a large research field that aims to study thought processes, reasoning, and the behaviour of machines. The authors group the research into six main areas: 1) natural language processing, 2) knowledge representation and reasoning, 3) automated reasoning, 4) machine learning, 5) computer vision, and 6) robotics. These features are integrated and presented in an Intelligent Agent (IA) which is designed to perform specific tasks. The authors classify IAs according to the function mapping in their agent programs:

• Simple reflex agent (reactive agent): performs behaviours based on its perception only (condition-action rules). The agent function can be carried out if the environment is fully observable. The limitation is that the agent has no memory (state); hence, it can perform only simple tasks that require no decisions based on information from previous observations.
• Model-based reflex agent: performs its functions in a partially observable environment because there is a model describing part of the incomplete world. The agent has memory of previous states; therefore, it can perform tasks based on information from the past. However, since the agent has no information about future states, problems arise when it encounters a complex situation with many possible action sequences.
• Goal-based agent: a model-based reflex agent extended with goal information, which describes desirable situations. This information provides possible choices of actions that the agent can choose in order to achieve the goal state. Planning is involved in this agent type.
• Utility-based agent: the measurement of desirability is performed by a utility function in order to choose the best action sequence from all possibilities. The utility function considers the most appropriate action on account of the given state information, e.g. conflicts between actions, importance of the goal, and success of the goal. This function is added to the goal-based agent.
• Learning agent: this agent has an extended technique to improve its existing knowledge. Learning is important because it is impossible to manually define every single aspect of knowledge in detail. Moreover, learning allows the system to adapt to unknown environments beyond its initial knowledge.


An autonomous robot is an intelligent robot with some degree of autonomy. It can perform tasks by itself with minimal or no human guidance. These tasks are performed adaptively according to information about the unstructured environment sensed by its own perception. In this context, the IA can be considered the core of the autonomous robot.

Figure 2.12: Complexity space for intelligent agent (Müller 2012)

However, Müller (2012) discusses the level of autonomy a system requires in order to complete real-world problems. A comparison between classical artificial systems and natural cognitive systems is made (see Figure 2.12). Classical AI performs a complex task effectively in a non-complex environment; the cognitive system performs the other way round. Therefore, artificial cognitive systems have been proposed to fill this gap by increasing the level of autonomy. As a result, the system is more flexible and reliable. Cognitive robotics, one form of artificial cognitive system, is explained in the following sections.

2.5.2 Cognitive robotics overview

Levesque and Lakemeyer (2007) present one of the practical approaches, using knowledge representation and reasoning to solve the problems of incomplete knowledge and a dynamic world that are encountered by an autonomous robot. The authors divide the study of cognitive robotics into three aspects: 1) knowledge representation (KR), 2) reasoning, and 3) high-level control. Sardina (2004) provides a clear overview of an architecture for cognitive robotics based on this approach. The architecture is described as a relation between knowledge, perception, and action. In brief, the robot can interact with its environment through sensors and effectors. The robot also has behaviours organised by high-level programming. The program interpreter, based on logic programming, generates action sequences corresponding to the desired conditions, i.e. initial states, preconditions and effects of primitive actions, exogenous events, and the results of sensing (see Figure 2.13).

Figure 2.13: An architecture of cognitive robotics (Sardina 2004)

Zaeh et al. (2007) describe the architecture in a slightly different way, as in Figure 2.14. This architecture is based on a perception-action loop, which is comparable to the previous architecture but involves human interaction. The relation between humans and the cognitive robot is different from classical AI: it should occur in the form of interaction rather than control in order to increase the level of intelligence and flexibility of the system (Müller 2012). This architecture was originally implemented as part of the Cognitive Factory, in which human operators can work side-by-side with automation that is flexible, reliable, and safe. The Cognitive Factory is explained in Section 2.5.4.


Figure 2.14: Cognitive system architecture with close-perception action loop (Zaeh et al. 2007)

Heilala and Sallinen (2008) describe a framework for cognitive robotics from another viewpoint, considering it as a robotic agent. The framework can be categorised into six fields according to their features, as follows.

• A mechanical part dealing with movement and affecting the environment;
• A software and hardware part acquiring information from the environment;
• A software part dealing with the representation of the environment and the way the robot can interact with it;
• A software part dealing with the specification of the task that the robot should perform based on the previous representation;
• A software and hardware part used to compute the behaviour of the robot based on the task specification and representation; and,
• A software part interfacing between the sensors/actuators and the reasoning components.

This architecture and the hardware-software framework are taken into account to develop the disassembly cell equipped with a cognitive robotic agent. Details on the software and language platform are described in the next section.

2.5.3 Action programming language

This section describes a high-level language, and its related theory, used in programming the high-level agents of cognitive robotics. First, a brief theory of the situation calculus is presented. Second, the programming language named Golog, which is used for the high-level programming, is presented.

2.5.3.1 Situation calculus

As described by Lin (2007), situation calculus is a logic-based (first-order and second-order) language widely used to describe the behaviour of cognitive robots and to represent a dynamically changing world. It was introduced by McCarthy (1963). Reiter (2001) represents the behaviour of agents by four main components: 1) situations, 2) actions, 3) fluents, and 4) successor state axioms.

• Situation: a sequence of actions representing the history of actions performed in the world. A dynamic world is thus modelled as a series of situations, each being a sequence of actions. However, a situation can never be completely described due to the enormous number of states in the entire world history; hence, only the relevant facts of a situation are considered. Situations are categorised into two types: 1) initial situations and 2) successor situations. The initial situation is an empty sequence of actions, denoted by S0. A successor situation is the result of performing an action in an earlier situation. For example, the successor situation S = do(a,S0) denotes that S is the situation resulting from performing action a in the initial situation S0, where do is a binary function.
• Action: a function performed in one situation which changes it into another situation. Precondition requirements must be satisfied in order to perform certain actions; these states of the world are modelled by precondition axioms. After the actions are performed, the world changes, and the states affected by the actions are modelled by effect axioms. In general, a binary predicate Poss(a,S) is used to check whether it is possible to perform an action a in situation S.
• Fluent: a predicate that describes properties of the world related to a situation. The truth values of fluents are affected by actions. Fluents are categorised into two types: 1) relational fluents and 2) functional fluents. Relational fluents represent relations among arguments whose truth value changes from one situation to another. For example, a relational fluent on(r,x,S) means that an object r is on location x after performing the action sequence of situation S. Functional fluents represent functions of arguments whose value changes from one situation to another. For example, a functional fluent shape(x,S) denotes the shape of object x in the state reached by performing action sequence S.
• Successor state axiom: describes the changes of the world resulting from an executed action under the current conditions. An axiom of the form f(x, do(a,S)) ≡ Φf(x, a, S) is a successor state axiom for the fluent f, characterising how the free variables x are subject to change when the primitive action a has been executed in situation S. A minimal illustration of these notions is given after this list.
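The sketch below models situations as action sequences in Python, with a precondition check and a fluent defined in a successor-state style; the actions and the fluent are hypothetical examples, not the thesis's actual domain axioms:

S0 = ()                                 # initial situation: the empty action history

def do(a, s):
    # Successor situation: the history s extended by action a.
    return s + (a,)

def poss(a, s):
    # Precondition axiom: removing the cover requires prior unscrewing.
    return a != 'remove_cover' or 'unscrew' in s

def cover_off(s):
    # Relational fluent in successor-state style: true iff the history
    # contains an action that removed the cover.
    return 'remove_cover' in s

s1 = do('unscrew', S0)
assert poss('remove_cover', s1)
s2 = do('remove_cover', s1)
print(cover_off(s2), cover_off(S0))     # -> True False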

2.5.3.2 Golog

Golog is a high-level programming language based on the situation calculus. This action programming language allows complex behaviours to be described with actions and fluents. A Golog program represents dynamic worlds based on knowledge about the preconditions and effects of actions and the initial state of the world; the knowledge is supplied by the user as axioms. Consequently, the program is able to reason about the state of the world and provide possible solutions for the behaviour to be committed (Levesque et al. 1997). The program consists of two parts: 1) a sequence of procedure definitions and 2) the main body of the procedures. The program is constructed with basic syntactic elements used to control action sequences and nondeterministic choices. The basic action theory, defined by action preconditions and the effects of execution in a situation, can be formulated as part of the Golog program (Reiter 2001).

The Golog interpreter can be operated on many language platforms. One of the common development environments is Prolog (Covington et al. 1996), a logic-based declarative language that is broadly used in the AI field. The language was first introduced by Lespérance et al. (1994), and much research has been carried out continuously to improve its features. Reiter (2001) gives a brief summary as follows. Soutchanski (2001) proposes a new approach of online decision-theoretic and incremental execution; the author also presents a sensing action that is used to sense the results of stochastic actions. De Giacomo et al. (1999) propose using an incremental-sensing method and point out that sensing is necessary in order to deal with incomplete information, especially in large agent programs. Baier and Pinto (2003) present an approach to using planning in Golog by extending the situation calculus with uncertainty and the non-deterministic effects of actions. A number of extended versions of Golog have been developed; some of the notable versions and their features are summarised as follows. ConGolog is an extension of Golog that supports concurrent programming (De Giacomo et al. 1997, Lespérance et al. 2000). IndiGolog is an extension of ConGolog dealing with online execution with sensing (Sardina et al. 2004), which is directly related to execution monitoring (De Giacomo et al. 1998). ReadyLog is an extension of Golog that supports decision making in real-time dynamic domains (Ferrein and Lakemeyer 2008). Moreover, Golog has been extended into several other versions, e.g. ccGolog, pGolog, DTGolog, sGolog, and LegoLog, which are listed by Sardina (2004). In this research, IndiGolog is used since it is designed for online execution and sensing. De Giacomo et al. (2001) provide the theoretical background, programming techniques, and applications of IndiGolog.

2.5.4 Applications of cognitive robotics

A number of research works regarding cognitive robotics have been conducted in recent years. Cognitive robotics has been implemented in many fields, e.g. biologically inspired robots, navigation, etc., in order to improve the flexibility of systems in dynamic environments. One of the emphasised areas is industrial application, which is reviewed as follows.

CoTeSys (Cognition for Technical Systems) develops the “Cognitive Factory” that applies the functions of cognitive robotics to industrial activities, especially production (Beetz et al. 2007, CCRL 2007, Zäh 2009). The project integrates key activities in production, namely 1) monitoring and planning in the production system, 2) the condition of material and parts in the workplace, 3) work piece assembly, and 4) human-machine co-operation. With respect to cognitive robotics, four main functions are implemented, namely 1) perception, 2) learning, 3) knowledge, and 4) planning. These functions allow the robots to monitor the product and carry out the assembly process with respect to prior knowledge of the assembly plan. A learning module makes the system self-optimising and derives better assembly sequences from past incidents. Consequently, an optimal sequence plan can be searched for autonomously by the planning module. With respect to skill acquisition, self-adaptation, and self-modelling, the Cognitive Factory is flexible enough to operate with a variety of products. In conclusion, in comparison to classical manufacturing, the Cognitive Factory can achieve higher productivity and flexibility in the production line (see Figure 2.15).


Figure 2.15: Classification of the type of manufacturing (Bannat et al. 2011)

Moreover, an “industrial ubiquitous assembly robot” is reported by Heilala and Sallinen (2008). This autonomous robot is implemented in the manufacturing industry in order to build a smart manufacturing environment. The concept integrates a number of sensors and a sensor network sensing the environment in order to achieve cognitive capability. The robot is an integration of three major areas: 1) artificial intelligence, 2) ubiquitous computing, and 3) industrial robotics. This application is expected to be used in human-collaboration tasks in order to increase productivity. Human operators play an important role in complicated decision making and in providing knowledge for the robot to learn. Meanwhile, the system is robust enough to support a human’s normal working behaviour and unpredictable situations, e.g. human error.

In addition, some interesting applications of cognitive robotics in other areas are as follows. Beetz et al. (2007) present the Assistive Kitchen, a cognitive robot that can work with people in an environment of daily life, such as a kitchen; this project is conducted by CoTeSys. Project “RHINO”, presented by Burgard et al. (1999), is an interactive museum tour guide that is flexible enough to deal with uncertainties in a dynamic environment and has full interaction with people. “Robonaut”, developed by NASA-DARPA, is a robot working cooperatively with human astronauts. Cognitive robotics is implemented in its high-level control in order to provide complex cognitive functions reflecting adaptive behaviours, for instance self-adaptation and developing skills based on experience (NASA 2008, Diftler et al. 2012).


2.5.5 Conclusion

Cognitive robotics has recently been introduced to the field of AI. It can be implemented on a classical autonomous system to achieve a higher level of flexibility and autonomy in a complex dynamic environment. The framework, architecture, and implementation are presented in the literature. With respect to the field of disassembly, much research regarding DPP, DSP, and automatic disassembly cells has been conducted using classical AI. However, the flexibility of the existing systems to deal with the variation in the EOL products returned is very limited. The limitation occurs at both the planning and operational levels. In regard to cognitive robotics, the flexibility to handle uncertainties in various problem domains has been proven. Therefore, there is a high possibility of solving the uncertainty problem in the disassembly domain, which has not yet been addressed in the existing research works. The cognitive functions and the architecture that allow the system to effectively interact with the dynamic world are the keys to success.

2.6 Product case-study: LCD screens

This section gives information about the case-study product, Liquid Crystal Display (LCD) screens, from the following perspectives: 1) an overview of the impact and EOL treatment and 2) the disassembly process. It should be noted that the terms for the components differ between Ryan et al. (2011) and Kim et al. (2009); this thesis defines the terms according to the latter. Therefore, some technical terms in this review have been changed from the original articles.

2.6.1 End-of-Life treatment of LCD screen monitors

Cathode Ray Tube (CRT) monitors have been dramatically replaced by LCD screens over the past 10 years. Kernbaum et al. (2009) state that more than 120 million units of LCD screens were sold worldwide in 2008 and that they were expected to be used in approximately 90% of desktop computers in 2010. In addition, Ryan et al. (2011) predict that sales of LCD screens will reach US$80 billion in 2012, approximately four times the sales of other types of monitor (see Figure 2.16). Therefore, the number of disposals is continuously increasing. For instance, in Germany alone, it is expected that more than 4,000 tons of LCD screen monitors will be disposed of by 2012 (Kernbaum et al. 2009). Therefore, the impact of this product is increasing significantly, and EOL treatment needs to be considered.


Figure 2.16: Predicted sales of types of monitor (Ryan et al. 2011). Legend: CRT = Cathode Ray Tube; LCD = Liquid Crystal Display; PDP = Plasma Display Panel; CRT RP = Rear-projection CRT; MD RP = Modern Rear Projection; OLED = Organic Light-Emitting Diode.

According to the European Directive on WEEE (Parliament 2003), recycling and reusing of materials and components of LCD screens should reach 65%, and the rate of recovery should reach 75% by weight. Ryan et al. (2011) analyse the structure of the product from 17 different models of LCD screens. The weight contribution of each component is shown in Figure 2.17, in which the overall weight is mostly contributed by the main components, including the lightbox casing (comparable to the back frame of the LCD module in Kim et al. (2009); see Figure 2.19b), the PCB mounting panel, the back cover, and the PCBs. The authors conclude that quick and easy disassembly strategies to reach those key components are needed in order to achieve the target percentage while satisfying the cost constraint.

Figure 2.17: Weight contribution of the components in LCD screen (Ryan et al. 2011)


Figure 2.18: Distribution of the material in LCD screen (Franke et al. 2006)

Franke et al. (2006) analyse the material distribution as in Figure 2.18. They conclude that the target of 78% by weight can be achieved if material fractions, including ferrous metal and halogen-free plastic, are separated before recycling. In addition, according to the WEEE directive, removal of three potentially hazardous components, namely 1) Cold Cathode Fluorescent Lamps (CCFL), 2) the LCD glass panel, and 3) PCBs, must be taken into consideration (Ryan et al. 2011).

First, the disassembly of CCFLs is difficult due to their fragility and strong connection with other components in the LCD module. Special treatment of this part is needed due to the small amount of mercury contained in CCFLs. The amount of mercury can be estimated from a prediction that 290-480 kg of mercury would be disposed of from 80 million LCD screens by the year 2010 (Franke et al. 2006). However, no literature investigates the exact amount of mercury contained in each screen. Second, the LCD glass panel is potentially an environmental risk. According to the WEEE directive, an LCD that is larger than 100 cm2 must be removed from any WEEE. Therefore, the LCD module must be disassembled to retrieve the LCDs. This is usually carried out destructively (Kernbaum et al. 2009). Third, the PCBs need to be separated since they contain several types of metal and thermoplastic material which are difficult to recycle. According to the WEEE directive, any PCB that is larger than 10 cm2 must be removed from the WEEE (Ryan et al. 2011).

Moreover, a number of researchers examine LCD screens from different viewpoints. For example, Kernbaum et al. (2009) describe practical testing for remanufacturing and the physical detail of components and connective parts of LCD monitors. Li et al. (2009) study material recovery from LCDs. Kim et al. (2009) propose an emulation-based control of a disassembly line for LCD monitors. Bogdanski (2009) examines a recycling concept for LCD screens. Shih et al. (2007) use a heuristic method to find the optimal disassembly sequence with respect to the profit returned and environmental impact.

2.6.2 Disassembly of LCD Screens

Kim et al. (2009) examine 17 different models of LCD screens from various manufacturers. The common structure of LCD screens can be presented at two levels: 1) module level and 2) component level. Disassembly at the module level can be examined by the selective disassembly approach. The product typically consists of seven modules: 1) a front cover, 2) a back cover, 3) an LCD module, 4) a carrier, and 5)-7) three PCBs (a power supply-inverter board, a switch board, and a controller board) (see Figure 2.19a). With respect to the component level, the LCD module can be further disassembled into 9 components (see Figure 2.19b). The time for disassembling to the module level ranges between 3.6 and 8.7 minutes, depending on the design and the quantity of connective components of each model.

(a) Modules in LCD screens (b) Components in an LCD module

Figure 2.19: Structure of LCD screen (Kim et al. 2009)

Ryan et al. (2011) also examined 17 different models of LCD screens, with screen sizes ranging from 20” to 40”. The authors found that the general structure of LCD screens is similar among the different manufacturers. The order of the components (from front to back) is: 1) front cover, 2) LCD module, 3) carrier, 4) PCBs and cables, 5) back cover. However, the location of the components, e.g. PCBs, screws, and cables, can vary significantly even within manufacturer families. The average time for manual full disassembly is 14 minutes. The authors conclude that the disassembly is time consuming and difficult due to numerous connectors and joining techniques. The EOL perspective should be taken into account by manufacturers for new LCD screens. For the existing screens, a proper disassembly strategy should be established in a system that is flexible in terms of the level of automation and human intervention.

2.6.3 Hybrid-system disassembly cell for LCD screens (case-study)

Automation is so far only partly involved in the disassembly of LCD screens, in the form of semi-automation or a hybrid system. No fully-automatic system has been presented in the literature yet. Kernbaum et al. (2009) conducted an experiment with a prototype hybrid disassembly system. The system consists of one automated workplace and one manual workplace and was developed as a part of the project reported in Kim et al. (2009). The authors summarise the common constraints of disassembly automation for LCD screens: 1) extremely strong snap fits in the housing parts (front and back covers) require high torque to release, 2) electronic components mounted on both sides of the carriers lead to additional handling of the carrier, 3) the stand is sometimes mounted inside the monitor, and 4) integrated video or power supply cables require manual disassembly of the back cover. The design of the manual and automatic workplaces takes these constraints into account.

Figure 2.20: A sequence of disassembly of LCD monitors in automated workplace (Kernbaum et al. 2009)


• Manual workplace: First, the monitor is clamped on the fixture. Big monitor stands are disassembled, while small ones can remain mounted on the back cover if they do not interfere with the accessibility of connective elements. Then, the products are transferred to the automated workplace.
• Automatic workplace: A 4-axis SCARA manipulator is used to perform object handling and unscrewing. First, the suction gripper places the product on the pneumatic clamping device. Then the main components are removed by a two-finger gripper in the following order: 1) external screws, 2) monitor back housing and internal components, 3) metal covers, and 4) PCBs and cable connectors (see Figure 2.20).

In conclusion, this case-study provides practical approaches to implementing an automated system in critical parts of the disassembly process of LCD screens. The design and configuration of this prototype hybrid system are taken into account in the mechanical system design in this research.

2.6.4 Conclusion

Disassembly of LCD screens is necessary in order to achieve efficient EOL treatment and removal of the hazardous components, such as the CCFLs and the LCD. However, achieving economic feasibility with manual disassembly is still a challenging problem. It is suggested that selective disassembly to the module level should be implemented. The LCD module should be further destructively disassembled, but the CCFLs and the LCD should be secured. The structure of the product is similar across different models, but significant variations are found in the number, physical appearance, and location of the components. From the case-study disassembly cell, it can be inferred that LCD screens can be disassembled by automation in most of the process. However, in this research, significant adjustment is needed to develop a fully-automatic cell.

2.7 Conclusion

The related literature needed for developing cognitive robotic disassembly automation which is economically feasible and robust to uncertainties has been reviewed. The review consists of four key areas: 1) disassembly, 2) automatic disassembly cells, 3) vision systems, and 4) cognitive robotics. The disassembly process of the case-study product is also described. A conclusion is given at the end of each section. A summary of the significant issues and research gaps is presented below. The direction for developing the cognitive robotic disassembly automation in this research is also concluded.

2.7.1 Significant issues and research gaps

2.7.1.1 Disassembly

• Disassembly is a key step of efficient EOL treatment but is usually economically infeasible. The high operating cost is caused by uncertainties in products and processes. The following strategies potentially result in economic feasibility: 1) implementing near-optimal or sub-optimal sequences in DSP and DPP, 2) performing selective disassembly, and 3) performing semi-destructive and/or destructive approaches.
• Adaptive planners can deal with some types of uncertainties in product conditions, but no strategy dealing with the uncertainties in the product's structure has been developed.
• Learning is implemented only at the process planning level. Further development of machine learning should be done at the operational level.

2.7.1.2 Automatic disassembly cell

• Semi- and fully-automatic disassembly cells are developed to improve the performance of the process by relieving human operators of undesirable tasks, e.g. heavy-duty destructive operations, removal of hazardous components, etc.
• The physical uncertainties of products are handled by sensor systems. However, automatic disassembly is normally described as an open-loop process without feedback on the accomplishment of the operation. Hence, a strategy for automated verification of the operation should be developed.
• The DPP and the DSP in the disassembly cell context are presented in a large body of literature. However, the product structure must be supplied a priori. Therefore, the uncertainties in product structure should be addressed.


2.7.1.3 Vision system

• The vision system addresses the uncertainties in the quality and quantity of the components in the product, including the quantity of components, their location, defects, etc. The major problems consist of recognition, localisation, and the camera system.
• Information about the product is obtained and represented in 2.5D or 3D. Therefore, the robot can perform actions based on this information.
• A number of standard algorithms are combined and implemented for particular tasks. A specific detection algorithm is designed based on the features of each component. However, no clear explanation is given of a generic vision system that can handle uncertainties in the physical appearance of particular types of components.

2.7.1.4 Cognitive robotics

• The cognitive robotics concept is applied to a classical autonomous system to achieve a higher level of flexibility and autonomy in performing tasks in a complex dynamic environment.
• The perception-action loop architecture expresses the key features of cognitive ability, including perception, action, reasoning, learning, planning, behaviour control, and human interaction.
• There is a high possibility of solving the uncertainty problems in the disassembly process, at both the planning level and the operation level, but no application has been implemented yet.

2.7.1.5 Case-study product

• Disassembly of LCD screens is necessary for achieving efficient EOL treatment and removal of the hazardous components, namely the CCFLs and the LCD. The screen should be disassembled to the module level, and the LCD module should be further disassembled to separate the hazardous components.
• The main structure of the product is nearly identical in every model.
• Variations occur at the component level, e.g. the quantity of the connective components, the location of the components, physical appearance, etc.


2.7.2 Research direction - Economically feasible automated disassembly cell

According to the literature, the economic feasibility of the disassembly process is potentially achieved by implementing proper strategies that suit specific requirements. Since, within the scope of this research, disassembly is performed for the purpose of product recycling, destructive and semi-destructive approaches are expected to be used. The case-study products will be selectively disassembled to the module level in order to separate all main components. In addition, proper treatment of the hazardous components, namely the CCFLs and the LCD, is of critical concern. Regarding the disassembly plan, the DSP should be adaptively generated according to the current state of disassembly in order to deal with the uncertainties in product structure.

With respect to the hardware design of the disassembly cell, the concepts of a modular system and product families are taken into account. First, regarding the modular system, the proposed system integrates individual modules, i.e. a robot arm, a product handling tool, disassembly tools, a vision system, and a high-level planner. Second, the LCD screens are considered a product family in which the main features, e.g. the main structure and components, are similar across various product models. Therefore, the hardware must be designed to be capable of handling the variations in those features.

Considering a given type of component, the component can appear differently in different product models. The vision system needs to be flexible and robust enough to address the uncertainties in the physical appearance of the components under certain controlled conditions, e.g. uniform lighting. Therefore, the problems arise in the field of object recognition and localisation. In addition, a low-cost vision sensor that is able to obtain 3D or 2.5D information is considered.

In order to improve the flexibility and reliability of the system, the concept of cognitive robotics is implemented to address the uncertainties at the planning level. Cognitive robotics is implemented as an agent in a MAS with the perception-action loop architecture. Therefore, the agent can control the system to act according to the dynamic environment, in this case the disassembly process. The behaviour is influenced by the cognitive functions, namely reasoning, execution monitoring, learning, and revision. Human interaction takes place in the form of guiding and teaching in case the agent fails to achieve the desired goals. With respect to programming, IndiGolog is selected because of its important features of sensing and exogenous actions.


3 METHODOLOGY OVERVIEW AND SYSTEM ARCHITECTURE

This chapter is divided into two main sections: methodology overview and system architecture. First, the methodology section gives an overview of the entire system. The conceptual design and its implementation on the case-study product are explained in Section 3.1. Second, the system architecture and a detailed technical explanation of the integration of the system, from the aspects of operating modules and levels of control, are given in Section 3.2. The communication protocol used among the operating modules is also described.

3.1 Methodology overview

3.1.1 Human-driven disassembly process

As stated in the literature review, the uncertainties in the disassembly of End-of-Life (EOL) products can be divided into three categories: 1) physical uncertainty in the EOL condition, 2) variety in the supplied products, and 3) complexity in process planning and operations (Gungor and Gupta 1998, Lambert 2003). In practice, human operators are expected to overcome these uncertainties intuitively. The decisions taken during the disassembly operations are made based on their expertise, by adapting their past experience to their perception of the current situation being encountered. As a result, the process performed by human operators can become very flexible and robust. This concept has been presented by the author in (Vongbunyong et al. 2012).

Firstly, the disassembly process is expected to be performed intuitively and to be flexible enough to handle any product model without prior knowledge of specific details, e.g. the structure of the product, the number of components, etc. These pieces of information are expected to be obtained by human-level perception as the process goes on. Secondly, with respect to robustness, the disassembly is expected to have a high success rate since the operators can judge the achievement of the process. Therefore, in case of failure, the operators can keep trying a number of possible ways to carry out the process until the task has been accomplished. To clarify a common process carried out by human operators, their behaviour associated with the disassembly process can be described as follows (see Figure 3.1).

Figure 3.1: Behaviour of the human operators in the disassembly process

A variety of product models can be found in the disassembly process. In the case of a known model, the operation can be completed effectively since full specific information about the model is available. The process can be carried out quickly with few attempts, since everything is straightforward and consistent within a particular model. Minor physical uncertainties can be found but are expected to be compensated for intuitively. On the contrary, difficulties may arise in the case of a previously unseen model of a product. Since there is no information available, the operators need to perform the disassembly based on their expertise.

In the case of an unknown model, the process may be carried out awkwardly, state by state, for the first few disassemblies of each new model. In each state of disassembly, the operators may have to spend some time finding a main component and trying many possible ways to remove it. A number of attempts will be made until the component is successfully removed and the operator can proceed to the next state. The possible operations can be logically selected based on the operators' perception of the current component's condition and their past experience. Concurrently, during the process, the operators gather information, e.g. the product structure, details of the components, removal operations, etc. The operators can learn the proper ways to go through the process by considering the relation between actions and their consequences, i.e. the outcome of the removal operation (success or failure). The knowledge base (KB) is continuously built up and is recalled when needed. This process is repeated in all disassembly states and finishes when the goal state has been reached. Eventually, after a certain number of samples have been successfully disassembled, the operators should have enough experience to effectively disassemble the same model when it is encountered again.
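This trial-and-learn behaviour can be summarised as a small logic program. The following is a minimal, purely illustrative Prolog sketch; the predicates known_operation/2, candidate_operation/2, and execute/1 are hypothetical names introduced here for illustration and are not part of the system developed in this thesis:

    :- dynamic known_operation/2.

    % Known model: recall the previously successful operation from the KB.
    remove_component(Component) :-
        known_operation(Component, Op),
        execute(Op), !.
    % Unknown model: backtrack through candidate operations until one
    % succeeds, then record it in the KB (learning from experience).
    remove_component(Component) :-
        candidate_operation(Component, Op),
        execute(Op),
        assertz(known_operation(Component, Op)), !.

If execute/1 fails for a candidate, backtracking selects the next candidate operation, mirroring the repeated attempts described above.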

To be more precise, the characteristics of the human operator that influence the flexibility and the robustness of the disassembly are summarised as follows:

• Specific knowledge of the product's structure is unnecessary, since it can be perceived in real-time during the disassembly process;
• The ability to assess the outcome of each performed operation and to come up with a potential alternative operation if the first alternative fails;
• Broad operation schemes for removing types of components, and the ability to adapt them to other physically similar components; and,
• The ability to learn from past experience and adapt to previously unseen cases.

In this research, the methodology is developed based on this human behaviour in order to handle the uncertainties.

3.1.2 Framework of the disassembly automation

The concept of cognitive robotics is used to emulate the aforementioned human behaviour and expertise with respect to the capability of decision making and high-level perception. This concept is applied to disassembly automation by considering it as a multi-agent system (MAS) (Iñigo-Blasco et al. 2012). The cognitive robotic agent expressing the desired characteristics is developed and used as the top-level decision maker. This agent interacts with the physical world through the disassembly rig consisting of other agents, namely 1) sensors and 2) actuators, which facilitate the cognitive robotic agent's decisions. The sensors perceive information from the external world and the actuators physically interact with it in the form of actions. Regarding the disassembly operation, the interaction with the world occurs in two forms:

• Perception of the product and the state of disassembly during the process; and
• Physical contact with the product to be disassembled in the disassembly operation.

In summary, the system is operated by three operating modules: 1) the cognitive robotic module (CRM), 2) the vision system module (VSM), and 3) the disassembly operation unit module (DOM). The framework of the system in Figure 3.2 illustrates the operating modules corresponding to the MAS. The vision system module is used as a sensor that performs visual sensing of the physical world. Meanwhile, the disassembly operation unit module acts as both a sensor and an actuator with respect to motion and force control. In addition to these three modules, a human expert is involved only when special assistance is needed to overcome unusual circumstances that cannot be resolved autonomously. Assistance is given in the form of an action sequence, which is processed by the CRM and performed by the DOM.

Figure 3.2: Framework of the system

Based on the concept of modularity and the MAS, each module works independently of the others to process the relevant information. Only abstract information is transferred among the modules. An overview of the interaction is illustrated as the operational routine described as follows.

The cognitive robotic module consists of two components: 1) the cognitive robotic agent (CRA) and 2) the knowledge base (KB). In an operation cycle, the cognitive robotic agent controls the common operation routine by deciding on the proper actions based on the existing knowledge and information about the external world. A command is sent as a request to the corresponding module in order to perform the action. The vision system module observes the actual disassembly process and the current condition of the product. The abstract information is then supplied to the CRA, which processes it in association with the existing KB. Next, the disassembly operation unit module physically performs the disassembly operations requested by the CRA and sends feedback when the operation is done. Afterwards, the vision system is asked to observe the current condition of the product again. This process is performed repeatedly until the goal state is reached (see Figure 3.3). In the case of an unresolved condition, the CRA requests human assistance, which is given in the form of operation sequences to resolve the problem. It should be noted that this request does not occur in normal operation cycles.
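This routine can be sketched as a high-level control loop in the style of an IndiGolog procedure. The sketch below is purely illustrative: the fluents (goalStateReached, operationKnown) and action names are assumptions introduced here, not the actual programs developed in this thesis:

    % Illustrative IndiGolog-style sketch of the common operation routine.
    proc(disassemblyCycle,
         while(neg(goalStateReached),
               [ senseState,                 % sensing action -> vision system
                 if(operationKnown,
                    [ executeOperation,      % primitive action -> operation unit
                      checkExecution ],      % execution monitoring via sensing
                    [ senseHumanAssistance,  % exogenous action from the expert
                      executeSuggested ])
               ])).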

Figure 3.3: An overview of the common operation routine


3.1.3 Uncertainty handling and functionality of the modules

The functionality of each operating module is designed by considering the uncertainties that need to be addressed. Therefore, the information regarding the uncertainties stated in Section 3.1.1 needs to be specifically defined in a form that the agents can acknowledge. The usable information can be represented in various forms according to the operating modules. The three primary uncertainties in Section 3.1.1 are broken down into the specific issues listed in Table 3.1. These issues can be categorised according to the associated module. The uncertainties to be addressed and the functionality of each module are explained as follows, and the summary is given in Figure 3.4.

Figure 3.4: Specification summary of the robotic disassembly system. The figure lists, for each module, the uncertainties to be addressed and the functional requirements:
• Cognitive robotic module: addresses the main product structure, the quantity of the components, the disassembly sequence plan, the disassembly operation plan, and the disassembly process parameters; requires control of the disassembly process flow, interaction with the external world via the supporting modules, behaviour control due to the cognitive functionality, and a machine learning capability.
• Vision system module: addresses the physical appearance, location, and quantity of the components; requires recognition of the main components, recognition of the connected components, localisation of the components, and determining the transition of the disassembly state.
• Disassembly operation unit module: addresses the physical uncertainties in product condition and the uncertainties in non-detectable objects; requires availability of the disassembly tools, availability of the operation procedures, and awareness of the executability of the operation.


Primary uncertainty                 Specific issue                                 Operating module
Uncertainty in EOL condition        Physical uncertainties in product conditions  Disassembly operation unit
Variety in the supplied products    Main product structure                        Cognitive robotics
                                    Physical appearance of the components         Vision system
                                    Quantity of the components                    Cognitive robotics, Vision system
                                    Location of the components                    Vision system
Complexity in process planning      Disassembly sequence plan                     Cognitive robotics
and operations                      Disassembly operation plan                    Cognitive robotics
                                    Disassembly process parameters                Cognitive robotics
                                    Uncertainties in the non-detectable objects   Disassembly operation unit

Table 3.1: Uncertainties in disassembly process

3.1.3.1 Cognitive robotic module

The cognitive robotic module addresses the majority of the uncertainty issues, namely 1) the main product structure, 2) the quantity of the components, 3) the Disassembly Sequence Plan (DSP), 4) the disassembly operation plan, and 5) the disassembly process parameters. The first two issues are product-specific and represent the characteristics of the product. The other three issues represent the Disassembly Process Planning (DPP), which is a consequence of the first two. From the product perspective, the main product structure and the quantity of the components characterise the product as an interconnection of the main and the connected components. The related information can be obtained by the vision system during the disassembly process and stored as facts in the KB. In regard to the disassembly process, the DPP can be represented as a three-level top-down structure consisting of 1) the DSP, 2) the operation plan, and 3) the process parameters, respectively. Inaccuracy in the disassembly rig is also considered an uncertainty in the process parameters. Regarding functionality, the role of the cognitive robotic agent is to control the flow of the process under uncertainties by interacting with the external world via the two supporting modules. The flow is controlled by the system's behaviour, influenced by the cognitive functions, in order to address the aforementioned uncertainties. Furthermore, a machine learning capability is also enabled to increase the performance of the system. The learning process mostly occurs autonomously during the disassembly process. However, human assistance can be involved in critical circumstances when the process cannot be carried on by the CRA alone; this learning occurs in the form of learning by demonstration. In addition, the knowledge obtained during the process is expected to be revised in order to improve the performance of the process.
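For illustration, the product-specific knowledge described above might be stored as simple Prolog facts. The predicate names and values below are hypothetical examples, not the actual KB format of this system:

    % Hypothetical KB facts accumulated during disassembly (illustrative).
    mainComponent(backCover).
    mainComponent(carrier).
    connection(backCover, carrier, screw, 4).       % structure and quantity
    removalOperation(backCover, cutContour).        % learned operation plan
    processParameter(cutContour, cuttingDepth, 20). % learned process parameter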

3.1.3.2 Vision system module

The vision system module addresses the uncertainties associated with the quality and quantity of the components. With respect to utilisation, these are considered in sensible forms, namely 1) the physical appearance of the components, 2) the quantity of the components, and 3) the location of the components. For a given type of component, the component generally looks different in each product model, even within the same product family. Hence, the system needs to be robust enough to detect particular types of component with some level of variation in physical appearance. After a component has been recognised, the system needs to be able to identify the number and location of the components that have been detected. Eventually, these uncertainties are suppressed by processing the raw detection outcome into the abstract information that is conveyed to the cognitive robotic module for decision making. It should be noted that external uncertainties affecting the performance of the visual detection process, e.g. ambient light, are addressed a priori by lighting control.

With respect to the functional requirements, this module needs to perform the detection of the main and the connected components. The detection process can be described as two sub-processes, 1) recognition and 2) localisation, which are used to address all the relevant uncertainties. Moreover, the vision system is used to determine the transition of the disassembly state, which is necessary for facilitating the cognitive robotic characteristics regarding execution monitoring.

3.1.3.3 Disassembly operation unit module

The disassembly operation unit module addresses the problems in the operations used to handle the physical uncertainties due to 1) the EOL condition and 2) other non-detectable objects. These can be described as missing pieces of information, since the sensing module cannot detect or extract information about them. First, the EOL condition can be detailed as minor physical changes in the product returned, e.g. damaged parts. Second, the non-detectable objects consist of two types: 1) connected components that are hidden or difficult to see visually and 2) other parts of the object that may obstruct the robot's movement during the disassembly operation.

Since the CRA controls the process mainly based on the information perceived, no decision can be made if a significant piece of information is missing. Therefore, in relation to the functional requirements, the disassembly operation units need to handle these uncertainties at the operational level by means of a hardware approach. The disassembly techniques, i.e. the semi-destructive and destructive approaches, are expected to compensate for the missing information. Consequently, suitable disassembly tools and predefined operation procedures must be available. In addition, due to the execution of the operations and safety issues, the system has to be aware of non-detectable objects that could lead to collisions. The feedback associated with the low-level control of the robot is taken into account.

3.1.4 Simplification of the system based on case-study product

A very complex and flexible automation system would be needed to achieve generality in the disassembly of a wide range of product families. The complexity arises significantly in every operating module, e.g. the detection algorithms of the vision system, the availability of the disassembly tools and fixtures, the complexity of the conditions in the search-space of the cognitive robotic agent, etc. However, the main purpose of this research is to prove the concept of using cognitive robotics in disassembly. Therefore, the generality of the system is less important, and the system can be simplified by limiting the scope to a case-study product.

The system can be simplified by considering two main issues: 1) the case-study of one product family and 2) the disassembly methodology. First, regarding the case-study, apart from the environmental and economic aspects stated in the literature review, LCD screens are selected as the case-study product for the following technical reasons:

• The product is relatively simple, with few major variations in terms of component types and product structure (see detail in Chapter 4);
• The majority of the disassembly task can be achieved in 2.5D by considering it as the reverse of assembly;
• The developed disassembly techniques and visual detection algorithms can potentially be applied to other types of flat screen monitors and other Waste Electrical and Electronic Equipment (WEEE); and,
• The product is commonly available in global markets and contributes a significant amount of WEEE.

Second, regarding the disassembly methodology, due to economic feasibility, the disassembly process in this research is intended only for recycling. Hence, some damage to the separated main components is acceptable. Therefore, the LCD screens are to be disassembled to the component level with a selective disassembly methodology using the (semi-)destructive approach, which is more economically feasible than the non-destructive approach. The (semi-)destructive approach simplifies the system in terms of the position accuracy and the sophisticated force feedback otherwise needed for detaching the components.

In conclusion, the operating modules are designed based on these requirements which are simplified by the limited case-study. The detail is explained in Chapters 4-6 in relation to each operating module.

3.2 Control architecture

The control architecture of the system is designed based on the framework presented in Section 3.1.2 with respect to the simplified scope of the case-study in Section 3.1.4. The overall configuration of the system is illustrated as a schematic diagram in Figure 3.5. The structure of the system can be described from two perspectives: 1) level of control and 2) operating module. The structure is illustrated in Figure 3.6. The levels of control, which present an overview of the flow of information and operation commands in relation to each module, are explained in Section 3.2.1. The technical information of each operating module is explained in Section 3.2.2. Finally, the communication among the modules is described in Section 3.2.3.

3.2.1 Levels of control

The levels of control are defined based on the autonomous behaviour and the level of data abstraction of each process in the operating modules. Therefore, one operating module can consist of multiple levels of control, depending on the behaviour of the processes needed for carrying out the module's functions. The system is divided into three levels of control: 1) High-level, 2) Mid-level, and 3) Low-level. The composition of the levels of control and the operating modules is presented in Figure 3.6.

Figure 3.5: Schematic diagram of the physical connection

Figure 3.6: System architecture – levels of control and operating modules


First, the High-level control governs the top-level behaviour of the system and the planning of the disassembly process. The request commands and information transferred to other levels are in abstract form. The cognitive robotic module operates at this level. Second, the Mid-level control manages the information interaction between the high-level and the low-level; the information is processed and transformed into a form acceptable to each level of control. The detection functions of the vision system module and the operation procedures of the disassembly operation unit module operate at this level. Third, the Low-level control deals with the machine-level operation of the hardware, such as the signal processing regarding sensors and actuators in motion control and the image pre-processing from the cameras.

3.2.2 Operating modules

The system architecture can be classified by the operating modules according to their specific functionality. Each module works completely independently of the others in order to accomplish the tasks related to its function. As stated in Section 3.1.3, the system consists of three main operating modules: 1) cognitive robotic, 2) vision system, and 3) disassembly operation unit. The disassembly operation unit can be considered as two sub-modules: 1) the robot arm and 2) the other mechanical units. Therefore, this section is divided into four parts.

3.2.2.1 Cognitive robotic module

This module operates only in the high-level control layer and consists of two main components: 1) the CRA and 2) the KB. The agent controls the top-level behaviour of the system based on its cognitive ability, in accordance with the existing knowledge about the DPP in the KB. The cognitive ability consists of four cognitive functions: 1) reasoning, 2) execution monitoring, 3) learning, and 4) revision. The basic behaviour of the system is influenced by the first two functions in order to execute the proper actions according to the available plans in the KB. The last two functions are involved in advanced behaviour control when learning from a previous process or from human assistance has taken place. This module is developed in IndiGolog, an agent programming language (Levesque et al. 1997, De Giacomo et al. 2001). In this research, the program operates under the Prolog environment; SWI-Prolog 5.10.4 (SWI-Prolog 2010) is used as the compiler running on the local machine (main computer). For compatibility, the information transferred throughout the system is designed to be in Prolog syntax.

The interaction with other modules occurs in three forms of actions: 1) primitive actions, 2) sensing actions, and 3) exogenous actions. Details are explained as follows, and all actions are listed in Section 6.3.3 in the Cognitive Robotics chapter.

• Primitive actions are common internal and external actions used by the CRA. In this case, the primitive actions sent to the disassembly operation unit module are emphasised. They are sent in order to execute the operation procedures that physically interact with the sample product. Abstract information about the accomplishment and the parameters of the operation is sent back as feedback, and the agent checks the execution result from this feedback. For example, the agent requests the robot to cut along a contour at a specific location with the specified cutting method (mcut), and feedback indicating success is sent back:

[Primitive action]: cutContour(0,0,500,350,20,mcut).
[Feedback]: done.

• Sensing actions are a form of primitive action sent to a particular module in order to get sensing information as feedback. They are normally sent to the vision system module in order to request the sensing result for a particular condition of the product. The abstract information of the detection result is then sent back. As a result, the CRA can reason about the condition of the sample in the current disassembly state and make further decisions. The sensing action is also used for monitoring the execution result. For example, the agent requests the detection of a back cover, and the location of the bounding box is sent back as the result:

[Sensing action]: detectBackCover.
[Sensing result]: box(0,0,500,350,20,0).

• Exogenous actions are actions activated from outside the scope of the system, such as human assistance. In general, exogenous actions can be sent to the agent at any time, depending on the external source. However, to reduce some technical complexity regarding the information flow, an exogenous action can be retrieved only when the agent requests it. Therefore, it can be considered a form of sensing action that interacts with the human expert. Human assistance is then given via the Graphic User Interface (GUI) in the form of primitive actions to be executed. Regarding the cognitive functionality, this action copes with the learning and revision of the plan when human assistance is involved. For example, the agent requests human assistance and the suggested operation is given back:

[Request exogenous action]: senseHumanAssistance.
[Exogenous action]: cutContour(0,0,500,350,20,mcut).
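To illustrate how these three forms of action might be declared, a minimal sketch in the declaration style of the vanilla IndiGolog interpreter (prim_action/1, poss/2, senses/2, exog_action/1) is given below. The preconditions and the fluent name are simplified assumptions for illustration only:

    % Illustrative action declarations (simplified, not the actual programs).
    prim_action(cutContour(_, _, _, _, _, _)).   % primitive action -> DOM
    poss(cutContour(_, _, _, _, _, _), true).    % simplified precondition

    prim_action(detectBackCover).                % sensing action -> VSM
    senses(detectBackCover, backCoverLocation).  % result binds a fluent

    exog_action(suggestedOperation(_)).          % human assistance via the GUI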

3.2.2.2 Vision system module

This module supplies abstract information regarding the disassembly state, based on a machine vision approach, as requested by the CRA (sensing action). The module is implemented in C/C++ with the OpenCV library (Bradski 2010) under the Microsoft Visual Studio 2008 environment (Microsoft Corporation 2008) on the local machine. In regard to the control layers, this module operates at the low-level and mid-level control layers.

• The low-level control layer captures and prepares images for the upper level. The raw colour and raw depth images are captured by the cameras. Before being used at the mid-level, the colour image is pre-processed in four steps: 1) decoding the Bayer filter (Bayer 1976), 2) white balance correction (Viggiano 2004), 3) enhancement of image quality, and 4) geometrical transformation and alignment. The first two steps are skipped for the depth image. As a result, the pre-processed images are passed to the mid-level.
• The mid-level control layer contains the detection functions performing 1) recognition and 2) localisation. These main functions perform the tasks according to the functional requirements in Section 3.1.3.2, e.g. detectBackcover, checkStateChange, etc. The detection outcome is encoded into an abstract form compatible with Prolog syntax (see the sketch below) and is subsequently sent to the cognitive robotic agent as a sensing result.
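As an illustration of this Prolog-compatible encoding, the sketch below shows how a sensing result such as box(0,0,500,350,20,0) could be converted from its textual message form into a term on the agent side. parse_sensing_result/2 is a hypothetical helper, not part of the actual module; term_to_atom/2 is a standard SWI-Prolog predicate:

    % Hypothetical helper: convert the textual form of a detection outcome
    % into a Prolog term the agent can reason about.
    parse_sensing_result(MessageAtom, Term) :-
        term_to_atom(Term, MessageAtom).

    % Example query:
    % ?- parse_sensing_result('box(0,0,500,350,20,0)', B).
    % B = box(0, 0, 500, 350, 20, 0).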


3.2.2.3 Robot arm

The robot arm is the main component of the disassembly operation unit module. Its main task is to perform the (semi-)destructive disassembly operations according to the CRA's movement commands (primitive actions). MotionSupervision (ABB 2004) is also implemented in order to detect collisions. As a result, possible parameters, e.g. the orientation of the cutting tool, can be refined according to the product's actual physical condition and supplied to the CRA as choice-points for reasoning and learning. This module is developed in RAPID (ABB 2004), a high-level procedural language for ABB robots, and runs on the IRC-5 controller. In regard to the control layers, this module operates at the low-level and mid-level control layers.

• The low-level control layer performs motion control of the IRB-140 robot arm at the machine level, handling the sensor and actuator signals. The robot can accurately perform basic movements along a given path and trajectory using the factory-set control scheme and parameters.
• The mid-level control layer contains parameterised operation procedures, which are predefined sets of basic movements. The procedures are executed on receiving the corresponding primitive action, such as cutLine, cutContour, cutScrew, etc. A trap routine for handling collisions is also activated when the robot reaches the force and torque limits. As a result, the parameters of the success or failure of the procedure are sent out as feedback.

3.2.2.4 Other mechanical units – FlippingTable and grinder

Two mechanical units, 1) the FlippingTable and 2) the grinder, are presented in this section. First, the FlippingTable is designed for handling the sample to be disassembled and removing the detached components. Second, the angle grinder is used as a cutting tool for performing the (semi-)destructive disassembly. This module operates in C/C++ under Microsoft Visual Studio 2008 environment on the local machine.

The level of control has a similar structure to that of the robot arm. The low-level control layer performs the closed-loop motion control of the hardware through I/O signals. The mid-level control layer performs the operation procedures activated by the corresponding primitive actions, including flipTable and switchGrinderOn/Off.


3.2.3 Communication among the modules

Since the operating modules are developed on various language platforms, the client-server model, which can operate across multiple platforms (Reese 2000), is selected to establish the communication over the network. A communication protocol based on Prolog syntax is used to manage the information flow via socket messaging.
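A minimal sketch of such socket messaging on the Prolog (client) side is given below. The host, port, and single-request connection handling are assumptions for illustration, not the actual implementation:

    :- use_module(library(socket)).

    % Minimal client-side sketch: open a TCP connection to the communication
    % centre, send one request term (e.g. detectBackCover), and read the
    % Prolog-syntax feedback term. Host/port values are illustrative.
    send_request(Host, Port, Request, Feedback) :-
        tcp_socket(Socket),
        tcp_connect(Socket, Host:Port),
        tcp_open_socket(Socket, In, Out),
        format(Out, '~q.~n', [Request]),   % terms are terminated by '.'
        flush_output(Out),
        read_term(In, Feedback, []),       % e.g. Feedback = done
        close(Out),
        close(In).

For example, ?- send_request(localhost, 5000, detectBackCover, F). would bind F to the returned sensing result, assuming a server listening on that port.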

Figure 3.7: Schematic diagram of the communication network structure

The network consists of three components: 1) the client, 2) the servers, and 3) the communication centre. In this case, the cognitive robotic module is the client and the other modules are servers (see Figure 3.7). The client communicates with the servers by sending request messages to the communication centre. A request message is either a sensing action or a primitive action in the form “header().”. The message is then distributed to the corresponding module by matching the header against the predefined list of actions belonging to each module. Afterwards, the feedback message is sent back to the client via the same route. This communication takes place continuously according to the operation routine illustrated in Figure 3.3, which is the main operation routine of the system. The communication system can be grouped with respect to the location of the program and the language platform into the following three components (see Figure 3.7):

• Client is the cognitive robotic module, which operates on the local machine. Transmission Control Protocol / Internet Protocol (TCP/IP) is established for socket messaging in Prolog code.


• Server-1 consists of the remaining components on the local machine: 1) the communication centre, 2) the vision system module, and 3) the other disassembly operation units. These three components are combined because they operate under the same environment, C/C++, on the local machine. Consequently, the communication among these components becomes internal communication, which reduces the complexity of data transfer. For communication with the client, socket messaging is established using the Windows Socket Library (winsock2) (MSDN 2011). It should be noted that the communication centre is important for multi-platform communication, as it resolves the compatibility problem regarding the string-termination character between different languages, especially between Prolog and RAPID. This is resolved by converting the source C-string and appending the proper termination character according to the destination module (the header-matching rule itself is sketched after this list).
• Server-2 is the robot arm module located on the remote machine, the IRC-5 robot controller. TCP/IP is established for socket messaging in RAPID. This server connects to the communication network through the Local Area Network (LAN).
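The header-matching rule used by the communication centre can be expressed, purely for illustration, as the following Prolog sketch; the module names and the action-to-module table are illustrative examples drawn from this chapter:

    % Hypothetical routing table: which module owns which action header.
    module_action(visionSystem,    detectBackCover).
    module_action(robotArm,        cutContour).
    module_action(mechanicalUnits, flipTable).

    % Route a request message of the form header(Args...) to the module
    % whose predefined action list contains that header.
    route(Message, Module) :-
        functor(Message, Header, _Arity),
        module_action(Module, Header).

For example, route(cutContour(0,0,500,350,20,mcut), M) yields M = robotArm.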

In conclusion, the system architecture is a composition of three operating modules that seamlessly connect with each other via the network system using a client-server model. All information and commands are encoded in a compatible form of Prolog syntax according to the client's preference. In addition, the messages are conveyed to the desired operating modules through the communication centre, which resolves the multi-platform compatibility problem.

3.3 Conclusion

This chapter gives an overview of the cognitive robotic disassembly cell, from the general concept to its implication for the case-study product. In the first part, the concept of using cognitive robotics to replicate human behaviour in order to handle the uncertainties in the disassembly process is proposed as the methodology. The uncertainties from the product and process perspectives are pointed out. The framework of the system, with its three operating modules, is then developed to address those uncertainties.

In the second part, the scope of the system is narrowed down by considering the case-study products, LCD screens. The architecture of the actual system in this research is presented. The system consists of three operating modules: the cognitive robotic module, the vision system module, and the disassembly operation unit module. The structure of each module consists of multiple levels of control according to the autonomous behaviour and data abstraction. The vision system and the disassembly operation unit modules are supporting modules that serve the cognitive robotic agent's decisions, represented in the form of sensing actions and primitive actions, respectively. In addition, human assistance is described as an exogenous action. Regarding communication, a client-server model with socket messaging is used due to the differences in language platform and location of the operating modules. Another major advantage of this architecture is that the operating modules are easy to modify since they are completely independent of each other. Consequently, modifications can be made without direct impact on the others.

In conclusion, this proposed architecture is the framework of the system used throughout this research. The details of the operating modules are explained in the following chapters.


4 DISASSEMBLY OPERATION UNIT

This chapter gives information about the disassembly operation unit module (DOM), which physically interacts with the products. The system is specifically developed based on the case-study product. Therefore, an analysis of the LCD screens regarding the components and the main structure is given in Section 4.1. The disassembly operation unit module, with respect to the hardware configuration and operating procedures, is explained in the rest of the chapter.

Figure 4.1: System architecture in the perspective of the disassembly operation unit module

According to the levels of control described in Chapter 3 (see Figure 4.1), the system is explained in two parts: 1) the hardware units and 2) the operation procedures based on the product and process knowledge in the Knowledge Base (KB). The first part (Section 4.2) describes the conceptual design and general information of the module regarding the low-level control. The second part (Section 4.3) focuses on the mid-level control; the standard operating procedures and the strategic disassembly operation plans for each type of component, corresponding to the information in the KB, are explained. Finally, conceptual testing of the proposed operations is given.

4.1 Case-study product: LCD screen

In this research, each operating module of the disassembly cell is designed to support a number of variations found in LCD screens. First, the design of the disassembly operation unit module (DOM) takes the variations in physical features, i.e. size and aspect ratio, into account. Second, the vision system module (VSM) is designed to support variation in the physical appearance of the product and components. Third, the cognitive robotic module (CRM) needs to handle the variation in the product structure. Therefore, a number of samples needed to be examined in order to identify the scope of the expected variations. This section describes the sample selection and an overview of the LCD screen from the disassembly perspective. It should be noted that the details of the variations related to each module are given in the corresponding chapters.

4.1.1 Selection of the samples

In this research, a number of different models of LCD screen were visually inspected to examine the main product structure and significant features in order to design a suitable disassembly system. The samples consist of:

• 37 different models from 15 manufacturers;
• Diagonal sizes ranging from 15” to 19”;
• Various aspect ratios: normal screen (4:3) and widescreen (16:9 and 16:10); and,
• Manufacturing years 1999 – 2011.

A number of brands available in the local market were selected. Also, a number of models of one brand were selected to examine the variation in the design and technology used by a manufacturer across the product's series. Developments in the structure and components are expected to be seen over this period of more than 10 years; some of these expected variations are described in Ryan et al. (2011). However, the examination focuses on the explicit features that are visually observable, since the vision system is the main sensor used in the system. The samples were manually and selectively disassembled to the module level with the non-destructive approach, so that the product structure and the features of the components could be examined.

4.1.2 Structure analysis

As stated in the literature, the product structure of LCD screens is consistent, but significant variations exist in the quantity and location of the main and connective components (Ryan et al. 2011). However, a significant difference in the main structure was found after examining these 37 samples by means of selective disassembly. The proposed classification scheme of the main structures will be referred to throughout this thesis. An explanation regarding the components and the main structure is as follows.

4.1.2.1 Components

With regard to the functionality of an automated disassembly facility and the vision system in this research, the components of typical LCD screens can be classified into six types of main components and three types of connective components. The main components consist of:

- Front cover;
- Back cover;
- Carrier;
- LCD module;
- PCB cover; and,
- PCBs (power-inverter, controller, panel switch).

The characteristics of each component are explained in detail in Section 4.3.2 in regard to the corresponding disassembly operation plans. The common structural variation is the layout of the PCBs according to their functions. One PCB can perform multiple functions in some cases; hence, the number of PCBs can vary. In addition, some extra components are found in exceptional cases, e.g. a front cover with integrated speakers, a Universal Serial Bus (USB) port module PCB, and shields for cables and PCBs.

Regarding the connective components, LCD screens consist of three types: 1) screws, 2) snap-fits, and 3) electrical and electronic cables. The structural variation lies in the number and type of the connective components used between the main components. In addition, other connectors, e.g. plastic rivets, sticky tape, and clips, can be found in some cases but are insignificant in the automated process.

4.1.2.2 Main structure

Considering the assembly structure, the product structure of LCD screens can be categorised into two types, 1) Type-I and 2) Type-II, according to the assembly direction. The main components are assembled from the back side of the LCD screen in Type-I and from the front side in Type-II. A major difference can be noticed in the relative location of the PCBs and the carrier: the direction in which the PCBs are mounted on the carrier can represent the structure type of the entire product. In addition, the PCBs need protection from the outside environment; an extra main component, a PCB cover, is needed in the Type-I structure, whereas it is an integrated part of the carrier in the Type-II structure. Simplified structures for both types are illustrated in Figure 4.2. In summary, the main components of LCD screens are assembled in an almost identical order, and the slightly different order among the PCBs, PCB cover, and carrier distinguishes the two proposed types.

[Diagram: exploded views showing the main components (back cover, PCB cover, PCBs, carrier, LCD module, front cover) stacked along the assembly direction; the assembly direction is reversed between the two types]

Figure 4.2: Product structure of LCD screens (a) Type-I (b) Type-II

The minimal form of the liaison diagram (see Section 2.2.1) (Lambert and Gupta 2005) is used to represent the structure of LCD screens. This representation is compact since the connective components are shown as arcs between two main components. The two types of LCD screen structure are illustrated by the liaison diagrams in Figure 4.3.

[Liaison diagrams: nodes A – I connected by the arcs listed below; the PCB cover (E) and its liaison CE appear only in the Type-I diagram]

Main components: (A) Front cover; (B) Back cover; (C) Carrier; (D) LCD module; (E) PCB cover; (F) PCB – control; (G) PCB – power for CCFL; (H) PCB – power and inverter; (I) PCB – panel switch.
Connections: AB: sc, sn; AI: c; BC: sc; CD: sc; CE: sc, sn; CF: sc; CH: sc; DF: c; DG: c; FG: c; FH: c; FI: c; GH: c (sc = screw, sn = snap-fit, c = cable).
Figure 4.3: Liaison diagrams of typical LCD screens (a) Type-I and (b) Type-II

Main components: (A) Front cover; (B) Back cover; (C) Carrier; (D) LCD module; (E1) Small PCB cover; (E2) Small PCB cover; (E3) Cable shield; (E4) Cable shield; (F) PCB – control; (G) PCB – power-inverter & power for CCFL; (I) PCB – panel switch; (J) PCB – USB module.
Connections: AB: sn; AC: sc(4), sn; AI: sn; CD: sc(4); CE1: sn; CE2: sn; CE3: sn, sc(1); CE4: sn, sc(2); CF: sc(4); CG: sc(2); CJ: sc(2); DF: c; DG: c; FG: c; FI: c.
Figure 4.4: Example of a complex structure of an LCD screen
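For illustration, a liaison diagram of this kind maps naturally onto a graph data structure. The following sketch encodes the Type-I diagram of Figure 4.3a in Python; the connections mapping is taken from the legend above, while the removable helper is a hypothetical illustration of how removability of a main component reduces to checking that all incident arcs have been disestablished.

```python
# Minimal sketch: the Type-I liaison diagram (Figure 4.3a) as an adjacency
# mapping. Keys are unordered pairs of main components; values list the
# connective components (sc = screw, sn = snap-fit, c = cable).

connections = {
    frozenset("AB"): ["sc", "sn"],  # front cover - back cover
    frozenset("AI"): ["c"],         # front cover - panel-switch PCB
    frozenset("BC"): ["sc"],        # back cover - carrier
    frozenset("CD"): ["sc"],        # carrier - LCD module
    frozenset("CE"): ["sc", "sn"],  # carrier - PCB cover
    frozenset("CF"): ["sc"],        # carrier - control PCB
    frozenset("CH"): ["sc"],        # carrier - power/inverter PCB
    frozenset("DF"): ["c"],         # LCD module - control PCB
    frozenset("DG"): ["c"],         # LCD module - CCFL power PCB
    frozenset("FG"): ["c"],
    frozenset("FH"): ["c"],
    frozenset("FI"): ["c"],
    frozenset("GH"): ["c"],
}

def removable(component: str, disestablished: set) -> bool:
    """A main component is removable once every incident liaison is disestablished."""
    incident = {pair for pair in connections if component in pair}
    return incident <= disestablished

# Example: the back cover (B) becomes removable after cutting AB and BC.
print(removable("B", {frozenset("AB"), frozenset("BC")}))  # True
```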

In summary, no identical pair of LCD screens was found among the 37 samples (see Appendix A for information on the samples). The aforementioned variations in the main structures and the components differentiate the LCD screen models in certain ways. An example of a very complex LCD screen structure is illustrated in Figure 4.4. In comparison with the typical structure in Figure 4.3, a number of additional main components are present, resulting in additional connective components being used.

4.1.3 Implementation of the system

A number of variations are found in the structure and components of LCD screens. Exact disassembly sequences cannot be generated if this information is not revealed a priori. Therefore, a predefined sequence is infeasible given the main purpose of this research, which is to disassemble any model of LCD screen without specific prior knowledge. However, the broad structure of LCD screens can be captured by the proposed main structure types. Therefore, a heuristic plan to disassemble generic LCD screens can be developed by considering the structure shown in Figure 4.2.

In addition, the development of the sequence needs to consider the constraints of an automated process. One of the constraints is the fixture system that needs to hold the samples stationary during the process. Flipping the sample without fixed references is avoided due to possible relocation errors; consequently, the disassembly must be done from only one side in any case. Based on the requirement that the LCD module needs to be secured in order to reduce the possibility of breaking the cold-cathode fluorescent lamps (CCFLs), the fixture is designed to hold the sample from the front side. Preventing breakage of the CCFLs is crucial since they contain a small amount of mercury that would contaminate the workspace if the lamps were broken. Hence, the LCD module is secured and the disassembly must be performed from the back cover.

In conclusion, the broad disassembly sequence for the main components is expected to be the reverse of the assembly direction for Type-I in Figure 4.2a. The process starts from the back cover and finishes at the LCD module. This disassembly direction is applied to both Type-I and Type-II. However, it should be noted that some conditions need to be considered when removing the PCB cover in the Type-II structure (see detail in Section 4.3.2.2).

4.2 Disassembly operation units in hardware perspective

This section gives an overview of this module in three parts. Firstly, the conceptual design based on the process requirements and limitations is described. Secondly, the overall structure and technical information of each operating unit are given. Lastly, the standard operation procedure and collision awareness are described.


4.2.1 Conceptual design

The hardware design is based on the requirements of LCD screen disassembly. The system must be capable of handling the uncertainties arising from the variation of the case-study product, which will be disassembled with the (semi-)destructive approach. In summary, the conceptual design is generated based on the following requirements:

- handling samples with 15” – 19” diagonal size and various aspect ratios;
- performing destructive operations on various types of unidentified material;
- approaching the desired object in 2.5D or 3D according to the pose of the components;
- removing detached objects with variations in geometry;
- handling errors and failures at the operation level due to unexpected collisions;
- reducing the complexity of tool changes; and,
- being economically feasible.

As a result, a light-duty 6-DOF robot arm is selected to manipulate the disassembly tool, which is an angle grinder. The angle grinder is equipped with a versatile abrasive cutter which is expected to cut the common materials in LCD screens; therefore, cutting-tool changes can be ignored. Regarding the object approach direction, the operations are designed to be executed in 2.5D due to the limited workspace of the robot and the size of the samples.

In addition, the FlippingTable (see Figure 4.5a) is developed in order to address the requirement of a tool for removing the detached objects. Instead of developing a versatile gripper, which would potentially lead to another complex problem, the FlippingTable is used to unload the detached objects without the need for a tool change. While the whole product is fixed firmly on the turning fixture, the detached objects are expected to fall into a disposal container underneath when the fixture flips down.

With respect to economic feasibility, the costs of equipment and operation are taken into consideration. First, a small robot arm satisfying the workspace and payload requirements is selected. Second, the robot has built-in torque sensing which can be used for collision awareness; therefore, an add-on force sensor is unnecessary, which reduces the setup cost and the operation time spent on computing. Third, avoiding a tool-change facility also significantly reduces the cost and operating time. The actual system developed based on this design is explained in the following sections.



(A) motor (B) fixture plate (C) fixture elements (D) suction cups

(a) FlippingTable (b) Robot with a grinder
Figure 4.5: Module’s components

4.2.2 Operation units

The disassembly operation unit module consists of three operation units: 1) a robot arm, 2) an angle grinder, and 3) the FlippingTable. The structure with regard to the level of control is shown in Figure 4.1. Each operation unit works according to the corresponding primitive actions, which are the commands sent from the cognitive robotic agent (CRA). This section gives technical information with respect to their tasks, hardware design, control, and limitations.

4.2.2.1 Robot arm

An articulated 6-DOF light-duty industrial robot, an ABB IRB-140, is selected to manipulate the cutting tool, which is an angle grinder (see Figure 4.6a). The movement of the grinder, i.e. its position, orientation, and speed, is therefore controlled by this robot arm. This robot is selected according to three specification criteria: 1) maximum payload, 2) accuracy and precision, and 3) workspace. From the product specification (ABB 2004), all criteria are satisfied: maximum payload 6 kg and linear accuracy 1.0 mm with ±0.03 mm repeatability (see Appendix B). The workspace is large enough for operating on the aforementioned samples. However, the workspace becomes very limited when the length of the cutting tool is considered. The orientation of the cutting tool is always set to be vertical in order to avoid singularities in certain locations. As a result, an effective workspace can be defined as a rectangular box over the fixture plate (see Figure 4.6b) which is suitable for working with the samples.
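To illustrate the workspace constraint, the sketch below checks whether a target point lies inside the rectangular effective workspace of Figure 4.6b. The 500 mm × 500 mm × 100 mm dimensions follow the figure; the placement of the box origin at the fixture-base corner is an assumption for illustration only.

```python
# Minimal sketch: containment check for the rectangular effective workspace
# (Figure 4.6b). Coordinates are in fixture-base millimetres; the placement
# of the box origin is assumed for illustration.

WORKSPACE = {"x": (0.0, 500.0), "y": (0.0, 500.0), "z": (0.0, 100.0)}

def in_effective_workspace(x: float, y: float, z: float) -> bool:
    """True if the tooltip target lies within the effective workspace box."""
    return all(lo <= v <= hi
               for v, (lo, hi) in zip((x, y, z),
                                      (WORKSPACE["x"], WORKSPACE["y"], WORKSPACE["z"])))
```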


[Diagram: (A) robot arm, (B) FlippingTable, (C) grinder; the effective workspace is a 500 mm × 500 mm × 100 mm box over the fixture plate, defined relative to the robot base {B} and fixture base {F}]

(a) Complete setup (b) Effective workspace
Figure 4.6: Disassembly operation units

With respect to the control system, the low-level and mid-level layers of control are operated by the RAPID program on the IRC-5 controller. The motion control is in the low-level layer, which is the factory default and cannot be modified. Low-level operations, i.e. point-to-point moves, logical and numerical operations, and I/O signal manipulation, can be executed using RAPID commands. In the mid-level control layer, user-defined operation routines are also programmed in RAPID. The operation routines consist of the disassembly operation procedures, the socket-messaging communication procedure, and other utility functions. The robot has built-in force and torque sensors used for detecting physical collisions. This function is utilised by the MotionSupervision module. A detailed explanation is given in Section 4.2.3.

4.2.2.2 Grinder

An 850W angle grinder is mounted on the robot arm as the cutting tool (see Figure 4.6b). A multi-purpose abrasive cut-off disc is used since it can cut through a variety of materials, i.e. plastic, metal, glass, etc., which are commonly found in LCD screens. The abrasive blade is preferable to a carbide blade due to its cutting performance and durability. However, tool wear is a major drawback since the parameters need to be adapted as the process proceeds. Therefore, a hard wheel grade is selected in order to minimise tool wear. Nevertheless, some wear still occurs during the process, so an automatic update of the tool length is performed by the vision system module (see Section 5.3.9.2). In summary, a Ø125 mm × 1.0 mm abrasive cut-off disc is selected to provide sufficient accessibility.

4.2.2.3 FlippingTable

The FlippingTable is a mechatronic worktable with a rotating fixture used for manipulating the product samples. It performs two main functions: 1) holding the product sample and 2) removing the detached components. The main concept is to remove the detached objects from the whole sample by gravity. After the connections between the desired parts and the whole product have been completely disestablished, the parts are expected to fall down when the fixture plate flips. Then, the fixture plate flips back to the original position, ready for the next operation cycle of the robot. The operation cycle is shown in Figure 4.7. The explanation is given in two parts: 1) the sample loading stage and 2) the operation cycle.

[Diagram, five panels: (a) at the top position, (b) flipping down, (c) at the bottom position, (d) flipping back, (e) at the top position with the detached object removed]
Figure 4.7: Operation cycle of the FlippingTable

First, in the loading stage, the sample is loaded by placing the screen side on the fixture plate, where it is grabbed by the suction cups. This loading direction is chosen for two reasons: 1) to guarantee that the LCD module, which is the most critical component, remains secured until the disassembly process is done, and 2) to improve the suction performance thanks to the flat glass surface of the screen. The two 50 kg-capacity suction cups provide the vertical pulling force while four fixture elements are placed around the contour in order to prevent any horizontal movement.

Second, during the disassembly process, the fixture plate is rotated by a 0.25 hp geared DC rotary motor (see Appendix B). The holding torque is sufficient to hold the fixture plate at the top position while the robot performs cutting operations. The motion is limited by two limit switches located at 0˚ and 350˚ relative to the horizontal plane. These positions correspond to the top (Figure 4.7a) and the bottom (Figure 4.7c) positions of the fixture plate. A hard limit, an aluminium bar, is also placed at the top position to improve the accuracy of the top position and ensure a consistent sample location across cycles. A simple on-off feedback controller (Sangveraphunsiri 2003) operating on the local machine through the I/O controller controls the motor's movement according to the signals from the limit switches. The rotation speed is set quite slow (12 rev/min, 10 s/cycle) in order to prevent damage from the impact between the fixture plate and the hard limit bar.
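A minimal sketch of this on-off control scheme is given below, assuming a hypothetical io object exposing the motor and the two limit switches; the actual controller runs on the local machine's I/O hardware.

```python
# Minimal sketch of the on-off (bang-bang) control used to flip the fixture
# plate between the two limit switches. The io object and its signal names
# are hypothetical stand-ins for the local I/O controller.
import time

def flip_table(io, target: str, poll_s: float = 0.01) -> None:
    """Drive the motor until the limit switch at the target position trips."""
    switch = io.limit_bottom if target == "bottom" else io.limit_top
    io.motor_direction("down" if target == "bottom" else "up")
    io.motor_on(True)           # run at the slow fixed speed (~10 s/cycle)
    while not switch():         # on-off feedback: run until the switch trips
        time.sleep(poll_s)
    io.motor_on(False)          # stop immediately at the limit

def flip_cycle(io) -> None:
    """One operation cycle (Figure 4.7): flip down to drop parts, flip back."""
    flip_table(io, "bottom")
    flip_table(io, "top")       # hard limit bar ensures a repeatable top pose
```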

In preliminary experiments, the FlippingTable was able to remove completely detached parts effectively. A possible problem occurs if a few minor connections, e.g. hidden cables or small metal parts, remain. As a result, an incompletely detached part can move from its original location to a new, unpredictable location after the flipping process. This problem is expected to be resolved by improving the cutting operation scheme and incorporating human assistance in order to ensure that all connections have been disestablished before flipping. However, the problem is minor in the case of weak connections, e.g. cables, which can be broken by the pulling force due to the weight of the hanging part.

4.2.3 Operation routine

The operation units respond to the requests of the CRA in different ways. Due to the communication structure in Figure 3.6, the FlippingTable and the grinder (the other disassembly operation units) work on the local machine under the main operation routine of the system, as the communication centre does. Their operation is therefore straightforward and needs no further explanation. On the other hand, the robot arm module operates independently on a remote machine, conducting its own operation routine in parallel with the main system's routine. As a result, a proper routine is needed for compatibility. Only the operation routine of the robot arm is explained in this section.


4.2.3.1 Operation routine of the robot arm

The operation routine can be divided into two parts: 1) initialisation process and 2) operation loop. The operation routine is illustrated in Figure 4.8.

[Flowchart: after initialisation (set the valid working area, set motion-control parameters, initialise the default product coordinate, activate MotionSupervision, connect to the communication network) and a move to the safety position, the operation loop connects the interrupt to the CollisionHandling trap routine, checks and sets conditions for handling a previous collision, receives the requested message, identifies the requested Operation Procedure by string matching, and performs it unless a termination request ends the routine. When MotionSupervision detects excessive force or torque, the CollisionHandling trap routine acknowledges the collision, deactivates MotionSupervision, moves the robot backward along the tool's axial direction, and reactivates MotionSupervision]

Figure 4.8: Operation routine of the robot arm

In the initialisation process, the system condition is reset to the following:

- The working area of the robot is limited in order to prevent physical crashes with other components, e.g. the camera and the FlippingTable;
- Process parameters, e.g. maximum speed and acceleration, and the force sensitivity of MotionSupervision, are predefined;
- The origin of the product's coordinate system is initially set to the bottom-left corner of the flipping fixture plate. It will be adjusted once the real sample is located;
- MotionSupervision is activated for monitoring unexpected physical crashes;
- The robot moves to the safety position that does not interfere with the vision system and other components; and,
- The socket-messaging communication is connected to the communication centre.

After the initialisation process has been completed, the operation routine enters the operation loop, which mainly operates according to the requested messages from the CRA. Firstly, the trap routine for collision handling is connected to the interrupt signal in the interrupt service routine (ISR), which continuously operates in the background. The interrupt signal from MotionSupervision is thus activated when a collision takes place and immediately calls the CollisionHandling trap routine in order to resolve the collision. The collision detection is necessary not only for reasons of safety but also for improving the accessibility of the cutting tool in the following operation; further explanation is given in Section 4.2.3.2. Afterwards, the requested message is received and the procedure of the corresponding operation plan is executed. This loop repeats until the termination message is received, which ends the routine.
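The routine itself is written in RAPID on the IRC-5 controller; the following Python sketch only mirrors its control flow, and the message strings, the procedures table, and the robot methods are illustrative assumptions.

```python
# Minimal sketch of the operation loop in Figure 4.8. The real routine runs
# in RAPID on the IRC-5 controller; all names below are illustrative.

def operation_loop(socket, robot, procedures):
    robot.initialise()                       # working area, parameters, {P} origin
    robot.activate_motion_supervision()      # collision monitoring
    robot.move_safe()                        # move to the safety position
    while True:
        robot.connect_interrupt(collision_handling)  # connect the trap routine
        message = socket.receive()           # requested message from the CRA
        if message == "terminate":
            break                            # end of routine
        procedure = procedures.get(message)  # identify procedure by string matching
        if procedure is not None:
            procedure()                      # perform the requested operation

def collision_handling(robot):
    """Trap routine: retreat along the tool axis and resume (Section 4.2.3.2)."""
    robot.deactivate_motion_supervision()
    robot.retreat_along_tool_axis()
    robot.acknowledge_collision()            # record hit count and cuttingMethod
    robot.activate_motion_supervision()
```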

[Diagram: the colour and depth cameras share an optical axis above the workspace; coordinate frames are shown for the tooltip {T}, robot base {B}, fixture base {F}, and product coordinate {P}, with the object projected onto the plane zF = 0]

Figure 4.9: Simplified coordinate system with respect to the robot arm


Regarding the coordinate system, the positions used in the entire disassembly system are in the Product coordinate {P}, since it explicitly describes the product and thereby facilitates the cognitive robotic agent's understanding of the product. A simplified representation of the coordinate system is given in Figure 4.9 (see the full representation in Chapter 5). The position of the origin of the Product coordinate is normally received at the beginning of the process. Thereafter, all position information used in the entire system is based on this coordinate system.
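As an illustration of this convention, the sketch below converts a point given in the Product coordinate {P} into the robot-base frame {B}; following Figure 4.9, it assumes {P} is a pure translation of {B} (axes aligned) with its origin at the bottom-left corner of the product's region of interest.

```python
# Minimal sketch: convert a point in Product coordinates {P} to robot-base
# coordinates {B}. Following Figure 4.9, {P} is assumed to be a pure
# translation of {B}, with its origin at the bottom-left corner of the
# product's region of interest on the fixture plate.

def product_to_base(p_point, p_origin_in_base):
    """Translate a Product-coordinate point (mm) into the robot-base frame."""
    return tuple(pp + o for pp, o in zip(p_point, p_origin_in_base))

# Example: a cutting target 50 mm into the product, with the {P} origin
# located by the vision system (values are purely illustrative).
target_B = product_to_base((50.0, 120.0, 0.0), (310.0, -85.0, 140.0))
```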

4.2.3.2 Collision handling

The trap routine CollisionHandling is called once the robot arm physically collides with objects, such as non-detectable parts of the object to be cut. The collisions can be categorised into two types: 1) a hit by a part of the grinder's body and 2) a crash during the cutting process. The first case relates to the geometry of the objects; the crash usually happens when interacting with a part surrounded by complex objects. This form of collision occurs softly due to the cutting speed and force sensitivity. The second case generally occurs when the cutter is not able to cut through the object due to factors such as tool sharpness and material hardness. This form of crash usually produces a large reactive force on the robot, resulting in the motors turning off automatically in order to prevent severe damage to the hardware. This rarely occurs and is expected to be resolved manually, e.g. by renewing the cutter.

In this research, the cognitive robot is expected to be able to reason about the accessibility of, and feed speed towards, a particular object and to learn the proper parameters. The collision detection improves the accessibility of the cutting tool by finding the orientation and speed of the cutting tool that allow a particular object to be accessed and a successful cutting operation to be performed. In this context, a “successful cut” is defined as a cutting operation that can be performed along the entire cutting path without any collision.

In this case, only four orientations (0˚, 90˚, 180˚, and 270˚ around the vertical axis) and two feed speeds (15 and 50 mm/s) are available in order to limit the trial time. The combination of these two parameters is denoted as cuttingMethod (mcut). The robot tries a new set of parameters each time a collision occurs. The orientation can be changed while the speed is fixed according to the material of the object. The final value of cuttingMethod that results in a successful cut is sent to the CRA. In case all choices have been tried and the cutting operation still fails, it is implied that the particular cutting destination is inaccessible, and the CRA will be informed. The cutting method is defined in Equations (4.1) and (4.2).

\[
m_{cut} \in
\begin{cases}
M_S = \{\text{'1'},\text{'2'},\dots,\text{'8'}\} & \text{: successful cut}\\
M_F = \{\text{'0'}\} & \text{: failed cut}
\end{cases}
\tag{4.1}
\]

\[
M_S = \{\,(s_{feed}, \theta_{tool}) : s_{feed} \in \{Low, Hi\} \wedge (\theta_{tool} \in \{N, S, W, E\} \vee \theta_{tool} \in \{In, Out\})\,\}
\tag{4.2}
\]

where s_feed is the feed speed, with {Low, Hi} corresponding to {15, 50} mm/s, and θ_tool is the tool orientation, with {N, S, W, E} corresponding to {0˚, 180˚, 90˚, 270˚} and used for line and point cutting, while {In, Out} is used for contour and corner cutting, where the grinder's body stays inside or outside the cutting path, respectively. The orientations are illustrated in Figure 4.10.
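Read as a lookup, Equations (4.1) and (4.2) enumerate eight successful-cut codes; a sketch of this encoding follows, with the code-to-combination assignment taken from Table 4.1 below.

```python
# Minimal sketch of the cuttingMethod encoding of Equations (4.1)-(4.2):
# codes '1'-'8' are (tool orientation, feed speed) pairs and '0' means the
# cut failed for every combination. The assignment follows Table 4.1.

FEED_MM_S = {"Low": 15, "Hi": 50}

CUTTING_METHOD = {
    "1": ("N", "Low"), "2": ("S", "Low"), "3": ("W", "Low"), "4": ("E", "Low"),
    "5": ("N", "Hi"),  "6": ("S", "Hi"),  "7": ("W", "Hi"),  "8": ("E", "Hi"),
    "0": None,  # M_F: fail cut
}

def decode(m_cut: str):
    """Return (orientation, feed speed in mm/s), or None for a failed cut."""
    pair = CUTTING_METHOD[m_cut]
    return None if pair is None else (pair[0], FEED_MM_S[pair[1]])
```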

[Diagram: the grinder body and cut-off disc shown in the N, S, E, and W orientations, and in the In and Out orientations relative to the cutting path]

Figure 4.10: Notation of tool orientation

Regarding the handling procedure in the trap routine, the force sensitivity is initially set to 150%, which is less sensitive than the default configuration (ABB 2004), because the excessive force of the cutting process is taken into account. When a collision takes place, the robot stops moving and the interrupt signal from MotionSupervision is sent out to call this trap routine. Afterwards, MotionSupervision is temporarily deactivated so that the robot can move out from the crash location to the safety location, and MotionSupervision is then reactivated. In addition, the system needs to record the recent collision condition, i.e. 1) the number of times that the robot has crashed during this cutting procedure and 2) the current cuttingMethod used. This information, which belongs only to the current operation procedure, is used to select the new cuttingMethod for the next trial. In brief, the new cuttingMethod is chosen according to the conditions shown in Table 4.1. It should be noted that the trials are performed only within the feed-speed set of the initial cuttingMethod.

For example, suppose the operation starts cutting a line with cuttingMethod = 3 and crashes for the first time (Hit = 1); the new cuttingMethod = 4 is assigned for the next trial. If the operation crashes again (Hit = 2), the new cuttingMethod = 1 is assigned for the next trial. It should be noted that no memory is kept once the operation procedure is complete; the crash count is reset (Hit = 0) for the next operation procedure. In addition, due to the system's default safety behaviour, the program pointer moves to the starting point of the main routine after MotionSupervision detects a crash. Eventually, the main routine can carry on normally after the collision has been resolved.

| Current cuttingMethod | Combination | Hit = 0 | Hit = 1 | Hit = 2 | Hit = 3 | Hit = 4 |
|---|---|---|---|---|---|---|
| 1 | N, Low | 1 | 2 | 3 | 4 | 0 |
| 2 | S, Low | 2 | 1 | 3 | 4 | 0 |
| 3 | W, Low | 3 | 4 | 1 | 2 | 0 |
| 4 | E, Low | 4 | 3 | 1 | 2 | 0 |
| 5 | N, Hi | 5 | 6 | 7 | 8 | 0 |
| 6 | S, Hi | 6 | 5 | 7 | 8 | 0 |
| 7 | W, Hi | 7 | 8 | 5 | 6 | 0 |
| 8 | E, Hi | 8 | 7 | 5 | 6 | 0 |

NOTE: Hit = the number of times that the robot has crashed; the Hit columns give the new cuttingMethod for the next trial.
Table 4.1: Order of the cuttingMethod according to the number of times the robot crashes
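The retry logic of Table 4.1 reduces to a simple lookup indexed by the initial cuttingMethod and the crash count; a minimal sketch:

```python
# Minimal sketch of the retry logic in Table 4.1: given the initial
# cuttingMethod and the number of collisions (Hit) in the current operation
# procedure, return the cuttingMethod for the next trial ('0' = give up).

RETRY_ORDER = {           # rows of Table 4.1, ordered Hit = 0, 1, 2, 3, 4
    "1": "12340", "2": "21340", "3": "34120", "4": "43120",
    "5": "56780", "6": "65780", "7": "78560", "8": "87560",
}

def next_cutting_method(initial: str, hit: int) -> str:
    """Next cuttingMethod to try; stays within the initial feed-speed set."""
    return RETRY_ORDER[initial][hit] if hit <= 4 else "0"

# Example from the text: starting with method 3, the first crash (Hit = 1)
# yields method 4, and a second crash (Hit = 2) yields method 1.
assert next_cutting_method("3", 1) == "4"
assert next_cutting_method("3", 2) == "1"
```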

4.2.4 General disassembly operation procedure

A disassembly operation procedure contains a sequence of low-level operations in order to perform a certain task. These procedures are located in the mid-level control layer, which is directly connected to the CRM in the high-level control layer. In the disassembly process, the CRA generates commands (primitive actions) according to the information in the KB regarding the possible disassembly operation plans for treating the components of LCD screens. Therefore, the operation procedures are typically designed based on the requirements of this treatment process (details of the component treatment are explained in Section 4.3). The structure of the disassembly operation plans and the operation procedures is illustrated in Figure 4.11.


[Diagram: the treatment of each component comprises several disassembly operation plans; each plan is a sequence of operation procedures, and each procedure calls a sequence of low-level operations]

Figure 4.11: Structure of disassembly operation plans and the operation procedure

The operation procedures can be categorised according to the operation units. With respect to the function of each unit, the procedures belonging to the robot arm are the most complicated and offer more choices in comparison with those of the FlippingTable and grinder. The operation procedures of the FlippingTable and the grinder are straightforward: flipping the table and turning the grinder on or off, respectively. Therefore, this section focuses only on the procedures for the robot, which can be categorised into three groups according to the action of the robot: 1) destructive active actions, 2) non-destructive active actions, and 3) passive actions.

4.2.4.1 Destructive active action

The cutting operations are performed according to the cutting location (x, y, z) and the cuttingMethod given by the CRA. The cutting speed is predefined in each step of the low-level operation according to the cutting direction. The cutting operation procedures consist of the primitive cutting operations: 1) cutScrew, 2) cutLine, 3) cutContour, and 4) cutCorner. In brief, the operation starts from the safety position and moves rapidly (400 mm/s) to the safe level z above the specified (x, y). The grinder then moves down slowly (50 mm/s) to the specified cutting level z, since there is a high possibility of hitting non-detectable objects. The robot cuts along the specified parameterised paths at the cutting speed given by the cuttingMethod and moves back to the safe position when done. These speeds are calculated from the specification of the cutter and from the strength of the disassembly rig. The cutting path of each operation is illustrated in Figure 4.12.


[Diagram: parameterised cutting paths for cutContour() between P1(x1,y1,z) and P3(x3,y3,z), cutCorner() at the four corners of the same rectangle, cutLine() from P1(x1,y1,z) to P2(x2,y2,z), and cutScrew() at the screw head P(x,y,z); the shaded path represents the features specified by the parameters, and the solid arrowed line shows the directed cutting path]

Figure 4.12: Cutting paths of operation procedures
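A sketch of the common motion pattern shared by these procedures is given below, using cutLine as the example; the robot object and its methods are hypothetical stand-ins for the corresponding RAPID move instructions, while the speeds are those stated above.

```python
# Minimal sketch of the common motion pattern of the cutting procedures
# (Section 4.2.4.1): rapid approach, slow plunge, cut, retreat. The robot
# object and its methods are hypothetical stand-ins for RAPID instructions.

ORIENTATION = {"1": "N", "2": "S", "3": "W", "4": "E",
               "5": "N", "6": "S", "7": "W", "8": "E"}
FEED_MM_S = {"1": 15, "2": 15, "3": 15, "4": 15,
             "5": 50, "6": 50, "7": 50, "8": 50}
RAPID_MM_S = 400    # rapid traverse to the safe level above (x, y)
PLUNGE_MM_S = 50    # slow descent: high chance of hitting undetected objects

def cut_line(robot, x1, y1, x2, y2, z, m_cut):
    """cutLine(x1, y1, x2, y2, z, mcut) as a waypoint sequence."""
    robot.set_tool_orientation(ORIENTATION[m_cut])      # around the vertical axis
    robot.move_to((x1, y1, robot.safe_z), speed=RAPID_MM_S)
    robot.move_to((x1, y1, z), speed=PLUNGE_MM_S)       # plunge to the cut level
    robot.move_to((x2, y2, z), speed=FEED_MM_S[m_cut])  # cut along the line
    robot.move_to((x2, y2, robot.safe_z), speed=PLUNGE_MM_S)
    robot.move_safe()
```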

4.2.4.2 Non-destructive active action

The robot arm moves to predefined positions in the idle stage or for calibration purposes. These procedures are simple since all parameters are preset. This category includes 1) moveHome and 2) moveSafe. The moveHome action moves the robot to the home position where all axes are at 0˚; this position is used only for calibration purposes. The moveSafe action moves the robot to the safe position, which is out of sight of the cameras and does not interfere with the FlippingTable's moving range. This action is executed in every cycle before visual sensing and flipping of the fixture plate.

4.2.4.3 Passive actions

This type of action manipulates data by setting or getting the values of the robot's parameters. Only two procedures are in this category: 1) setProdCoordinate and 2) checkCuttingMethod. First, for the procedure setProdCoordinate, the CRA gives the position of the region of interest (ROI) of the product relative to {B} once it has been detected; the position of the bottom-left corner of the ROI is used as the origin of the Product coordinates. Second, the procedure checkCuttingMethod is a sensing action sent from the CRA to check the final status of the cutting method after the cutting operation has been done.

In summary, the operation procedures presented in this section (see the summary in Table 4.2) are utilised by the CRA in the form of primitive actions or sensing actions. These operations are used as parts of the treatment of the LCD screen's components, which is a higher-level strategic plan. The details are explained in the next section.


| Operation procedure (parameters) | Action type | Operation unit | Description |
|---|---|---|---|
| moveHome | primitive | robot arm | Move to home position |
| moveSafe | primitive | robot arm | Move to safety position |
| cutScrew (x, y, z, mcut) | primitive | robot arm | Cut a screw at a specified point |
| cutLine (x1, y1, x2, y2, z, mcut) | primitive | robot arm | Cut a specified line |
| cutContour (x1, y1, x3, y3, z, mcut) | primitive | robot arm | Cut the contour of a specified rectangle |
| cutCorner (x1, y1, x3, y3, z, mcut) | primitive | robot arm | Cut the 4 corners of a specified rectangle |
| setProdCoordinate (x1, y1, x3, y3) | primitive | robot arm | Set Product coordinate origin |
| checkCuttingMethod | sensing | robot arm | Check the current cutting method |
| flipTable | primitive | FlippingTable | Activate the FlippingTable |
| SwitchGrinderOn | primitive | grinder | Activate the grinder |
| SwitchGrinderOff | primitive | grinder | Deactivate the grinder |

NOTE: mcut denotes cuttingMethod

Table 4.2: Summary of the operation procedures

4.3 Disassembly operation plans

In this research, with respect to selective disassembly, the main components of LCD screens are expected to be separated by means of semi-destructive or destructive disassembly. This section explains the disassembly operation plans, the high-level strategic plans used for treating the components by removing them from the LCD screen in each state of disassembly. A disassembly operation plan is process-specific information forming part of the KB in the CRM (part 2 of the diagram in Figure 4.1). It is a sequence of operation procedures, as shown in Figure 4.11. The CRA utilises this knowledge according to its behaviour control. Due to the aforementioned uncertainties regarding the product and process, the system works autonomously, using a trial-and-error process over the search space in order to find the operations and parameters that result in the successful removal of particular components. The details regarding the cognitive robotics are explained in Chapter 6.


4.3.1 Conceptual overview of the operation plans

In this section, the general operation plans for removing each type of main component are explained in terms of the physical operations. A main component can be removed once all of its connections have been disestablished. Therefore, the operation plans focus on an effective and flexible way to disestablish the connections, which are combinations of various types of connective components. To develop generic operation procedures for handling various models of LCD screen, the strategies for developing the plans and parameters are statistically analysed from the 37 sample LCD screens. The operation plans are designed in consideration of the four following factors:

- the connection methods between the main components;
- the nominal values of the cutting parameters for disestablishing those connections;
- the capability of the vision system in detecting the components; and,
- the direction of the cutting tool for accessing the target to be cut.

The main components of LCD screens are connected together by one or more of three types of connection: 1) screws, 2) snap-fits, and 3) cables. Given the capability of the vision system, a different treatment is needed for each of them. The screw is the only connective component that can be detected by the vision system. Since its exact position can be located, it is expected to be destroyed without damage to the surrounding components. The snap-fits and cables can be visually inspected in some circumstances, depending on their location and the disassembly state, but cannot be detected by the available vision system. The disestablishment of a connection varies according to the accessibility of the cutting tool: the semi-destructive approach is used where the connective components are accessible; otherwise, the destructive approach is taken. These two approaches are explained in detail as follows.

4.3.1.1 Semi-destructive approach

The semi-destructive disassembly, producing minimal impact on the main component, is performed when the connective component is visually seen at the time of disassembly. The location to be cut can be either the exact location provided by the vision system or an estimated location based on the location of the corresponding main component. The location can be estimated from design rules and statistical information from the examined samples; the general operation plans are also developed based on this estimation. Two example cases of the semi-destructive approach for the removal of a PCB (a main component) are illustrated in Figure 4.13. The exact location of the screw is provided by the vision system in Figure 4.13a. In Figure 4.13b, the estimated location is used for the cables, which cannot be detected; the cut location is offset outwards from the border of the main component.

[Diagram: (a) the screw is cut at the exact location given by visual detection; (b) the cables are cut at a predicted location at an outer offset from the border of the main component]

Figure 4.13: Semi-destructive approach (a) removing a screw (b) removing cables

4.3.1.2 Destructive approach

The destructive approach is used when the semi-destructive approach fails to remove the main component. This approach is also used for disestablishing visually unseen connective components whose locations are hidden or inaccessible. The destruction is performed directly on the main component. The cut is expected to sacrifice minor parts of the main component that are connected by the leftover connectors. As a result, the majority of the main component is detached while the minor parts remain attached to other components (see Figure 4.14). Since the exact location cannot be obtained, the cutting location is indicated by the aforementioned estimation. For instance, a back cover and a front cover always connect to each other with snap-fits along the border; therefore, cutting along the contour at a 5 mm inner offset from the border would be expected to disestablish this connection, resulting in the main components becoming detached from each other.
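Since both approaches reduce to offsetting the rectangle of a detected component, the estimation can be sketched as follows; the sign convention (positive for inner offsets, negative for outer offsets) is an illustrative choice.

```python
# Minimal sketch: derive an estimated cutting contour by offsetting the
# bounding rectangle of a main component. A positive offset shrinks the
# rectangle (inner offset, destructive cut on the component itself); a
# negative offset grows it (outer offset, semi-destructive cut around it).

def offset_contour(x1, y1, x3, y3, offset_mm):
    """Return the (x1, y1, x3, y3) rectangle offset inwards by offset_mm."""
    return (x1 + offset_mm, y1 + offset_mm, x3 - offset_mm, y3 - offset_mm)

# Example: cut along a back-cover contour at a 5 mm inner offset, so the
# path stays on the cover while passing close to the border snap-fits.
cut_rect = offset_contour(0, 0, 400, 300, 5)   # -> (5, 5, 395, 295)
```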


[Diagram: the cut is made at a predicted location at an inner offset from the border; (a) the main component is attached to the rest of the product by a hidden connective component; (b) after the cut, the main component is detached while a leftover part remains attached]

Figure 4.14: Destructive approach (a) components attached (b) components detached

In summary, these two base disassembly approaches are used to develop the disassembly operation plans for each main component. The semi-destructive approach is preferable, but the requirement regarding the accessibility of the connective component is strict. Meanwhile, the destructive approach is more flexible but cannot remove the main component completely. In addition, one of the challenging issues is the accessibility of the cutting tool. The cutting tool can move in 2.5D within the limited workspace of the robot; therefore, it can only approach the object from above, which is the back side of the LCD screen. This issue needs to be taken into account in the design of the operation plans.

4.3.2 Disassembly operation plan for the components in LCD screens

The disassembly plans for each main component are developed based on the concepts described previously. The operation plans and the related parameters are designed to be flexible enough to handle the majority of the common features of LCD screens. The operation plans are designed by considering the components found in the 37 different sample models. Special features are expected to be handled with human assistance. In this section, a list of the available plans for each component is given.

With respect to the prospective behaviour of the CRA, the removal process for a particular component starts from the lowest-impact operation, which is the semi-destructive approach, i.e. cutting screws. Operations producing a higher impact are executed if the current operation fails. Therefore, the operation plans are presented in the same manner. In practice, the order of plan execution is illustrated in Figure 4.15 (an exceptional condition is added for the removal of the PCB cover). In the pre-processing session, the disassembly operation for a component starts by executing Plan-0, which is a semi-destructive operation. The general Plan-1, Plan-2, ..., Plan-n are executed in turn until the main component is removed. If all general plans fail to remove the component, human assistance is needed in the post-processing session. With respect to the main components, the LCD screen typically consists of six main components (see Section 4.1.2), but the operation for treating the LCD module is ignored because it will be the last component remaining on the fixture; no further disassembly operation is necessary for it. The operation plans for the five main components are explained as follows.

[Flowchart: Start, execute Plan-0 (pre-process), then execute Plan-1, Plan-2, ..., Plan-n (general plans), then human assistance (post-process), then End; a success (S) at any stage ends the process, while a failure (F) advances to the next plan. Outcome of removing the main component: S = success and F = failure]

Figure 4.15: Plan execution order for removing a main component
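The execution order of Figure 4.15 amounts to trying the plans in increasing order of impact until one succeeds; a minimal sketch, with plans represented as callables returning True on success:

```python
# Minimal sketch of the plan execution order in Figure 4.15: plans are tried
# from lowest impact (Plan-0, semi-destructive) upwards, and human assistance
# is the post-process fallback. Each plan is a callable returning True on
# success (S) and False on failure (F).

def remove_component(plans, request_human_assistance) -> bool:
    """Execute Plan-0 .. Plan-n in turn; fall back to human assistance."""
    for plan in plans:                 # [plan_0, plan_1, ..., plan_n]
        if plan():                     # S: component removed, stop immediately
            return True
    return request_human_assistance()  # F for all general plans: post-process
```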

4.3.2.1 Back cover

The back cover is generally formed from 1 – 4 mm thick plastic. The connections between the back cover and the other components (the front cover and the carrier) can be classified into two types based on the connective components: 1) snap-fits only and 2) snap-fits and screws. The snap-fits connect the back cover to the front cover, and the screws connect it to the carrier. Approximately 43% of the screens in the sample are in the first category and 57% in the second. In both types, 6 – 10 snap-fits are located around the border, along the parting line between the front cover and the back cover. In the second category, four screws are typically located near the corners of the back cover (see Figure 4.16). The graph in Figure 4.17 shows the location of the screws relative to the nearest border, disregarding the exceptional screw in the middle area (a distance of 0 represents a sample that has no screws). Since the back cover is generally symmetrical, the four screws appear as two overlapping data points. From this figure, for the second category, 81% of the samples have all screws lying within 12 mm of a side. One extra screw is occasionally found in the middle area, holding the back cover to the carrier more firmly (see Figure 4.18). Moreover, in some exceptional cases, minor press-fits holding the back cover to the PCB cover are found (see Figure 4.19). The strategic operation plans for removing the back cover are described as follows.


Figure 4.16: Example of back cover (a) without screws (b) with screws

Figure 4.17: Location of the screws relative to the nearest border of back cover

[Diagram: back cover with Plan-1 and Plan-3 cutting at inner offsets around the contour and Plan-2 cutting next to the corners; screws and snap-fits are marked, and the shaded areas along the border and at the centre are the typical locations of screws]

Figure 4.18: Operation plans of the back cover


[Diagram, top and side views: two rectangles are cut over the press-fits that hold the back cover to the remaining part (PCB cover, PCBs, carrier, LCD module)]

Figure 4.19: Location and disestablishment of the press-fits

Plan-0: cutScrew - This plan is expected to cut the screws detected by the vision system. However, this operation is often ineffective since the screws on the back cover cannot be detected in most circumstances, i.e. they are located in deep holes or covered by rubber caps. Hence, the detection rate is low and the screws will not be cut. Nevertheless, this plan is kept and utilised because it is a semi-destructive approach which is expected to become successful as the accuracy of the vision system improves.

Plan-1: cutContour at 5 mm inner offset from the border - The cutting destination is at 10 mm depth from the starting level to compensate for error from the vision system. This plan is expected to disestablish the connections due to the snap-fits, which commonly lie within 5 mm of the border located by the vision system. The small inner offset is assigned to make sure that the cut is made on the object. Therefore, the action is not meant to destroy the snap-fits directly but to cut through the back cover at a minimal distance from the expected location of the snap-fits. As a result, a back cover in the first category is expected to be removable after performing this action (see Figure 4.17).

Plan-2: cutCorner at 20 mm inner offset from each corner - This plan is expected to disestablish the connections due to the screws that are typically located next to each corner. This operation aims to produce minimal damage to the back cover while disestablishing the connections due to the screws.

Plan-3: cutContour at 12 mm inner offset from the border - This plan is expected to disestablish the connections due to screws that lie in the common area shaded in Figure 4.18. This operation produces more damage but is more effective than Plan-2. The majority of the models are expected to have the back cover detached after executing this plan (see Figure 4.17).

Custom plan: human assistance - Two types of unusual connections, 1) screws and 2) minor press-fits, can still remain, meaning the back cover is still not removable. First, leftover screws may be located in unusual locations, e.g. near the centre of the back cover. These positions are expected to be located manually, so that the screws can be cut by cutScrew or cutContour. Second, press-fits are usually connected between the back cover and the remaining part at the four lateral surfaces on the side of the PCB cover. The back cover can be pulled out easily by a manual operation conducted by a human operator; however, this is a hindrance for the robot since force control is unavailable. Therefore, the user has to manually locate the cutting location, which is usually the contour corresponding to the base of the PCB cover lying inside (see Figure 4.19).

In conclusion, the back cover is usually easy to disassemble due to its soft material and the straightforward operation plans. However, difficulties arise if the shape of the back cover is complex, which leads to inaccuracy in visually determining the correct cutting location. Moreover, the dense plastic fumes produced during the cutting process can degrade the performance of the detection of disassembly state changes. Hence, an air flow is needed to remove as much of these fumes as possible.

4.3.2.2 PCB cover

The PCB cover can be classified into two types according to the structure of LCD screens described in Section 4.1: the PCB cover is an isolated part in Type-I and a carrier-integrated part in Type-II. However, misclassification can occur between some of the PCB covers of Type-I and Type-II since no significantly different features can be detected visually (see Figure 4.20). Therefore, for clarification, the PCB cover is classified according to its physical appearance as follows:

- Type-Ia: an isolated part with a thin shiny metal plate (< 0.5 mm);
- Type-Ib: an isolated part with a thick matte gray metal plate (0.5 – 1.0 mm); and,
- Type-II: a carrier-integrated part with a thick matte gray metal plate (0.5 – 1.0 mm).

Misclassification always occurs between Type-Ib and Type-II due to the similar material used (see the diagram in Figure 4.21), because the vision system recognises the PCB cover based on contrast and colour scheme. Therefore, the CRA has to take the execution outcome into account to classify the structure. The condition is explained after the operation plans section.

[Photographs: (a) an isolated PCB cover, (b) an isolated PCB cover, and (c) a carrier-integrated PCB cover]
Figure 4.20: Example images of PCB cover (a) Type-Ia, (b) Type-Ib, and (c) Type-II

[Diagram: Type-I(a) is recognised correctly as Type-I, while Type-I(b) is recognised as Type-II]

Figure 4.21: Misclassification of structure between Type-I and Type-II

The isolated PCB cover of Type-Ia and Type-Ib connects to the carrier with two types of connectors: 1) screws and/or 2) snap-fits, located at the base level of the PCB cover. However, they are not supposed to be disestablished directly, since the cutting location would make it risky to cut through the carrier. In that case, the detached part of the carrier would completely seal the open side of the PCB cover, and an extra disassembly process would be needed on this enclosure to disassemble the PCBs inside (see Figure 4.22c). The enclosure is hard to hold with the fixture due to the limited holding area, and it increases the operating cost due to the extra disassembly process. This cutting method is therefore undesirable. The proper operation is to cut only on the top surface of the PCB cover (see Plan-1 and Figure 4.22b). Afterwards, the PCBs lying inside will be observable and can be disassembled in the next state of the disassembly process.

[Diagram: (a) original condition, showing the PCB cover, signal cables to the LCD module, external ports, connectors, PCBs, and carrier, with the Plan-1 cut and the improper cut marked; (b) proper cutting according to Plan-1 detaches the top plate; (c) an improper cut through the carrier creates an enclosure]

Figure 4.22: Cutting options for PCB cover Type-I

For the carrier-integrated PCB cover of Type-II, no fasteners are used, since the PCB cover and the carrier are a homogeneous object. However, this component needs to be cut from above in the same way as Type-I in order to disestablish hidden connections that cause problems while removing the carrier. The problematic hidden connection is a bundle of CCFL control signal cables connected between the controller PCB and the LCD module (these cables are located on the carrier and are visually observable in the Type-I structure), as shown in Figure 4.23a. These cables lie underneath the carrier and usually keep the carrier from falling down when the plate is flipped during the carrier removal state. From the preliminary experiments, the connectors of the flat ribbon cable sometimes become detached due to the weight of the hanging parts (the carrier with the PCB cover part and PCBs), which is approximately 600 g – 1200 g in total. However, this result is unreliable, so a more effective cutting operation is developed.


[Diagram: (a) original condition, showing the cuts for Plan-1 and Plan-2, the external ports, signal cables to the LCD module, PCBs, and carrier; (b) after cutting according to Plan-1, no part is detached and the connection at the external ports remains; (c) after cutting according to Plan-2, the part is detached and the hidden cables have been cut]

Figure 4.23: Cutting options for PCB cover Type-II

Due to the constraint that the disassembly operation is performed top-down from the back side of the LCD screen, the cables cannot be visually detected under any circumstances while the PCB cover part is still in place. Hence, the location of the PCB cover is used to estimate the cutting location. The estimated cutting target is a contour at the base of the PCB cover, cut to approximately 5 – 10 mm depth into the carrier (see Plan-2). This operation is expected to be effective where the connector on the LCD module's side is located outside the projected area of the PCB cover's base; otherwise, further manual assistance is needed to locate the cutting line. In this case, the detached PCB cover does not fall down when the fixture plate is flipped, since it hangs by the leftover cables. However, the PCB cover usually moves from its original place and the hidden cables are revealed (see Figure 4.24). The user can then locate the cutting location and demonstrate the cutting operation to the CRA.


[Photographs: (a) original condition, showing the LCD module, carrier, and PCB cover; (b) after cutting by Plan-1 and Plan-2, the detached PCB cover part hangs from the product; (c) close-up of the hanging part; (d) close-up of the cable connection, showing the CCFL control signal cables between the connector on the LCD module and the connector on the PCB]

Figure 4.24: Hanging detached PCB cover part in Type-II structure

Based on these considerations of the type of PCB cover, the operation plans are designed as follows. Overall, the execution for either the thin plate cover (Type-Ia) or the thick plate cover (Type-Ib or Type-II) is initially performed according to the visual classification. The condition after executing Plan-1 is then considered in order to further classify between Type-Ib and Type-II. The classification strategy is illustrated in Figure 4.26, and the operation plans are summarised in Figure 4.25. Each plan is explained in detail as follows.

[Diagram: Plan-1 cuts on the top surface of the PCB cover; Plan-2 cuts at the base level of the PCB cover, which is the top surface of the carrier]

Figure 4.25: Operation plan for PCB cover


[Flowchart: the PCB cover is first visually recognised as Type-Ia or Type-Ib/II. For Type-Ia, Plan-1 is executed (pre-process); on failure, human assistance follows (post-process). For Type-Ib/II, success of Plan-1 implies Type-I(b); failure implies Type-II, whereupon Plan-2 is executed, with human assistance on failure. Outcome of removing the main component: S = success and F = failure]

Figure 4.26: Classification strategy for PCB cover based on the execution result

Plan-0: cutScrew - This plan is expected to cut the screws detected by the vision system. The screws are located around the PCB cover's base in the Type-I PCB cover. However, a high rate of false-positive detections due to insignificant features, e.g. ventilation holes and dents, dramatically increases the number of operations to be executed. It is therefore suggested that this operation be skipped in the actual process.

Plan-1: cutContour at 5 – 10 mm inner offset from the border on the top surface - The destination of the cut is at 5 mm depth from the top surface, which is generally the level that leaves the external ports undamaged. The external ports normally include: 1) the power cord connector, 2) the Video Graphics Array (VGA) port, and 3) the Digital Visual Interface (DVI) port. This plan is expected to remove the top plate of PCB covers of Type-Ia and Type-Ib. For Type-II, the top plate will not be removable due to the uncut external ports connected to the PCBs mounted under the top plate (Figure 4.22b); these ports connect to one side of the PCB cover with strong connectors, e.g. hex screws. Therefore, the achievement of removal is used to distinguish between Type-I and Type-II: the PCB cover is Type-I if Plan-1 is successful and Type-II otherwise (see Figure 4.26). The conditions after cutting in Type-I and Type-II are shown in Figure 4.22b and Figure 4.23b, respectively. Since this plan is crucial for the classification, the targeted contour can automatically be adjusted between a 5 – 10 mm inner offset to make sure that the target is cut.
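The outcome-based classification of Figure 4.26 can be sketched as follows; the plan-execution callables and the return labels are illustrative assumptions.

```python
# Minimal sketch of the outcome-based classification of Figure 4.26: a thick
# matte cover (visually Type-Ib or Type-II) is resolved to Type-I(b) if
# Plan-1 removes the top plate, and to Type-II otherwise, in which case
# Plan-2 is executed. Plan callables return True on successful removal.

def classify_and_treat(visual_type, execute_plan1, execute_plan2, ask_human):
    """Resolve the PCB cover type from the execution outcome (Figure 4.26)."""
    if visual_type == "Ia":              # thin shiny plate: no ambiguity
        if not execute_plan1():
            ask_human()                  # post-process: human assistance
        return "Type-Ia"
    if execute_plan1():                  # top plate removed -> isolated cover
        return "Type-Ib"
    if not execute_plan2():              # carrier-integrated cover
        ask_human()                      # post-process: human assistance
    return "Type-II"
```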

Plan-2: cutContour at 5 mm outer offset from the border on the base level - The destination of the cut is at 10 mm depth from the PCB cover base level. Cutting to this depth is expected to destroy all hidden cables in the Type-II structure (see Figure 4.23c).

Custom plan: human assistance - The custom plan is needed if the PCB cover cannot be removed after executing Plan-1 and Plan-2 for Type-I and Type-II, respectively. For Plan-1, the problem is associated with inaccuracy in the visual localisation: the vision system will not be able to locate the cutting path accurately if the PCB cover is not a rectangular box, so the correct cutting path must be manually located. With respect to Plan-2, the problem is usually a hanging part due to uncut hidden cables; the cutting location should be manually located.

In conclusion, the disassembly process is straightforward for the Type-I PCB cover. The expected outcome is the top plate of the PCB cover. Although part of the PCB cover remains on the rest of the product, this step is significant since the PCBs inside are revealed for disassembly in the next state. For the Type-II PCB cover, the expected outcome is a part of the PCB cover with the PCBs mounted underneath; further disassembly is unavoidable in this case, and this part is reloaded onto the disassembly rig afterwards.

4.3.2.3 PCB

PCBs are fabricated from an approximately 1.5 mm plate of thermoplastic material with a number of embedded components mounted on it. According to functionality, an LCD screen contains three types of PCB: 1) a power supply – inverter board, 2) a controller, and 3) a switch board (Kim et al. 2009). The proposed operation plans are used to remove any PCB regardless of its type. It should be noted that the CCFL control PCBs mounted on the LCD module are not considered in this research because they are part of the LCD module, which is not further disassembled. Variations are found in the quantity, size, shape, and location of the PCBs. Approximately 90% of the samples have two PCBs on the carrier (a power supply – inverter and a controller); the other 10% have three PCBs, in which the power supply unit for the CCFLs in the LCD module is separate from the main power supply PCB. The switch board is always mounted on the front cover. Regarding shape, around 95% of the observed PCBs are rectangular and fewer than 5% are L-shaped.


With respect to the connections, the connective components are of three types: 1) screws, 2) reusable plastic rivets, and 3) electrical and electronic cables (see Figure 4.27). The PCBs generally connect to the carrier with 3 – 5 screws according to the size and shape of the PCB. These are usually located within 10 mm of the border and corners. In order to support the middle area, either a few screws or reusable plastic rivets are used. Figure 4.28 shows the location of the screws relative to their nearest border: around 75% of the samples have all screws lying within 10 mm of a side. In addition, regarding the external ports, two hex screws connect the VGA port and the DVI port to the side of the PCB cover, and the power port connects to the PCB cover with either a plastic socket or two screws. The electronic and electrical cables are connected between two PCBs or between a PCB and the LCD module. Common locations of the featured connectors are shown in Figure 4.27. These locations vary in different models of LCD screen, but the featured connectors remain the same for each PCB type. Consequently, the featured connectors are schematically illustrated in Figure 4.29.

[Figure omitted: types of PCBs (PCB-1: power supply - inverter; PCB-2: controller) on the carrier or PCB cover, with connectors (A) cables - CCFL voltage supply, (B) cables - between the two PCBs, (C) cables - CCFL control signal, (D) cables - panel switch, (E) port - power supply, and (F) ports - VGA and DVI; screws are not shown.]

Figure 4.27: Common location of the connectors on PCBs


Figure 4.28: Location of the screws relative to the nearest border of PCBs

[Figure omitted: operation plans overlaid on PCB-1 (power supply - inverter) and PCB-2 (controller), marking the Plan-1, Plan-2, and Plan-3 cutting paths, the ports, and the cables; shaded areas along the border and at the centre are the possible locations of screws.]

Figure 4.29: Operation plans regarding the common location of the connectors on PCBs

The disassembly operation plans are developed according to these features. Figure 4.30 shows the disassembly states of PCBs, which are the consequences of removing the PCB cover as shown in Figure 4.22 and Figure 4.23. In general, the cutting destination is set at 15mm depth from the PCB base. This value is selected to compensate for the potential position error due to the embedded components.


In the Type-I structure, the vision system possibly senses 5-10mm higher than the actual level of the PCB base if the embedded components are big and densely placed (see Figure 4.30a). On the other hand, the PCB base is located accurately in the Type-II structure. After the detached PCB cover part with PCBs (see Figure 4.23) has been removed, the PCBs will be further separated. This part will be reloaded to the fixture placed upside down, so that the disassembly is done from the bottom side (see Figure 4.30b). The operation plans are summarised in Figure 4.29. Each plan is explained in detail as follows.

[Figure omitted: PCB base level, embedded components, PCB base plate, and stands for (a) Type-I and (b) Type-II (upside down).]

Figure 4.30: Position of PCBs to be disassembled

Plan-0: cutScrew - This operation is effective in removing the PCBs owing to the high screw-detection rate of the vision system. False positive detections may occur due to embedded components, e.g. canned capacitors and solder joints, which have a similar size to screws. Such false detections cause extra damage to the object, but they also increase the success rate of removing the main component.

Plan-1: cutContour at 5mm inner offset from the border - This operation is expected to disestablish the connections due to the ports and cables around the PCBs. Although the cut should ideally be outside the area of the PCB in order to avoid damaging this main component, the outer side is usually inaccessible due to interference from the leftover PCB cover.

Plan-2: cutCorner at 20mm inner offset from each corner - This operation is expected to disestablish the connection due to the screws that are typically located within 20mm of each corner. The damage is relatively minimal in comparison with Plan-3.

Plan-3: cutContour at 10mm inner offset from the border - This operation is expected to disestablish the connections due to the screws that are typically located within 10mm of each side, as shown in Figure 4.28. Therefore, around 75% of PCBs are expected to be removed after executing this plan. Also, any external ports left incompletely cut by Plan-1 will be disestablished.


Custom plan: human assistance - The custom plan is needed to resolve two issues. First, the leftover connectors in the middle area of the PCB need to be manually located and disestablished. Second, due to inaccuracy in visual localisation, the correct cutting paths need to be manually located, especially in the case of L-shaped PCBs.

In conclusion, PCBs connect to other components with many types of connectors. Most of the connectors cannot be detected directly by the vision system but are expected to be disestablished by the proposed operation plans, which cut around the contour of the PCB. In practice, the cutting tool can cut through the PCBs and embedded components easily since they are made of low-strength material. An accessibility problem may occur when the leftover PCB cover is fabricated from a thick metal plate, causing the cutting tool to crash into it. Therefore, other available tool orientations and feed rates need to be considered.

4.3.2.4 Carrier

The carrier is designed to be very strong since it is the core structure of the LCD screen. It is normally fabricated from a 1-3mm thick metal plate. The carrier connects to most of the main components in the LCD screen. However, most of these components, including the PCB cover, PCBs, and back cover, are expected to be removed in the earlier disassembly states. Therefore, only the connections with the remaining components, i.e. the LCD module and the front cover, are considered in the current state.

With respect to the connection, the connective components are of three types: 1) screws, 2) snap-fits, and 3) electrical cables. The carrier firmly connects to the LCD module with four screws located on the left and right sides of the LCD module. They are also connected with two pairs of CCFL voltage supply cables located on one side, either left or right, of the carrier. These cables are initially connected to the PCB and normally attached to the carrier with clips. In addition, approximately half of the observed samples have the front cover connected to the carrier, with 4-8 screws and snap-fits located within 10mm of the border of the carrier. The common locations of these connections are shown in Figure 4.31 and schematically illustrated in Figure 4.32. The operation plans are designed based on these expected locations.


[Figure omitted: connections belonging to the carrier: (A) screws, (B) cables - CCFL control signal, (C) snap-fits, and (D) screws, grouped by connection with the LCD module and with the front cover.]

Figure 4.31: Common location of the connections belonging to the carrier

The operation plan is expected to disestablish these connections by cutting the carrier around the inner contour. The main part of the carrier therefore becomes removable, while the small leftover parts around the border remain attached to the rest of the product. The cutting destination is set 10mm deep from the top level to ensure that all hidden cables underneath are disestablished. It is important to note that cutting too deep may damage the LCD module.

[Figure omitted: top and side views of the operation plan for the carrier, with Plan-1 cutting an inner offset around the contour; screws, snap-fits, and cables are marked, and the figures are rescaled for better clarification.]

Figure 4.32: Operation plan for carrier

Plan-0: cutScrew - This plan is expected to cut screws as detected by the vision system. However, most screws cannot be detected since they are located on the sides of the carrier.


False positive detections, caused by insignificant features similar to those detected on the PCB cover, also lead to a number of excessive operations. Therefore, it is suggested that this operation be skipped in the actual process.

Plan-1: cutContour at 5mm inner offset from the border - This operation is expected to disestablish the connections with the LCD module and the front cover (see Figure 4.32). This operation is straightforward and expected to be very effective since the number and the location of the connectors are consistent in every LCD screen.

Custom plan: human assistance - The manual operation is used to resolve the problem of locating the contour of the carrier due to inaccuracy of the vision system. Therefore, the correct cutting location and depth of cut should be supplied.

In conclusion, the core structure of the LCD screen is supported by a carrier which is initially connected to a number of the main components. In this current state, only the LCD module and the front cover are expected to be attached to the carrier with screws and snap-fits around the border area. Therefore, cutting around the contour of the carrier results in an effective outcome.

4.3.2.5 Front cover

The front cover is a plastic frame with a front panel switch. A panel board PCB (smaller than 10 cm²) is commonly attached to the front cover but does not need to be separated according to the WEEE directive (Parliament 2003). With respect to the disassembly process, the front cover is the last component remaining attached to the LCD module. The LCD module and the front cover are shown in Figure 4.33. They are connected by 4-8 snap-fits located around the LCD module (see Figure 4.34). Therefore, the operation plan is designed to cut around the contour of the LCD module to disestablish these snap-fits. If the vision system cannot locate the LCD module properly, the cutting location can be estimated from the location of a previously detected component such as the back cover or carrier. There is only one plan available; it is described as follows.


[Figure omitted: (a) front cover and (b) LCD module, with features on the LCD module: (A) CCFL voltage supply cables, (B) CCFL control signal cables, and (C) the LCD module's PCB cover.]

Figure 4.33: Front cover and LCD module

[Figure omitted: top and side views of the LCD module, leftover carrier, and front cover, with Plan-1 cutting at the offset contour estimated from the location of the carrier; screws and snap-fits are marked, and the figures are rescaled for better clarification.]

Figure 4.34: Operation plan for LCD module and front cover

Plan-1: cutContour at 5mm outer offset from the border of the LCD module - A small 5mm outer offset is assigned in order to avoid damaging the LCD module. There is a high possibility of breaking the CCFLs since the cutting path is close to the CCFLs lying inside the LCD module. The front cover will be cut all the way through until the top surface of the flipping plate is reached.

Custom plan: human assistance - The manual operation is used to resolve the problem due to inaccuracy of the vision system in locating the cutting path. Therefore, the correct cutting location and depth of cut should be supplied.


However, since most of the snap-fits are expected to be indirectly disestablished in earlier states, e.g. by Plan-2 of the back cover that cuts the four corners, this component can often be detached easily with minimal force. In those circumstances the proposed operation plan may not be necessary. Moreover, even though this operation is effective, this destructive operation is high risk and causes a lot of damage to the components. The outcome is less significant in comparison with the operation cost, i.e. time consumption, tool replacement, risk level, etc. Therefore, it is suggested this operation be omitted in the actual experiment.

4.4 Conceptual testing

This conceptual test aims to prove the effectiveness and efficiency of the proposed operation plans for removing the components, considering the disassembly operation module alone. The objectives are summarised as follows:

x To prove the effectiveness of the proposed operation plans in disassembling the LCD screen into the main components at the desired depth of disassembly; and,
x To measure the efficiency of the removal process in regard to the completeness of the detached components, since destructive approaches are used.

A semi-automatic operation is performed in order to eliminate discrepancies arising from the decision-making process of the CRA. The sequence of main components to be removed is manually controlled, while the execution order of the operation plans and their parameters are identical to the expected automatic process. The vision system is used to preliminarily recognise and locate the border of the main component. However, this detection result is verified by the operator before proceeding to the cutting process. In case of major inaccuracy in localisation or recognition, the cutting paths are manually supplied through the graphical user interface (GUI).

In summary, the disassembly operation units conduct the disassembly process according to the sequence given as manual commands. The cutting paths are normally located by the vision system and manually revised in case of major error. The operating cycle for removal of a component is as follows.


4.4.1 Testing procedure and operating cycle

A model of an LCD screen was selected as the case study: a 17” Type-I structure LCD screen. It was selected because its size and structure are similar to the majority of the available samples, so similar system behaviour is expected in a number of cases in the full experiment. First, the sample is loaded to the fixture. It is monitored through the GUI during the majority of the process; the GUI is explained in Appendix D. The operator controls the vision system and the disassembly operation module manually via this GUI. In comparison with the fully automatic process controlled by the cognitive robotic agent, this conceptual testing is more flexible in handling unexpected circumstances, e.g. physical collisions and program crashes. The procedure for disassembling each component is explained as follows.

After the current component has been recognised and located, the detection result needs to be verified for accuracy. The location is used as the cutting path for the operation if the position error is minor, in this case within ±5mm; the correct position is located manually in case of major error. Next, the available disassembly operation plans are executed in the order Plan-0, Plan-1, ..., Plan-n. It is noted that the locations of non-detected screws are manually supplied for executing Plan-0. For each general cutting plan, the cutting target is deepened by 1-2 mm per cycle according to the material type. Next, the FlippingTable is activated to check whether the component has been completely detached. This procedure is repeated until the depth limit is reached or the component has been removed. If the component is still attached, the alternative plans are executed in order until the component is removable. Afterwards, the operation for the next observable component is performed in the same manner. Eventually, this conceptual testing is done once the goal state is reached; in this case, when the LCD module is found.
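This operating cycle can be summarised as a control loop. The sketch below is illustrative only: the type Plan and the helpers executeCut, flipAndCheckDetached, and removeComponent are hypothetical names mirroring the prose, not the system's actual code; in the conceptual test the cycle was driven by the operator through the GUI.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Hypothetical sketch of the per-component operating cycle: execute each
// plan in order, deepening the cut per cycle and flipping the table to
// test detachment, until the component is removed or all plans fail.
struct Plan { double depthLimit; double depthStep; };   // millimetres

bool executeCut(const Plan&, double targetDepth) {
    std::printf("cutting to %.1f mm\n", targetDepth);   // stub cutting action
    return true;
}
bool flipAndCheckDetached() { return false; }           // stub FlippingTable check

bool removeComponent(const std::vector<Plan>& plans) {
    for (std::size_t i = 0; i < plans.size(); ++i) {
        for (double d = plans[i].depthStep;
             d <= plans[i].depthLimit; d += plans[i].depthStep) {
            executeCut(plans[i], d);                    // deepen 1-2 mm per cycle
            if (flipAndCheckDetached())
                return true;                            // component removed
        }
    }
    return false;                                       // escalate to custom plan
}
```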

4.4.2 Testing result

Based on the assumption that the exact product structure is not known a priori, the operation sequence of this disassembly process is not predefined but generated based on the main components found in each state of disassembly. In summary, after this sample was successfully disassembled, the 8 states and 6 types of main components are illustrated in Figure 4.35 and the operation plans are illustrated in Figure 4.36. The detail of each state is described as follows. It should be noted that the cuts shown in this section represent the expected actual cuts using the grinder.


Hence, short lines extend beyond the original lines shown in the aforementioned operation plans (see Figure 4.12).

[Figure omitted: disassembly states and detached main components. State-1: back cover; State-2: PCB cover; States-3 to 5: PCB-1, PCB-2, and PCB-3; State-6: carrier; State-7: front cover; Goal state: LCD module.]

Figure 4.35: Disassembly states and detached main components


[Figure omitted: expected detection by the vision system and expected cuts (actual cuts with the grinder) for each state. State-1: Plan-1 and Plan-2 (back cover); State-2: Plan-1 (PCB cover); States-3 to 5: Plan-0, Plan-1, and Plan-2 (PCBs), with the non-detected screws manually located for testing Plan-0; State-6: Plan-1 (carrier); State-7: Plan-1 (front cover), not executed due to the risk of breaking CCFLs.]

Figure 4.36: Disassembly states and expected operation plans


4.4.2.1 State-1: Back cover

The back cover was found to be mounted on the remaining product with four screws at the corners (see Figure 4.37a). Plan-0 was not available for the back cover since it is not effective in dealing with the hidden screws found in the majority of the samples. Therefore, Plan-1 was executed to cut around the border, 2 mm deeper per cycle, until the depth limit was reached. The back cover was still not detached since it was evidently connected by the screws. Subsequently, Plan-2 was executed in order to disestablish the connection due to these screws, which were located at the top-left and top-right, far from the corners. The operation cut across those screws instead of cutting through the plastic part of the back cover, but it still succeeded in detaching the component. The outcome of these operations is shown in Figure 4.37b; the back cover was able to be removed afterwards.

Figure 4.37: Removal of back cover: (a) before cutting; (b) after cutting

4.4.2.2 State-2: PCB cover

The PCB cover was found as a thin, shiny, metal rectangular box. This was classified as a Type-I structure, so Plan-1 was executed. As a result, the top plate of the PCB cover was cut and easily removed, as shown in Figure 4.38. Although this operation left the majority of the components remaining on the product, it was a significant step since the components lying inside became observable in the following state.


[Figure omitted: cutting the top plate of the PCB cover with Plan-1; the remaining part of the PCB cover stays on the product.]

Figure 4.38: Removal of PCB cover

4.4.2.3 States-3 to 5: PCBs

After the PCB cover had been taken out, three PCBs were found underneath. In comparison with the ideal condition in Figure 4.39a, a remaining part of the PCB cover was found in the actual case (Figure 4.39b). This remaining part may become an obstacle for the grinder in approaching the PCBs. Accessibility can be assessed from the height of the remaining part in comparison with the size of the grinder's cutting blade. In this case, the grinder was able to access the cutting target. After the plans were executed, the PCBs were removed from the carrier as shown in Figure 4.39c and Figure 4.39d.

The removal of each PCB was considered as one state of disassembly. Therefore, these PCBs corresponded to three consecutive states: PCB-1, PCB-2, and PCB-3 were removed in states-3, 4, and 5, respectively. Plan-1 was executed to remove PCB-1 by cutting around the contour; this operation successfully disestablished the connections due to screws and the external port (see Figure 4.40). For PCB-2, Plan-1 was executed in the same manner but was not able to detach the component due to the leftover screws near the corners; hence, Plan-2 was performed to disestablish them. For PCB-3, there was no external port connection, and Plan-1 alone was able to detach this PCB. In summary, the cutting paths of these PCBs are shown in Figure 4.41. It can be noticed that the cutting location presented on the undamaged PCB (first row) is not identical to the actual result (second row). This was caused by position error in the location given by the vision system. However, this error is expected to be addressed by Plan-3, which cuts farther into the PCB.


[Figure omitted: PCB-1, PCB-2, and PCB-3 with the remaining part of the PCB cover: (a) ideal condition and (b) actual condition before removal; (c) ideal condition and (d) actual condition after removal.]

Figure 4.39: Removal of PCBs

[Figure omitted: undamaged PCBs from non-destructive disassembly compared with damaged PCBs from destructive disassembly: PCB-1 (Plan-1), PCB-2 (Plan-1 and Plan-2), and PCB-3 (Plan-1).]

Figure 4.40: Comparison of the disassembly outcome of PCBs


[Figure omitted: cutting paths across the connections of each PCB; sc = screw, c = cable connector.]

Figure 4.41: Disestablishment of the connections of the PCB

4.4.2.4 State-6: Carrier

Removal of the carrier was straightforward since there was no variation in the location of the connectors. Therefore, only Plan-1 was executed, cutting along the border of the carrier as shown in Figure 4.42a. The target depth was 10mm from the top surface of the carrier, intended to cut the hidden CCFL voltage supply cables which connect to the LCD module (see Figure 4.42b). Even though the cables were not completely cut by this operation plan, they were found to have been disestablished in an earlier state, possibly during the removal of the PCBs. As a result, the carrier part was able to be removed.

4.4.2.5 State-7: Front cover

This was the final state since the LCD module was detected as in Figure 4.42b. Plan-1 was expected to cut around the contour to detach the front cover part. However, this operation was avoided, as explained above, owing to the risk of breaking the CCFLs. As a result, the front cover was manually separated after the final remaining part had been unloaded from the fixture. Finally, the LCD module was separated, which can be considered the goal state. It should be noted that inspection of the damage on the CCFLs should be done after completing the entire process; they cannot be seen directly after these cutting operations. The CCFLs are generally located within 10mm of each side of an LCD module. Therefore, the damage can be indirectly judged from the cuts on the back of the LCD module.


Figure 4.42: Removal of carrier and LCD module: (a) carrier; (b) LCD module

4.4.3 Conclusion of the conceptual test

The sample LCD screen was able to be completely disassembled into eight main components corresponding to eight states. The proposed operation plans were able to separate each component by the destructive approach. The end condition of the detached main components was partial damage due to the cutting operations (see Figure 4.40). In this case, the efficiency of the operation can be indirectly measured by comparing the weight of the actual detached part with that of the ideal undamaged part.

Due to the cutting operation, the weight of a detached component can change under two conditions: 1) loss of weight due to cut-off pieces and 2) gain of weight due to leftover scraps of other components (see Equation (4.3)). For example, in Figure 4.42a, the remaining carrier lost weight due to the cut-off material around the border while it gained weight due to the attached parts of the PCBs and PCB cover. In brief, the efficiency is high if the major part of a particular component is separated from the other components. The remaining_cut and residue_cut are defined in Equations (4.4) and (4.5); these terms clearly describe the characteristics of the component at the end condition. The efficiency is determined from the absolute deviation of the residue from the ideal case, as in Equation (4.6).

$\mathrm{weight}_{actual} = \mathrm{weight}_{ideal} + \mathrm{weight}_{gain} - \mathrm{weight}_{loss}$   (4.3)

$\mathrm{remaining}_{cut}\,(\%) = \dfrac{\mathrm{weight}_{actual}}{\mathrm{weight}_{ideal}} \times 100\%$   (4.4)


$\mathrm{residue}_{cut}\,(\%) = \mathrm{remaining}_{cut}\,(\%) - 100\%$   (4.5)

$\mathrm{efficiency}_{cut}\,(\%) = 100\% - \left|\mathrm{residue}_{cut}\,(\%)\right|$   (4.6)
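Equations (4.4)-(4.6) reduce to a few lines of arithmetic. The following sketch (the function names are my own, not from the thesis) reproduces the back cover row of Table 4.3 as a check.

```cpp
#include <cmath>
#include <cstdio>

// Equations (4.4)-(4.6): removal efficiency from ideal vs. actual weight.
double remainingCut(double wActual, double wIdeal) { return 100.0 * wActual / wIdeal; }
double residueCut(double wActual, double wIdeal)   { return remainingCut(wActual, wIdeal) - 100.0; }
double efficiencyCut(double wActual, double wIdeal){ return 100.0 - std::fabs(residueCut(wActual, wIdeal)); }

int main() {
    // Back cover from Table 4.3: ideal 343.3 g, actual 320.0 g.
    std::printf("remaining %.2f%%, residue %.2f%%, efficiency %.2f%%\n",
                remainingCut(320.0, 343.3),    // 93.21
                residueCut(320.0, 343.3),      // -6.79
                efficiencyCut(320.0, 343.3));  // 93.21
    return 0;
}
```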

The disassembly outcomes are presented from two perspectives: 1) component and 2) material type. The component perspective gives more detail on the outcome of each operation plan. However, in the actual disassembly, since all detached parts fall into the disposal container, the parts can be mixed and the types of component become visually indistinguishable; measurement from the material perspective is therefore more practical. The full experiment is presented in Chapter 7. The outcomes from the component and material perspectives are shown in Table 4.3 and Table 4.4, respectively.

Material    Component    Ideal weight (g)   Actual weight (g)   Remaining (%)   Residue (%)   Efficiency (%)
Plastic     Back cover        343.3              320.0              93.21           -6.79          93.21
Plastic     Front cover       159.4              172.6             108.28           +8.28          91.72
PCB         PCB-1              30.1               26.9              89.37          -10.63          89.37
PCB         PCB-2             160.5              143.0              89.10          -10.90          89.10
PCB         PCB-3              68.6               47.7              69.53          -30.47          69.53
Steel       PCB cover         163.5               77.7              47.52          -52.48          47.52
Steel       Carrier           691.7              631.1              91.24           -8.76          91.24
Compound    LCD module       1000.7             1043.9             104.32           +4.32          95.68

Table 4.3: Outcome of the destructive disassembly in component perspective

Material    Ideal weight (g)   Actual weight (g)   Remaining (%)   Residue (%)   Efficiency (%)
Plastic           502.7              492.6              97.99           -2.01          97.99
PCB               259.2              169.9              65.55          -34.45          65.55
Steel             855.2              708.8              82.88          -17.12          82.88
Compound         1000.7             1043.9             104.32           +4.32          95.68
Product          2617.8             2462.9              94.08           -5.92          94.08

Table 4.4: Outcome of the destructive disassembly in material type perspective

In practice, it is expected that the back cover, PCB cover, and PCBs have residue < 0 since some parts of them are cut off. The LCD module and the front cover are expected to have residue > 0 due to the leftover metal scraps of the carrier.


Ambiguity occurs in the case of the carrier, which may have some parts cut off as well as leftover parts from other components; it therefore has to be judged case by case with respect to the material types of the leftover parts.

Regarding the component perspective in Table 4.3, most components, i.e. the back cover, PCB-1, PCB-2, the carrier, the front cover, and the LCD module, have residues approximately within ±10%, implying that the efficiency of those individual components is approximately 90% or greater. A significant percentage of residue was found in the PCB cover and PCB-3. The residue of the PCB cover was around 50%, which means that the cutting operation was not efficient; it was nevertheless effective, since the main components inside were revealed. The residue of PCB-3 was around 30%, corresponding to the significant area around the border that was cut off, as shown in Figure 4.40 and Figure 4.41. Overall, the trend of the residue according to the ± sign of each component was as expected. In the case of the carrier, where the sign could be misleading, leftover parts from the PCBs and the PCB cover were found; these are made of lightweight materials, so the cut-off parts of the carrier, which is high-density metal, had more influence on the residue percentage. A more practical outcome with regard to the material perspective is shown in Table 4.4; similar trends were expressed as for the individual components.

In summary, the proposed operations achieved disassembly of the sample into components. The LCD module was able to be separated with minimal damage. Under the destructive disassembly, most of the detached components were collectible as lumps, accounting for approximately 94% by weight. The remaining 6% was completely destroyed and turned into scraps and dust.

4.5 Conclusion

In this chapter, the automated disassembly system is described from the disassembly operation perspective. The structural analysis of the case-study product, LCD screens, is performed. The disassembly operation units are designed to support the disassembly of this product family. Lastly, the strategic operation plans for disassembling LCD screens to the module level are developed. They can be concluded as follows.

First, according to the case study, the analysis was done on 37 different models of LCD screens, sized 15”-19”, manufactured in 1999-2011, and attributed to 15 different manufacturers.


By means of selective disassembly, the LCD screens were found to typically consist of 9 types of main components connected by 3 types of connective components. However, no two identical samples were found, due to a number of variations regarding the components and structure. These variations are summarised as follows:

x Types of the main structure according to the assembly direction;
x Physical appearance of the main and the connective components;
x Location of the main and the connective components;
x Quantity and layout of the main components, e.g. PCBs; and,
x Quantity and type of the connective components.

In order to perform the automated disassembly, these variations are expected to be sensed by the vision system. However, the strategic disassembly operations are designed to compensate for visual detection errors in certain circumstances, e.g. disestablishment of hidden cables and non-detected screws.

Second, the disassembly operation units are designed based on these samples. The operation unit consists of three main components: 1) robot arm, 2) angle grinder, and 3) FlippingTable. The main tasks are conducted by the robot arm, which performs destructive operations using the grinder. According to the level of control, the parameterised standard procedures are pre-programmed in the mid-level for supporting the primitive actions requested by the CRA. Moreover, the robot can detect a crash during the process and automatically find a proper cutting orientation using the built-in torque sensor and MotionSupervision. Therefore, movement and collision awareness are controlled automatically within this module.

Lastly, an operation plan is developed for each type of main component, as summarised in Table 4.5. These plans are located in the KB as part of the high-level control layer. The disassembly operation is done by the semi-destructive and destructive approaches. Although the semi-destructive approach produces minimal damage to the main component, the destructive approach is more effective in most cases where the connector cannot be detected and disestablished directly. The operations are expected to disestablish both detectable and non-detectable connections by cutting at the possible locations of the connective components. Human assistance gets involved in exceptional cases due to unusual locations of the connectors and inaccuracy in visual detection.


Overall, the operation plans are performed straightforwardly, except for the removal of the PCB cover, where special conditions need to be considered in association with the structure types.

From the experiment, it can be concluded that the proposed operation plans are effective, since they can disassemble the samples until the targeted main component, the LCD module, is reached. The desired main components are separated with some damage. Efficiency of the operation is measured from the relative weight of the detached main components: most components achieved approximately 90% efficiency, and approximately 94% by weight of the components turned out as lumps, with the remaining 6% as dust and scraps. The performance testing result of the system is given in Chapter 7.

Main component     Plan no.   Primitive cutting operation   Inner offset from the border (mm)   Intention / connection to be disestablished
Back cover (c1)    1          cutContour                    5                                   Snap-fits around border area
                   2          cutCorner                     20                                  Screws at the corners
                   3          cutContour                    12                                  Screws around border; press-fits with the PCB cover
                   H          *                             *                                   Uncut screws in the middle area; correct the inaccuracy
PCB cover (c2)     1          cutContour                    5-10                                Remove the top plate
                   2          cutContour                    5 (outer)                           Hidden cables underneath the carrier
                   H          *                             *                                   Uncut hidden cables; correct the inaccuracy
PCB (c3)           0          cutScrew                      n/a                                 Screws at particular locations
                   1          cutContour                    5                                   External ports, cables, screws
                   2          cutCorner                     20                                  Screws at the corners
                   3          cutContour                    10                                  Screws around border area
                   H          *                             *                                   Uncut connectors in the middle area; correct the inaccuracy
Carrier (c4)       1          cutContour                    5                                   Screws, snap-fits around border area
                   H          *                             *                                   Correct the inaccuracy
Front cover (c6)   1          cutContour                    5 (outer)                           Snap-fits around border area
                   H          *                             *                                   Correct the inaccuracy
NOTE: * based on the decision of the operator; H = human assistance. An operation plan used in cognitive robotics is denoted op(component, plan), e.g. op(c1,1) represents operation Plan-1 for treating a back cover.

Table 4.5: Summary of operation plans for removing the main components


5 VISION SYSTEM MODULE

This chapter explains the Vision System Module (VSM), which is the main perception unit of the system. This module perceives knowledge of the external world and supplies it to the Cognitive Robotic Module (CRM), as shown in Figure 5.1. The sensed knowledge is used for the reasoning and execution monitoring processes which influence the behaviour of the system. This chapter is divided into four parts. First, an overview of the system from the hardware perspective and its interaction with other modules is given in Section 5.1. Second, the vision system's functionality, including recognition and localisation, is explained in Section 5.2. Third, more detail on the methodology of the main and utility functions is given in Section 5.3. Finally, the experiment for measuring the performance of the system is explained in Section 5.4.

[Figure omitted: three-level system architecture. High-level: cognitive robotic module (cognitive robotic agent with KB; suggested operations and human assistance as exogenous actions). Mid-level: vision system functions (recognition and localisation) and disassembly operation procedures, exchanging sensing and operating requests and abstract information with the CRA. Low-level: cameras and image grabber (vision system module) and robot arm, FlippingTable, and grinder (disassembly operation unit module).]

Figure 5.1: System architecture in the perspective of the vision system module


5.1 Overview of the vision system module

Vision-based sensing techniques are widely used in a number of automated disassembly systems (Büker et al. 2001, Gil et al. 2007). They are used in many applications because of the flexibility achieved by non-contact sensing. In this research, the vision system needs to be flexible enough to deal with uncertainties regarding the physical appearance of the components in a particular state of disassembly. The hardware and software are developed based on these requirements with respect to the product case-study. According to the control architecture in Figure 5.1, the hardware is directly involved in the low-level layer with regard to the cameras. The software is involved in the low-level and mid-level layers and in the interaction with the high-level layer. The image pre-processing takes place in the low-level layer and the key detection functions are in the mid-level layer. Eventually, the outcome, in the form of abstract information produced in the mid-level, is encoded and passed to the CRA. In this section, the development of the system is presented from three perspectives: 1) software, 2) hardware, and 3) interaction with the CRA.

5.1.1 Structure of the module in software perspective

This module consists of a number of classes and functions performing the sensing processes requested by the CRA. It is developed in a C/C++ environment incorporating an open-source computer vision library, OpenCV (Bradski 2010). The concept of object-oriented programming (OOP) is implemented to effectively organise the software structure with respect to data abstraction.

According to the class diagram illustrated in Figure 5.2, csdVision is the main class, consisting of three main parts: 1) main functions, 2) utility functions, and 3) the data structure. First, the main functions are the principal part of the mid-level control; the key functions are detection of the components, the state of disassembly, and the model of the LCD screen. Second, the utility functions support the main functions and reside in both the mid-level and low-level layers. Colour images and depth images are used as the input sources for most of the functions in order to obtain colour and geometry information, respectively (see details in Sections 5.2 and 5.3). Third, the data structure contains the image data, variables, and parameters used in the sub-classes and the main class. Significant types of data are defined and used to represent the geometry of the components in LCD screens.


In addition, csdVision is wrapped by csdVisionGolog, which transforms the abstract information obtained by csdVision into a form utilisable by the CRA.

[Figure omitted: class diagram of the vision system module. The main class csdVision comprises main functions (detection of the back cover, PCB cover, PCBs, carrier, LCD module, and screws; state transition; model detection), utility functions (calibration, space conversion, and colour and depth image acquisition), and the data structure. The wrapper class csdVisionGolog links csdVision to the cognitive robotic module via the communication centre; most detection functions use both colour and depth images.]

Figure 5.2: Class diagram of the vision system

5.1.2 Hardware

The vision system utilises two types of input images captured from two cameras: 1) a colour camera and 2) a depth camera. The two cameras are mounted parallel and next to each other, 1.2 m above the fixture plate. At this mounting distance, the images of the samples can be captured with the highest precision (mm/pixel) that fits the entire area of the top view of the fixture plate (see Figure 5.3). This location is also above the robot's working area, so any collision is prevented. To simplify calibration of the image perspective between these cameras, the lines of sight of both cameras are adjusted to be parallel. Details of each camera are explained as follows.


Figure 5.3: Images from the top-view: (a) colour image; (b) transformed depth image

5.1.2.1 Colour camera

This camera captures still colour images for the colour image processing. The raw image is a 1000 × 1000 pixel single-channel image encoded with the Bayer filter (Bayer 1976); the corresponding colour image is decoded from the raw image in the pre-processing session. The local minimum precision in the horizontal xy-plane is 0.57 mm/pixel at the top level of the fixture plate (Z_F = 0). The precision slightly increases at higher z-levels due to the perspective of the camera. Overall, the image obtained from this camera is of high resolution and appropriate for observing small image details and colour-based features.

5.1.2.2 Depth camera

This camera captures still depth images representing the objects in 2.5D in the form of a single-channel depth field image. The 2.5D map of the sample image in Figure 5.3b is represented in Figure 5.5. The depth data serves two purposes: 1) observing the estimated 3D geometrical features of the object and 2) measuring the distance of the object. In this research, a Microsoft Kinect sensor (MicrosoftCorporation 2011) is used as the depth camera, providing a 640 × 480 pixel depth image. The precision in the horizontal plane at the fixture plate level is 1.74 mm/pixel and the depth resolution is 3.37-4.39 mm/bit within the operational range (see Figure 5.6 and details in Section 5.2.1.3). The data acquisition and image pre-processing are programmed based on an open-source library, Libfreenect (OpenKinect 2010). The mounting location is selected to minimise the effects of lens distortion and perspective, as illustrated in Figure 5.4. As a result, those effects are minimal and can be ignored in the calibration process.


In comparison with other types of distance sensing techniques, this sensor is selected because of two advantages: 1) a simple data representation that significantly reduces the required computing resources and 2) low cost. However, data loss can occur in some circumstances due to the infrared (IR) based sensing technique. The data loss occurs in three circumstances: 1) shaded areas caused by obstruction of the emitted infrared beam, 2) reflective surfaces normal to the line of sight of the infrared camera, and 3) inaccurate sensing at the edges of objects. Therefore, these conditions should be avoided in the actual disassembly process.

Figure 5.4: Raw images and distortion field of Kinect (Khoshelham and Elberink 2012): (a) distortion of depth image; (b) depth image; (c) colour image

Figure 5.5: Depth image represented in 2.5D map


[Plot omitted: depth resolution in the z-axis (mm, 0-4) versus distance from the depth camera (bit, 0-800), with the operation range marked.]

Figure 5.6: Depth accuracy and resolution in z-axis within the operation range

5.1.3 Interaction with other modules

With respect to the control architecture shown in Figure 5.1, the vision system involves two control layers: 1) low-level and 2) mid-level. Image capturing and pre-processing are conducted in the low-level control layer; both hardware and software are involved at this level. Afterwards, the processed data is passed to the functions in the mid-level control layer, where the detection functions are performed; a number of algorithms have been developed and deployed at this level. Finally, the abstract information regarding the disassembly of the product is passed to the cognitive robotic agent (CRA) situated externally in the high-level control layer.

The cognitive robotic agent communicates with this module through the communication centre by socket messaging. In the client-server model, the CRA is the client and the vision system module is the server. This module responds to a Sensing action by giving the corresponding abstract information in the form of a Fluent representing the following four properties:

x Existence of the component to be detected;
x The number of the detected components;
x Location of the detected components in 3D operational space (x, y, z); and,
x Logical value due to the transition of disassembly state.

From the class diagram in Figure 5.2, the interaction occurs via csdVisionGolog, a wrapper class used to pass the sensing action commands to the corresponding main functions. The detection results are converted to Prolog semantics, which is compatible with IndiGolog.

In summary, the output fluents are presented in three forms: 1) logical value (Equation (5.1)), 2) single component (Equations (5.2) and (5.4)), and 3) list of components (Equations (5.3) and (5.5)).

output = yes. / no.   (5.1)

output = box(x1, y1, x2, y2, z1, z2).   (5.2)

output = [box(x1_1, y1_1, x2_1, y2_1, z1_1, z2_1), ..., box(x1_i, y1_i, x2_i, y2_i, z1_i, z2_i)].   (5.3)

output = loc(x, y, z).   (5.4)

output = [loc(x_1, y_1, z_1), ..., loc(x_i, y_i, z_i)].   (5.5)

Where box represents object location and loc represents point location.
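As an illustration of this encoding, a detected bounding box could be serialised into a Prolog-readable term of the form in Equation (5.2) before being sent over the socket. The helper below is a hypothetical sketch, not the thesis implementation of csdVisionGolog.

```cpp
#include <cstdio>
#include <string>

// Serialise a detected bounding box as a Prolog term, e.g.
// "box(10.0,20.0,310.0,240.0,0.0,15.0)." per Equation (5.2).
std::string boxFluent(double x1, double y1, double x2, double y2,
                      double z1, double z2) {
    char buf[128];
    std::snprintf(buf, sizeof(buf), "box(%.1f,%.1f,%.1f,%.1f,%.1f,%.1f).",
                  x1, y1, x2, y2, z1, z2);
    return std::string(buf);
}
```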

In regard to the disassembly operation unit module, the exact location of components is mainly used to determine the cutting paths. Positions relative to the product coordinate {P} are used among the modules (see details in Section 5.2.2) in order to present the product-specific geometry. However, no direct communication is established between the disassembly operation units and the vision system; the primitive actions associated with the cutting operations are generated and sent out by the cognitive robotic agent.

5.2 Computer vision functionality

In regard to the common problems encountered in the field of computer vision, the vision system module is described in four areas: 1) optical problem and image quality, 2) camera configuration, 3) recognition and localisation, and 4) model representation. The approaches presented in this research are limited to the domain of the resulting automated disassembly system. Therefore, the problems have been simplified.

5.2.1 Optical problem and image quality

5.2.1.1 Lighting condition

Regarding the optical problems of shading and contrast, four daylight bulbs (18W, 6400K), projecting light at 45˚ onto the horizontal plane of the fixture, are installed in order to control the lighting condition. Consequently, the majority of shaded

areas are revealed. High-intensity light is provided so that the camera can obtain better image quality with lower noise and a wider depth of field. As a result, the captured images are clearer and the examined objects are in focus. However, a problem regarding colour balance arises because the colour temperature of this extra lighting marginally deviates from the actual daylight temperature.

5.2.1.2 Colour balance

Accuracy of the colour is crucial for the colour image processing techniques used in the proposed main functions. The colours of the examined objects in the captured images are dominated by the ambient light. In general, a true colour image can be obtained under daylight. The temperature of the controlled ambient light (6400K) deviates from the actual daylight temperature (5500K-6000K). Therefore, colour balance calibration is necessary.

Calibration of the colour balance is performed in Red-Green-Blue (RGB) colour space with a white balancing algorithm as in Equation (5.6) (Viggiano 2004). The balanced illumination value of each pixel in each channel (R, G, B) is computed by scaling the original value (R′, G′, B′) with the value of a real white pixel under the controlled ambient light (R′_W, G′_W, B′_W). A white paper is used as the calibration object for obtaining the real white pixel. Brightness and contrast of the image are also compensated automatically.

$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 255/R'_W & 0 & 0 \\ 0 & 255/G'_W & 0 \\ 0 & 0 & 255/B'_W \end{bmatrix} \begin{bmatrix} R' \\ G' \\ B' \end{bmatrix}$   (5.6)
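A minimal OpenCV sketch of Equation (5.6) might scale each channel by its white-reference gain, with whiteRefBGR sampled from the white calibration paper; the function and variable names are assumptions, not the thesis code.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// White balancing per Equation (5.6): scale each channel so the measured
// white reference (B'_W, G'_W, R'_W in OpenCV's BGR order) maps to 255.
cv::Mat whiteBalance(const cv::Mat& srcBGR, const cv::Vec3d& whiteRefBGR) {
    std::vector<cv::Mat> ch;
    cv::split(srcBGR, ch);
    for (int i = 0; i < 3; ++i)
        ch[i].convertTo(ch[i], CV_8U, 255.0 / whiteRefBGR[i]);  // gain 255/W'
    cv::Mat dst;
    cv::merge(ch, dst);
    return dst;
}
```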

5.2.1.3 Calibration of the depth image

The 640 × 480 pixel single-channel 11-bit depth image acquired from the depth camera is aligned to the 1000 × 1000 pixel colour image, which is the main image. The optical axis of each camera is perpendicular to the fixture plate, so that the image planes are parallel to the fixture plate in order to minimise the perspective deviation (see Figure 5.7). The calibration is performed in two parts: 1) 2D geometrical calibration and 2) distance calibration.


[Figure omitted: the colour and depth cameras mounted over the fixture plate with parallel optical axes, a reference point near each corner of the fixture plate used for distance calibration, and the distance error δDistance between the sensing and actual distances. NOTE: L_LF = distance between lens centre and fixture base; Z_F = vertical distance from the fixture plate; Offset_LD = distance between lens centre and depth camera.]

Figure 5.7: Configuration of the cameras over the fixture plate and distance calibration

The 2D geometrical calibration - An affine transformation is applied to the depth image in order to geometrically align it to the colour image. The affine transformation matrix is a 2 × 3 matrix containing three types of geometrical transformation parameters: 1) 2D rotation, 2) scaling, and 3) translation. These parameters are represented as a 2 × 2 rotational matrix and a 2 × 1 translational vector within the 2 × 3 transformation matrix. The source image maps to the destination image by warping according to Equation (5.7) (Bradski and Kaebler 2008). This equation refers to each pixel in the image coordinate (c, r), where c = column and r = row, with the origin (c, r) = (0, 0) at the top-left corner of the image. According to the notation between the image coordinate and spatial sampling, c = x_S and r = -y_S. Equation (5.7) can be rewritten as Equation (5.8), which is compatible with a numerical solver. In this case, the elements of M_Affine can be obtained indirectly from a set of pairs of corresponding points in both images: the parameters a_00, a_01, a_10, a_11, b_0, and b_1 are numerically solved from three pairs of points representing corresponding locations in the two comparable images.

$\begin{bmatrix} c \\ r \end{bmatrix}_{dst} = M_{Affine} X'_{src} = \begin{bmatrix} a_{00} & a_{01} & b_0 \\ a_{10} & a_{11} & b_1 \end{bmatrix} \begin{bmatrix} c \\ r \\ 1 \end{bmatrix}_{src}$   (5.7)

$dst(c, r) = src\left(a_{00}c_{src} + a_{01}r_{src} + b_0,\ a_{10}c_{src} + a_{11}r_{src} + b_1\right)$   (5.8)


Where src = source image; dst = destination image; M_Affine is the 2 × 3 affine transformation matrix; and X′ is a 3 × 1 vector of the image coordinates.
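In OpenCV, the six parameters can be recovered directly from three corresponding point pairs and applied to the depth image; the point values in this sketch are placeholders, not the calibration points actually used.

```cpp
#include <opencv2/opencv.hpp>

// Solve M_Affine (Equation (5.7)) from three point pairs and warp the
// depth image into the colour image's coordinates (Equation (5.8)).
cv::Mat alignDepthToColour(const cv::Mat& depth, const cv::Size& colourSize) {
    cv::Point2f depthPts[3]  = { {100.f,  80.f}, {500.f,  80.f}, {100.f, 400.f} };
    cv::Point2f colourPts[3] = { {160.f, 130.f}, {790.f, 130.f}, {160.f, 640.f} };
    cv::Mat M = cv::getAffineTransform(depthPts, colourPts);   // 2 x 3 matrix

    cv::Mat aligned;
    // Nearest-neighbour interpolation avoids blending neighbouring depths.
    cv::warpAffine(depth, aligned, M, colourSize, cv::INTER_NEAREST);
    return aligned;
}
```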

Distance calibration - The distance (D_sense) of an object at a particular point (c, r) in the image coordinate of the depth image is calculated from the corresponding pixel value of the depth image, ranging 0-2047 (11-bit), according to Equation (5.9) (OpenKinect 2011).

$D_{sense}(c, r) = 123.6 \times \tan\!\left(\dfrac{PixelValue(c, r)}{2843.5} + 1.1863\right)$   (5.9)

$z_F(c, r) = L_{LF} + Offset_{LD} - D_{sense}(c, r) = D_{actual} - D_{sense}(c, r)$   (5.10)

This calibration is performed by comparing the sensing distance to the actual distance (D_actual), which is physically measured from the depth camera to the upper surface of the fixture plate. The distance used for calibration is the average distance of four reference points located near the corners of the fixture plate (see Figure 5.7). In this research, the vertical distance is represented by z_F, the vertical distance above the fixture plate; z_F(c, r) at a particular coordinate is computed from Equation (5.10).
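Equations (5.9) and (5.10) amount to the conversion below, where dActual (= L_LF + Offset_LD) is the calibrated camera-to-fixture distance; the function names are hypothetical and the units follow the thesis constants.

```cpp
#include <cmath>

// Equation (5.9): raw 11-bit Kinect pixel value -> sensing distance.
double senseDistance(int pixelValue) {
    return 123.6 * std::tan(pixelValue / 2843.5 + 1.1863);
}

// Equation (5.10): height above the fixture plate, given the calibrated
// camera-to-fixture distance dActual = L_LF + Offset_LD.
double heightAboveFixture(int pixelValue, double dActual) {
    return dActual - senseDistance(pixelValue);
}
```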

5.2.2 Camera configuration and mapping of coordinate frames

The relation between the image space (spatial sampling) and the operational space is determined by the coordinate mapping process. The mapping process is applied to both images after they are geometrically aligned as in Section 5.2.1.3. The frame mapping is performed based on a camera calibration matrix containing two types of parameters: 1) intrinsic parameters and 2) extrinsic parameters (Siciliano et al. 2009). First, the intrinsic parameters represent the characteristics of the lenses and the image sensor, consisting of the focal length (f), the scale factors in both directions (α_x and α_y), and the offset of the image coordinate with respect to the optical axis (X_0 and Y_0). Second, the extrinsic parameters represent the relative position and orientation between the coordinate systems in the entire system. Therefore, the position in operational space can be written as a function of the image space and these parameters, as in Equation (5.11). In this research, the approach to obtaining these parameters is simplified by two assumptions: 1) the camera is equipped with low-distortion lenses and 2) the position and orientation of the camera are finely adjusted physically.


$Position(x, y, z) = H\left(c, r \mid \{\text{Intrinsic Parameters}\}, \{\text{Extrinsic Parameters}\}\right)$   (5.11)

The configuration of the system illustrated in Figure 5.8 typically consists of three physical components, resulting in four physical coordinate frames: 1) robot base frame {B}, 2) fixture base frame {F}, 3) tooltip frame {T}, and 4) lenses centre frame {L}. In addition, with respect to the vision system, two virtual coordinate frames are set up to derive the geometrical relation inside the colour camera: 1) a spatial sampling frame {S} and 2) an image plane frame {I} (see Figure 5.9). The spatial sampling frame is the visual coordinate used to define the 2D position on the image sensor in pixels. The captured image from the colour camera is a projection of the objects on the xy-plane of this spatial sampling frame, in which the origin is located at the top left of the image.

In addition, the product coordinate {P} is defined in order to describe the geometry and disassembly operation parameters directly for each product. In the disassembly process, {P} is mainly used, since the product-specific information can be explicitly recorded as part of machine learning. The movement path of the robot is also represented in relation to {P}. Therefore, the conversion between {B} and {P} is done throughout the process. The relation between these coordinates observed from the top view is shown in Figure 5.10. In summary, the coordinate frames in this system are listed in Table 5.1.

Coordinate frame            Type       Location of the origin
{B} Robot base              Physical   Centre of robot base
{F} Fixture plate base      Physical   On fixture plate at colour camera line of sight
{T} Tooltip                 Physical   End of the cutting tool
{L} Lenses centre           Physical   Centre of the colour camera's lenses
{P} Product coordinate      Physical   Bottom-left of the product sample
{S} Spatial sampling        Virtual    Top-left of the colour image
{I} Image plane             Virtual    Centre of the image sensor of the colour camera

Table 5.1: Summary of coordinate frames


[Figure omitted: configuration of the disassembly cell, showing the colour and depth cameras at the lenses centre {L} over the fixture, the projection onto the spatial sampling frame {S}, the tooltip {T}, the fixture base {F}, the robot base {B}, and the product coordinate {P} at z_F = 0.]

Figure 5.8: Configuration of the disassembly cell

[Figure omitted: perspective transformation in the camera, relating the spatial sampling frame {S}, the image plane {I} at focal length f = L_IL, the lenses centre {L}, and the fixture base {F}.]

Figure 5.9: Perspective transformation in the camera


[Figure omitted: top view relating the 1000 × 1000 pixel image space {S} and a region of interest to the fixture base {F}, the robot base {B}, and the product coordinate {P} at z_F = 0.]

Figure 5.10: Coordinate frames and image space observed from top-view

In conclusion, the 3D location of the object, $^{B}P_{object}(x_B, y_B, z_B)$, is a function of the 2D image space (c, r) and the vertical distance from the upper plane of the fixture plate ($z_F$), in accordance with the intrinsic parameters (f, α_x, α_y, X_0, Y_0) and the extrinsic parameters ($L_{LF}$, $L_x^{BF}$, $L_y^{BF}$, $L_z^{BF}$), as in Equation (5.12). The parameters, variables, and the approaches to obtaining them are listed in Table 5.2. However, the location in the product coordinate is related to the calibration process; further explanation is given in Section 5.2.4.

$^{B}P_{object}(x_B, y_B, z_B) = \begin{bmatrix} \dfrac{1}{\alpha_x f}\left[L_{LF} - z_F(c,r)\right](c - X_0) + L_x^{BF} \\ \dfrac{1}{\alpha_y f}\left[L_{LF} - z_F(c,r)\right](r - Y_0) + L_y^{BF} \\ z_F(c,r) + L_z^{BF} \end{bmatrix}$   (5.12)
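A direct transcription of Equation (5.12) into code is shown below; the calibration constants correspond to Table 5.2, and the structure and function names are assumptions for illustration.

```cpp
struct Point3 { double x, y, z; };

// Equation (5.12): map a pixel (c, r) with height z_F above the fixture
// plate into the robot base frame {B}.
Point3 imageToRobotBase(double c, double r, double zF,
                        double f, double ax, double ay,   // intrinsics
                        double X0, double Y0,
                        double LLF,                       // lens-to-fixture distance
                        double LxBF, double LyBF, double LzBF) {
    const double h = LLF - zF;   // distance from the lens centre to the object
    Point3 p;
    p.x = h * (c - X0) / (ax * f) + LxBF;
    p.y = h * (r - Y0) / (ay * f) + LyBF;
    p.z = zF + LzBF;
    return p;
}
```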


Parameter or variable   Definition                                          Value acquisition approach                                Unit
L_x, L_y, L_z           offset between {B} and {F}                          measurement                                               mm
L_LF                    offset between {L} and {F} along the optical axis   measurement                                               mm
X_0, Y_0                offset between {I} and {S}                          measured from the captured image                          pixel
α_x, α_y                scale factor                                        calibrated with the actual size of an object at
                                                                            z_F = 0 using Equation (5.13), where i = x, y             mm/pixel
f                       focal length of the lenses                          lens specification                                        mm
x_S, y_S (variable)     position on the plane of the spatial
                        sampling coordinate                                 output from processing of the captured image             pixel
z_F (variable)          vertical distance of the object from {F}            processed from the depth image by Equation (5.10)        mm

$\alpha_i = \dfrac{L_{LF}}{f} \cdot \dfrac{\Delta P^{B}_{i(object)}\ [\mathrm{mm}]}{\Delta P^{S}_{i(object)}\ [\mathrm{pixel}]}$, where i = direction x or y   (5.13)

Table 5.2: Summary of parameters and variables for calibration

5.2.3 Recognition

Regarding the representation of structure of the products in Chapter 3, pattern recognition needs to be implemented on two types of components: 1) main components and 2) connective components.

5.2.3.1 Main component

With respect to one kind of component in one product family, variations in the physical features of the component, i.e. structure, geometry, size, colour, and material, can be seen throughout the examined samples. These features vary according to the product design and technology of the product's manufacturer. The vision system needs to be generic and flexible enough to deal with these variations. Therefore, the concept of a white-box vision system (Koenderink et al. 2006) is implemented. This approach is more robust, more extensible, and faster in comparison with black-box or case-based approaches. The detection rules are generated from "common features" belonging to each type of component. The selected common features need to be consistent with the functionality of the class of components.

In this research, the common features have been collected and validated throughout a substantial number of the samples. Further explanation is given in Section 5.3.

5.2.3.2 Connective components

The connective components can be classified into two types according to their physical appearance: 1) quasi-components and 2) virtual components. The quasi-components are visually observable and significant parts of them are noticeable in the products, e.g. the heads of screws and rivets. The virtual components passively establish the connection among the main components. Examples of these components are snap-fits, press-fits, matings, adhesives, solder, and welds. The majority of them are neither observable nor distinguishable from the main component (see detail in Chapter 3).

In this research, the detection process aims at quasi-component fasteners in order to achieve a semi-destructive disassembly approach. The destructive approach is expected to be performed in the case of virtual components. Since most of the quasi-components are standard components, significant geometrical features are consistent even amongst different products. Therefore, these features can be used for the detection algorithm. In summary, the knowledge-based approach incorporated with Machine Learning (ML) is mainly used for the recognition process. Further explanation is given in Section 5.3.7.

5.2.4 Localisation

The localisation process is performed after the desired components have been recognised. The object can be an interest point, edge, corner, or border, which represents the location of the component. These objects are initially distinguished from the irrelevant background by segmentation techniques. Afterwards, the location in 2.5D operational space is obtained from the location in 2D image space (c, r) and the corresponding $z_F$. Eventually, the extracted information is stored in the appropriate data structure for further usage. According to the case-study product, three assumptions have been made in order to simplify the localisation process while maintaining efficiency. These assumptions have been verified to hold for more than 97% of the samples:

• The product is placed parallel to the orthogonal axes;
• The components are aligned with the orthogonal axes of the system; and,
• The components are perfect rectangles.


In summary, localisation can be considered in three aspects: 1) segmentation techniques, 2) location in operational space, and 3) data structure. Details are explained as follows.

5.2.4.1 Segmentation

Segmentation distinguishes the regions belonging to the desired components. Region-based segmentation techniques are implemented in this research according to the characteristics of the components in the case-study products. A number of segmentation techniques are applied in order to deal with the variety of the components' characteristics. The segmented region is enclosed by a minimal bounding rectangle (MBR) as a bounding box, which is sufficient to locate the exact position of the border of the components based on the aforementioned assumptions. The segmentation techniques are explained according to two types of image as follows:

Depth image segmentation - the segmentation is done based on the 3D geometry of the object, which is more robust than colour image segmentation. However, the required condition is that the difference in height between the components and the background must be significant (more than the depth accuracy). In summary, the following algorithms are implemented: K-means clustering (MathWorks 2011), fixed-level thresholding, and blob detection (Liñán 2010).

Colour image segmentation - histogram-based techniques are effective if the object and the background have sufficient difference in colour qualification. The colour image is represented in Hue-Saturation-Value (HSV) colour space because the chromatic properties and brightness can be isolated. Therefore, the colour can be considered regardless of ambient illumination (Vezhnevets et al. 2003). The segmentation is performed by filtering the regions of satisfied colours that are predefined as the possible colour found in the expected components. This algorithm is adapted from human skin tone detection based on boundary clustering method (Kovac et al. 2003). The filtered pixels will be classified by applying fixed-level thresholding and grouped by blob detection.

5.2.4.2 Operational space

As described in Section 5.2.2, the locations of the object in 3D operational space are presented in two coordinate frames, 1) the robot base coordinate {B} and 2) the product coordinate {P}. The robot coordinate {B} describes the moving paths for controlling the

robot within the scope of the program operated by the robot controller. Meanwhile, the product coordinate {P} is used in the rest of the system. The location of the product-specific features can be described relative to the product itself. Therefore, the product-specific information can be directly stored in the knowledge base (KB) in the learning process conducted by the CRA.

The location in {B} can be obtained from Equation (5.12) by supplying r, c, and $z_F(c, r)$. The reference frame {P} can relocate and needs to be updated once a new sample has been loaded. From Figure 5.10, the location in {P} can be derived from the location in {B} with respect to the offset between these frames. The location of the object relative to {P} is given in Equation (5.14).

$$
P^{P}_{object}(x_P, y_P, z_P) =
\begin{bmatrix}
\dfrac{1}{\alpha_x f}\bigl(L_{LF} - z_F(c,r)\bigr)(c - X_0) + L^{BF}_{X} \\
\dfrac{1}{\alpha_y f}\bigl(L_{LF} - z_F(c,r)\bigr)(r - Y_0) + L^{BF}_{Y} \\
z_F(c,r) + L^{BF}_{Z}
\end{bmatrix}
- L^{P}_{B}
\tag{5.14}
$$

where $L^{P}_{B} = [L^{PB}_x, L^{PB}_y, L^{PB}_z]^T$ is the offset between the origins of {P} and {B} with respect to frame {B} along each axis.

Region of Interest (ROI) and Volume of Interest (VOI) are also assigned in {B} according to {P}. Therefore, the image will be processed within this confined space, resulting in a reduction of the processing time and of disturbances from irrelevant surroundings. The ROI and the VOI are set to cover the sample as shown in Figure 5.11 and Figure 5.12. The real object is enclosed within the VOI, which can be observed by the camera as the equivalent VOI due to the perspective. The ROI is a projection of the equivalent VOI on the base plane in order to ensure that the ROI covers the entire object. The origin of the product coordinate and the bottom-left corner of the ROI are set at the position (r′, c′) in the image space. The corresponding position in the operational space (xB′, yB′) is also set at $z_F$ = 0. Eventually, the location (xB′, yB′) is sent to the robot controller and the CRA to acknowledge the current product coordinate at the beginning of the disassembly process. An example of the assignment and implementation of ROI and VOI is shown in Figure 5.12.


[Diagram: the VOI encloses the product; the equivalent VOI, as seen by the camera in perspective, is projected onto the base plane zF = 0 as the ROI, whose bottom-left corner lies at (r′, c′) in image space and (xB′, yB′) in operational space.]

Figure 5.11: ROI and VOI according to the product coordinate

(a) Original image (b) Assigned ROI and VOI (c) ROI and VOI in a later state

Figure 5.12: Assignment and implementation of ROI and VOI

5.2.4.3 Feature representation

The components and the corresponding cutting paths can be represented by four primitive geometries: 1) Point, 2) Line, 3) Rectangle, and 4) Box. These features are used in both image and operational spaces. The conversion is done by Equations (5.12) and (5.14), applied at each key point, e.g. the corners of a rectangle. The features are simplified according to the aforementioned three assumptions in Section 5.2.4, and all key points are on the same level-z (except for the Box). Therefore, the number of key points used is minimal. Details of each data structure are explained as follows and a summary is given in Table 5.3.

• Point: represents the location of the centroid of small objects, e.g. a screw head, where the size is insignificant. The location is also used as the cutting destination.
• Line: represents the cutting path between two points. No part of the component is represented by this feature.


• Rectangle: represents the boundary of the object as an MBR with two points, bottom-left and top-right. It represents the component in the image space in the early detection process before being converted to the Box, and it is also used as the cutting path.
• Box: an extended form of the Rectangle used to represent the component in 2.5D. Two z-levels are used to indicate the vertical boundary. It is only used to represent the component.

| Geometrical feature | Image space | Operational space | Number of key points | Objective: cut path | Objective: represent |
|---|---|---|---|---|---|
| Point | loc(c, r) | loc(x, y, z) | 1 | ✓ | ✓ |
| Line | line(c1, r1, c2, r2) | line(x1, y1, x2, y2, z) | 2 | ✓ | |
| Rectangle | rect(c1, r1, c2, r2) | rect(x1, y1, x2, y2, z) | 2 | ✓ | ✓ |
| Box | n/a | box(x1, y1, x2, y2, zin, zout) | 2 | | ✓ |

Table 5.3: Feature representation

5.3 Detection algorithms for disassembly of LCD screens

According to the product analysis of LCD screens in Chapter 4, LCD screens typically consist of six types of main components and three types of connective components. The main components are: 1) front cover, 2) back cover, 3) carrier, 4) LCD module, 5) PCB cover, and 6) PCB. The connective components are: 1) screws, 2) snap-fits, and 3) electrical and electronic cables. The vision system is developed for detecting all of these components individually. However, according to the (semi-) destructive strategy presented in Chapter 4, only the detection of screws is necessary since the other connective components can be indirectly terminated. In addition, a detection algorithm for the change of the disassembly state is developed.

In general, the detection function corresponding to a particular component will be called by the CRA only in the disassembly state in which the component is expected to be found. The detection process consists of two stages: 1) recognition and 2) localisation. The common features are used in the recognition process to determine the existence of the component. Afterwards, in case the component has been recognised, the location of the border will be determined by the enclosing bounding box. The data will be encoded in the form of


box(x1, y1, x2, y2, zin, zout) and passed to the CRA. In general, the detection algorithm for a particular component assumes that only one piece of the component is found in each state. However, PCBs and screws are exceptional cases since multiple instances are commonly found according to the designed configuration.

This section is organised as follows. A general concept of common features used for developing the detection algorithms is described in Section 5.3.1. The common features which are used for detecting the individual components are explained in Sections 5.3.2 − 5.3.7. In addition, the detection of state transition is explained in Section 5.3.8, detection of model in Section 5.3.9, and other utility functions in Section 5.3.10.

5.3.1 Common features

The common feature of a type of component is defined according to the qualifications regarding physical appearance which are commonly found in every component within that type. The main purpose of using these features is to overcome the variation problems expected to be encountered during the disassembly process of various product models. The common features are analysed from the observed samples, which consist of 37 different models of LCD screens. The physical appearance of each component is quite consistent in all samples as it is directly related to the functionality of the component. These common features are used to formulate the rules for rule-based detection. It is implied that a particular component is detected in the current state if the corresponding set of component-specific rules is satisfied. In Equations (5.15) and (5.16), an object x is of component type y if the object x satisfies all rules corresponding to the component type y.

$$\mathit{rule}_1(x, y) \wedge \mathit{rule}_2(x, y) \wedge \dots \wedge \mathit{rule}_n(x, y) \supset \mathit{component}(x, y) \tag{5.15}$$

$$\mathit{component}(x, y) \equiv \bigl[\mathit{object}(x) \wedge \mathit{componentType}(y) \wedge x \in y\bigr] \tag{5.16}$$

5.3.1.1 Geometry

The geometry of components can be considered from two perspectives according to the images. The size of the component is observed from the top-view through the colour image. The height of the component is observed through the depth image. These properties are found to be in consistent ranges for medium-size LCD screens. In regard to the aspect ratio, the nominal screen size and aspect ratio are within standard ranges, which can be used as classification rules. In summary, under the initial assumption that the detected object x is

a component y, the following rules are used to test against the criteria set for a particular type of component y. In Equation (5.17), the rule of size holds if the bounding box of the object x is between the upper and lower limits of size. In Equation (5.18), the rule of aspect ratio holds if the aspect ratio of the bounding box of the object x is between the upper and lower limits. In Equation (5.19), the rule of shape holds if the object x is a rectangle. In Equation (5.20), the rule of height holds if the object x is taller than the minimum height.

rule x,, yšddªº component x y minSize y size mbr x maxSize y (5.17) size ¬¼ ^`

ruleAspectRatio x, y  (5.18) ªcomponent x , yšd minAR y aspectRatio mbr x d maxAR y º ¬ ^` ¼

ruleshape x,, yš¬¼ªº component ^ x y x Rectange ` (5.19)

rule x,, yšdªº component x y minHeight y height x height ¬¼ ^` (5.20)

5.3.1.2 Colour range

Due to the material specifications and production techniques, only minor colour variations are found in each component type because the material used is related to the functionality. Therefore, it can be implied that the component is detected if sufficiently large connected regions of the preferred colours can be detected. The rule is defined in Equation (5.21): it holds if a connected region (detected by blob detection (Chang et al. 2004)) of satisfying colour is larger than the defined criterion. A pixel satisfies the colour criteria according to the definition in Equation (5.22): the condition holds if a pixel I that belongs to the area of object x has an intensity value within the ranges of channels H and S defined for component y. The Hue-Saturation-Value (HSV) colour space is used to represent the colour because the chromatic properties and brightness can be distinguished. Consequently, the colour can be considered regardless of ambient illumination (Vezhnevets et al. 2003).

$$\mathit{rule}_{colour}(x, y) \equiv \bigl[\mathit{component}(x, y) \wedge \{\mathit{area}(\mathit{blob}(x)) \ge \Phi_{Blob}(y)\} \wedge \mathit{satColourPixel}(I, h, s, v, x, y)\bigr] \tag{5.21}$$

$$
\begin{aligned}
\mathit{satColourPixel}(I, h, s, v, x, y) \equiv {} & \bigl[\mathit{component}(x, y) \wedge \mathit{pixel}(I) \wedge \mathit{colour}(h, s, v) \\
& \wedge \{\mathit{minH}(y) \le h(x, I) \le \mathit{maxH}(y)\} \\
& \wedge \{\mathit{minS}(y) \le s(x, I) \le \mathit{maxS}(y)\}\bigr]
\end{aligned}
\tag{5.22}
$$
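A possible OpenCV realisation of the colour rule in Equations (5.21)–(5.22) is sketched below; note that OpenCV scales H to 0–179 and S to 0–255, so the thesis's ranges (H in degrees, S in 0–100) must be rescaled first. The function name and the minimum-area parameter are assumptions.

```python
import cv2
import numpy as np

def colour_rule(bgr_roi: np.ndarray, h_range, s_range, min_area_px: int) -> bool:
    """Sketch of Equations (5.21)-(5.22): the rule holds if a sufficiently
    large connected region falls inside the component's H/S range."""
    hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)
    lo = np.array([h_range[0] / 2.0, s_range[0] * 2.55, 0], np.uint8)
    hi = np.array([h_range[1] / 2.0, s_range[1] * 2.55, 255], np.uint8)
    mask = cv2.inRange(hsv, lo, hi)
    # Blob detection: keep the largest 8-connected region of satisfying pixels.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if n <= 1:          # only the background label was found
        return False
    largest = stats[1:, cv2.CC_STAT_AREA].max()
    return largest >= min_area_px

# e.g. the carrier's matte gray range from Table 5.4:
# colour_rule(roi, h_range=(73, 135), s_range=(10, 27), min_area_px=50000)
```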

Figure 5.13: Histogram of the base colour in S-Channel collected from the samples

Figure 5.14: Histogram of the base colour in H-channel collected from the samples


The components in LCD screens appear in one of four base colours: 1) matte gray, 2) light gray, 3) green, and 4) yellow. Parts made of high-strength, thick plate steel, such as carriers, are matte gray. Parts used for covering purposes, e.g. the back of the LCD module and the PCB cover, are made from light plate steel, which is light gray. Green and yellow are the colours of typical PCBs. A properly chosen fixed-level threshold is able to classify a particular component. The colour ranges are summarised in Table 5.4 and histograms are shown in Figure 5.13 and Figure 5.14.

| Component | Colour name | H min (0–360°) | H max | S min (0–100) | S max |
|---|---|---|---|---|---|
| Back cover | n/a | – | – | – | – |
| PCB cover | Matte gray | 73° | 135° | 10 | 27 |
| | Light gray | 40° | 128° | 9 | 35 |
| Carrier | Matte gray | 73° | 135° | 10 | 27 |
| PCBs | Green | 70° | 200° | 35 | 80 |
| | Yellow | 20° | 70° | 35 | 90 |
| LCD module | Light gray | 40° | 128° | 9 | 35 |

Table 5.4: Satisfied colour ranges of the components in LCD screens

5.3.1.3 Texture and the connected region

The 2D surface texture of the component is directly related to the function of the component. It can be classified into two types: 1) homogeneous and 2) non-homogeneous. First, homogeneous texture is typically found in the metallic main components, e.g. PCB covers, carriers, and LCD modules. The corresponding area of homogeneous texture can be considered a large connected plain region with minor image detail from small specific features, e.g. ventilation holes, IR reflections, connective components, etc. Second, the components having non-homogeneous texture usually contain sub-components, such as PCBs. The sub-components are noticeable due to their colours and textures being distinctive from the base. However, a connected region must be significantly large in order to recognise a component, as in the rule in Equation (5.23). The rule holds if the key indicators in Equations (5.24) and (5.25) are satisfied. The ratio between the blob cluster and the size of its MBR determines the homogeneity of the detected area (see Equation (5.24)). The ratio between the MBR and the entire size of the LCD screen determines the significance of the size of the detected area (Equation (5.25)).


In addition, the rule about surface roughness in Equation (5.26) holds if the surface is flat enough, as represented by the minimum acceptable roughness value ($R_a$).

$$\mathit{rule}_{connectedArea}(x, y) \equiv \bigl[\mathit{component}(x, y) \wedge \{\mathit{areaBlobMbr}(x) \ge \Phi_{Blob/MBR}(y)\} \wedge \{\mathit{areaMbrLCD}(x) \ge \Phi_{MBR/Area}(y)\}\bigr] \tag{5.23}$$

$$\mathit{areaBlobMbr}(x) = \frac{\mathit{area}(\mathit{blob}(x))}{\mathit{area}(\mathit{mbr}(x))} \tag{5.24}$$

$$\mathit{areaMbrLCD}(x) = \frac{\mathit{area}(\mathit{mbr}(x))}{\mathit{area}(\mathit{entireLCDscreen})} \tag{5.25}$$

$$\mathit{rule}_{Roughness}(x, y) \equiv \bigl[\mathit{component}(x, y) \wedge \{R_a(x) \le \mathit{minR}_a(y)\}\bigr] \tag{5.26}$$

In summary, the detection rules in Equations (5.17) – (5.26) are developed from the selected common features for each type of component. The common features for each component are summarised in Table 5.5 and a flowchart of the detection process is shown in Figure 5.15. The details are explained in the following sections.

[Table 5.5 indicates, for each component (back cover, PCB cover, PCBs, carrier, LCD module, and screws), which common features are used for recognition and localisation: colour range, connected region, and Haar-like feature from the colour image; height and surface roughness from the depth image; and size, aspect ratio, and shape from the 2D geometry.]

Table 5.5: Common features for the detection of the components in LCD screens


[Flowchart: the detector is called as a sensing action by the cognitive robotic agent; the colour and depth images are captured and pre-processed (Bayer-to-RGB conversion, affine alignment of the depth image to the colour image, colour-balance calibration); the ROI and VOI are applied; detection rules 1 … n are evaluated; if all rules are satisfied, the MBR is located in operational space (x, y, z), the abstract information is processed, and the result is encoded as a Golog primitive fluent and sent back to the cognitive robotic agent (“component detected at this location”); otherwise “component not detected” is returned.]

Figure 5.15: General process flowchart for component detection

5.3.2 Detection of back cover

The back cover of the LCD screen is the first component that is expected to be visually inspected and removed. The ROI and the VOI for the entire disassembly process will be assigned according to the detected boundary.

5.3.2.1 Common features

The back cover is a large plastic part covering the entire internal structure of the LCD screen. It is a shell structure of approximately 2 – 3 mm thickness made of halogen-free plastic (Franke et al. 2006) and typically appears in two base colours, black and white. From the top-view, the majority of the back cover can be recognised as a perfect rectangle corresponding to the shape of the LCD module. In the case of a non-perfect rectangle, switch-panel PCBs and speakers are found in the extra part. The diagonal size ranges from 17” – 30” with various aspect ratios, i.e. 4:3, 16:9, 16:10, and 2:1 (Kyrnin 2010). Since the nominal diagonal size indicates solely the area of the LCD glass, an additional 10 – 40 mm of border thickness is taken into account for the overall size. The overall thickness of the LCD screen is influenced by the thickness of the back cover, which ranges from 10 – 80 mm. The thickness varies according to the diagonal size and the model year, and newer models tend to be slimmer. Samples of the back cover are shown in Figure 5.16. In summary, the common features used in the recognition process are:

• The diagonal size of the outer border is 400 – 800 mm;
• The maximum aspect ratio is 2.1; and,
• The thickness is 10 – 70 mm.

(a) Captured colour (b) Captured depth (c) Threshold depth (d) Detection result

Figure 5.16: Samples of the back cover seen from top-view

5.3.2.2 Detection algorithm

The data from the depth image is used due to the geometry-based common features. To distinguish the possible region belonging to the sample, fixed-level thresholding is applied to the depth image at $z_F$ = 10 mm, corresponding to the minimum thickness of the back cover. Consequently, this extracted region will be represented as positive pixels. Afterwards, the largest positive region will be surrounded by a bounding box. This initial border is not precise because of the inaccuracy of the depth image at the edges of the object. The pixels representing the edge fluctuate within approximately ±3 mm around their nominal value.

An edge-refining process is applied to each edge by individually moving each edge toward the centre of the component. It will be incrementally moved until the number of pixels on that edge reaches 90% of the size of the corresponding side of the bounding box (see Figure 5.17). This refining process makes the detection more robust to noise and the presence of irrelevant objects belonging to the fixture base. It also guarantees that the

detected boundary is on the actual object’s boundary which is crucial for further generating the cutting path. Finally, the criteria regarding diagonal size and aspect ratio are applied to this bounding box.
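The edge-refining step can be expressed compactly. The following Python sketch shrinks each side of the bounding box in turn until the fill ratio on that edge is reached; the sequential (rather than strictly independent) treatment of the four sides is a simplification of my own, and the function name is an assumption.

```python
import numpy as np

def refine_edges(mask: np.ndarray, bbox, fill_ratio: float = 0.9):
    """Move each edge of the bounding box toward the centre until the number
    of positive pixels on that edge reaches `fill_ratio` of the side length.
    `mask` is the thresholded depth image (values 0/1)."""
    top, left, bottom, right = bbox
    while top < bottom and mask[top, left:right + 1].sum() < fill_ratio * (right - left + 1):
        top += 1
    while bottom > top and mask[bottom, left:right + 1].sum() < fill_ratio * (right - left + 1):
        bottom -= 1
    while left < right and mask[top:bottom + 1, left].sum() < fill_ratio * (bottom - top + 1):
        left += 1
    while right > left and mask[top:bottom + 1, right].sum() < fill_ratio * (bottom - top + 1):
        right -= 1
    return top, left, bottom, right
```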

[Diagram: each edge of the bounding box is scanned toward the centre of the object until the number of positive pixels on that edge reaches the required percentage of the corresponding bounding-box side length.]

Figure 5.17: Edge refining process

5.3.3 Detection of PCB cover

The PCB cover is expected to be the second component found after the back cover has been removed. This detector is designed to detect the part of the PCB cover which is simply a box of significant height situated on the carrier. According to the main structure of the LCD screen, a PCB cover can be found in two types: 1) an isolated PCB cover (Type-I) and 2) an integrated part of the carrier (Type-II) (see details of the types in Section 4.3.2.2). The CRA utilises this information together with the detection result of the carrier to achieve further classification (see details in Chapter 6).

5.3.3.1 Common features

The PCB cover is generally a 10 – 50 mm high metal box used to cover the majority of the electronic components, e.g. PCBs and cables. This cover completely isolates the electronic area from the outside environment to prevent dust and unexpected physical damage. As observed from the top-view, more than 90% of the samples are rectangular and cover an area of approximately 28,000 – 80,000 mm² of an LCD screen. Around half of the samples have chamfers, which are oblique planes on the top edge corners. This is clearly noticeable from the side-view.

According to the type of PCB cover, two types of material can be found: 1) thin shiny metal plate (found in Type-Ia) and 2) thick matte gray metal (found in Type-Ib and Type-II). The colour of these materials is in the range of H ∈ (35°, 130°) and S ∈ (12, 35). More than 95% of the PCB covers contain a large homogeneous gray-tone area with a certain number of features, e.g. ventilation holes. A minority of the covers are shielded by colourful shielding film. However, the IR pattern light from the depth camera confuses the colour perception of the colour camera. The pattern can be clearly seen as red dots

on the reflective surface. A comparison between colour images with and without IR is shown in Figure 5.18. Therefore, the colour criterion is not considered in the detection rule due to a high possibility of false negative detection. It is noted that the classification between the two material types can be achieved in accordance with the carrier detector. In summary, the common features used for detecting a PCB cover are:

• The detected area is 28,000 – 80,000 mm²; and,
• The top plane is 10 – 50 mm higher than the carrier.

(a) Colour image without IR (b) Colour image with IR (c) Depth image

Figure 5.18: PCB cover under different conditions of IR

5.3.3.2 Detection algorithm

The depth image is used due to the geometry-based common features. Thresholding is applied to the depth image in the ROI to distinguish the pixels belonging to the PCB cover according to the criterion of height difference. The threshold level is in the middle between the top plate and the base of the cover. Since the height can vary within the range of 10 – 50 mm, the k-means algorithm is used to automatically identify the proper threshold level. K-means using a two-phase iterative algorithm (MathWorks 2011) is applied to the corresponding 1-D vector containing the pixel values of the entire ROI to group the pixels into two clusters. The threshold level is then set at the centre between the two clusters' centroids (see Figure 5.19).
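A minimal sketch of this two-cluster threshold selection, using a hand-rolled 1-D k-means rather than the MATLAB routine cited above:

```python
import numpy as np

def kmeans_threshold(depth_roi: np.ndarray, iters: int = 20) -> float:
    """Two-cluster 1-D k-means on the depth values inside the ROI; the
    threshold is the midpoint of the two cluster centroids (cf. Figure 5.19)."""
    z = depth_roi.reshape(-1).astype(np.float64)
    c_lo, c_hi = z.min(), z.max()          # initial centroids at the extremes
    for _ in range(iters):
        assign_hi = np.abs(z - c_hi) < np.abs(z - c_lo)
        if assign_hi.all() or (~assign_hi).all():
            break                          # degenerate (e.g. flat) ROI
        c_lo = z[~assign_hi].mean()
        c_hi = z[assign_hi].mean()
    return 0.5 * (c_lo + c_hi)
```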

Afterwards, the positive pixels obtained from the thresholding process will be enclosed by a bounding box. The edge-refining process with 50% of positive pixels at each side is performed to remove irrelevant objects and noise. This parameter is reduced to 50% in order to compensate for the error in the reflective area due to IR in the case of shiny plate PCB covers. Finally, the rule regarding the size and the edge-refining process are applied to the bounding box.


Figure 5.19: Histogram and centroids obtained from k-means

5.3.4 Detection of PCB

The PCBs are expected to be found under the PCB cover. Unlike the detection of other components, multiple PCBs are expected to be found in one disassembly state. Therefore, an additional algorithm to determine the quantity is developed for this detector.

5.3.4.1 Common features

The PCBs can be found in a number of different appearances according to the design and functionality. As a result, the major variations are the types and arrangement of the embedded electronic components. Regarding the geometry, PCBs are rectangular with various sizes, ranging from 60 – 400 mm in length and 10 – 120 mm in width, with a consistent thickness of 1 – 1.5 mm. The aspect ratio ranges between 1:20 and 1:1. Since this range is too broad, this criterion is insignificant. The area of a PCB ranges from 1,000 – 40,000 mm². The base colours of PCBs are vivid and distinguishable (i.e. green and yellow) from the other components in the LCD screen. Therefore, these colour ranges are used as the main feature to recognise this component. In summary, the object is regarded as a PCB if it satisfies the following criteria:

• Green colour range: H ∈ (70°, 200°) and S ∈ (35, 80); or
• Yellow colour range: H ∈ (20°, 70°) and S ∈ (35, 90);
• Large connected region of the satisfied colour; and,
• The detected rectangle area ranges from 1,000 – 40,000 mm².


From the samples, S = 35 is selected as a threshold level for the S-channel because it can differentiate suspected PCBs’ region from the non-PCBs’ region. S < 35 represents 99% of the base colour of the LCD module and the carrier. Meanwhile, S > 35 represents more than 85% of the base colour of the PCBs (see Figure 5.13 and Figure 5.14). Afterward, the green and the yellow PCBs can be classified by using the H-Channel. Each detected region must have a sufficiently large connected area resulting in the bounding box that satisfies the criteria of size.

5.3.4.2 Detection algorithm

The detection of PCBs is more complicated due to the chance of multiple PCBs being located in one disassembly state. The detection algorithm can be divided into two phases. In Phase-1, the suspected PCB regions are roughly identified according to colour and connected pixels. Therefore, there is a possibility that multiple connected PCBs of similar colour are recognised as one large PCB. In Phase-2, this large region is further divided into the correct sub-regions corresponding to the individual PCBs.

In Phase-1, regarding the colour criteria, the pixels containing a colour in the preferred ranges, 1) green and 2) yellow, are extracted by the colour filtering algorithm. The filtering algorithm is applied to the entire image for one colour range at a time. Therefore, a PCB that has a different colour can be clearly distinguished from the others. Subsequently, the satisfying pixels will be marked as positive pixels and clustered by the blob detection algorithm. However, blob detection can falsely divide a PCB into multiple small regions due to disconnected regions caused by two factors: 1) embedded components on the PCB that do not satisfy the colour range and 2) overexposure of an area of reflective surface resulting in loss of image data. Therefore, these gaps are bridged by applying morphological closing (dilation and erosion) (Shapiro and Stockman 2001) before applying blob detection. Blob detection of the disconnected and different colour regions is shown in Figure 5.20.
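A hedged OpenCV sketch of this Phase-1 pipeline (colour filtering, morphological closing, blob detection) is given below; the HSV bounds are expected already rescaled to OpenCV conventions, and the kernel size and area threshold are assumptions.

```python
import cv2
import numpy as np

def pcb_candidate_regions(bgr: np.ndarray, lo_hsv, hi_hsv, min_area: int):
    """Phase-1 sketch: filter one colour range, bridge gaps left by embedded
    components/overexposure with morphological closing, then cluster the
    positive pixels into blobs and return their bounding boxes."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo_hsv, np.uint8), np.array(hi_hsv, np.uint8))
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))  # assumed size
    closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)      # dilation then erosion
    n, labels, stats, _ = cv2.connectedComponentsWithStats(closed, connectivity=8)
    boxes = []
    for i in range(1, n):                 # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((x, y, w, h))
    return boxes
```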


(a) Captured image (b) Blob detection on morphed positive area

Figure 5.20: Blob detection on disconnected and different colour regions

In Phase-2, the criteria regarding the nominal size of a PCB are applied to the blob-detected region. An oversize region (larger than 40,000 mm²) is suspected to contain multiple PCBs and will be partitioned into two sub-areas. For the partitioning process, the partition line is created at the thinnest area, where the number of positive pixels is minimal; the scan for the thinnest area runs along the horizontal direction. Afterwards, the bounding boxes are reassigned for each partition in the post-processing (see Figure 5.21). Eventually, in the post-processing, regions that are completely enclosed by another region are removed.
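The partitioning step reduces to finding the column with the fewest positive pixels, as in this sketch (horizontal scanning only, per the text; the function name is an assumption):

```python
import numpy as np

def partition_oversize(mask: np.ndarray):
    """Phase-2 sketch: split an oversize positive region into two sub-regions
    at its thinnest area (the interior column with the fewest positive pixels)."""
    if mask.shape[1] < 3:
        return mask, None                    # too narrow to partition
    col_fill = mask.sum(axis=0)              # positive pixels per column
    cut = 1 + int(np.argmin(col_fill[1:-1])) # never cut at the borders
    return mask[:, :cut], mask[:, cut:]      # left and right partitions
```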

(a) before partitioning (b) after partitioning

Figure 5.21: Partitioning of the oversize region containing multiple PCBs

5.3.5 Detection of carrier

The carrier is found in the second last state of disassembly, before approaching the LCD module. As described for the detection of the PCB cover in Section 5.3.3, the detection result of the carrier is taken into account for classifying between the two types of material. Therefore, this detector needs to be able to identify whether the entire carrier can be seen

or whether it is partly covered by another component. A Type-Ia PCB cover can be recognised from this detection characteristic.

5.3.5.1 Common features

The carrier is a main component which forms the core structure of the LCD screen. Regarding the functionality, this component needs to have enough strength to hold the LCD module and other components in the vertical direction. Therefore, the carrier is generally fabricated from 1 – 3 mm thick metal plate, which appears as a homogeneous gray-tone connected region. More than 95% of the carriers are matte gray with a non-reflective surface, which is represented by the colour range of H ∈ (73°, 135°) and S ∈ (10, 27). From the top-view, the carrier covers 45 – 90% of the entire LCD screen. The absent percentage is attributable to other components, i.e. screws, cables, cable holders, parts of the front cover, and PCBs. The criteria used for detection are summarised as follows:

• Gray colour range: H ∈ (73°, 135°) and S ∈ (10, 27); and,
• Large connected region that is larger than 45% of the entire LCD screen.

5.3.5.2 Detection algorithm

The colour filtering for the gray-tone colour range is applied to extract the positive pixels. These connected positive pixels are clustered by blob detection and enclosed by a bounding box. This connected region must be sufficiently large to represent the large homogeneous area of the carrier with an acceptable amount of occlusion. The homogeneous area is checked by the conditions $\Phi_{Blob/Box}$ > 65% and $\Phi_{Box/Area}$ > 45%. After the carrier has been recognised, the edges are refined for higher accuracy in localisation. The blob detection and the result are shown in Figure 5.22.

(a) Blob detection of the positive region (b) Detected carrier

Figure 5.22: Blob detection on disconnected and different colour regions


5.3.6 Detection of LCD Module

The LCD module is a crucial component of the LCD screen in terms of functionality. It is the last component expected to be detected, in the goal state. Since no further cutting needs to be made after this state, localisation is unnecessary for this function.

5.3.6.1 Common features

An LCD module is a large rectangular box of 8 – 15 mm thickness, varying according to the size of the LCD screen. With respect to the pose of the sample on the fixture, only the back frame can be seen from the top-view. It appears as a perfect rectangle covering approximately 90% of the entire area. More than 95% of the sample LCD modules have a back frame made of a large, flat, shiny metal plate. The large area of reflective surface with gray-tone colour is noticeable. The defined colour range of H ∈ (40°, 128°) and S ∈ (9, 35) covers 98% of the base colour found in the LCD module samples. A certain number of occluding components are expected to be found, i.e. sticker tags, cables, a PCB for LCD control, etc. Due to the flat surface, the surface roughness ($R_a$) is used to determine the flatness of the back frame. An average $R_a$ within approximately 3 mm is acceptable in consideration of the precision of the depth image. The roughness is observed over 80% of the middle area of the LCD module to avoid confusion from connective components and the front cover in the border area, assumed to be 10% from each side. The common features are summarised as follows:

• Gray colour range: H ∈ (40°, 128°) and S ∈ (9, 35);
• Large connected region larger than 60% of the entire LCD screen; and,
• Roughness < 3 mm over the middle area.

5.3.6.2 Detection algorithm

The detection is based on both the colour and depth images. The detection process is similar to the detection of the carrier, plus one additional criterion checking the surface roughness. For the colour image, colour filtering and blob detection are applied. The homogeneous area is checked by the conditions $\Phi_{Blob/Box}$ > 65% and $\Phi_{Box/Area}$ > 45% (see Figure 5.23a). For the depth image, only valid pixels within the VOI are taken into account, disregarding the blind area. A blind area of up to 15,000 mm² is commonly found in various shapes according to the grain pattern of the reflective surface


(see Figure 5.23c). The surface roughness is calculated as the arithmetic average of the 2D roughness profile, as in Equation (5.27).

$$R_a = \frac{1}{n}\sum_{i=1}^{n}\left|z_i - \bar{z}\right| \tag{5.27}$$

where $R_a$ = arithmetic surface roughness; $n$ = the number of valid pixels; $z_i$ = vertical level; and $\bar{z}$ = average level.
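Equation (5.27) translates directly into a few lines of NumPy; here blind pixels are assumed to be encoded as NaN, which is an implementation choice rather than the thesis's representation:

```python
import numpy as np

def surface_roughness(depth_roi: np.ndarray) -> float:
    """Equation (5.27): arithmetic roughness Ra over the valid pixels
    of the depth ROI (blind pixels assumed NaN-coded)."""
    z = depth_roi[np.isfinite(depth_roi)]   # keep only valid pixels
    return float(np.abs(z - z.mean()).mean())
```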

(a) Colour image (b) Blob detection (c) Depth image

Figure 5.23: Captured images of LCD module

5.3.7 Detection of screws

Screws are connective components commonly found throughout the disassembly process. The number of screws in each state and their point locations, in the form of a list of loc(x, y, z), are provided to the CRA. The detector is designed to have a high detection rate in order to acquire every object that could possibly be a screw, since a single leftover screw can result in failure to remove the corresponding main component. An increased number of false positive detections is the major drawback, resulting in excessive disassembly time spent on these redundant objects. Therefore, screw detection is implemented only in a specific area in the PCB detection state, in which screw removal is effective.

5.3.7.1 Common features

All of the screws in the observed LCD screen samples are M2 – M3 flat or button Phillips head. Therefore, the size of the screw head is quite consistent, approximately 10×10 – 15×15 pixels in the colour image. This small size results in a lack of detail for the recognition process. In addition, the reflection from the disassembly rig's lighting system dominates the detail of the screws (see Figure 5.24a-d for close-up views). However, it was found that most of the reflections on the screw heads follow a similar pattern, as in Figure 5.24d. Therefore, this pattern is selected for detection. In order to develop a detector that is robust to this variation, a Machine Learning approach based on Haar-like features (Viola and Jones 2001) is implemented.

(a) Training samples (b) sample (c) sample (d) sample

Figure 5.24: Sample images of the screws

5.3.7.2 Detection algorithm

The detection process involves two major stages, including 1) training of the detector and 2) implementation on the image.

Training stage - the Haar-cascade was trained with around 800 positive samples and around 7,000 negative samples using the Adaptive Boosting (AdaBoost) algorithm. The positive samples of 15×15 pixels were collected under different conditions of illumination, contrast, noise, and orientation in order to develop a robust detector (Figure 5.24a). Selected images from the standard dataset Caltech101 (Fei-Fei et al. 2004) were used as the negative samples. The training utility is provided by OpenCV. The training parameters were set to generate a detector with a high detection and low rejection rate. As a result, the cascade was trained in 20 stages, taking 2.5 hours, and achieved a detection rate of 98% and a false positive rate of 0.0001%.

Implementation stage - the trained Haar-cascade is utilised by the Viola-Jones detector (Viola and Jones 2001). This technique is selected because of three major advantages: 1) robustness, 2) rapid detection speed, and 3) scale independence. The detection is performed mainly on the colour image to locate all possible screws. The information from the depth image is used to filter out detected objects that do not satisfy the size criteria. In addition, the local ROI is taken into account to filter out irrelevant objects outside the scope of the corresponding main component. Finally, the detector gives the list of locations of the centre of each detected screw to the CRA. An example case is shown in Figure 5.25.
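A sketch of the implementation stage with OpenCV's cascade API; the cascade file name, the ROI format, and the detectMultiScale parameters are assumptions, not the thesis's trained detector:

```python
import cv2

# Hypothetical file name; the thesis's trained cascade is not published here.
cascade = cv2.CascadeClassifier("screw_haar_cascade.xml")

def detect_screws(bgr, roi):
    """Run the Viola-Jones detector inside the local ROI and return the
    centre of each detected screw head in full-image coordinates."""
    x0, y0, w0, h0 = roi
    gray = cv2.cvtColor(bgr[y0:y0 + h0, x0:x0 + w0], cv2.COLOR_BGR2GRAY)
    hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3,
                                    minSize=(10, 10), maxSize=(20, 20))
    return [(x0 + x + w // 2, y0 + y + h // 2) for (x, y, w, h) in hits]
```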


[Image: detected screws on a PCB marked as true positives (TP), false negatives (FN), and false positives (FP).]

Detection in the PCB's region: 7 true positives (TP), 1 false negative (FN), and 6 false positives (FP)

Figure 5.25: Detection of screws

5.3.8 Detection of state change

A state of disassembly is defined according to the existence of a particular main component. The transition of the state occurs when the entire component, or significant parts of it, has been detached and significantly moved from the original location. The disassembly state is expected to change after a sufficient number of disassembly operations have been performed. The change of the disassembly state is a key measure of success or failure of the current disassembly operation. The CRA keeps monitoring the change after each physical action has been executed. The result is presented in the form of a logical value for the execution monitoring function.

5.3.8.1 General concept

This research proposes two candidate approaches for measuring state change: 1) an absolute approach and 2) a relative approach. First, in the absolute approach, the detection of a particular component is repeatedly performed to recheck the properties of the component, including location and existence. The state is indicated as changed if an entirely new set of properties has been detected. This method is more flexible for a more complicated product structure. Second, in the relative approach, the measurement has

been performed relative to the original property. The component needs to be detected once at the beginning. Afterwards, incremental change will be measured.

For the first method, a critical logical ambiguity possibly arises in the case that some parts of the component remain unremoved. These would be incorrectly recognised as a new component. As a result, the number of main components would increase uncontrollably throughout the disassembly process. Therefore, the relative approach is selected to avoid this problem. In addition, it is more robust to the destructive disassembly and to an imperfect component detector.

5.3.8.2 Detection algorithm

The change is measured from the differences and similarities between the original condition and the current condition of a particular state. The depth image is mainly considered since it represents the physical geometry of the component. However, due to the limitation posed by components of significant height, the colour image is also taken into account to compensate. Once a new state has been approached, it will be flagged and the original property will be stored as a benchmark. The later conditions will be compared to this benchmark until the current state is completed. To disregard the irrelevant surroundings, the state change is considered only within a local ROI enclosing the main component. The processes for the depth image and the colour image are described as follows.

Depth image - the difference is measured from the change of the depth pixels to a lower level-z. In general cases, a lower pixel value represents the volume of the component that has been removed (see condition φ1 in Equation (5.29)). In addition, the change of the blind area is taken into account since it corresponds to the surface property of a particular component. From a change of the blind area it can be implied that the original component has moved from the current location and a new component with a different blind condition can be seen (see conditions φ2 and φ3 in Equations (5.30) and (5.31)). In summary, the stated difference with respect to the depth image ($\mathit{Diff}_{depth}$) is measured as the ratio between the number of pixels satisfying one of the conditions of change and the number of pixels within the local ROI, as in Equation (5.28).

$$\mathit{Diff}_{depth} = \frac{1}{S_I}\sum_{I}\bigl[\varphi_1 \vee \varphi_2 \vee \varphi_3\bigr] \tag{5.28}$$


$$\varphi_1 \equiv z_{i,flag} > z_{i,check} \tag{5.29}$$

$$\varphi_2 \equiv \bigl(z_{i,flag} \in \phi_{blind}\bigr) \wedge \bigl(z_{i,check} \notin \phi_{blind}\bigr) \tag{5.30}$$

$$\varphi_3 \equiv \bigl(z_{i,flag} \notin \phi_{blind}\bigr) \wedge \bigl(z_{i,check} \in \phi_{blind}\bigr) \tag{5.31}$$

where $\varphi_i$ = condition; $I$ = pixel of the specific ROI; $S_I$ = size of the specific ROI; $z_i$ = level-z ($z_F$); and $\phi_{blind}$ = blind area on the surface of the component due to the IR reflection.
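A compact sketch of Equations (5.28)–(5.31); as above, blind pixels are assumed to be NaN-coded, which is an implementation choice of this sketch:

```python
import numpy as np

def diff_depth(z_flag: np.ndarray, z_check: np.ndarray) -> float:
    """Equation (5.28): fraction of ROI pixels satisfying one of the change
    conditions (5.29)-(5.31)."""
    blind_flag = np.isnan(z_flag)
    blind_check = np.isnan(z_check)
    phi1 = (z_flag > z_check) & ~blind_flag & ~blind_check  # material removed
    phi2 = blind_flag & ~blind_check                        # blind area disappeared
    phi3 = ~blind_flag & blind_check                        # blind area appeared
    changed = phi1 | phi2 | phi3
    return float(changed.sum()) / changed.size              # S_I = ROI size
```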

(a) Colour image (b) Depth image (c) 2.5D depth map

Figure 5.26: State change – original condition

(a) Colour image (b) Depth image (c) 2.5D depth map

Figure 5.27: State change – the component is removed

Colour image - the stated difference with respect to the colour ($\mathit{Diff}_{colour}$) is measured using a colour-based histogram comparison in HSV colour space. The dense histogram ($H_k$) is constructed in two channels, Hue and Saturation, in order to disregard the effect of illumination. The $\mathit{Diff}_{colour}$ is obtained by Equations (5.32) and (5.33), which are derived from the correlation equation for measuring histogram similarity in (OpenCV 2010).

$$\mathit{Diff}_{colour} = 1 - \frac{\sum_{I}\bigl(H_{flag}(I) - \bar{H}_{flag}\bigr)\bigl(H_{check}(I) - \bar{H}_{check}\bigr)}{\sqrt{\sum_{I}\bigl(H_{flag}(I) - \bar{H}_{flag}\bigr)^{2}\,\sum_{I}\bigl(H_{check}(I) - \bar{H}_{check}\bigr)^{2}}} \tag{5.32}$$


$$\bar{H}_k = \frac{1}{N}\sum_{I} H_k(I) \tag{5.33}$$

Where N = number of histogram bins; I = pixel of the local ROI; Hflag = histogram of the original condition; and Hcheck = histogram of the current condition.
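OpenCV's correlation comparison implements the expression in Equations (5.32)–(5.33), so Diff_colour can be obtained as sketched below; the H-S bin counts are assumptions:

```python
import cv2
import numpy as np

def diff_colour(bgr_flag: np.ndarray, bgr_check: np.ndarray) -> float:
    """Equations (5.32)-(5.33): build H-S histograms (the Value channel is
    dropped to suppress illumination effects) and return 1 - correlation."""
    def hs_hist(bgr):
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        h = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
        return cv2.normalize(h, h).flatten()
    h_flag, h_check = hs_hist(bgr_flag), hs_hist(bgr_check)
    return 1.0 - cv2.compareHist(h_flag, h_check, cv2.HISTCMP_CORREL)
```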

Overall, the depth criterion can identify the change effectively in the majority of the samples because it reflects the physical geometry. However, in regard to the IR sensing technique, limitations are found in two circumstances: 1) the removed volume has insignificant size and height, or 2) the major part of the component lies under the blind area. Under these circumstances, an insufficient number of pixels can be counted and compared, resulting in false detection. The colour criterion is robust for detecting state change in LCD screens since a colour difference can be noticed between any pair of component types, so the state change can still be noticed under the aforementioned problematic conditions. However, this criterion is relatively weak since it does not physically reflect the geometrical change. In addition, in regard to the destructive disassembly approach, the colour can change due to covering by dust and fumes generated during the cutting process. Therefore, both the depth and colour approaches are taken into account, as in Equation (5.34).

$$\mathit{stateChange} \equiv \bigl(\mathit{Diff}_{depth} \ge \Phi_{depth}\bigr) \vee \bigl(\mathit{Diff}_{colour} \ge \Phi_{colour}\bigr) \tag{5.34}$$

From a preliminary test in which a main component was non-destructively removed from the samples, a threshold for the depth difference ($\Phi_{depth}$) of 50% and for the colour difference ($\Phi_{colour}$) of 75% can individually differentiate the change in 95% of the samples. However, to prevent false positive results that could possibly lead to complicated logical reasoning by the CRA, higher threshold levels are selected: $\Phi_{depth}$ = 50% and $\Phi_{colour}$ = 80%. An example of state change is shown in Figure 5.26 and Figure 5.27.

5.3.9 Detection of model of LCD screen

According to the learning ability, the CRA is able to obtain the knowledge for a particular model of LCD screen while disassembling it. The CRA will reuse and revise this knowledge when a sample of this identical model is found in the future, following the framework explained in Section 6.1. This section presents the methodology for detecting the model, which is used to identify whether the model of a current sample has previously been

disassembled or not. The corresponding knowledge exists in the KB if the model has previously been seen and disassembled (known model). Otherwise, the knowledge of a previously unseen model (unknown model) will be generated during the disassembly process of this sample. In this research, the model is recognised with Speeded-Up Robust Features (SURF), which represent the appearance of a particular LCD screen. Therefore, no relation to the manufacturer is established or provided.

5.3.9.1 General concept

SURF (Bay et al. 2008) is used to recognise the model of the sample by matching it with the existing models in the KB. This technique is generally used for finding correspondences between two images by matching the descriptor vectors belonging to their interest points (IPs). The IPs are distinctive features in the image, e.g. corners, blobs, and T-junctions, which are detected by the Fast-Hessian detector. The descriptor of each IP is a 64-dimension vector representing its intensity structure with respect to the surroundings. The descriptor vectors of both images are matched using the Euclidean distance. The correspondence is measured from the number of successfully matched descriptors. This technique is selected due to three characteristics: 1) speed, 2) robustness, and 3) scale and rotation invariance. In this research, the OpenSURF library (Evan 2009) is used.

(a) sample (b) candidate with matched IPs (c) candidate with detected IPs

The number of detected IPs: (a) = 745 and (b)-(c) = 707; the number of matched IPs = 182

Figure 5.28: Interest points of SURF in a sample and a candidate model in KB.

A particular model of the LCD screen is represented by the descriptor vectors of the IPs belonging to its distinctive features. The colour image of the back cover is used to distinguish one model from another. For the 37 different models observed, the appearance of the back cover of each model is unique with respect to its distinctive features, e.g. screws, ventilation holes, stickers, model and brand labels, brand logo, edges, corners, etc. An example of the detected IPs in one of the samples is shown in Figure 5.28c. However, the

design of the back cover can be similar for models which are of different sizes but from the same product series. Therefore, the size criterion is also taken into account.

5.3.9.2 Detection algorithm

The ROI and the size of the back cover of the sample are obtained by executing the back cover detector (see Section 5.3.2). SURF is implemented on the colour image by generating the 64-dimension descriptor vectors for each IP detected in this ROI. Afterwards, the matching process returns the number of IPs that are matched between the sample and each model in the KB. This process is repeated for all existing models in the KB to find the most corresponding candidate, which is the model having the highest ratio_SURF with respect to the sample. The ratio_SURF relates the number of matched IPs to the numbers of detected IPs belonging to the sample and to the model in the KB (see Equation (5.35)). Finally, the sample will be recognised as this candidate model if the ratio_SURF is higher than the minimum criterion Φ_SURF (Equation (5.36)) and their sizes are identical (Equation (5.37)).

From the preliminary experiment, samples of an identical model have a ratio_SURF between 18 – 100%, depending on the variations, e.g. position and orientation of the sample, labels, screws, noise, and lighting. On the other hand, ratio_SURF < 10% represents 95% of the samples of a different model. Therefore, Φ_SURF = 15% is selected as the threshold level. The size criterion is also used to achieve higher accuracy. An example of the SURF matching between a sample and a candidate is shown in Figure 5.28.

$$\mathit{ratio}_{SURF}(x_s, x_{kb}) = \frac{1}{2}\left(\frac{nIp_{matched,x_s}}{nIp_{x_s}} + \frac{nIp_{matched,x_{kb}}}{nIp_{x_{kb}}}\right)\times 100\% \tag{5.35}$$

$$\mathit{rule}_{SURF}(x_s, x_{kb}) \equiv \bigl\{\mathit{ratio}_{SURF}(x_s, x_{kb}) \ge \Phi_{SURF}\bigr\} \tag{5.36}$$

$$\mathit{rule}_{size}(x_s, x_{kb}) \equiv \bigl\{\mathit{size}(x_s) = \mathit{size}(x_{kb})\bigr\} \tag{5.37}$$

where $x_s$ = model of the sample and $x_{kb}$ = model in the KB.
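A sketch of the matching ratio with OpenCV rather than OpenSURF (which the thesis itself used); this assumes a build that still ships the non-free xfeatures2d SURF implementation, and the Hessian and distance thresholds are assumptions:

```python
import cv2

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # 64-D descriptors
matcher = cv2.BFMatcher(cv2.NORM_L2)                      # Euclidean distance

def ratio_surf(gray_sample, gray_kb, max_dist: float = 0.25) -> float:
    """Sketch of Equation (5.35): count descriptor matches below an assumed
    distance threshold and average the matched fractions of both images."""
    kp_s, des_s = surf.detectAndCompute(gray_sample, None)
    kp_k, des_k = surf.detectAndCompute(gray_kb, None)
    if des_s is None or des_k is None:
        return 0.0
    matches = [m for m in matcher.match(des_s, des_k) if m.distance < max_dist]
    return 0.5 * (len(matches) / len(kp_s) + len(matches) / len(kp_k)) * 100.0
```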

In the case that none of the candidates satisfies these criteria, the sample will be considered a new model. The CRA will create a new instance in the KB for learning this

model by storing the corresponding descriptor vectors. For reference purposes, the human user can name the model according to the manufacturer's information, e.g. brand, series, model name, etc. However, no relation with the manufacturer's information is established. In summary, the flowchart of the model detection in regard to the learning process is shown in Figure 5.29.

[Flowchart: the image of the sample is captured; the ROI is indicated by the back cover detector; the SURF algorithm is applied to obtain interest points and descriptor vectors; the sample is matched against every model in the KB; the candidate is the model with the highest SURF ratio; if the candidate satisfies the rules of minimum SURF ratio and equal size, the sample is a known model and its knowledge is reused from the KB; otherwise a new model is added to the KB by storing the descriptor vectors, and its process will be learned from disassembly. The disassembly process then starts.]

Figure 5.29: Process flowchart of the model detection

5.3.10 Other utility functions

5.3.10.1 Measurement of vertical distance ZF

The vertical distance from the fixture base, $z_F$, is measured in two circumstances: 1) to locate the component by the detector and 2) to locate the cutting path by the human operator. The image-to-operational-space conversion in Equation (5.14) is used for computing the

distance of a particular point (x, y, z) according to the input (r, c). However, due to the noise fluctuating ±1 bit in the depth image, an average value over the area of pixels surrounding the exact point is used to compensate for this error. An area within ±2 pixels (equivalent to approximately ±1 mm) around the exact feature is taken into consideration. The feature can be: 1) a point, 2) a line, or 3) a rectangle.
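A minimal sketch of the windowed depth averaging for a point feature (blind pixels again assumed NaN-coded, an assumption of this sketch):

```python
import numpy as np

def measure_zf(depth: np.ndarray, r: int, c: int, half: int = 2) -> float:
    """Average z_F over a (2*half+1)^2 pixel window around (r, c) to damp
    the ±1-bit depth noise (±2 pixels is roughly ±1 mm here)."""
    win = depth[max(r - half, 0):r + half + 1, max(c - half, 0):c + half + 1]
    return float(np.nanmean(win))
```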

5.3.10.2 Checking grinder size

The disassembly operation uses an angle grinder equipped with an abrasive cut-off disc to perform the cutting operation. The cut-off disc continuously wears out throughout the disassembly process. As a result, the position of the tool-tip {T} relative to the robot's end-effector ($L_t$) changes according to the size of the remaining cut-off disc (Ø$_t$). Therefore, the current size of the cut-off disc needs to be updated regularly after each operation cycle. From the geometry of the grinder disc shown in Figure 5.30, $L_t$ can be obtained from Equation (5.38). The system constants are $L_{CR6}$ = 279 mm and Ø$_{max}$ = 125 mm. The radial wear ($\delta_t$) is obtained by the vision system by calculating the distance between the reference level and the lower edge of the cut-off disc. The edge is detected by applying fixed-level thresholding, owing to the high contrast between the cut-off disc and the background. Given the camera's perspective, a precision of 0.366 mm/pixel is achieved.

$$L_t = L_{CR6} + \frac{\varnothing_{max}}{2} - \delta_t \tag{5.38}$$

[Diagram: the grinder is mounted on the robot axis-6 end-effector {R6}; Lt is the distance from {R6} to the tool-tip {T} at the lower edge of the cut-off disc; Ømax is the original size of the cut-off disc and δt is the radial wear measured from the reference level.]

(a) Side view (b) Front view (c) Image from camera

Figure 5.30: Checking size of grinder disc


5.4 Experiment

The detectors for LCD screen disassembly in Sections 5.3.2 - 5.3.10 were tested individually. The objective of the experiment is to measure the detection performance of the vision system module with regard to the case-study product. The performance is considered from two perspectives: 1) recognition and 2) localisation. The experiment is divided into three groups according to the functionality of the detectors. The full results are shown in Appendix C; a summary is given in Table 5.6.

| # | Detector | Recognition sensitivity (%) | Localisation mean (mm) | S.D. (mm) | RMS (mm) | Min (mm) | Max (mm) | Average time (s) |
|---|---|---|---|---|---|---|---|---|
| 1 | Back cover | 100.00 | -2.10 | 2.10 | 2.97 | -7.43 | 1.71 | 2.81 |
| 2 | PCB cover | 100.00 | 0.06 | 3.06 | 3.05 | -8.57 | 9.71 | 3.30 |
| 3 | PCBs | 90.36 | 2.55 | 5.38 | 5.93 | -17.71 | 18.86 | 3.66 |
| 4 | Carrier | 79.41 | 2.15 | 3.56 | 4.15 | -4.57 | 13.71 | 4.10 |
| 5 | LCD module | 51.35 | n/a | | | | | 2.62 |
| 6 | Screw | 64.22 | error within ±0.5 | | | | | 2.60 |
| 7 | State change | 95.48 (1) | n/a | | | | | 2.45 |
| 8 | Model | 95.74 (1) | n/a | | | | | 15.42/0.42 (2) |
| 9 | Measure zF | n/a | 1.32 | 2.84 | 2.84 | -4.39 | 4.39 | 0.91 |
| 10 | Grinder size | n/a | -1.28 | 0.23 | 1.30 | -2.00 | -0.80 | 2.24 |

NOTE: (1) overall accuracy; (2) time for the descriptor / matching for each sample

Table 5.6: Performance of the main component detector

5.4.1.1 Main component

The test was done on 37 samples of LCD screens. The samples were disassembled manually and the detection of each main component was performed only in the prospective state, corresponding to the main structure of LCD screens described in Section 4.1.2.2. Therefore, the sensitivity of the recognition process reflects the accuracy of the detector in determining the existence of the component in the prospective state. The sensitivity is defined in Equation (5.39).

$$\mathit{Sensitivity}(\%) = \frac{n_{TruePositive}}{n_{TruePositive} + n_{FalseNegative}} \times 100\% \tag{5.39}$$

For the localisation, only the true positive results were taken into account in order to measure the accuracy of the detector. The accuracy is measured from the distance error with respect to the border of the actual component. The prospective cutting path is taken

into account to determine the error direction. The error is positive (δ+) if the border of the detected component is outside the actual object (see Figure 5.31). Therefore, the negative error (δ-) is preferred since the prospective cutting path is then on the actual object. Performance of the main component detectors and average times are summarised in Table 5.6 (1-5).

[Diagram: the distance error δ between the borders of the detected component and the actual component; δ+ lies outside the actual component, δ− inside.]

Figure 5.31: Measurement direction of the distance error

First, regarding recognition, all detectors performed the recognition task effectively, with sensitivities above about 80%, except for the LCD module, which was about 50%. It can be implied that the proposed detection rules are effective. The relatively low percentage of the LCD module detector was caused by the surface roughness criterion, which was very sensitive to the noise of the depth image.

Second, regarding localisation, the outliers in the raw data were initially removed with the median absolute deviation (MAD), eliminating inconsistent data lying beyond ±3 MAD. The accuracy of localisation is determined from the mean of the error, which was within ±3 mm around the actual border. The precision, determined by the root mean square (RMS), of all detectors was within about 6 mm. From the results, the PCBs are localised less accurately than the other main components. The inaccuracy was caused by three factors. First, a number of embedded components that cover the PCB base prevent the blob detector from accurate clustering. Second, the electrical cables lying outside the PCB's boundary can satisfy the colour criteria, resulting in error in the positive direction. Third, partitioning cannot be done effectively among similar-size PCBs. Overall, the localisation tasks were performed effectively.


5.4.1.2 Connective component

The screw detector was tested only on the PCBs, as stated in Section 5.3.7. The samples were collected from 72 PCBs found across all LCD screen samples. The recognition performance is determined from the detection rate, which is equivalent to the sensitivity in Equation (5.39). Only screws lying within the PCB's boundary were taken into account, according to the prospective ROI in the real disassembly operation. From the experiment, the detection rate is 64.22%. As a trade-off for the high detection rate, the result contained a number of false positives caused by other embedded components. For the localisation, the detector was able to locate screws very accurately, within ±0.5 mm, owing to the nature of the detection algorithm used (see Table 5.6(6)).

5.4.1.3 Other functions

State change detector - the performance of the state change detector was measured by the detection accuracy, which is defined as the average of sensitivity and specificity (see Equation (5.40)). Thus, the effectiveness of the detector in determining "change" or "no change" was measured. The main components were completely and manually detached without damage. Over 100 samples were observed in each condition. As a result, the detection accuracy was 95.48% (see Table 5.6(7)).

Specificity (%) = nTrue Negative / (nTrue Negative + nFalse Positive) × 100%     (5.40)
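Analogously to the sensitivity sketch above, the specificity in Equation (5.40) and the detection accuracy (the average of sensitivity and specificity) can be computed as follows (a minimal Python sketch with hypothetical counts):

    def specificity(n_true_negative, n_false_positive):
        # Equation (5.40): share of unchanged states correctly reported as "no change"
        return 100.0 * n_true_negative / (n_true_negative + n_false_positive)

    def detection_accuracy(sens, spec):
        # detection accuracy as defined for the state change detector
        return (sens + spec) / 2.0

    print(detection_accuracy(96.0, 95.0))   # hypothetical values -> 95.5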

Model detector – the performance of the model detector is measured by the accuracy of classifying the samples against the existing models. The experiment was conducted with 47 samples (37 samples of different models, two of which have 5 extra samples each). The two selected models are presented in the learning and revision experiment in Section 7.1.3. The descriptors of the 37 models were learned beforehand and the 47 samples were classified against these descriptors. If a sample cannot be recognised as one of the existing models, it is indicated as a new unknown model. From the experiment, the number of interest points (IPs) ranged from 134 to 1187 according to the complexity of the features on the back cover. The average time for generating the descriptor vectors for each sample was 5.42 seconds, and for matching between each pair of samples 0.42 seconds. The detector achieved approximately 95% accuracy in distinguishing the 37 models. Misclassification occurred with a pair of models from the same manufacturer whose back covers look almost identical. The detector identified the samples of the two selected models 100% correctly. Overall, from these experiments, the detector with the proposed ratioSURF and size criterion was able to achieve a detection accuracy of 95.74% (see Table 5.6(8) and the classification results in Appendix C).

Measurement of vertical distance ZF – this was tested on a depth image of a flat surface. The distance of each pixel was computed over an area of 240,000 pixels covering the expected sample loading area. Outliers were initially removed with ±3MAD. The precision was determined from the RMS, which was 2.84 mm. The range of the data was within ±4.39 mm, corresponding to ±1 bit of depth resolution (see Table 5.6(9)).

Grinder size – the test was done by comparing the detection results to the actual maximum grinder length (Lt) = 341.5 mm. From the results, the accuracy was -1.28 mm and the precision was within 1.30 mm according to the RMS (see Table 5.6(10)). This precision is sufficient for the cutting process.

5.5 Conclusion

The visual input is obtained from a colour camera and a depth camera. The colour camera provides a higher resolution image used in colour-related algorithms. The depth camera provides the depth information used for obtaining the vertical distance of the object relative to the fixture base. The depth image is initially aligned to the colour image using an affine transformation. Consequently, the corresponding pixels belonging to both images are mapped, resulting in a 2.5D depth map with colour information. The position of an object in operational space (x,y,z) can be converted from the image space (r,c) with the corresponding zF. The product coordinate {P} is preferred over the robot base coordinate {B}, since the information of the features is relative to the product itself and can be used for the further learning process. The information regarding the location can be described by four geometrical features, i.e. point, line, rectangle, and box. They are used to communicate with the CRA in the form of Golog syntax.
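A minimal Python sketch of this mapping is given below; the affine coefficients, scale factor, and image origin are hypothetical placeholders, since the calibrated values depend on the actual camera setup:

    import numpy as np

    # hypothetical 2x3 affine transform aligning a depth pixel to the colour image
    A_depth_to_colour = np.array([[1.02, 0.00,  5.4],
                                  [0.00, 1.02, -3.1]])

    def align_depth_pixel(r, c):
        """Map a depth-image pixel (r, c) onto the colour image plane."""
        return A_depth_to_colour @ np.array([r, c, 1.0])

    def image_to_product(r, c, z_f, mm_per_px=0.5, origin_rc=(240, 320)):
        """Convert image space (r, c) with vertical distance zF to product
        coordinates {P}; the scale and origin are illustrative placeholders."""
        x = (c - origin_rc[1]) * mm_per_px
        y = (r - origin_rc[0]) * mm_per_px
        return (x, y, z_f)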

The algorithms for detecting components in LCD screens were developed. For recognition, the detection rules (see Section 5.3.1) are generated according to the common features in relation to the physical appearance of each type of component. The parameters are obtained from observing the components found in 37 different models of LCD screens. These detection schemes are validated by the experimental results, which show that the detectors of most components achieve an accuracy of at least 80%, and above 50% for the LCD module and screws. For the localisation, the precision of the detectors is within 6 mm. The PCB detector tends to produce the most errors due to a very complicated condition regarding the appearance. However, most of the detectors produce errors in the positive direction (δ+), which means the detected border lies outside the actual object. The utility functions were also sufficiently accurate. In addition, the functions for detecting the state change and the model achieve 95% accuracy.

In conclusion, the vision system performed the visual detection process effectively, as empirically validated. Inaccurate results may occur in some complicated circumstances due to the limitations of the proposed algorithms. As a result, the CRA and the operation plan are taken into account to resolve further problems. In addition, these experiments were done under the assumption that the components are cleanly removed without damage. The performance is subject to a slight decrease in the real disassembly process, which is conducted in (semi-)destructive ways.


6 COGNITIVE ROBOTICS

This chapter gives information about the cognitive robotic module (CRM), which controls the behaviour of the system and is located in the high-level control layer, as illustrated in Figure 6.1. The content is divided into four sections organised as follows. First, an overview is given in Section 6.1. Second, the knowledge representation in regard to the disassembly domain is explained in Section 6.2. Third, the implementation of cognitive robotics from the perspectives of basic behaviour control and advanced behaviour control is described in Section 6.3. Lastly, an experiment regarding the conceptual test and process flow is described in Section 6.4.

Figure 6.1: System architecture in the perspective of the cognitive robotic module


6.1 Overview of cognitive robotics

Cognitive robotics focuses on the problems of knowledge representation and reasoning encountered by an autonomous system in an incompletely known and dynamic world (Levesque and Lakemeyer 2007). The cognitive robotic agent (CRA) controls the behaviour of this autonomous system according to cognitive functions. As a result, the system can interact robustly and flexibly with unpredictable conditions in the dynamic environment to achieve the desired goals (Moreno 2007). A number of research works have been conducted as presented in the literature review Section 2.5.2. In this research, the concept is implemented in the disassembly domain. The automated disassembly system is considered a multi-agent system (MAS) and the high-level behaviour is controlled by the cognitive robotic module (CRM). The approach of knowledge representation and reasoning (KRR) is applied to represent the disassembly domain and the action programming language IndiGolog is used to develop the agent.

This section is organised as follows. Firstly, the overall methodology of using cognitive robotics to address uncertainties is explained in Section 6.1.1. Secondly, the architecture of the CRM is described in Section 6.1.2. Lastly, an overview of IndiGolog is given in Section 6.1.3.

6.1.1 Methodology

In comparison to an automated system, human operators are more capable of dealing with the uncertainties in the End-of-Life (EOL) products returned. Therefore, in this case, cognitive robotics is used to emulate the behaviour which the human operator exhibits throughout the disassembly process, in order to deal with the uncertainties at the product and process levels listed in Table 6.1 (see detail in Section 3.1.3).

   Major uncertainty                    Specific detail
   Variety in the supplied products     Main product structure
                                        Quantity of the components
   Process planning and operation       Disassembly Sequence Plan (DSP)
                                        Disassembly operation plan
                                        Disassembly process parameters

Table 6.1: Uncertainties addressed by the cognitive robotics module


Like the behaviour of the human operator as described in Section 3.1.1, the behaviour of the CRA is illustrated as a flowchart in Figure 6.2 which is extended from Figure 3.1.

Figure 6.2: Behaviour of the cognitive robotic agent in the disassembly process

The CRA initially determines whether the model of the sample is already known in the existing knowledge base (KB) or not. If this LCD screen model is being seen for the first time (unknown), the CRA will go through the trial process, in which actions are executed based on general operation plans according to the components in the current state. The removal process will be carried out with a number of attempts using different operation strategies and process parameters. Human assistance will be provided if the CRA becomes stuck too many times. Finally, the successful process specific to this model is learned by storing it in the KB. As a result, the CRA can recall this knowledge from the KB if this model of product is seen again (known). The CRA will follow the instructions in the KB in the general case. In case of failure due to variations and uncertainties in the process, the CRA will request additional human assistance in order to resolve the uncertainties and achieve the goal. The CRA also learns the new knowledge and revises the existing KB.
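This behaviour can be summarised as the following control-loop sketch (a minimal, runnable Python illustration with toy stand-ins; the function names are illustrative, not the actual IndiGolog procedures):

    def disassemble(sample, kb):
        """Sketch of the CRA behaviour in Figure 6.2 (illustrative only).
        sample: {"model": str, "components": [component names in removal order]}
        kb:     {model name: [component names learned as removable]}
        """
        model = sample["model"]
        known = model in kb                       # known vs unknown model
        removed = []
        for component in sample["components"]:
            if known and component in kb[model]:
                success = True                    # follow the instruction in the KB
            else:
                # trial process: try removal operations with different strategies
                success = any(try_removal(component, strategy)
                              for strategy in ("plan1", "plan2", "plan3"))
            if not success:
                success = ask_human(component)    # demonstration via the GUI
            if success:
                removed.append(component)         # state change: component detached
        kb[model] = removed                       # learn / revise the KB
        return removed

    def try_removal(component, strategy):
        # stand-in for executing an operation plan and checking the state change
        return strategy == "plan3"

    def ask_human(component):
        # stand-in for human assistance; assumed to always succeed here
        return True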

Regarding the process flow, the CRA drives the disassembly system through the states of disassembly by controlling a sequence of actions using knowledge of the external world and applying the behaviours at its disposal. The physical actions executed by the CRA are expected to change the state of disassembly. The transition from one state to another occurs when a particular main component has been successfully removed from the original location (see definition of state change in Section 5.3.8). This process continues from the initial state of the product until the goal state has been reached.

In this case, the external conditions relate to the disassembly in the physical and external world. They are considered incomplete knowledge because the world is subject to change in a nondeterministic way once the operations have been executed. Therefore, this automated process is considered an open-world execution since the incomplete knowledge of the world needs to be sensed on-line during the process. The interaction with the external world is described in the next section.

6.1.2 Cognitive robotic module architecture

The architecture of this system is developed based on the closed perception-action loop architecture (Zaeh et al. 2007), in which the cognitive robotic module interacts with the physical world via the disassembly cell and human input, as described in Section 3.1.2. The disassembly cell consists of sensors and actuators, which support the CRA in perceiving information and performing disassembly operations, respectively. According to the architecture illustrated in Figure 6.3, the CRA and the KB are parts of the cognitive robotic module. The CRA interacts with the sensors (vision system module (VSM)) and actuators (disassembly operation unit module (DOM)) located in the physical world. The CRA also interacts with human experts via the graphical user interface (GUI) when assistance is needed. This framework was presented earlier in (Vongbunyong et al. 2012).


Figure 6.3: System architecture in the cognitive robotics perspective

In regard to the composition of the CRM, the KB is designed to be separate from the CRA. The CRA is an IndiGolog program that can store or obtain knowledge in the form of Prolog facts contained in the KB. Knowledge specific to particular models of LCD screen (model-specific knowledge) is used in this case. In the prospective industrial scenario, this autonomous system will be deployed in a number of factories, each of which is expected to disassemble various models of LCD screens. Therefore, each system will learn from a different set of samples, which results in significant differences between the systems. Learning by storing the knowledge in a separate KB is more suitable than allowing the CRA to modify its own program, as is usually done with Golog (Braun 2011). A major advantage is the portability of the knowledge, which can be shared and combined among the systems in different factories. The complexity due to the variation of CRAs that evolve in different ways according to their training samples can thus be avoided. In addition, this model-specific knowledge can be revised in order to improve the performance of the disassembly process of a previously seen model.
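The portability argument can be illustrated with a sketch of model-specific knowledge kept as plain data records that two systems can merge (a hypothetical Python record layout; the actual KB stores Prolog facts):

    # hypothetical record layout: model id -> learned, model-specific knowledge
    kb_factory_a = {"modelA": {"dsp": ["c1", "c2", "c3", "c4", "c5"],
                               "cut_depth_mm": {"c1": 8.0}}}
    kb_factory_b = {"modelB": {"dsp": ["c1", "c3", "c4", "c5"],
                               "cut_depth_mm": {"c1": 6.0}}}

    def merge_kb(*kbs):
        """Combine model-specific knowledge learned by different systems.
        A simple last-wins policy is assumed for conflicting models."""
        merged = {}
        for kb in kbs:
            merged.update(kb)
        return merged

    shared_kb = merge_kb(kb_factory_a, kb_factory_b)  # portable, shareable knowledge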

In regard to the interactions within the system, they occur in the form of: 1) primitive actions, 2) sensing actions, 3) exogenous actions, and 4) fluents. First, primitive actions are used as internal functions in the CRM and externally with the DOM for executing the disassembly operations. Second, sensing actions are a special form of primitive action used for requesting information from the external world. In this case, the sensing actions are sent to other modules for obtaining information from different sources. Third, exogenous actions are actions sent from outside the CRM, in this case human assistance. However, the interaction process with human assistance has been simplified, so that the exogenous actions are treated as a fluent whose value is requested by a sensing action. Lastly, a fluent is a piece of information that can be a data structure, a logical value, or a numerical value. Examples of fluents from the VSM are shown in Section 5.1.3. In summary, the details of this interaction are illustrated in Figure 6.4 and also explained in Section 3.2.1.1.

According to the preconditions of the actions, it should be noted that no specific precondition is needed for most of the actions. Therefore, they are always executable in any condition. The exception is the primitive cutting operations, whose precondition is generally zF > Zmin, which prevents cutting too deep into the fixture plate. More complex preconditions will be indicated as part of the successor state axioms throughout this chapter.

Figure 6.4: Interaction with actions and fluents

In regard to the cognitive functions, the CRA is operated by four cognitive functions: 1) reasoning, 2) execution monitoring, 3) learning, and 4) revision. The interactions take place internally among these functions and the KB within the CRM. In this section, the principles of the four cognitive functions, the KB, and human assistance are explained at the overview level as in Figure 6.3. The details according to the case study are further explained in Section 6.3.

6.1.2.1 Reasoning

This function performs rule-based heuristic reasoning about the current disassembly state perceived via the VSM, incorporating the predefined rules and the existing knowledge in the KB. As a result, the CRA can react logically to the external world by scheduling primitive actions. In addition, the success of the operation is taken into account via execution monitoring.

6.1.2.2 Execution monitoring

This function mainly determines the accomplishment of a component's removal at the planning and operation levels according to the predefined rules in the KB. At the planning level, the information about the change of disassembly state is supplied by the VSM. At the operation level, the cutting process is monitored by MotionSupervision in the DOM (see Section 4.2.3). These outcomes are considered by the reasoning function in order to proceed to the subsequent operations. In addition, the change of disassembly state passively induces the learning and revision processes.

6.1.2.3 Learning

The learning function organises the model-specific information (specific knowledge for a particular model of LCD screen) obtained during the current disassembly process and stores it in the KB. The significant information contributing to the successful removal process is stored. Consequently, the system can utilise this knowledge in subsequent processes that encounter this now-known model. The knowledge can be obtained from two sources: the reasoning process and demonstration by human assistance.

6.1.2.4 Revision

The revision function is used to revise the existing knowledge in the KB to be more efficient as new samples of a previously seen model are found and disassembled. Human assistance is also incorporated in modifying the KB in order to change an original belief regarding the visual input from the VSM. Other invalid facts, such as too deep a cut resulting in a component unexpectedly falling, can also be modified. In summary, the facts in the KB are modified at the Golog programming level by the revision process.

6.1.2.5 Knowledge base

The KB contains knowledge of the DPP for both the general and the model-specific disassembly processes. General knowledge is used in the case of disassembling an unknown model, while the model-specific knowledge is applied for a known model. The knowledge consists of 1) the Disassembly Sequence Plan (DSP), 2) disassembly operation plans, 3) process parameters, and 4) rules and constraints. The KB is continuously expanded through learning and revision as disassembly is carried out.

6.1.2.6 Human assistance

Human assistance is used to help the CRA in unresolved complex conditions in regard to the disassembly operation and the physical content of the product samples. The assistance is given in the form of demonstration via the GUI. The necessary actions corresponding to the VSM and DOM are provided by the GUI (see Appendix D). Therefore, the operator can use them to revise beliefs or teach necessary actions resulting in the removal of the component. Finally, the CRA will learn from this demonstration.

6.1.3 Action programming language IndiGolog

The formal framework underlying this approach to cognitive robotics is based on the Situation Calculus (McCarthy 1963) and its implementation in the action programming language Golog [22]. A major advantage of Golog over other languages is the ability to combine domain specific heuristics with search based planning. This facilitates the programming of complex behaviours of the CRA. More detail is given in the literature review Section 2.5.3.

In this research, IndiGolog, one of the extended versions of Golog, is used. The main feature of IndiGolog is its capability of performing on-line execution with sensing, which is used for dealing with incomplete information about the dynamic world. From the programming perspective, the agent consists of two parts: 1) domain specification and 2) behaviour specification. Therefore, the CRA in association with the disassembly process is developed based on this principle. An overview of this programming structure is explained as follows.

6.1.3.1 Domain specification

The domain specification describes the state of the world (situation, s) internally and externally via fluents and the related types of axiom in the following way. The fluents are subject to change dynamically as the primitive actions are performed. Primitive actions are executable under specified preconditions. The changes in the world caused by an executed action under the current conditions are described by successor state axioms. Sensing actions are one form of primitive action which assigns the sensed value to the corresponding fluent. In summary, the domain specification is formulated by these axioms, which are listed in Table 6.2. f(x, do(a, s)) is the successor state axiom for fluent f: the free variables x are subject to change when the primitive action a has been executed in situation s. Poss(a, s) means that the primitive action a is executable in situation s. For the notation used in this chapter, it should be noted that variables begin with a lowercase letter and constants begin with an uppercase letter.

   Axiom type                       Notation           IndiGolog syntax
   Fluent                           f                  prim_fluent(fluentName)
   Primitive action                 a                  prim_action(actionName)
   Sensing action                   a_senses           senses(actionName, sensedValue)
   Successor state axiom            f(x, do(a, s))     causes_val(actionName, fluentName, newValue, conditions)
   Primitive action precondition    Poss(a, s)         poss(actionName, preconditions)
   Initial situation of the world   S0                 initially(fluentName, initialValue)

Table 6.2: Commands in domain specification

6.1.3.2 Behaviour specification

The behaviour is specified as procedures for performing complex actions in order to achieve the desired goal. The control structure uses procedural control constructs together with nondeterministic constructs for planning and search. All of these components determine the behaviour of the agent and provide mechanisms to prune the search space of the nondeterministic choices of actions through domain-specific knowledge. The behaviour specification is summarised in Table 6.3, where δ is a complex action expression; x and v are free variables; and φ is a condition. The operations associated with δ in this table are written in the short form Do(δ, s, s′), which means that situation s′ can be reached from situation s when the action sequence specified in δ has been executed. However, it should be noted that nondeterministic operations are not used in this research, because they lead to critically complex issues of physical backtracking due to the destructive disassembly approach.


   Meaning                      Operation                        IndiGolog syntax
   Procedure call               P(v)                             procName(...)
   Conditional                  if φ then δ1 else δ2 endIf       if(condition, body1, body2)
   Loop                         while φ do δ endWhile            while(condition, body)
   Procedure                    proc P(v) δ endProc              proc(procName, body)
   Empty program                nil                              no_op
   Test                         φ?                               ?(condition)
   Sequential composition       δ1; δ2                           [body1, body2]
   Sub-program (ND)             δ1 | δ2                          ndet(body1, body2)
   Choice of argument (ND)      πx. δ(x)                         pi(variable, body)
   Iteration (ND)               δ*                               star(body)

NOTE: (ND) denotes nondeterministic operations

Table 6.3: Commands in behaviour specification

In summary, in order to formulate the CRA using this programming approach, the disassembly process needs to be considered in terms of the domain and behaviour specifications. The disassembly domain is taken into account to formulate these specifications of the CRA. The process of formulating the program structure from the level of product analysis is illustrated in the flowchart in Figure 6.5.


Figure 6.5: Analysis process for formulating the code in the programming

6.1.4 Summary of LCD screen

The structure of LCD screens is described in detail in Section 4.1. This section aims to summarise the significant points and define the abbreviations for the components that will be used throughout this chapter.

According to Figure 6.6, the product structure can be classified into two types according to the assembly directions: Type-I and Type-II. LCD screens typically consist of six types of main components and four types of connective components. The main components are: back cover (c1), PCB cover (c2), PCBs (c3), carrier (c4), LCD module (c5), and front cover (c6). The connectors are screws (cn1), snap-fits (cn2), electric cables (cn3), and plastic rivets (cn4).

The classification into Type-I and Type-II is based on the configuration of components c2, c3, and c4. In addition, Type-I can be further categorised into two sub-classes according to the material and appearance of the PCB cover. Type-Ia can be visually distinguished from the carrier by the vision system, while Type-Ib cannot (see detail in Sections 5.3.3 and 5.3.5). The CRA, incorporating physical disassembly operations, is used to differentiate these two sub-classes.

Figure 6.6: Product structure of LCD screens and the main components ((a) Type-I; (b) Type-II)

6.2 Disassembly domain for cognitive robotics

The aforementioned uncertainties in Table 6.1 are considered variations of the conditions that the CRA encounters while disassembling a product. These variations can be represented as possible choice points in the search space of action sequences. Transitions in the disassembly state occur by executing the necessary sequence of actions, in a proper way and with proper parameters, to remove a particular component. These choice points are organised in two levels: 1) state of disassembly and 2) operational level.


Figure 6.7: Choice points in the disassembly domain ((a) disassembly states; (b) operational level)

The entire disassembly process is represented as a sequence of disassembly states, which is equivalent to a DSP (see Figure 6.7a). The variation of the state of disassembly deals with the main structure of the product, in which the subsequent operation in a state is determined by the component that has been detected. Regarding the operational level, the choice points within a state of disassembly are formulated as a hierarchy of operations and parameters corresponding to the specific type of main component detected (see Figure 6.7b and Figure 4.10). This section focuses on the definitions of the fluents and primitive actions that relate solely to disassembly. Details of the search space are explained as follows.

6.2.1.1 Product structure and types of component

The main product structure can be considered an arrangement of the main and the connective components. The product structure varies between product models, as described in the structure analysis in Section 4.1. The state of disassembly is defined as the type of main component that is detected and will be removed at a particular time. Therefore, the main product structure can be obtained by tracking the state of disassembly. In this case, the components at one state can be observed by the VSM using a component detector during the disassembly process. Therefore, the specific product structure need not be known a priori. For example, the structure in the minimal form of a liaison diagram in Figure 6.8b can be obtained from the detection results in Figure 6.8a. This liaison diagram shows the connections between two pairs of main components: PCB-1 and the carrier are connected by 6 screws, and PCB-2 and the carrier by 5 screws.
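The reconstruction of the structure from per-state detections can be sketched as follows (a minimal Python illustration using the PCB/carrier example above; the names and record layout are illustrative):

    # detections made in one disassembly state (Figure 6.8a): a carrier
    # carrying two PCBs, each fixed by a number of screws
    state_detections = [("carrier", "pcb1", "screw", 6),
                        ("carrier", "pcb2", "screw", 5)]

    # build the minimal liaison diagram (Figure 6.8b) as an edge dictionary
    liaison = {}
    for parent, child, connector, count in state_detections:
        liaison[(parent, child)] = {"connector": connector, "count": count}

    # accumulating such records state by state recovers the product structure
    # without it being known a priori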


Figure 6.8: Representation of a product structure in a disassembly state ((a) detected components in a disassembly state; (b) minimal form of liaison diagram)

In order to reconstruct the complete structure, the abstract information of the main component and the connective components obtained in a state of disassembly consists of four qualifications:

- the type of the main component;
- the quantity of the main components;
- the type of the connective component; and
- the quantity of the connective components.

According to the logic-based syntax used, this information is obtained as a fluent compLocation. The abstract information can be represented in the general form expressed in Equation (6.1), where feature denotes the type of geometrical feature (i.e. point and rectangle) and location denotes the location in operational space (x, y, z). A point feature is represented by loc and a rectangular feature is represented by box. For clarification, a single instance (n = 1) is represented as in Equations (6.2) and (6.4), while multiple instances (n > 1) are represented as in Equations (6.3) and (6.5). The existence of the component can be implied from the fluent as in Expression (6.6).

    compLocation = feature1(location)                                 : n = 1
                   [feature1(location), ..., featuren(location)]      : n > 1     (6.1)

    compLocation = box(x1, y1, x2, y2, z1, z2)                                    (6.2)

    compLocation = [box(x11, y11, x21, y21, z11, z21), ...,
                    box(x1i, y1i, x2i, y2i, z1i, z2i)]                            (6.3)

    compLocation = loc(x, y, z)                                                   (6.4)

    compLocation = [loc(x1, y1, z1), ..., loc(xi, yi, zi)]                        (6.5)

    detected(componentType) ≡ compLocation ≠ [ ] ∧ compLocation ≠ 'no'            (6.6)

6.2.1.2 Disassembly operation plans

The disassembly operation plan is a procedure containing a sequence of primitive actions used to disestablish the connections required to remove the main component. Since the main components are connected to each other with different techniques regarding types and prospective location of the connective component, multiple operation plans are developed specifically for a particular main component. They are designed to have different levels of impact on the component with respect to the level of destruction and success rate of the removal process. Hence, the CRA will have alternatives to execute in different conditions. The available operation plans are summarised in Table 4.5.

The key feature of the operation plan procedure op(compType_i, plan_j, opMode) is a combination of the four parameterised primitive cutting operations. They are defined as a group of primitive actions a_primCut in Equations (6.7) - (6.10), where m_cut is the cutting method. These process parameters, which contribute to the success of the disassembly process, are explained in the next section.

    aprimCut = cutPoint(x, y, z, mcut)                        (6.7)

    aprimCut = cutLine(x1, y1, x2, y2, z1, z2, mcut)          (6.8)

    aprimCut = cutCorner(x1, y1, x2, y2, z1, z2, mcut)        (6.9)

    aprimCut = cutContour(x1, y1, x2, y2, z1, z2, mcut)       (6.10)

The details of the disassembly operation plans are explained in Section 4.3.2, and a summary is given in Table 4.4. In addition, the procedure for the complete operation incorporates execution monitoring and other data utility functions. The detail is explained in the behaviour control Section 6.3.1.3.

6.2.1.3 Process parameters

The process parameters are adjustable for two reasons. First, the adjustment can compensate for position errors caused by inaccurate localisation. Second, the adjustment can find significant positions that cannot be detected directly, e.g. the depth required for the removal of a thick cover where the thickness cannot be detected. In association with the operation level, the adjustable process parameters focus on the tool path generation. The parameters are:

- feed speed of the cutting tool (sfeed);
- orientation of the cutting tool (θtool);
- cutting level (z) due to the depth of cut; and
- cutting line and contour offset in the horizontal plane (lx, ly, or lxy).

Firstly, the feed speed and orientation are combined in one parameter termed the cutting method (mcut), as supplied by MotionSupervision in the DOM. A successful cut (MS) means that the complete cutting path has been cut without an intervening collision. The definition is given in Equations (6.11) - (6.12). In general, the preferred initial cutting method for each type of main component is chosen based on 1) potential geometrical obstacles and 2) material type. For example, when cutting the external border of a steel carrier, sfeed = 'Low' due to the hard material and θtool = 'Out' to reorient the grinder to avoid collision.

    mcut = MS ∈ {'1', '2', ..., '8'}   : successful cut
           MF ∈ {'0'}                  : failed cut                               (6.11)

    MS = {sfeed × θtool : sfeed ∈ {Low, Hi} ∧ (θtool ∈ {N, S, W, E} ∨ θtool ∈ {In, Out})}   (6.12)
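The eight successful cutting-method codes in Equation (6.11) can be illustrated as combinations of feed speed and tool orientation (a minimal Python sketch; the mapping of codes to combinations is an assumed ordering, since the actual assignment used by the DOM is not specified here):

    from itertools import product

    SPEEDS = ("Low", "Hi")
    ORIENTATIONS = ("N", "S", "W", "E")   # or ("In", "Out") for reoriented cuts

    # assign the codes '1'..'8' to speed/orientation combinations (assumed order)
    M_S = {str(i + 1): combo
           for i, combo in enumerate(product(SPEEDS, ORIENTATIONS))}
    M_F = "0"                             # failed cut

    def is_successful(m_cut):
        return m_cut != M_F

    print(M_S["1"])   # ('Low', 'N') under the assumed ordering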

Secondly, the cutting level is adjustable in order to find the critical cutting level (zc) for the detachment of the component. The adjustment occurs during the trial process. Lastly, the contour or line offsets are adjusted to compensate for visual inaccuracy, so the possibility of missing the cutting target is reduced. However, due to the longer execution time needed, this is implemented only in cases where the execution result is critical, for example in the operation for removing a PCB cover, where the execution result is used to classify the structure type.

The variation of these parameters is considered as choice points that are generated during the process within ranges limited by predefined constraints, e.g. the deepest cutting level. The constraints are defined by specifying initial values and are implemented via the executable preconditions. Procedures can also be used for more complex problems.


6.3 Behaviour control in disassembly of LCD screens

This section explains the behaviour control driven by the four cognitive functions in regard to the disassembly of LCD screens. These four functions are divided into two behavioural groups: 1) basic behaviour and 2) advanced behaviour. Reasoning and execution monitoring are classified as basic behaviour, which is performed as a core structure to schedule the actions during the disassembly process. The basic behaviour control was presented by the author in (Vongbunyong et al. 2013). Learning and revision are classified as advanced behaviours, which are performed only in particular situations in conjunction with the KB. The principle with regard to LCD screens is explained in this section. Some parts of the domain specification that relate to the design of the behaviour are also explained here.

6.3.1 Basic behaviour

The basic behaviour is used to prune the search space of the disassembly process domain described in Section 6.2. Rule-based reasoning is used to schedule a sequence of primitive actions according to two sensing results: 1) the detection of the component and 2) the state change. The component location is obtained by executing the sensing action detectCompType in Expression (6.13), where a unique name is substituted for each particular component type, e.g. detectBackcover, detectPcb, etc. The detection of state change is considered execution monitoring. Once a new state has been reached, the primitive action flagStateChange will be executed (see Expression (6.14)), so that the vision system can store the benchmark properties of the original condition of the state. Afterwards, the sensing action checkStateChange will be executed to detect the change (Expression (6.15)). The detection outcome, in the form of a truth value, is stored in the fluent stateChanged.

    senses(detectCompType, compLocation)      (6.13)

    a = flagStateChange                       (6.14)

    senses(checkStateChange, stateChanged)    (6.15)

The structure of the IndiGolog program is designed based on the disassembly domain in Figure 6.7. Therefore, the procedures are developed in three main procedural levels: 1) main structure, 2) component treatment, and 3) operation plan with parameter changing. The scope of the procedures according to the disassembly domain is illustrated in Figure 6.9.

Figure 6.9: Behaviour control in regard to the disassembly domain

Explanations are given for two situations: 1) unknown model and 2) known model. First, the process for handling an unknown model is a trial-and-error process aimed at extracting knowledge and learning from previously unseen models. Learning takes place throughout this process. Second, the process for a known model is an implementation of the knowledge that was learned previously. Learning also occurs in the form of revision.

6.3.1.1 Main structure

With respect to the main procedure, the product model is detected by the vision system at the beginning of the disassembly. It is a "known model" if the currently detected model matches one of the existing models in the KB. Subsequently, the latest version of the corresponding DSP will be sought in the KB. Otherwise, an "unknown model" will be initiated in the KB for the further learning process. After this model detection process, the process proceeds along one of two main routes, as in the main procedure of the CRA in Procedure (6.16). Both situations, known and unknown models, are implemented at all procedural levels. However, only the basic behaviour part is the focus of this section. It should be noted that (*) refers to the learning and revision processes, which are explained in the section on advanced behaviour.


proc main
  detectModel;                                          // detect model
  recallCurrentVerDsp(model, getVerDspNew);             // *manage version of the detected model
  if ¬∃ls1, type. dspInKb([model, verDsp], ls1, type)   // *find model and latest DSP in KB
  then
    detectC1; treatC1;                                  // Unknown model - trial process
    classifyStructureType; ......; treatC5
  else                                                  // Known model - run from KB
    recallDsp(model);                                   // recall DSP from KB
    while dsp ≠ [] do
      executeCurrentComponentTreatment                  // repeat executing all treatments in DSP
    endWhile
  endIf;
  < Learn DSP >                                         // *store the knowledge
  endDisassembly                                        // post process
endProc                                                 (6.16)

Unknown model - the CRA makes a decision to enter a particular state according to the current conditions of the process. The decision rules are typically based on the visual sensing of the main components potentially located at a particular state. Figure 6.10 presents the state diagram of the disassembly process of an LCD screen.

Figure 6.10: Disassembly state diagram


In Figure 6.10, a positive detection result of a component is denoted by (c) and a negative result by (¬c). Note that negative results are omitted in general cases. The sensing action detectCompType will be executed, and a decision will then be made according to the fluent compLocation.

The decision rules are relatively simple in most cases, since a single type of component will be detected in one state. The implication at the structure level is based on the existence of the component (see Expression (6.6)), regardless of the location. The rule is defined as Expression (6.17), where treatComponentType is a procedure of complex actions for handling a particular component type, explained in the next section. This rule is used in States 1 and 3-5.

    if detected(componentType) then treatComponentType endIf     (6.17)

The decision rule is complicated in the case of the PCB cover, since the vision system cannot distinguish between Type-Ib and Type-II, as noted in the earlier discussion. Therefore, the procedure for treating the PCB cover (treatC2, Procedure (6.18)) is used to classify the main structure of the LCD screen. This classification corresponds to the strategy shown in Figure 4.2.4, and the full detail is explained in Section 4.3.2.2.

proc classifyStructureType
  detectC2; detectC4;
  if detected(c2) ∧ ¬detected(c4) then
    structure = 'TypeIa'                      // structure = Type-Ia
  elseIf detected(c2) ∧ detected(c4) then
    op(c2, 1, 'trial');
    checkStateChange(componentLocation_c2);
    if stateChanged then
      structure = 'TypeIb'                    // structure = Type-Ib
    else
      structure = 'TypeII'                    // structure = Type-II
    endIf
  endIf
endProc                                       (6.18)

In the case that the component has not been detected, the CRA will proceed to the next state automatically. However, an extra rule is specified in State-4 due to the facts that 1) every LCD screen must have a carrier and 2) there is a relatively low detection rate of the carrier in actual destructive disassembly. The procedure treatC4 is still executed, but the cutting location will be recalled from the previous detection in State-2 or estimated from the location of the back cover.

Known model – the component will be treated according to the order in the DSP. In Procedure (6.16), a DSP corresponding to the detected model is recalled from the KB. Then, the component treatment process in Procedure (6.19) will be executed according to the fluent currentComp. This fluent value represents two parameters: 1) the type of component and 2) the index of the instance (i); for instance, c3(2) denotes the second instance of the PCB. This is used to accurately identify the same component from the previous disassembly process. However, the index is not taken into account in this procedure but will be used at the detail level. After the current component has been treated, it will be removed from the DSP list by the action feedDspComponent (see Expression (6.20)) and the subsequent component will be treated.

proc executeCurrentComponentTreatment
  ?(dsp = [currentComp | remainingList]);       // use the first component of DSP
  if currentComp = ∃i. c1(i) then treatC1       // treatment according to the component type
  elseIf currentComp = ∃i. c2(i) then treatC2
  ...
  elseIf currentComp = ∃i. c5(i) then treatC5
  endIf;
  feedDspComponent
endProc                                         (6.19)

    dsp(L, do(a, s)) ≡ [a = feedDspComponent ∧ L′ = L ∪ {currentComp} ∧ dsp(L′, s)]
                       ∨ [dsp(L, s) ∧ (currentComp ∉ L ∨ a ≠ feedDspComponent)]     (6.20)
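The consumption of the DSP list by feedDspComponent (Equation (6.20)) behaves like popping the head of a queue, as in this minimal Python sketch (c3(2) is written as a (type, instance) tuple; the DSP contents are hypothetical):

    from collections import deque

    # DSP recalled from the KB; each entry is (component type, instance index),
    # e.g. ("c3", 2) stands for c3(2), the second instance of the PCB
    dsp = deque([("c1", 1), ("c2", 1), ("c3", 1), ("c3", 2), ("c5", 1)])

    while dsp:
        current_comp = dsp[0]        # fluent currentComp: head of the DSP list
        comp_type, instance = current_comp
        # ... execute the treatment procedure for comp_type here ...
        dsp.popleft()                # feedDspComponent: drop the treated component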

6.3.1.2 Component treatment

The component treatment procedure executes the available disassembly operation plans according to a specific type of main component. Execution monitoring of the state change is taken into account in this scope but implemented at the operation plan level. The information about the known or unknown model is passed from the main procedure. The general main structure of the procedure for treating multiple-instance components, e.g. three pieces of PCB found in one state, is shown in Procedure (6.21). The detail for each route is explained as follows.

proc treatComponentType
  if ¬∃ls1, type. dspInKb([model, verDsp], ls1, type) then        // unknown model
    while compLocation ≠ [] do
      rectRoiIs(compLocation, Lxy,1);                             // specify ROI with offset L
      flagStateChangeRoi;                                         // flag state change at ROI
      planExecLoopUnknown(op(compType, 1, 'trial'), ...,
                          op(compType, K, 'trial'));              // execute operation plans
      < Learn the general and add-on plans >
      moveToNextCompInstance                                      // treat the next instance
    endWhile
  else                                                            // known model
    ?(∃ls2, ls3. planInKb([model, verDsp], currentComp, box, ls2, ls3));   // get box from KB
    recallParamPlans(model, currentComp);                         // get params of general plans from KB
    rectRoiIs(box, Lxy,2);                                        // specify ROI with offset
    flagStateChangeRoi;                                           // flag state change at ROI
    executeAddOnPlan(model, currentComp);                         // execute add-on plans
    planExecLoopKnown(op(compType, 1, 'kb'), ...,
                      op(compType, K, 'kb'));                     // execute operation plans
    < Learn and revise the general and add-on plans >
  endIf
endProc                                                           (6.21)

Unknown model - after an underlying main component has been detected, the corresponding treatment process will be executed according to the sensed location, which is the value of the fluent compLocation. In Procedure (6.21), the general form of the procedure, which is able to handle multiple instances of a component, is given (see Equation (6.3) for compLocation). The first instance in the compLocation list will be treated and disposed of when completed. The treatment is repeated using a while-loop incorporating the action moveToNextCompInstance (Equation (6.22)) until the list is empty: compLocation = []. The corresponding rectangle of the box (fluent rectRoi) is used to determine the region of interest (ROI) by the action rectRoiIs. Due to the camera's perspective, the ROI is located at the top level of the component, with the offset lxy specified, to ensure that the entire component is covered (see Equation (6.23)). The state change is considered on this ROI, which is initially flagged by the procedure flagStateChangeRoi.


    compLocation(L, do(a, s)) ≡ ∃x1, x2, y1, y2, z1, z2, L′.
        [a = moveToNextCompInstance ∧ compLocation(L′, s) ∧ L′ = L ∪ {box(x1, y1, x2, y2, z1, z2)}]
        ∨ [compLocation(L, s) ∧ (box(x1, y1, x2, y2, z1, z2) ∉ L ∨ a ≠ moveToNextCompInstance)]   (6.22)

    rectRoi(Rect, do(a, s)) ≡ ∃x1, x2, y1, y2, z1, z2, lxy.
        a = rectRoiIs(compLocation, lxy)
        ∧ Rect = rect(x1 − lxy, y1 − lxy, x2 + lxy, y2 + lxy, z1)
        ∧ compLocation = box(x1, y1, x2, y2, z1, z2)                                              (6.23)

Afterwards, the operation plans are executed by the procedure planExecLoopUnknown, where the variables are the specific names of the generic operation plans, e.g. optr(c3,1) = op(c3, 1, 'trial') denotes the procedure for executing plan-1 of the PCBs in trial mode. This procedure takes four arguments due to the maximum number of available operation plans (see Procedure (6.24)). According to the heuristic rule regarding the impact of the operation on the component, the execution order is designed according to the rate of impact: lowest to highest. The outcome of detecting the state change is taken into account after every execution. A subsequent operation plan will be executed if the state has not been changed by the former plan execution. In the case that all available plans fail to remove the component, the user will be called to give assistance via the procedure callUser (see Procedure (6.40)). However, the detection of state change is performed implicitly within each operation procedure, which is explained in the next section.

proc planExecLoopUnknown(optr(ci,1), optr(ci,2), optr(ci,3), optr(ci,4))
  if ¬stateChanged ∧ optr(ci,1) ≠ φ then optr(ci,1);          // execute opPlan1
    if ¬stateChanged ∧ optr(ci,2) ≠ φ then optr(ci,2);        // execute opPlan2
      if ¬stateChanged ∧ optr(ci,3) ≠ φ then optr(ci,3);      // execute opPlan3
        if ¬stateChanged ∧ optr(ci,4) ≠ φ then optr(ci,4);    // execute opPlan4
          if ¬stateChanged then callUser endIf                // call user for assistance
        endIf
      endIf
    endIf
  endIf
endProc                                                       (6.24)

Known model - the process for treating the known model is in the second part of Procedure (6.21). It is similar to the trial process, except that the location (box) of the component is obtained by matching with the KB in regard to the current component instance. In addition, the procedure planExecLoopKnown (see Procedure (6.45)), which executes the plans in the reverse order of the unknown case, is implemented (see detail in Section 6.3.2.4). The order is based on the specificity and the success rate of the operation plans. Consequently, execution starts from the add-on plans, which are more specific and usually contribute strongly to successful removal. Moreover, a retraction ability is added to this execution procedure for revising the existing KB. This extension is explained in the revision Section 6.3.2.4. The add-on plan is a sequence of primitive cutting operations with parameters represented by the destination cut. It can be considered a type of operation plan, so its explanation is given in the next section.

6.3.1.3 Operation plan and process parameter changing

An individual operation plan is executed by this procedure with regard to the parameter changing. A maximum of four choices of individual plans is available for selection by the component treatment procedure, depending on the component type. The parameterised primitive cutting operations as in Equations (6.7) - (6.10) are the key features. The cutting operation incorporates adjustable process parameters, i.e. the cutting method mcut and the cutting level z. The process parameters are changed until the state has been changed or the constraints can no longer be satisfied. In Procedure (6.25), the known and unknown models are indicated by the variable opMode, which is passed from the higher level. The two different routes are explained as follows.

proc op(compType, plan, opMode)
  if opMode = 'trial' then                                    // Unknown model
    rectCutLocationIs(compLocation, Cz);                      // get cutting location (1)
    offsetContourXY(Lxy); offsetContourZ(Lz);                 // set cutting offsets
    ?(∃x1, y1, x2, y2, z2. compLocation = box(x1, y1, x2, y2, zref, z2));   // get zref at top surface
    assignFluentValue(mcut, MS,1);                            // set initial cutting method
    assignFluentValue(latestPossM, '0');                      // set initial latest poss cut
    while ¬stateChanged ∧ mcut ≠ '0' do                       // repeat cutting
      deepenCutPrimitive(zref, LmaxDepth, Zinc,1, opMode)
    endWhile
  elseIf opMode = 'kb' then                                   // Known model
    ?(compLocationKb = rect(x1, y1, x2, y2, z1, z2));         // get x1, y1, x2, y2, z1 from KB
    ?(zRefKb = zRef); ?(mKb = mcut);                          // get zRef and m from KB
    rectCutLocationIs(rect(x1, y1, x2, y2, zRef, z2));        // set the cut location (1)
    while ¬stateChanged ∧ mcut ≠ '0' do                       // repeat cutting
      deepenCutPrimitive(zRef, LmaxDepth, Zinc,2, opMode)
    endWhile;
    finalCut;                                                 // final fine cut
    flipTable; checkStateChange                               // final table flip & state change
  endIf
endProc
NOTE: (1) the rect cutting location can also be a line or point according to the cutPrimitive.   (6.25)

Unknown model - the initial cutting location is specified according to compLocation, or the existing rect associated with the offsets, by the action rectCutLocationIs (see Equations (6.26) and (6.27)). The horizontal offset (lxy) and the vertical offset (lz) are applied to relocate the cutting location rectCutLocation to be inside the component's boundary in order to compensate for possible visual inaccuracy. The axioms for the offsets are in Equations (6.28) and (6.29), respectively. It should be noted that rectCutLocation in the following equations can be substituted by other primitive geometry, i.e. lineCutLocation or pointCutLocation, according to the deepenCutPrimitive procedure.


    rectCutLocation(Rect, do(a, s)) ≡ ∃x1, x2, y1, y2, z1, z2, zcond, cz.
        a = rectCutLocationIs(compLocation, cz) ∧ compLocation = box(x1, y1, x2, y2, z1, z2)
        ∧ Rect = rect(x1, y1, x2, y2, zcond)
        ∧ [(zcond = z1 ∧ cz = 'inside') ∨ (zcond = z2 ∧ cz = 'outside')]                       (6.26)

    rectCutLocation(Rect, do(a, s)) ≡ ∃x1, x2, y1, y2, z1.
        a = rectCutLocationIs(Rect) ∧ Rect = rect(x1, y1, x2, y2, z1)                          (6.27)

    rectCutLocation(Rect, do(a, s)) ≡ ∃x1, x2, y1, y2, z, lxy, Rect′.
        a = offsetContourXY(lxy) ∧ rectCutLocation(Rect′, s)
        ∧ Rect′ = rect(x1, y1, x2, y2, z) ∧ Rect = rect(x1 + lxy, y1 + lxy, x2 − lxy, y2 − lxy, z)   (6.28)

    rectCutLocation(Rect, do(a, s)) ≡ ∃x1, x2, y1, y2, z, lz, Rect′.
        a = offsetContourZ(lz) ∧ rectCutLocation(Rect′, s)
        ∧ Rect′ = rect(x1, y1, x2, y2, z) ∧ Rect = rect(x1, y1, x2, y2, z − lz) ∧ z − lz ≥ Zmin      (6.29)

The initial preferred cutting method is assigned according to the component type, as described in Section 6.2.1.3. The latest successful cutting method (latestPossM) will be recorded while repeatedly cutting the same cutting path at a deeper level in subsequent cutting cycles. The cutting method can be altered by the robot during a cutting cycle if the cutting operation has been physically obstructed. However, only the final method will be acknowledged by the CRA; latestPossM accepts only the successful one.

The repetitive cut is performed by the conditional while-loop of the procedure deepenCutPrimitive (see Procedure (6.30)), which deepens the cutting destination rectCutLocation with respect to two control parameters: 1) the maximum depth of cut from the top surface (lmaxDepth) and 2) the incremental depth to be cut at each cycle (zinc). A fine incremental depth of 1-2 mm is applied in the trial mode for two reasons: 1) to find the minimal cutting depth required for the component to detach and 2) to minimise the wear rate of the abrasive cut-off disc. The cutting level z at each cycle is computed relative to the top surface (zref). The deepening is performed by executing the action offsetContourZ with the supplied parameter min(z − (zref − lmaxDepth), zinc), which guarantees that the deepest available level will be cut under the maximum depth constraint.


proc deepenCutPrimitive(zref, lmaxDepth, zinc, opMode)
  ?(∃x1, x2, y1, y2. rectCutLocation = rect(x1, y1, x2, y2, z));   // get the current z (1)
  if z > zref − lmaxDepth ∧ z − zinc ≥ Zmin then
    offsetContourZ(min(z − (zref − lmaxDepth), zinc));             // conditional vertical offset
    cutPrimitive;                                                  // cut at the current location
    if opMode = 'trial' then                                       // used in the trial mode
      flipTable; checkStateChange; checkCuttingMethod
    elseIf opMode = 'kb' then                                      // used in the kb mode
      checkStateChange; checkCuttingMethod
    elseIf opMode = 'addOnCut' then                                // used for executing an add-on plan
      nil
    endIf;
    if mcut ≠ '0' then
      assignFluentValue(latestPossM, mcut)                         // store the successful m
    endIf
  endIf
endProc
NOTE: (1) the rect cutting location can also be a line or point according to the cutPrimitive.   (6.30)

After cutting, the Flipping Table is activated to remove the detached parts, which results in the state change (see Chapter 4). The detection of the state change and the checking of the cutting method are performed at this level. The outcome is used at the upper level, in the conditional loop of Procedure (6.25). The cutting path is deepened in the next cycle if the component has not been detached and the previous cutting method is valid (i.e. the condition ¬stateChanged ∧ mcut ≠ '0' holds).

Known model – in Procedure (6.25), the CRA cuts to the cutting destination learned from the previous samples. However, the current cutting is adapted to be more efficient in terms of time consumption. In the KB, the parameters of the cutting destination, final cutting method, and top surface level are compLocationKb, mKb, and zRefKb, respectively. Therefore, the deepening cut operation in Procedure (6.30) can reproduce this physical cut according to these characteristics. The operation time is reduced by three modifications: 1) using a larger increment, 2) skipping the flipping table action for intermediate cycles, and 3) removing redundant operation plans from the KB. An incremental depth of 2-5 mm can be used, depending on the material properties. The flipping table action takes 8.5 seconds on average and is normally executed 2-5 times per operation plan, so executing only one flip at the last cycle of the current operation plan removes the excess time; for example, reducing four intermediate flips to a single final flip saves roughly 3 × 8.5 ≈ 25 seconds per operation plan. In addition to the deepening cuts, the procedure finalCut is executed at the end of Procedure (6.25) to repeat the final cutting destination finely and quickly in order to remove any minor leftover connection, e.g. from melted plastic, that may remain after the rough, deep cutting. A comparison between the cutting operations for the unknown and the known model is illustrated in Figure 6.11.

Figure 6.11: Disassembly state diagram for (a) the unknown model and (b) the known model. Transition-stage actions: S = checkStateChange, F = flipTable, M = checkCuttingMethod; the known-model flow ends with finalCut at zdst.

With respect to the execution of the add-on plan mentioned in Section 6.3.1.2, it is stored as a sequence of cutting operations with the final parameters, e.g. cutContour(X1, Y1, X2, Y2, Zdst, Ms,dst). The cutting operations in the customPlanSequence are executed by Procedure (6.32), where the state change is checked once all operations in the sequence have been executed. A single cutting operation is performed by Procedure (6.33). To prevent any collision and excessive load on the cutting tool, the cutting operation starts from the top surface of the component, whose level is measured by the sensing action measureZF (see Expression (6.31)), and incrementally deepens to the final location. In Procedure (6.33), the add-on cutting action is the first member of the list, which is executed according to the existing parameters in the KB. For clarity, the procedure presents a rectangle as the primitive geometry, which can be changed to other types if needed. The procedure deepenCutPrimitive for cutting a primitive geometry is used to incrementally cut to the destination level zdst. Finally, the procedure removes the completed operation from the remaining list of the sequence, so that subsequent actions in the remaining sequence will be executed.

senses(measureZF(x1, y1, x2, y2), currentZF).    (6.31)


proc executeAddOnPlan(model, currentComp)
  recallOriginalLocation(model, currentComp);  // get the top surface from KB
  recallCustomPlanSequence(model, currentComp);  // get the custom plan from KB
  while customPlanSequence ≠ [] do  // execute the custom plan in the list
    executeCurrentCustomPlanAction
  endWhile;
  flipTable; checkStateChange  // flip table and check state change
endProc    (6.32)

proc executeCurrentCustomPlanAction
  ?(customPlanSequence = [currentAction | remainingList]);
  if currentAction = cutPrimitive(t) then  // cut primitive geometry
    ?(currentAction = cutPrimitive(x1, y1, x2, y2, zdst, mKb));  // set cutting location
    measureZF(x1, y1, x2, y2); ?(currentZF = zref);  // set zref from measureZF
    rectCutLocationIs(rect(x1, y1, x2, y2, zref));  // set cutting location (1)
    assignFluentValue(mcut, mKb);  // set initial cutting method
    while ¬stateChanged do  // repeat cutting to zdst
      deepenCutPrimitive(zref, zref − zdst, zinc, 'addOnCut')
    endWhile
  elseIf .....  // in case of other types of cut, i.e. cutLine, cutCorner, cutPoint
  endIf;
  feedCustomPlanAction  // remove this operation from the remainingList
endProc
NOTE: (1) rect cutting location can also be line or point according to the cutPrimitive.    (6.33)

In summary, the basic behaviour is used to control the disassembly process by scheduling the actions. Reasoning and execution monitoring are seamlessly implemented in the IndiGolog procedures, which are structured according to the disassembly domain. The process for handling unknown models operates on the general rules and operation procedures; the product structure and an overview of the process regarding the DSP are expected to be generated automatically the first time an unknown model is disassembled. Meanwhile, the process for a known model typically operates on the existing knowledge in the KB and is expected to be more efficient by incorporating the advanced behaviour, which is explained in the following section.

6.3.2 Advanced behaviour control

The advanced behaviour involves the learning and revision processes of the CRA. This behaviour aims to store the significant model-specific knowledge extracted in the current disassembly process. The knowledge is used to reproduce the same disassembly outcome in a more efficient way: it is initially stored in the KB and is continuously used and changed in the subsequent processes on the same model. In this way, learning and revision act as the process that changes the KB outside the scope of the IndiGolog program itself. Moreover, this process incorporates human assistance in order to resolve complex situations, and the CRA learns from this activity.

The advanced behaviours are expressed at every level of plan and operation execution during the disassembly process. The knowledge is initially obtained in the process for the unknown model. Later, in the process for the known model, this existing knowledge is utilised, and the CRA continuously obtains more knowledge to revise the existing KB and increase the process performance. It should be noted that the activities regarding learning and revision are marked in the basic behaviours' procedures. In this section, the knowledge base is explained first, followed by the two cognitive functions, learning and revision.

6.3.2.1 Knowledge base

To achieve an efficient disassembly, effective cutting locations are expected to be accurately determined from the previous disassembly process. The CRA will perform the operation directly at these locations, so that the damage and the time consumption due to further trial-and-error are minimised. To determine these specific locations accurately, learning and revision are developed for model-specific knowledge rather than generalised rules. Model-specific knowledge is used because the exceptional features and uncertainties vary from one model to another; the corresponding locations are arbitrary, depending on the design of each model. In addition, from the examination of a number of models in Chapter 4, there is no explicit relation between these required locations and any other detectable features. For example, there is no relation between the required cutting destination of a hidden snap-fit underneath the middle area of a back cover and the location of the back cover's border. Therefore, generalised rules cannot be formulated in this case. Furthermore, the model-specific knowledge is obtained from successful disassembly cases; therefore, a high success rate can be achieved, which benefits further efficiency improvement.

In the KB file, the knowledge is stored in the form of structured Prolog facts corresponding to a model name (model) and a revised version of the DSP (verDsp). The Prolog interpreter therefore uses its inference engine to find the corresponding facts in the KB. For the implementation in the basic behaviour control, the test operator (?) is used to roughly check the structure of a piece of knowledge, and the primitive action recall is used for assigning the parameters to the corresponding fluents. The information to be kept in the KB is derived from the disassembly domain described in Section 6.2. In order to keep the KB compact, only critical values of the parameters that contribute to the characteristics of individual cuts and the entire disassembly process are stored. In summary, the knowledge facts are classified into two main levels: 1) product-level and 2) component-level. The types of fact are summarised in Table 6.4.

Knowledge level   Input queries            Facts
Product-level     - model                  DSP
                  - version                Product structure
Component-level   - model                  Component location
                  - version                General plan (primitive feature,
                  - component instance       cutting location, cutting method)
                                           Add-on plan (primitive cutting
                                             operation, cutting method)

Table 6.4: Facts in knowledge base

Product-level fact – this level represents the overview of the disassembly process in regard to 1) the DSP and 2) the main structure as in Expression (6.34). The DSP is stored in the fluent sequencePlansDSP as a sequence of the main components to be treated and their instances indexed, e.g. [c1(1), c2(1), c3(1), c3(2), c4(1), c5(1)]. Subsequently, the treatment process is executed in the procedure executeCurrentComponentTreatment until the end of this list (see procedure main, Procedure (6.16)).


dspInKb([model, verDsp], sequencePlansDSP, structureType).    (6.34)

where sequencePlansDSP = [c1(1), ..., cj(k)]

Regarding the main structure, the name of the main class is stored, i.e. "Type1" or "Type2". The structure type is used for decision making in the Type-II case, where a subsequent separate process for removing the PCBs is needed to achieve complete disassembly. Therefore, the CRA can track this extra process according to the indicated structure type. It also presents an overview of the sample to the users.

Component-level fact – this fact stores the information regarding the treatment process of a specified component. In order to reproduce an order-independent cut, the knowledge of the physical specification and the related operations needs to be given. In Expression (6.35), the original location of the main component, compLocation, is stored as the physical specification. This is used as a reference for locating the ROI used in the state change detection process. The operations' details are stored as plangeneral and planaddOn, which represent the general operation plan and the human-assistance-related add-on plans, respectively. This knowledge is initially utilised in the treatment of a specified component in Procedure (6.21).

planInKb([model, verDsp], compTypek(i), compLocationKb, plangeneral, planaddOn).    (6.35)

plangeneral = [[Φ0, Γ0], [Φ1, Γ1], [Φ2, Γ2], [Φ3, Γ3], [Φ4, Γ4]]    (6.36)

where Φu = cutLocation(op(k,u))dst and Γu = [mKb, zRefKb]u

planaddOn = [cutPrimitive(tdst, mKb), ...]    (6.37)

First, for the general plan in Expression (6.36), the parameters of the plans are stored as a constant-length list containing five pairs of Φ and Γ, which denote the destination cutting location (with depth zdst) and the pair of cutting method (mKb) and top surface level (zRefKb), respectively. The number of list members corresponds to the number of general plans available for each main component. The first pair represents the cut-screws operation, whose parameters are stored in the sub-lists Φ0 and Γ0. The second to the fifth pairs represent the parameters of the general plans op(i,1) – op(i,4), where i denotes a type of component. For example, Φ1 = rect(X1,Y1,X2,Y2,Zdst) and Γ1 = [Ms,Zref]. In addition, the procedure for post-processing of the disassembly is stored in this form. This part of the knowledge is utilised in the procedure planExecutionLoopKnown (see Procedure (6.45)).

Second, for the add-on plan in Expression (6.37), the list represents a sequence of primitive cutting operations given with human assistance by demonstration. The parameter tdst denotes the cutting destination according to the type of primitive cutting operation. It can be one of the primitive cutting operations in Equations (6.7) - (6.10), e.g. cutContour(tdst) = cutContour(X1,Y1,X2,Y2,Zdst,Ms,dst). This part of the knowledge is utilised in the procedure executeAddOnPlan (see Procedure (6.32)).

In summary, the knowledge regarding the disassembly of a model of LCD screen is stored in the KB as two types of fact: dspInKb represents the overview of the process and planInKb represents the detailed operations for the components. An example of the KB is shown in Figure 6.12.

dspInKb([sv,2], [backCover(1), pcbCover(1), pcb(1), pcb(2), carrier(1), lcdMod(1)], type1).

planInKb([sv,2], backCover(1), box(19,25,343,294,36,27), [[],[], rect(5,5,359,312,26), [5,27], rect(0,0,364,317,26), [5,27], rect(10,10,354,307,26), [5,27], -, [-,-]], [cutContour(30,23,332,295,26,5), cutLine(21,19,344,19,16,6)]).

planInKb([sv,2], pcbCover(1), box(45,81,304,230,50,19), [[],[], rect(45,81,304,235,44),[1,50],-,[-,-],-,[-,-],-,[-,-]], []).

planInKb([sv,2], pcb(1), box(59,79,228,227,30,20), [[loc(217,199,29), loc(157,208,29), loc(61,89,33)], [1,1,1], rect(62,82,225,224,24), [5,30], rect(56,76,231,230,24), [5,30], rect(69,89,218,217,24), [5,30], -,[-,-]], [cutContour(63,89,219,215,22,6), cutContour(119,157,128,167,23,6)]).

planInKb([sv,2], pcb(2), box(240,90,299,175,25,18), [[loc(301,167,25), loc(300,162,29), loc(285,98,29), loc(303,97,25)], [1,1,1,1], rect(216,81,307,183,20), [5,26], rect(210,75,313,189,20), [5,26], rect(223,88,300,176,20), [5,26], -,[-,-]], [cutLine(248,90,248,174,25,8)]).

planInKb([sv,2], carrier(1), box(0,17,364,317,22,8), [[],[], rect(5,22,359,312,16), [1,22], -,[-,-], -,[-,-], -,[-,-]], [cutContour(8,13,354,305,13,1),cutContour(8,13,354,305,13,1)]).

planInKb([sv,2], lcdMod(1), box(0,0,364,317,25,8), [[],[],-,[-,-],-,[-,-],-,[-,-],-,[-,-]],[]).

planInKb([sv,2], postProcess, -,-, [cutLine(248,90,248,174,25,8)]).

Figure 6.12: Example of the KB for a sample model


This example shows the knowledge recorded from the 2nd revision of the model named "sv". This model has a Type-I structure consisting of six components. General plans or parameters which are unavailable for a particular component are represented by "-" or "[ ]". For each planInKb fact, the 1st line contains the physical properties; the 2nd line presents the general plan regarding the screws list; the 3rd line presents the rest of the general plans; and the 4th line presents the add-on plans.
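Given facts of this form, recalling the knowledge for one component reduces to a single Prolog query. The helper predicate below is an illustrative sketch only; recall_component_plan/5 is not a predicate of the developed system:

  % Retrieve the stored location and plans for one component instance.
  recall_component_plan(Model, VerDsp, Comp, Location, plans(General, AddOn)) :-
      planInKb([Model, VerDsp], Comp, Location, General, AddOn).

  % e.g. ?- recall_component_plan(sv, 2, pcb(2), Loc, Plans).
  % binds Loc to box(240,90,299,175,25,18) and Plans to the stored
  % general and add-on plans for pcb(2) from Figure 6.12.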

6.3.2.2 Learning by reasoning

The learning process stores the knowledge contributing to a successful disassembly process of a particular model of the product. This knowledge is used for reproducing the same set of order-independent cutting operations in the disassembly process for previously seen models. The normal form of learning is performed only the first time that a new model is recognised. Afterwards, learning occurs in the form of revision.

Essentially, in normal learning, all operations that have been performed need to be recorded, even where the state has not changed immediately after an operation has finished. The assumption is that all performed operations have contributed to the upcoming state change unless there is proof of irrelevance. This irrelevance can be established by the revision process, which is explained in Section 6.3.2.4. An example is shown in Figure 6.13, where the final material removed is related only to op(i,2) and op(i,3), but all three operations are recorded in the learning process. The op(i,1) can be considered irrelevant since its cutting area is covered by the other two operations.


Figure 6.13: Cutting operations in learning and implementation. (a) Cutting operation: an operation plan op(i,j) is executed iteratively, cutting at z1, z2, z3; the critical parameters learned at zdst are the cutting location and the cutting method. (b) Trial process and learning: op(i,1), op(i,2), op(i,3) cut to zdst,1, zdst,2, zdst,3. (c) Implementation from KB: op(i,1)', op(i,2)', op(i,3)' reproduce the cuts at the learned depths.

Learning by reasoning occurs throughout the trial process. The knowledge is obtained as the CRA conducts the process automatically according to the basic behaviours of the general plans and operations. With regard to the KB in the previous section, all facts are generated this way except the fact regarding planaddOn, which is generated by learning by demonstration. Each type of fact is generated at a different stage of the plans and operations. The knowledge is temporarily stored in the corresponding type of fluent during the disassembly process and is written to the KB file after the values of all parameters have been obtained. In association with the file-writing process, the structures of the facts are predefined as in Expressions (6.34) - (6.37). Therefore, the fluents can be placed conveniently into the predefined structured slots, and the issue of managing lists of unbounded length is avoided.
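The file-writing step can be pictured as a small Prolog routine; the predicate name write_fact_to_kb/2 and the file name are assumptions for illustration only, using the standard SWI-Prolog built-ins open/3, portray_clause/2, and close/1:

  % Append one learned fact to the KB file in standard clause syntax.
  write_fact_to_kb(File, Fact) :-
      open(File, append, Stream),
      portray_clause(Stream, Fact),   % prints e.g. dspInKb([sv,1], [...], type1).
      close(Stream).

  % e.g. ?- write_fact_to_kb('kb.pl', dspInKb([sv,1], [backCover(1)], type1)).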


For the product-level knowledge, all elements are obtained after the entire disassembly process has been achieved. First, each main component is appended to the list in the fluent sequencePlansDSP once the new main component has been detected; the final DSP is obtained from this fluent. Second, the structure type can be indicated after the classification of the structure types by Procedure (6.18). At the final state of disassembly, these fluents are written to the KB file at the end of Procedure (6.16).

For the component-level knowledge, the critical parameters contributing to the state change, or those of the last action before proceeding to the next plan, need to be learned, as discussed above. The critical value resulting from the repeatedly deepened cut is assigned to the fluents Φ and Γ at the end of each operation plan, as in Procedure (6.25). The critical value is obtained after the operation has been executed at level zdst, as illustrated in Figure 6.13a. After all available plans have been executed or the state has changed, all required parameters for plangeneral are obtained and written to the KB file at the end of Procedure (6.21).

6.3.2.3 Learning by demonstration

Human assistance is involved in the second type of learning. The user demonstrates a sequence of primitive actions in cases where the CRA has struggled to remove the component after all available autonomous processes have been performed. The assistance is given in order to resolve problems at the component level and the operational level. The unresolvable conditions and the corresponding actions to be given are summarised in Table 6.5. Theoretically, the assistance is required only the first time an unknown model is disassembled, in order to handle complex situations. The CRA is supposed to learn from this demonstration and to carry out the entire process autonomously the next time this model is encountered. However, owing to the physical uncertainties in the product and the process, minor assistance may be needed in a few subsequent processes to resolve remaining minor uncertainties.


Level       Unresolvable condition        Type of problem                       Demonstrated action
Component   Existence of main component   False positive                        skipComponent
            Location of main component    Inaccuracy                            newCompLocation
            Detection of state change     False negative                        deactivate
Operation   Location of xy-cutting path   Inaccuracy                            cutPrimitive
            Insufficient cutting depth    Non-detectable                        cutPrimitive
            Connective components         Non-detectable                        cutPrimitive
            Physical collision detected   Non-detectable by MotionSupervision   cutPrimitive

Table 6.5: Unresolvable conditions and demonstrated actions

The unresolved conditions – from Table 6.5, the problems typically result from imperfections of the vision system in various detection processes. The problems can be classified into four types: 1) false positive of a detectable component, 2) false negative of a detectable component, 3) inaccuracy in localisation of a detectable component, and 4) non-detectable components. Proper types of demonstration need to be given to resolve these problems after the autonomous operations have finished. The unresolved conditions can be classified into two levels: 1) component level and 2) operation level, explained as follows.

First, the problems at the component level are related to the condition of the detected components and the transition among the disassembly states. In the early stage of each state of disassembly, the beliefs associated with the existence and the location of the detected main component are deployed in the CRA. The process is carried on, with reasoning and execution monitoring based on this belief, throughout the autonomous phase. As a result, the disassembly outcome that the CRA perceives can be wrong if the belief is incorrect from the beginning. Therefore, user assistance can be given to the CRA to change the initially incorrect belief. At the end of the state, an incorrect detection of the state change may occur, which needs to be corrected with regard to execution monitoring. These cases are described as follows:

• For a false positive detection of a non-existing main component, the action skipComponent is given to mark that this component does not exist. Hence, the CRA will acknowledge this and retract it from the knowledge to be learned.


• The original location is essential for the execution monitoring process, where the state change is detected according to the pixel ratio of the corresponding ROI (see details in Section 5.3.8). Therefore, the correct location given by the action newCompLocation changes the compLocationKb that will be stored in the KB. The action is characterised by the axiom in Expression (6.38).

compLocationKb(Box, do(a, s)) ≡ ∃x1, x2, y1, y2, z1, z2.
  a = newCompLocation(x1, y1, x2, y2, z1, z2) ∧ Box = box(x1, y1, x2, y2, z1, z2)    (6.38)

• A false negative of the state change occurs when the main component has been removed but the CRA believes otherwise. As a result, the CRA cannot proceed to the next state of disassembly. The user can simply proceed to the next state by terminating the demonstration with the action deactivate. It should be noted that no parameters need to be learned, since this error is related to an inaccurate size of the corresponding ROI, which is expected to be corrected by newCompLocation.

Second, the problems at the operational level are related to the cutting process of the autonomous phase. The user demonstrates extra primitive cutting operations to complete the component removal. The failures of removal are due to inaccurate localisation and non-detectable objects; the demonstrated cutting is therefore used to compensate for these errors after the autonomous operation. In order to specify the characteristics of the cutting, the user must define the cutting path, cutting depth, and cutting method via the graphic user interface (GUI), which is explained in the next section. Details regarding each problem are as follows.

• The CRA generates the initial xy-cutting path by offsetting the detected border of the component 5-10 mm toward the centre area. The generated cutting path can be inaccurate because the precision of the visual localisation is only within 3-5 mm (see Section 5.4).

• The final cutting level zdst of the autonomous operation results from the initial level of the cutting path in accordance with the repetitive deepening offset. The required critical depth cannot be located when the thickness of the component to be cut is unmeasurable. Therefore, the final cutting depth can be insufficient to terminate all significant connections.


• The problems regarding the connective components are caused by inaccuracy in the location of the detected components (i.e. screws) and by completely non-detectable components (i.e. hidden or occluded screws, plastic rivets, snap-fits, etc.). Overall, the screw detector, with approximately a 65% detection rate and an inability to detect other types of connective components, is insufficient for terminating the connectors.

• Physical collision between a part of the cutting tool and other non-detectable surroundings is normally resolved by the robot controller varying the cutting method. However, a problem arises when the cutting destination is inaccessible or the cutter fails to cut hard material too many times (see Section 4.2.3.2).

User demonstration facility - the user specifies an individual primitive cutting operation by drawing the desired cutting path overlaid on the captured colour image via the GUI. The GUI is an integrated part of the vision system module (see details in Appendix D). The GUI provides sufficient actions for the user to resolve the aforementioned problems, as in Table 6.5. The user interaction is most complex in the functions for cutPrimitive and newCompLocation, since the location in operational space needs to be specified. An overview of the corresponding user demonstration is illustrated in Figure 6.14. Theoretically, the desired cutting path needs to be specified in 2.5D operational space. However, an accurate cutting position in 2.5D cannot be given directly, owing to the limitations of the user's perception on a 2D image display and of the input device, i.e. a mouse. Therefore, the vertical distance is specified separately as the user's desired vertical offset relative to the initial cutting level zF.


Figure 6.14: User's demonstrated primitive cutting operation in the GUI. A 2D cutting path from P1(c1,r1) to P2(c2,r2), drawn by the user in the colour image panel of the vision system module (image space), is converted into a 2.5D cutting path in operational space and issued as cutPrimitive(x1,y1,x2,y2,z,mcut) to the disassembly operation unit module, while being recorded in the KB by the cognitive robotic module. The user inputs the primitive cutting method, the vertical deepening relative to the previous operation, and the cutting method mcut.

In the first cycle, the corresponding initial location in operational space (x,y,zF) is obtained from the location in image space (c,r) according to Equation 5.14 (see details in Section 5.2.4.2). In the subsequent cycles, the user deepens this cutting path by 1-10 mm per cycle. This incremental deepening allows the user to decide accurately where to stop cutting, indicated as the critical depth zdst, in order to achieve the minimal damage that still contributes to the main component removal. At each cycle, the cutting command is a primitive action generated by the vision system module. This command is sent to the disassembly operation module for the operation and to the CRA for learning (see Figure 6.14).

Learning to KB - after all general operation plans have failed to remove the main component, the procedure callUser (see Procedure (6.40)) is called by the CRA in order to obtain assistance. In this procedure, the sensing action senseHumanAssist (Expression (6.39)) is executed in every operation cycle to obtain the demonstration of a single operation as the fluent humanAssistOp. In this research, the fluent humanAssistOp is equivalent to an exogenous action whose source is the external world.

Initially, the user should correct the original belief regarding the conditions of the component at the beginning of the stage using skipComponent and newCompLocation. The skipComponent action is caught at the beginning of the procedure callUser, which causes the current instance of the component to be ignored and the process to proceed to the next component. Consequently, the fact planInKb corresponding to this component is also removed. The newCompLocation action has the effect given in Expression (6.38).


senses(senseHumanAssist, humanAssistOp).    (6.39)

proc callUser
  senseHumanAssist;  // start getting demonstration from GUI
  if humanAssistOp = 'skipComponent' then  // skip current component treatment
    assignFluentValue(customPlanSequence, 'skipComp')
  else
    while humanAssistOp ≠ 'deactivate' do  // get the demonstration until deactivate
      addHumanOpSequence(humanAssistOp);  // add to the sequence
      senseHumanAssist  // get the demonstration
    endWhile
  endIf
endProc    (6.40)

Afterwards, the cutting operation is demonstrated and learned according to the conditions in the axiom characterised by Expression (6.41). In order to reduce the size of the KB while the cutting characteristics are retained, this axiom causes only the final cutting path and cutting method at the destination level to be stored, instead of recording the identical horizontal cutting path multiple times. An example is given in Figure 6.15, where two cutting operations are demonstrated and learned. During the process, other extra actions, e.g. flipTable, can be executed at any time but are not recorded in the KB. The demonstration is given until the user decides to stop, i.e. the cycle in which the command 'deactivate' is received.

Demonstrated cutting-1 (z11 > z12 > z13):
  1st time - cutContour(x11,y11,x21,y21,z11,mcut11)
  2nd time - cutContour(x11,y11,x21,y21,z12,mcut12)
  3rd time - cutContour(x11,y11,x21,y21,z13,mcut13)  (recorded)

Demonstrated cutting-2 (z21 > z22 > z23):
  1st time - cutLine(x12,y12,x22,y22,z21,mcut21)
  2nd time - cutLine(x12,y12,x22,y22,z22,mcut22)
  3rd time - cutLine(x12,y12,x22,y22,z23,mcut23)  (recorded)

Operation sequence learned in KB:
  planaddOn = [cutContour(x11,y11,x21,y21,z13,mcut13), cutLine(x12,y12,x22,y22,z23,mcut23)]

Figure 6.15: Learning cutting operation for add-on plan

The primitive cutting actions are appended to the fluent customPlanSequence (planaddOn in Expression (6.37)) after each execution, according to the axiom in Expression (6.41).


customPlanSequence(L, do(a, s)) ≡ ∃x1, y1, x2, y2, z, m, L′.
  [ a = addHumanOpSequence(cutPrimitive(x1, y1, x2, y2, z, m))
    ∧ customPlanSequence(L′, s) ∧ L = L′ ∪ {cutPrimitive(x1, y1, x2, y2, z, m)} ]
  ∨ [ customPlanSequence(L′, s) ∧ cutPrimitive(x1, y1, x2, y2, z, m) ∈ L′
    ∧ a ≠ addHumanOpSequence(cutPrimitive(x1, y1, x2, y2, z, m)) ∧ L = L′ ]    (6.41)
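In plain Prolog, the effect of this axiom on the stored list can be sketched as follows; add_human_op/3 is illustrative only (the thesis realises this behaviour as a successor state axiom, not as this predicate), using select/3 and append/3 from the standard list library:

  % Append a demonstrated cut; if a cut with the identical horizontal path
  % is already recorded, replace it so only the deepest (final) cut is kept.
  add_human_op(cutPrimitive(X1,Y1,X2,Y2,Z,M), Seq0, Seq) :-
      (   select(cutPrimitive(X1,Y1,X2,Y2,_,_), Seq0, Rest)
      ->  append(Rest, [cutPrimitive(X1,Y1,X2,Y2,Z,M)], Seq)
      ;   append(Seq0, [cutPrimitive(X1,Y1,X2,Y2,Z,M)], Seq)
      ).

Repeated calls on the same horizontal path reproduce the behaviour of Figure 6.15: only the last, deepest cut of each path remains in the learned sequence.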

6.3.2.4 Revision

The revision process aims to optimise the disassembly process of the known model by removing redundant operations and reducing the size of the learned operation plan set in the operation-level facts. Therefore, the efficiency of the process in terms of time consumption will increase. The reduction is done by retracting the redundant operations that do not contribute to the removal of the main component. The assumption is that all performed operations contribute to the upcoming state change unless there is proof of redundancy. This redundancy can be established by not executing some of the previously recorded operations: if the component is still successfully removed by executing the remaining set of operations, the omitted operation can be considered redundant. Retraction of the redundant operation plans is then done. The KB is repeatedly revised based on the latest version each time the identical known model is encountered, and the revised version of the facts is stored in the KB.

Heuristic strategy - the lower index plans, i.e. op(ci,1), tend to produce lower impact in terms of physical damage, but their contribution to the success of removal is also lower; the higher index plans, up to op(ci,4), have the opposite characteristics. Moreover, the human-demonstrated custom plans are assumed to have the highest contribution to the success of removal. However, all operation plans potentially contribute to the accomplishment at different levels. Therefore, a heuristic for finding an effective combination of operation plans must be implemented.

In this research, the heuristic prioritises the success rate over the impact. The heuristic used is relatively simple, owing to the small number of operation plans available: the process for the known model is executed in the reverse order of that for the unknown model (see Figure 6.13). Example operation sequences for the unknown and the known models are given in Expressions (6.42) and (6.43). The customPlanaddOn from Expression (6.42) is learned into the KB and becomes customPlankb in Expression (6.43).


Unknown: optr(ci,1) → ... → optr(ci,4) → customPlanaddOn    (6.42)

Known: customPlankb → opkb(ci,4) → ... → opkb(ci,1) → customPlanaddOn    (6.43)

The concept of this heuristic is described by Procedure (6.44). The current revision of verDSP (j+1) is revised from the previous version (j). The execution order is: 1) the custom plan, 2) the highest index plan, and 3) down to the lowest index plan. As a result, when the state changes, the redundant operations, i.e. the plans having a lower index than the current plan, are noted and retracted from the existing general plans. In case of removal failure after execution of all of the plans, additional demonstrations are given in order to resolve remaining minor uncertainties; these custom plan sequences are appended to the existing one in the KB. Eventually, the revised general plan (plan′general) and custom sequence plans (plan′addOn) are updated in the KB, belonging to the current revision of verDSP (j+1). This reverse order of execution is implemented in the subsequent process for further revision.

proc generalPlanRetraction(verDsp(j))  // current revision j+1 uses previous verDsp = j
  executeAddOnPlan(model, ci);  // execute the custom plan sequence in KB
  assignFluentValue(k, n);  // start with the highest plan index: k := n
  while ¬stateChanged ∧ k ≥ 1 do
    op(ci, k, 'kb');  // execute plan-k in KB
    checkStateChange;
    if stateChanged then  // if state changed, retract the lower index plans
      retractGeneralPlan([op(ci,1,'kb'), ..., op(ci,k−1,'kb')])
    else
      decrement(k, 1)  // k := k − 1, proceed to the lower index plan
    endIf
  endWhile;
  if ¬stateChanged then callUser endIf;  // if fail, add more custom plans and learn
  writeToKb(planInKb([model, j+1], ci, compLocationKb′, plan′general, plan′addOn))  // record the revised verDsp to KB
endProc    (6.44)


Implementation - the actual implementation involves a number of levels of the operations, as described in Section 6.3.1.2. Therefore, the concept in Procedure (6.44) is implemented separately at each level. The revision process is mainly implemented in planExecLoopKnown (Procedure (6.45)), which is the counterpart of planExecLoopUnknown. The operation plans are executed in the order given in Expression (6.43). If the state changes after the current plan has been executed, the plans having a lower index than the current plan are retracted. The retraction is done by executing the procedure retractPlans (Procedure (6.46)), where the list of indices of the plans to be retracted is specified, e.g. [1,2,3]. All corresponding elements in the KB become unavailable, and the retracted plans are ignored in the next disassembly process. Eventually, after a number of revisions over a number of repeated disassemblies of the same model, the KB is expected to be more concise and the process more efficient.

proc planExecLoopKnown(opkb(ci,1), opkb(ci,2), opkb(ci,3), opkb(ci,4))
  if stateChanged then retractPlans([1,2,3,4])  // addOn succeeded → retract plans 1-4
  elseIf ¬stateChanged ∧ opkb(ci,4) ≠ ∅ then  // plan4 succeeded → retract plans 1-3
    opkb(ci,4);
    if stateChanged then retractPlans([1,2,3]) endIf;
    if ¬stateChanged ∧ opkb(ci,3) ≠ ∅ then  // plan3 succeeded → retract plans 1,2
      opkb(ci,3);
      if stateChanged then retractPlans([1,2]) endIf;
      if ¬stateChanged ∧ opkb(ci,2) ≠ ∅ then  // plan2 succeeded → retract plan 1
        opkb(ci,2);
        if stateChanged then retractPlans([1]) endIf;
        if ¬stateChanged ∧ opkb(ci,1) ≠ ∅ then  // plan1 succeeded → no retraction
          opkb(ci,1);
          if ¬stateChanged then callUser endIf  // need more assistance
        endIf
      endIf
    endIf
  endIf
endProc    (6.45)

proc retractPlans(planList)
  if 1 ∈ planList then assign(opkb(ci,1), ∅) endIf;  // makes plan-i blank: i = 1..4
  ....;
  if 4 ∈ planList then assign(opkb(ci,4), ∅) endIf
endProc    (6.46)


In summary, the advanced behaviour is used to handle the knowledge obtained during the disassembly process. The corresponding knowledge is stored in the form of Prolog facts. Learning occurs during the process, from both the autonomous activity and human assistance, in order to store the knowledge in the KB. Afterwards, revision is used to increase the efficiency of the later processes. Therefore, the KB is kept updated and is implemented in the process for known models.

6.3.3 Summary of Actions and Fluents

Selected actions and fluents used in this process are summarised below. Sensing actions are listed in Table 6.6, and primitive actions in Table 6.7; the exogenous actions used in human assistance are also included. The fluents that contain values acting as constraints on the operation are listed in Table 6.8. It should be noted that these summary tables present only the actions and fluents that are important for interacting with other modules; some of the actions and fluents for the internal activity of the CRA are not shown here.

Sensing action      Fluent                Description
detectBackCover     backCoverLocation     Locate back cover
detectPcbCover      pcbCoverLocation      Locate PCB cover
detectPcb           pcbLocation           Locate PCBs' locations
detectPcbAll        pcbAllLocation        Locate PCB area used in second run
detectCarrier       carrierLocation       Locate carrier
detectLcdModule     lcdModule             Check existence of LCD module
detectModel         model                 Match the model of the sample with the models in KB
checkStateChange    stateChange           Determine change of disassembly state
measureZF           currentZF             Measure level-z at a rectangle path
senseHumanAssist    humanAssistOperation  Get assistance from human
checkCuttingMethod  cuttingMethod         Get cutting method from the robot controller

Table 6.6: Sensing actions and corresponding fluents


Category               Primitive action    Fluent                Description
Primitive cutting      cutPoint            loc(x,y,z) & m        Cut point, e.g. screw
operation (1)          cutLine             line(x,y,x,y,z) & m   Cut straight line due to m
                       cutContour          rect(x,y,x,y,z) & m   Cut a contour due to m
                       cutCorner           rect(x,y,x,y,z) & m   Cut corner due to m
Disassembly operation  flipTable           -                     Activate the Flipping Table
utility                moveHome            -                     Move to robot's home pose
                       flagStateChange     stateChange           Flag the beginning of the state for checking
Location utility       setProdCoordinate   rect(x,y,x,y,z)       Set the VOI for product coordinate {P}
                       offsetContourXY     rect(x,y,x,y,z) (2)   Offset contour
                       offsetContourDepth  rect(x,y,x,y,z) (2)   Offset contour vertically
                       rectRoiIs           rect(x,y,x,y,z)       Specify the arbitrary ROI
                       rectCutLocationIs   rect(x,y,x,y,z) or    Specify the cutting location
                                           box(x,y,x,y,z,z) (2)
KB                     addSequencePlan     sequencePlansDSP      Add sequence plan to KB
                       recallDSP           -                     Recall the DSP & plan in KB
                       feedCustomPlan      -                     Proceed to the next operation plan in the list
                       feedDspComponent    -                     Proceed to next component
Human assistance       skipComponent       -                     Skip treating current component
                       newCompLocation     rect(x,y,x,y,z)       Locate the current component
                       deactivate          -                     Stop human demonstrating
                       all primitive       primitive geometry    Demonstrate cutting at specific location
                       cutting from (1)

NOTE: (2) also for Line and Point.

Table 6.7: Primitive actions and corresponding fluents

Fluent                   Value (mm)
maxBackCoverDeepOffset   3
maxPcbCoverDeepOffset    12
maxPcbDeepOffset         12
maxCarrierDeepOffset     6
maxScrewDeepOffset       3
minIncrementDepth        1
incrementDepth           2
maxIncrementDepth        3
minZ                     1
maxZ                     80

Table 6.8: Fluents as constant parameters
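If these constants were declared directly in the KB, they could take the form of simple Prolog facts; the predicate name constantFluent/2 is assumed here purely for illustration, with the values (in mm) taken from Table 6.8:

  constantFluent(maxBackCoverDeepOffset, 3).
  constantFluent(maxPcbCoverDeepOffset, 12).
  constantFluent(maxPcbDeepOffset, 12).
  constantFluent(maxCarrierDeepOffset, 6).
  constantFluent(maxScrewDeepOffset, 3).
  constantFluent(minIncrementDepth, 1).
  constantFluent(incrementDepth, 2).
  constantFluent(maxIncrementDepth, 3).
  constantFluent(minZ, 1).
  constantFluent(maxZ, 80).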


6.4 Conceptual test of process flow

The conceptual test aims to validate the functionality of the CRA in dealing with the possible conditions expected to be encountered in the disassembly process. The prospective process flows are presented in this section. The test is performed from the programming perspective only, disregarding the uncertainties caused by the vision system and the disassembly operation. The user therefore interacts with the CRA via the Prolog console in which the developed IndiGolog program is executed. The CRA drives the process by scheduling the actions, as with the socket messaging used for communicating with the other modules. In this experiment, the user plays the role of the components other than the cognitive robotics module (see Figure 6.3), i.e. 1) the vision system, 2) the disassembly operation units, and 3) human assistance. Hence, the user needs to provide the feedback as fluents for the sensing actions with regard to the visual detection, the feedback from the disassembly operation units, and the exogenous actions for the demonstration. This interaction occurs as command lines in the console, where the user types in the corresponding input fluents.
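A hypothetical exchange of this kind might look as follows; the exact prompt format of the developed program is not reproduced here, and the values are taken from the sample KB of Figure 6.12 purely for illustration:

  % CRA schedules a sensing action; the user types the resulting fluent value:
  %   exec:  detectBackCover
  %   user:  backCoverLocation = box(19,25,343,294,36,27).
  % The CRA reasons on the supplied belief and schedules the next actions:
  %   exec:  flagStateChangeROI(rect(19,25,343,294,36))
  %   exec:  cutContour(24,30,338,289,33,5)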

In this section, the process flows of selected cases are shown in a compact form; only significant actions and fluents regarding the disassembly process are shown. The process flows are illustrated in Figure 6.16 - Figure 6.20. For the notation, the operation sequence is connected with the transition symbol "-code->", where code denotes the actions executed in the transition state: S = checkStateChange, M = checkCuttingMethod, and F = flipTable.

6.4.1 Unknown model

The process flow for disassembling a sample of an unknown model is illustrated in Figure 6.16. The CRA is unable to match the detected model to the existing models in the KB; therefore, the CRA starts the process according to the unknown-model strategy. The process starts from the treatment of the back cover and carries on until the LCD module has been reached. The main product structure is classified after the treatment of the back cover. The two possible cases are shown in the shaded area in this Figure, where the process proceeds as either Type-I or Type-II.

In the example back cover treatment session, the state change is flagged at the rectangle rect1 corresponding to box1. The operation starts executing plan op(c1,1) by entering the operation cycle. The CRA starts with cutContour at the xy-cutting path rect1', which is rect1 with the predefined offset. The first trial cut is at level z1, followed by checking the cutting method, flipping the table, and checking the state change. The cutting method is not returned by the robot until the cut is successful. After the CRA knows the disassembly outcome from checkStateChange, it proceeds to the next component if the state has changed. If the state has not changed, the CRA repeats cutting the same xy-cutting path rect1' at the deeper level z2. The cycle is repeated until the state has changed or the depth constraint has been reached, in this case z3. The process then proceeds to the next plan op(c1,2) and repeats the same cycle. If the disassembly is still unsuccessful after exhausting op(c1,3), the CRA asks for assistance from the human user. Subsequently, the user demonstrates custom operations, e.g. primitive cutting actions or other actions for changing the original belief, giving one operation per operation cycle. Each given operation is appended to the list, for example [H1] = [cutContour(0,0,300,200,40,'2'), cutLine(1,1,200,1,50,'1'), cutLine(1,300,1,200,40,'2'), ...]. The user keeps demonstrating until the component has been detached, which is regarded as a state change, and ends the demonstration by sending the action deactivate. The system then proceeds to the subsequent components until the end of the entire process. At the end of the process, the user has the opportunity to finalise whether the process has been completed or not; necessary operations can be further demonstrated at this point in case some components have not been detached. The knowledge obtained during this process is stored in the KB as shown in Figure 6.20a. Details are explained in the next section.

With respect to the completeness of disassembly, the process for Type-I is completed in the first run according to this process flow. On the contrary, a second run is needed for Type-II, where the PCBs are still attached to the carrier. The process of the second run is shown in Figure 6.17. This process starts with determining the ROI that encloses all PCBs to be disassembled, a step equivalent to the procedure performed for detection of the back cover in normal operation. Only the remaining PCBs are treated, so the process is simpler than the normal one.


START DISASSEMBLY
detectModel -> unknown
detectBackcover -> backCoverLocation = box1

Treat Back cover(1):
flagStateChangeROI(rect1)
op(c1,1): cutContour(rect1', z1) -SMF-> cutContour(rect1', z2) -SMF-> cutContour(rect1', z3) -SMF->
op(c1,2): cutCorner(rect1', z1) -SMF-> cutCorner(rect1', z2) -SMF-> cutCorner(rect1', z3) -SMF->
op(c1,3): cutContour(rect1'', z1) -SMF-> cutContour(rect1'', z2) -SMF-> cutContour(rect1'', z3) -SMF->
custom = [H1] ->

(Classification of two possible cases: the process can go either Type-I or Type-II)

Type-I:
(detectPcbCover -> pcbCoverLocation = box2) and (detectCarrier -> carrierLocation = 'no') -> Type-I

Treat PCB cover(1):
flagStateChangeROI(rect2)
op(c2,1): cutContour(rect2', z1) -SMF-> cutContour(rect2', z2) -SMF-> cutContour(rect2', z3) -SMF->
cutLine(rect2''U, z1) -SMF-> cutLine(rect2''U, z2) -SMF->
cutLine(rect2''D, z1) -SMF-> cutLine(rect2''D, z2) -SMF->
cutLine(rect2''L, z1) -SMF-> cutLine(rect2''L, z2) -SMF->
cutLine(rect2''R, z1) -SMF-> cutLine(rect2''R, z2) -SMF->
custom = [H2] ->

detectPCBs -> pcbLocation = [box3, box4]

Treat PCB(1):
flagStateChangeROI(rect3)
detectScrews -> screwLocation = [loc1, loc2, ..., locn] ->
op(cx,0): cutScrew(loc1) -> cutScrew(loc2) -> ... -> cutScrew(locn) -SMF->
op(c3,1): cutContour(rect3', z1) -SMF-> cutContour(rect3', z2) -SMF-> cutContour(rect3', z3) -SMF->
op(c3,2): cutCorner(rect3', z1) -SMF-> cutCorner(rect3', z2) -SMF-> cutCorner(rect3', z3) -SMF->
op(c3,3): cutContour(rect3'', z1) -SMF-> cutContour(rect3'', z2) -SMF-> cutContour(rect3'', z3) -SMF->
custom = [H3] ->

Treat PCB(2):
flagStateChangeROI(rect4)
detectScrews -> screwLocation = [loc1, loc2, ..., locn] ->
op(cx,0): cutScrew(loc1) -> cutScrew(loc2) -> ... -> cutScrew(locn) -SMF->
op(c3,1): cutContour(rect4', z1) -SMF-> cutContour(rect4', z2) -SMF-> cutContour(rect4', z3) -SMF->
op(c3,2): cutCorner(rect4', z1) -SMF-> cutCorner(rect4', z2) -SMF-> cutCorner(rect4', z3) -SMF->
op(c3,3): cutContour(rect4'', z1) -SMF-> cutContour(rect4'', z2) -SMF-> cutContour(rect4'', z3) -SMF->
custom = [H4'] ->

Type-II:
(detectPcbCover -> pcbCoverLocation = box2) and (detectCarrier -> carrierLocation = box5) -> Type-II

Treat PCB cover(1):
flagStateChangeROI(rect2)
op(c2,2): cutContour(rect2', z1) -SMF-> cutContour(rect2', z2) -SMF-> cutContour(rect2', z3) -SMF->
custom = [H5X] ->

detectCarrier -> carrierLocation = box5

Treat Carrier(1):
flagStateChangeROI(rect5)
op(c4,1): cutContour(rect5', z1) -SMF-> cutContour(rect5', z2) -SMF-> cutContour(rect5', z3) -SMF->
custom = [H5] ->

Treat LCD module(1):
op(c5,1): cutContour(rect1''', z1) -> cutContour(rect1''', z2) -> cutContour(rect1''', z3) ->

Post process:
custom = [H6]

END DISASSEMBLY

Figure 6.16: Example process flow of unknown model


START DISASSEMBLY
detectModel -> unknown
detectEntirePCBarea -> entirePcbArea = box1
detectPCBs -> pcbLocation = [box2, box3]

Treat PCB(1):
flagStateChangeROI(rect2)
detectScrews -> screwLocation = [loc1, loc2, ..., locn] ->
op(cx,0): cutScrew(loc1) -> cutScrew(loc2) -> ... -> cutScrew(locn) -SMF->
op(c3,1): cutContour(rect2', z1) -SMF-> cutContour(rect2', z2) -SMF-> cutContour(rect2', z3) -SMF->
op(c3,2): cutCorner(rect2', z1) -SMF-> cutCorner(rect2', z2) -SMF-> cutCorner(rect2', z3) -SMF->
op(c3,3): cutContour(rect2'', z1) -SMF-> cutContour(rect2'', z2) -SMF-> cutContour(rect2'', z3) -SMF->
custom = [H2] ->

Treat PCB(2):
flagStateChangeROI(rect3)
detectScrews -> screwLocation = [loc1, loc2, ..., locn] ->
op(cx,0): cutScrew(loc1) -> cutScrew(loc2) -> ... -> cutScrew(locn) -SMF->
op(c3,1): cutContour(rect3', z1) -SMF-> cutContour(rect3', z2) -SMF-> cutContour(rect3', z3) -SMF->
op(c3,2): cutCorner(rect3', z1) -SMF-> cutCorner(rect3', z2) -SMF-> cutCorner(rect3', z3) -SMF->
op(c3,3): cutContour(rect3'', z1) -SMF-> cutContour(rect3'', z2) -SMF-> cutContour(rect3'', z3) -SMF->
custom = [H3'] ->

Post process:
custom = [H4]

END DISASSEMBLY

Figure 6.17: Example process flow of the second run for Type-II unknown model

However, in the real situation, the loading process of the connected part of a carrier and PCBs can be difficult, since the shape is inconsistent as a result of the cutting operation in the first run. A number of process variables, e.g. the cutting location, potentially change and lead to more complex problems. Therefore, this treatment process is undesirable, since it is hard to repeat in subsequent processes when the KB is implemented.

A more preferable strategy is to demonstrate an operation sequence that removes the PCBs from the carrier by cutting the bases of the screws located on the backside of the carrier. The cutting path for each screw is a rectangle enclosing the potential screw-base area, which can be noticed visually by the user (see Figure 6.18). This strategy is generally expected to be performed at the second revision of disassembly of a particular model.

Figure 6.18: Strategy to detach screws from the back of the carrier: (a) front view, (b) back view, (c) cutting paths, i.e. rectangles on the backside enclosing the base of each screw holding PCB-1 and PCB-2 to the carrier.


6.4.2 Known model

The knowledge previously captured is implemented in the disassembly of the known model. This process (see Figure 6.19) is performed based on the KB (see Figure 6.20a) captured from the Type-I sample described in the previous section.

START DISASSEMBLY
detectModel -> known (sv,1), type 1
detectBackcover -> backCoverLocation = box1

Treat Back cover(1):
flagStateChangeROI(rect1) -> measureZF ->
custom = [H1] ->
op(c1,3): cutContour(rect1'', zF) -M-> ... -M-> cutContour(rect1'', z3) -SMF->
op(c1,2): cutCorner(rect1', zF) -M-> ... -M-> cutCorner(rect1', z3) -SMF->
op(c1,1): cutContour(rect1', zF) -M-> ... -M-> cutContour(rect1', z3) -SMF->
custom = [H1new] ->

Treat PCB cover(1):
flagStateChangeROI(rect2) -> measureZF ->
custom = [H2] ->
op(c2,1): cutContour(rect2'', zF) -M-> ... -M-> cutContour(rect2'', z3) -SMF-> (success)

Treat PCB(1):
flagStateChangeROI(rect3) -> measureZF ->
custom = [H3] ->
op(c3,3): cutContour(rect3'', zF) -M-> ... -M-> cutContour(rect3'', z3) -SMF-> (success)

Treat PCB(2):
flagStateChangeROI(rect4) -> measureZF ->
custom = [H4'] ->
op(c3,3): cutContour(rect4'', zF) -M-> ... -M-> cutContour(rect4'', z3) -SMF->
op(c3,2): cutCorner(rect4', zF) -M-> ... -M-> cutCorner(rect4', z3) -SMF-> (success)

Treat Carrier(1):
flagStateChangeROI(rect5) -> measureZF ->
custom = [H5] ->

Treat LCD module(1):
op(c5,1): cutContour(rect1''', zF) -M-> ... -M-> cutContour(rect1''', z3) ->

Post process:
custom = [H7] ->
custom = [H7NEW] ->

END DISASSEMBLY

Figure 6.19: Example process flow of known model

The knowledge regarding the component removal order and the operations is in the existing KB. The CRA conducts the process according to the component removal order but with the order of the operation plans reversed. Hence, the operation starts from the add-on human assistance (custom) down to the lowest index operation plan. The process is simplified and faster, since some actions are omitted in comparison to the process for the unknown model; e.g. the table is flipped and the state change is checked only once, after finishing each operation plan. This strategy is expected to reduce the time consumption, which is a major concern for achieving economic feasibility. Moreover, the detection of the main component is not used, since the fact in the KB has already been verified by the user. Therefore, the cutting operations utilise the path and zdst in the KB in accordance with the top surface level-z currently obtained by measureZF.
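One consistent way to read this adjustment (an interpretation for illustration, not a formula stated explicitly in the thesis) is that the stored destination is shifted by the difference between the currently measured and the stored top surface levels:

  z'dst = zdst + (zF − zRefKb)

For instance, with the back cover fact of Figure 6.12 (zdst = 26, zRefKb = 27), a freshly measured top surface zF = 29 would give z'dst = 28, preserving the learned depth of cut below the top surface.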

6.4.3 Learning and revision

Example knowledge facts in the KB for the first revision (dspVer = 1) and the second revision (dspVer = 2) are generated by the process for the unknown model (Figure 6.16) and the known model (Figure 6.19), respectively. These revisions are shown in Figure 6.20, where the Revision-1 elements that can be retracted appear as blank slots ("-" or "[-,-]") in Revision-2, and the newly learned elements are appended.

(a) Revision-1:

dspInKb([sv,1], [backCover(1), pcbCover(1), pcb(1), pcb(2), carrier(1), lcdMod(1)], type1).
planInKb([sample,1], backCover(1), box1, [[],[], rect1'(z3), mz1, rect1'(z3), mz1, rect1''(z3), mz1, -, [-,-]], [H1]).
planInKb([sample,1], pcbCover(1), box2, [[],[], rect2''(z3), mz1, -, [-,-], -, [-,-], -, [-,-]], [H2]).
planInKb([sample,1], pcb(1), box3, [[loc1, loc2, ..., locn], [mz1, mz2, ..., mzn], rect3'(z3), mz1, rect3'(z3), mz1, rect3''(z3), mz1, -, [-,-]], [H3]).
planInKb([sample,1], pcb(2), box4, [[loc1, loc2, ..., locn], [mz1, mz2, ..., mzn], rect4'(z3), mz1, rect4'(z3), mz1, rect4''(z3), mz1, -, [-,-]], [H4]).
planInKb([sample,1], carrier(1), box5, [[],[], rect5'(z3), mz1, -, [-,-], -, [-,-], -, [-,-]], [H5]).
planInKb([sample,1], lcdMod(1), box1, [[],[], rect1'''(z3), mz1, -, [-,-], -, [-,-], -, [-,-]], []).
planInKb([sample,1], postProcess, -, -, [H7]).

(b) Revision-2:

dspInKb([sv,2], [backCover(1), pcbCover(1), pcb(1), pcb(2), carrier(1), lcdMod(1)], type1).
planInKb([sample,2], backCover(1), box1, [[],[], rect1'(z3), mz1, rect1'(z3), mz1, rect1''(z3), mz1, -, [-,-]], [H1, H1NEW]).
planInKb([sample,2], pcbCover(1), box2, [[],[], rect2''(z3), mz1, -, [-,-], -, [-,-], -, [-,-]], [H2]).
planInKb([sample,2], pcb(1), box3, [[],[], -, [-,-], -, [-,-], rect3''(z3), mz1, -, [-,-]], [H3]).
planInKb([sample,2], pcb(2), box4, [[],[], -, [-,-], rect4'(z3), mz1, rect4''(z3), mz1, -, [-,-]], [H4]).
planInKb([sample,2], carrier(1), box5, [[],[], -, [-,-], -, [-,-], -, [-,-], -, [-,-]], [H5]).
planInKb([sample,2], lcdMod(1), box1, [[],[], rect1'''(z3), mz1, -, [-,-], -, [-,-], -, [-,-]], []).
planInKb([sample,2], postProcess, -, -, [H7, H7NEW]).

Figure 6.20: Example knowledge base in two revisions: (a) Revision-1, (b) Revision-2


From the foregoing sample of the known model in Figure 6.19, the process flow demonstrates the effectiveness of the existing plans in three cases: 1) the plans cannot remove the component, 2) all plans are used to remove the component, and 3) only some plans are used to remove the component. These different cases result in the revision of the KB shown in Figure 6.20b. In the first case, a new custom sequence is taught and appended to the existing add-on plan, e.g. the facts of the back cover and the post process; this case can occur because of uncertainties in the disassembly operation. In the second case, nothing is changed in the KB, e.g. the facts of the PCB cover and the LCD module; it can be implied that all operations are significant and cannot be ignored. In the third case, the redundant facts are retracted from the existing KB, e.g. the facts of the PCBs and the carrier; these redundancies are permanently retracted and are ignored in the subsequent processes. The retraction affects only the general operation plans, since these potentially involve the redundant activities that slow down the process and produce more damage to the product. In contrast, the custom operations have been verified by the user as necessary for achieving the disassembly.

In conclusion, the process flows of the possible cases expected to be encountered have been described. In general, the actual processes are performed similarly, with some variations according to the uncertainty in each sample. The knowledge is stored in the KB by the learning process and implemented later on. Afterwards, the KB is refined by the revision process in order to increase the efficiency of the process by retracting the redundancies.

6.5 Conclusion

The concept of cognitive robotics is used to emulate the behaviour of human expertise expressed through the disassembly process. As a result, the system is able to handle the uncertainties from both the product and the process perspectives. The closed perception-action loop architecture is used to integrate this module with the vision system, the disassembly operation units, and the human user. Therefore, the CRA can interact with the external world according to the conditions sensed at the current state of disassembly. In the CRM, the CRA expresses its behaviour according to four cognitive functions incorporated with the KB. The CRA drives the system through the disassembly domain represented in three levels: 1) product structure, 2) disassembly operation plans, and 3) process parameters. In this research, this domain is constructed based on the case-study product, the LCD screen.


The behaviour consists of two levels: the basic and the advanced behaviours. For the basic behaviour, the rule-based reasoning schedules the actions in order to drive the disassembly process according to the incomplete knowledge of the external world and the existing KB. The execution monitoring measures the achievement of the component removal to indicate the state change. For the advanced behaviour, the learning stores the model-specific knowledge captured during the disassembly process for use when a previously seen model is encountered. Keeping the KB separate from the CRA keeps the structure of the CRA simple even after a large number of samples have been disassembled, which is more practical in an actual disassembly industry scenario. The revision continuously makes the process more efficient by revising the existing KB as disassembly of the same model is repeated. In addition, human assistance is given as a demonstration of the operations to help the CRA overcome complex circumstances and proceed to the subsequent state. These behaviours are formulated in an IndiGolog program and are preliminarily validated by demonstrating the process flows.

In conclusion, the developed CRA is able to handle the uncertainties in the disassembly of previously unseen models, and the efficiency of the process can be increased by repeating the disassembly of a model. For a previously unseen model (unknown model), the trial process is performed in order to achieve the goal state of disassembly. The CRA is expected to perform the process autonomously under the general conditions encountered in the majority of the processes. Human assistance intervenes in order to solve complex circumstances typically caused by inaccuracy of the visual detection and unsolvable physical uncertainties. From these activities, the CRA is able to generate the model-specific knowledge that will be used later on. Afterwards, when this model is encountered again (known model), the CRA is able to carry on the process more autonomously. The amount of human assistance also decreases according to the knowledge learned from the previous process. In addition, the efficiency in regard to time consumption tends to increase due to the revision process. However, in this chapter, only conceptual tests are conducted, regardless of the actual outcome from the other operating modules. The full system test is presented in Chapter 7.


7 PERFORMANCE TESTING

This chapter presents the performance testing of the entire proposed system. The content is divided into five sections. First, an overview of the experiment, including the key performance index (KPI) and the experiment procedures, is given in Section 7.1. Next, the experiment testing the flexibility of the system in dealing with various models of LCD screen is given in Section 7.2. The experiment testing the learning and revision strategy in regard to performance improvement is explained in Section 7.3. The life cycle assessment perspective of the system is described in Section 7.4. Finally, the conclusion of this chapter is given in Section 7.5.

7.1 Experiment overview

The experiments are designed to test the performance of the entire disassembly system, which is an integration of the three operating modules discussed in the previous chapters. The disassembly processes were performed on various models of the sample LCD screens. The performance is measured from two perspectives: 1) flexibility and 2) learning and revision. First, the flexibility to handle uncertainties expected to be encountered in a variety of models is tested. Second, the capability for learning and revision of the knowledge in the disassembly process is tested. In this section, the general experiment method is explained first, followed by both tests.

7.1.1 Key performance index

The performance of the system is measured by three KPIs, namely:

• Completeness of disassembly;
• Time consumption; and,
• The need for human assistance.


7.1.1.1 Completeness of disassembly

The disassembly outcome is considered complete if every main component is detached from every other. Effectiveness of the operation is typically measured from these key performance indices. At the detail level, the efficiency of the operation for detaching a component is measured by weight in comparison with the upstream condition (see Section 4.4.3). The efficiency is measured after the disassembly process has been carried out. In general, the Type-I structure can be measured at the end of a single run. In contrast, the Type-II structure is expected to be completed after the second run, so the efficiency is measured at that stage. Since the cut-off parts are mixed in the disposal container after being detached, the weight comparison is observed according to four groups of material: 1) plastic, 2) steel, 3) PCBs, and 4) compound components. The plastic group consists of the back cover and the front cover. The steel group consists of the PCB cover and the carrier. The PCB group consists of all types of PCB, and the compound component refers to the LCD module. An example is shown in Figure 7.1.

(a) Plastic (b) Steel (c) PCBs (d) LCD module

Figure 7.1: Detached components classified by type of material
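As a hypothetical illustration of this weight-based measure (the weights below are invented; only the resulting percentages follow the later steel-row figures of Table 7.4), the remaining and residue percentages could be computed as:

def weight_kpis(detached_weight_g, upstream_weight_g):
    # remaining: detached weight relative to the upstream (pre-cut) weight
    remaining = 100.0 * detached_weight_g / upstream_weight_g
    residue = 100.0 - remaining   # positive: the component lost weight
    return remaining, residue

print(weight_kpis(916.6, 1000.0))   # (91.66, 8.34), cf. the steel row of Table 7.4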

7.1.1.2 Time consumption

The time consumption of the process is measured from the beginning to the end of the process, excluding the lead time for loading and unloading the sample. However, a penalty time of 5 minutes for the second loading is taken into consideration in the second run of Type-II samples, when detaching the PCBs from the carrier, since this is regarded as an extra process. In general, the overall time consumption of one process is contributed by the three operating modules. On average, around 97% involves the physical operations (i.e. cutting operations, the routine for updating the grinder disc, and the flipping table). Visual sensing accounts for 1.5%, and artificial intelligence (AI) and data transfer activity for 1.5%. The time consumption is presented from two perspectives: 1) autonomous process and 2) human


assistance involved. The data belonging to each process is recorded in a log file generated by the CRA, with timestamps recorded to a resolution of 0.01 seconds.

7.1.1.3 Need for human assistance

The amount of human assistance needed is an indirect measurement of the level of autonomy, which is otherwise difficult to measure. The number of times a demonstration is given is counted; this count excludes the final deactivation at the end of the disassembly process. In regard to duration, the time spent is the time the user takes to make decisions, which can differ for every process. For consistency, an average of 5 seconds per assistance is assigned for decision-making. Therefore, the time consumption presented in this chapter is the sum of the decision-making time and the actual execution time.
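A sketch of this accounting convention, with illustrative names (not the thesis code):

DECISION_TIME_S = 5.0   # assigned average decision time per assistance

def total_time_s(execution_time_s, assistance_count):
    # total = actual execution time + fixed decision time per assistance
    return execution_time_s + DECISION_TIME_S * assistance_count

For instance, 32 assistances add 32 x 5 = 160 s, i.e. about 2.67 minutes of decision time, which is the average reported later in Section 7.2.3.2.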

7.1.2 Experiment setup and procedures

The processes are expected to be conducted autonomously by the cognitive robotic agent (CRA) the majority of the time; the human user is expected to be involved only in situations that are unsolvable by the CRA. The process can be divided into three phases: 1) pre-process, 2) process, and 3) post-process. The details are as follows.

First, in the pre-process, the sample is visually checked and manually loaded into the disassembly rig, which is located in an isolation room in order to protect the user from potential toxic exposure during the disassembly process. The sample is firmly placed on the flipping plate and its orientation is fixed by two suction cups and fixture elements. Afterwards, the disassembly process is activated by the user via the graphic user interface (GUI) operated on the local machine located in the next room. The model name must be specified by the user. From this point, the CRA starts controlling the disassembly rig to perform the process autonomously.

Second, during the process, the CRA schedules the actions according to the behaviour presented in Chapter 6. The messages generated according to the actions and fluents are stored in the data log with timestamps. Human assistance is given only when the CRA gets stuck and requests it. The assistance is given through the GUI, as much as needed according to the user’s decision. After that, the CRA continues the autonomous process until further assistance is needed or the process has reached the goal of finding the LCD module. During the process, the parts detached by the cutting operations are


dropped into the disposal container underneath, and are collected afterwards. Example steps of the disassembly are shown in Figure 7.2.

Finally, in the post-process, information is collected from three sources: 1) the data log, 2) images from the vision system, and 3) the knowledge base (KB). The remaining part on the fixture plate, i.e. the LCD module, is unloaded manually and the cut-off parts are collected. These parts are weighed and used for the analysis afterwards.

In the case of an incomplete disassembly, which potentially occurs with the Type-II structure, a second run is needed. Only the incompletely disassembled part (PCBs and carrier) is loaded on the fixture plate. The process is performed as previously described until the goal state has been reached; in this case, until all PCBs have been detached.

Description of the steps in Figure 7.2:

(a) Home position
(b) Updating the grinder disc size, visually detected above the camera
(c) Cutting through the cutting path according to the plan
(d) Sparks from cutting a steel part
(e) Flipping the fixture plate after every operation routine
(f) Detached component dropped into the disposal container
(g) The component was removed and the process entered the next state
(h) Goal state after all components had been removed; a part still attached by cables would be cut in the post-process

Figure 7.2: Snapshots of the disassembly process captured from the performance test

7.2 Flexibility testing

This section describes the testing of the flexibility and robustness of the system in handling uncertainties expected to be encountered in a variety of previously unseen models. The process corresponds to the behaviour of the CRA when handling unknown models. Regarding the testing samples, the experiments were done on 24 different selected


models from 12 manufacturers. The sizes varied from 15” to 19”, and the years of manufacture ranged over 1999-2011. The details of the models used as samples are presented in Appendix A. With respect to the structure type, 15 monitors of Type-I and 9 monitors of Type-II were used. The Type-I samples were further classified into 8 of Type-Ia and 7 of Type-Ib. The aforementioned KPIs were measured from the collected data. In addition to the system’s performance, the flexibility of the system is related to the performance of the vision system module and the effectiveness of the general disassembly operation plans. Their results are also presented in this section.

7.2.1 Vision system performance

The performance of the vision system for the ideal non-destructive disassembly is presented in Section 5.4. However, the performance changed marginally in the actual disassembly process using the (semi-)destructive approach. The result is shown in Table 7.1. It should be noted that the number of samples is smaller than in the ideal case, owing to the number of samples available to be disassembled by the actual process.

Detector          Recognition sensitivity (%)          Localisation (mm)
                  Actual test   Relative to ideal      Mean    S.D.   RMS    Min      Max
1 Back cover      100.00         0.00                  -2.85   1.89   3.41   -7.43     1.14
2 PCB cover        95.45        -4.55                  -0.28   2.27   2.27   -8.57     8.00
3 PCBs             89.47        -0.89                   2.38   5.91   6.35  -20.00    24.00
4 Carrier          60.00       -19.41                   1.76   4.32   4.64   -4.57    17.71
5 LCD module       n/a
6 Screw            60.20        -4.02                  error within ±0.5
7 State change     99.21(1)     +3.73                  n/a
NOTE: (1) overall accuracy

Table 7.1: Performance of the detector in destructive disassembly (actual case)

Due to the damage incurred by the main components, the recognition sensitivity and localisation changed marginally, case by case. Changes in the performance were caused by two main factors: 1) significant features necessary for detection were destroyed, and 2) remaining parts belonging to other components were misclassified as part of the underlying component. In regard to recognition, the sensitivity for the PCB cover and the screws decreased slightly, by less than 5%. The sensitivity for the carrier reduced by


19% due to significant damage that generally occurred in the process. For the localisation, the difference was insignificant, being within ±1 mm of the ideal case.

With respect to the state change detector, the accuracy increased by about 4% from the ideal case. This small difference is insignificant, since a much larger number of data points were considered in the actual process. The percentage is very high because a large number of the data points were true negatives (no state change), which occur far more commonly than true positives (state change).

7.2.2 General disassembly plan performance

The general disassembly plans were performed autonomously in the trial phase, especially for the unknown models. They are expected to remove the corresponding main components after each plan has been executed. The parameters were developed based on the assumption that the vision system can provide the detected location of a component as accurately as visual inspection conducted by the researcher. Therefore, fully automatic component removal was difficult to achieve, owing to the imperfect localisation accuracy shown in Table 7.1. The data collected during the disassembly of the 24 samples are summarised in Table 7.2.

Main component   Rate of success to remove the main component (%)
                 General plans conducted autonomously        Human assistance
                 Plan-0    Plan-1    Plan-2    Plan-3
Back cover       n/a       0         0         12.5          100
PCB cover        n/a       46.67     33.33     n/a           100
PCB              0         0         0         17.65         100
Carrier          n/a       16.67     n/a       n/a           100
LCD module       n/a

Table 7.2: Success rate of the plans for removing main components

Following the heuristic rules regarding the impact of the operation plans, the plans were executed in numerical order, followed by human assistance. Table 7.2 shows the success rate after a particular plan and its preceding plans had been executed. The percentages are calculated based on the number of times a particular plan was implemented. Overall, the success rates of the general plans for removal were quite low, but all components were able to be removed after assistance from the user. The failures of the


removals were caused by the aforementioned inaccurate localisation and by non-detectable connections. However, the unsuccessful operation plans indirectly contributed to the removal process by destroying the majority of the significant connections; the subsequent human assistance then finished the process.
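The escalation over plans described above can be sketched as follows; execute_plan, state_changed, and ask_human are illustrative stand-ins for the disassembly operation module, the vision-based execution monitoring, and the GUI demonstration, respectively, not the thesis code.

def remove_component(component, plans, execute_plan, state_changed, ask_human):
    # Try the general plans in numerical order of increasing impact.
    for plan in plans:
        execute_plan(component, plan)
        if state_changed(component):        # removal accomplished
            return plan
    # Fall back to user demonstration until the component is removed.
    while not state_changed(component):
        ask_human(component)
    return "human assistance"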

In addition, small values of the predefined depth constraint resulted in a higher failure rate, since the depth of cut was insufficient. Certain values, e.g. in the plan for the back cover, were intentionally set low to decrease the chance of multiple components falling down inappropriately. For example, if the back cover is cut too deep, the border of the carrier may be cut at this state, resulting in the carrier, PCB cover, and PCBs falling down together. Such a cluster of components remains fully attached and is harder to disassemble further, so this situation needs to be avoided if possible.

In the case of operation plan-1 for the PCB cover, the operation is designed to have a higher success rate, since it is crucial for the classification of the main structure. The success rate was 46.67%, higher than the other operations, because more trial steps with horizontal variation were implemented (see Section 6.4). The drawback was the extra time consumption, which was 150% of the time used in the normal trial process. This success rate contributes to the classification result, in accordance with the aforementioned visual classification.

7.2.3 Key performance index

The three aforementioned KPIs were measured to assess how the system handles the variation between models. Each KPI is explained as follows.

7.2.3.1 The completeness of disassembly

The completeness in the flexibility test is presented from two perspectives: 1) completeness according to the classification of the main structure and 2) efficiency according to the weight comparison of the detached components.

The classification of the main structure is shown in Table 7.3, where the ratios in the classification columns are the ratios of true positive samples to the number of all samples in a particular category; the sensitivity of the classification is thereby measured. From the experiment, the system was able to precisely distinguish Type-Ia from the others by using the vision system alone. The operation-based classification further distinguished


between Type-Ib and Type-II, achieving a sensitivity above 55%. The misclassification was directly related to the success rate of plan-1 for the PCB cover. Overall, without considering the subclasses, the sensitivity was 80% for Type-I and 55.55% for Type-II.

Structure type   Classification (vision system)      Classification (operation)   Sensitivity (%)
Type-Ia          8/8                                 n/a                          100.00
Type-Ib          16/16 (grouped as "to be           4/7                           57.14
Type-II          classified" by the operation)      5/9                           55.55
Overall sensitivity without subclasses: Type-I 80.00%, Type-II 55.55%

Table 7.3: Classification of the main structure

This classification indirectly determined the need for second runs for particular models. Ideally, only the Type-II structures would need a second run for removing the PCBs from the carrier, since the PCBs are inaccessible in the first run. However, in cases of misclassification as Type-Ib, human assistance could be given to detach the PCBs from the backside of the carrier by cutting around the area of the potential screws (see Section 6.4.1); the PCBs could then be detached in the first run. The advantage of this strategy is that it avoids reloading for a second run. Overall, approximately 70% of the samples were completed in the first run and the remaining 30% were completed after the second run. Within the first category, 40% of the samples followed this screw-cutting strategy, and the knowledge regarding the locations of the screws is learned when it is followed. For the models needing a second run in the first-time disassembly, the CRA requires the user to implement this strategy in subsequent processes.

From the last row of Table 7.4, the efficiency of the component detachment across the variety of models was 97.36% on average, with a standard deviation of 3.11%. The remaining and residue percentages are calculated from the total weight of all samples. This means that around 97.36% of the detached parts came off as lumps, while the remaining 2.64% was scrap material.

With respect to each group of material type, considering the entire product, the materials could be clearly separated from each other with more than 98% efficiency on average. By material group, the efficiency for plastic and for the compound material (LCD module) was more than 93%, and the efficiency for PCB and steel was


around 85%. According to visual inspection, the trend was similar to the conceptual test presented in Chapter 4. The residue represents the part that was cut off from the component; a positive residue means that the component lost weight. The remaining parts of the PCBs and the PCB cover were still attached to the carrier due to the offset of the cutting operation. The data are given in Appendix E.

Material    Remaining (%)   Residue (%)   Efficiency (%)
                                          Average   S.D.   Min     Max
Plastic     100.54          -0.54         96.12     6.61   70.47   99.84
PCB          86.39          13.61         85.43     9.79   52.50   98.49
Steel        91.66           8.34         86.51     8.55   63.22   97.98
Compound    101.79          -1.79         92.47     5.46   82.90   99.98
Product      97.36           2.64         97.36     3.11   89.00   99.90

Table 7.4: Outcome of the detached components

In addition to the efficiency, one of the critical issues in the disassembly of LCD screens is damage to the Cold Cathode Fluorescent Lamps (CCFLs) lying within the LCD module. In the experiment, minor damage occurred to the LCD module in most cases due to the cutting operation on the carrier. However, none of the cuts damaged the CCFLs, owing to the predefined constraints. Therefore, it can be ensured that no toxic substances leak from the CCFLs.

7.2.3.2 Time consumption

The duration of the process depends on the complexity and the size of the screen. The majority of the time is used for the physical operating routines, as stated in Section 7.1.1.2. Within one operating cycle, the physical operations can be divided into the following three routines (summed in the sketch after the list):

• The routine for the flipping table took 10.90 seconds, including the flipping operation (8.45 seconds) and checking the state change (2.45 seconds);
• The routine for updating the grinder’s length, including robot movement and visual checking, took 4.19 seconds; and,
• The cutting operations took 33.75 seconds on average, varying from 5.09 to 186.91 seconds (95% of the data were within 0 – 100 seconds). The variation depended on the size of the cutting path and on the variation of the cutting method trials.
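As a back-of-envelope check, a fully autonomous operation cycle runs all three routines, so one cycle costs roughly the sum of the averages listed above:

FLIP_S, GRINDER_UPDATE_S, CUT_AVG_S = 10.90, 4.19, 33.75
print(FLIP_S + GRINDER_UPDATE_S + CUT_AVG_S)   # 48.84 s per autonomous cycle on average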


In practice, the autonomous process included all three operations in each operation cycle in order to check the detachment of the component after cutting. In contrast, during a human assistance session the flipping table routine does not have to be performed in every cycle, since the detachment of the component can be judged visually by the user; it can be performed once at the end of the demonstrated custom operation sequence.

The time consumption for each model of Type-I and Type-II is shown in Figure 7.3, where the contributions of the autonomous process and of human assistance are shown individually (Figure 7.3 - Figure 7.5 are sorted by type and screen size; see Index* in Appendix A). From the results, the autonomous process performed 67% of the entire disassembly process. Overall, the time for the disassembly process ranged from 35 to 60 minutes for Type-I and from 32 to 55 minutes for Type-II, varying according to the complexity of the models. The average time was 49.37 minutes for Type-I and 46.18 minutes for Type-II (including the time penalty for operating the second run).

[Figure: time (mins) per model index, plotted separately for Type-I and Type-II, with the contributions Run1-auto, Run1-human, Run2-auto, and Run2-human shown individually.]

Figure 7.3: Time consumption of the disassembly process

Overall, without considering the types, the average time for disassembling an unknown model of LCD screen was around 48 minutes (see Appendix E). The contribution of each operation to the time consumption is shown in Figure 7.4. The details are as follows (a log-aggregation sketch follows the list):

• The vision system and AI activities took less than 3% of the time, which is insignificant;
• The flipping table routine was performed 47 times on average (range 31 - 65 times), resulting in 8.59 minutes on average;
• A time penalty of 5 minutes was added for reloading a sample for the second run;
• The cutting operation with the grinder checking was performed 58 times on average (range 31 - 79 times), with a time consumption of 45.85 seconds on average; and,
• The time for the human user to make a decision was 5 seconds per count; decisions occurred 32 times on average (range 12 - 56 times), resulting in 2.67 minutes on average.
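A breakdown such as Figure 7.4 could be aggregated from the CRA's timestamped data log along the following lines; the log format here is an assumption for illustration, not the thesis format.

from collections import defaultdict

def breakdown_minutes(log_entries):
    # log_entries: iterable of (operation_type, duration_seconds)
    totals = defaultdict(float)
    for op, seconds in log_entries:
        totals[op] += seconds / 60.0
    return dict(totals)

sample_log = [("FlipTable", 10.9)] * 47 + [("Cut", 45.85)] * 58
print(breakdown_minutes(sample_log))   # e.g. {'FlipTable': 8.54, 'Cut': 44.32}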

[Figure: time (mins) per model index for Type-I and Type-II, broken down by operation type: AI, Vision, FlipTable, Cut Operation, and human decision.]

Figure 7.4: Time consumption of the disassembly process by each operation

The processes were very long in comparison with traditional manual disassembly. However, the process for each individual model is expected to be further optimised by the learning and revision strategy. First, the redundant operations associated with the trial-and-error process, i.e. the flipping table routine, reloading for the second run, and human assistance, will be minimised; about 11 – 16 minutes on average should be regained. Second, the redundant cutting operations will be retracted, and the proper finding method will be learned, reducing the time spent on trial-and-error. Lastly, the incremental deepening cuts will be adjusted to make the process quicker; the time regained depends on the features that need to be cut in each case. From the results of the learning and revision experiment (Section 7.3), the time consumption is expected to be finally reduced to around 24 minutes.


7.2.3.3 Need for human assistance

Demonstration of primitive cutting paths, related to the operation level, accounted for more than 99.9% of the assistance given; less than 0.1% was needed to resolve component-level problems. The overall amount of human assistance for each model is shown in Figure 7.5, where Type-I and Type-II are shown separately. The counts for Type-I and Type-II were similar, ranging from 12 to 56 with approximately 32 occurrences on average. In practice, the greater number of counts for Type-II was due to the number of cutting operations demonstrated for the screw-cutting strategy and/or the second runs. See Appendix E for the data.

Figure 7.5: Human assistance count in the disassembly process

7.2.4 Summary

According to the experiment on flexibility, the system was able to handle uncertainties encountered in a variety of models.

• The performance of the vision system changed from the ideal undamaged condition (sensitivity generally decreased by 5%, localisation accuracy essentially unchanged), according to the damaged and leftover parts of the main components.
• Human assistance was needed in most cases to achieve the component removal, due to the low success rate of the general plans. Around 70% of the assistance was for deepening cuts that were too shallow to remove the component; the remaining 30% was for disestablishing non-detectable connections.
• The main structure was classified with a sensitivity of 80% for Type-I and 55.55% for Type-II. Misclassification was typically caused by an unsuccessful cutting operation used for distinguishing Type-Ib from Type-II. Cutting the PCB cover was unsuccessful when the cover was not a perfect rectangle as expected and/or the thin steel sheet of the cover was bent.
• The completeness of the detached main components was approximately 90%, with an efficiency of about 98%. The remaining parts of the PCBs and PCB cover stayed attached to the carrier.
• Time consumption was mainly caused by the physical operations. Approximately 67% of the duration of the process was conducted autonomously. The Type-II samples generally took longer than Type-I due to the complexity of the models and the extra process needed in the second run.
• Time consumption was around 48 minutes on average for both types. A number of redundant operations were performed, which are expected to be optimised by the learning and revision strategy.
• Human assistance was typically given to resolve problems at the operational level. More assistance, in terms of counts, was given for Type-II than for Type-I.

In conclusion, most of the required work was done autonomously by these general plans, but the removals were generally not accomplished straightaway. A number of the failure cases were caused by intentionally limiting the cutting depth in order to prevent multiple components from falling without disassembly. Human assistance was needed to resolve the uncertainties causing incompleteness at the end of a stage. Since the removals were achieved in every case, it can be concluded that disassembly is achievable by this approach, in which the vertical cutting direction was implemented. In addition, even though the operation was not fully automatic in these first encounters with unknown models, the process was learned and is expected to be carried out automatically later on. The disassembly of these previously unseen models took very long and needs to be optimised in order to compete with traditional manual disassembly.


7.3 Learning and revision testing

This experiment aims to test the capability of the system to learn, and then revise, the knowledge extracted from disassembly processes that have been completed previously. These two cognitive functions correspond to the advanced behaviour control presented in Section 6.3.2. The performance of the process is expected to improve through this behaviour. The three KPIs were measured in this experiment.

7.3.1 Experimental method

The testing samples were one model of Type-I and one model of Type-II. Models similar in size (17”) and year of manufacture were selected from the available samples. The experiment was repeated five times for each model to observe the trend in the KPIs. Given the revision strategy, five repetitions should be sufficient to exhibit the trend; the number of identical samples available was also a limiting factor.

[Figure: experiment order with respect to the revision. The 1st disassembly of the unknown model produces revDSP1; each subsequent disassembly of the now known model uses the set of knowledge produced by the previous disassembly (facts planDSP([model, revDSPi], ...) and planInKb([model, revDSPi], c1..ck, ...)) and produces the next revision, up to revDSP5 after the 5th disassembly.]

Figure 7.6: Experiment order due to the revision

Initially, in the 1st disassembly, the unknown model sample was disassembled. The process produced a set of knowledge, i.e. planDSP and planInKb facts, marked as revDSP1 for the current process. This knowledge was used in the subsequent disassembly. The experiment was conducted in this order until the 5th disassembly was finished (see Figure 7.6). In addition, for repeatability of the experiment, the test of five samples of one model was completed before changing to the other model. The sample could therefore be placed at an identical location, as constrained by the stationary fixture element, which reduces possible variation in the localisation of the initial product coordinates.


7.3.2 Key Performance Index

In this section, the behaviour and the three KPIs are explained in detail for each model. From the experimental results, the performance of the system regarding time consumption and the amount of human assistance improved dramatically during the first few revisions. It then remained almost constant, with small fluctuations caused by the variability in the vision system and the disassembly operation processes. The trends of the disassembly performance regarding time consumption and amount of human assistance over the revisions of both models are illustrated in Figure 7.7. The details for each model are as follows:

NOTE: (*) The operation in the 1st run resulted in an incomplete removal.

Figure 7.7: Disassembly performance with respect to multiple revisions.

7.3.2.1 Structure Type-I

According to the characteristics of the Type-I structure, the entire disassembly process is expected to be completed in a single run. In the experiment, this condition was satisfied from the first disassembly of this model. In the first disassembly (Rev-1), the overall time consumption was 47.9 minutes and human assistance was given 37 times. Around 70% of the assistance was given for deepening cuts that were not sufficient to remove the components; the rest was used to locate the non-detectable connections. The revisions were then carried out up to Rev-5. The time


consumption, in comparison with the first disassembly, decreased to about 89% in Rev-2 and to about 52% in Rev-3. Small fluctuations within 5% occurred between Rev-3 and Rev-5. A similar trend was seen for human assistance: the amount of assistance dropped dramatically to about 16% in Rev-2, remained at that level within a 3% fluctuation, and finally dropped to 0% at Rev-5. In summary, the performance improved as the process was revised. Eventually, in the final test, 25.7 minutes were spent and no further assistance was needed. In regard to the efficiency of the detached components, there was no significant difference among the revisions; the efficiency was approximately 98%.
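Reading "decreased to X%" as a fraction of the first disassembly's 47.9 minutes gives the following worked values, consistent with the final 25.7 minutes:

first_disassembly_min = 47.9
for rev, fraction in [("Rev-2", 0.89), ("Rev-3", 0.52)]:
    print(rev, round(first_disassembly_min * fraction, 1), "min")
# Rev-2 42.6 min, Rev-3 24.9 min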

7.3.2.2 Structure Type-II

As discussed, because of the inaccessibility of the PCBs under the carrier, a second run was needed to complete the disassembly. The tested sample needed the second run the first time (Rev-1) in order to complete the disassembly. However, this problem was resolved from the second disassembly (Rev-2) onwards, where only a single run was needed, since the strategy of cutting the screws from the backside had been applied. The outcome of the incompletely detached carrier and PCBs, and the outcome after performing the second run, are shown in Figure 7.8.

[Figure: before the second run, the PCBs and carrier are incompletely detached; after the second run, the PCBs and the carrier are completely detached, with the cut-off steel parts separated from the carrier.]

Figure 7.8: Incompletely detached carrier and PCBs and the second run

In the first revision, the total time consumption (the sum of the single run, the second run, and the reload penalty) was 28.1 + 22.0 + 5.0 = 55.1 minutes. The total amount of human


assistance in the first and second runs was 8 + 30 = 38 times. As in the Type-I case, most of the assistance was given for deepening the cuts of unsuccessful removals. In Rev-2, the CRA knew from the first revision that multiple runs were required; consequently, it asked for the user’s assistance to resolve this complexity by employing the screw-cutting strategy. As a result, the time consumption decreased to 74.5%, since this strategy is more efficient than multiple runs. However, more steps of human assistance were required, increasing to 107.9% of the first revision. Compared with the first revision, the problem of too-shallow cutting was almost completely resolved from Rev-1; more than 90% of the assistance was given for the screw-cutting strategy where the screws were non-detectable. The knowledge obtained in this revision satisfied the single-run requirement, which is more effective. In Rev-3, both values decreased dramatically: the time consumption decreased to 34.5% and the human assistance to 2.6% of the first revision. Both values increased marginally, by about 10%, in Rev-4 due to the uncertainties of the system, and remained at about this level until the end of Rev-5. Eventually, at the final revision, the time consumption was 25.1 minutes (45.5% of the first revision) and no human assistance was required (0 times). In regard to the efficiency of the detached components, there was no significant difference among the revisions; the efficiency was around 90%. The disassembly outcome is shown in Figure 7.1.

7.3.3 Uncertainties in process

From the results, fluctuations corresponding to a decrease in performance were noticed in some revisions, e.g. Type-I Rev-4 to Rev-5 and Type-II Rev-3 to Rev-4. These fluctuations were caused by uncertainties in 1) the vision system and 2) the disassembly operation.

First, the uncertainty in the vision system is caused by inaccuracy of measureZF, which is used to locate the top surface where the cutting starts. Given the precision of measureZF presented in Section 5.4, the starting cutting level lies within upper and lower bounds approximately ±3 mm about the actual top surface. This inaccuracy can cause extra deepening cutting iterations to be needed, as illustrated in Figure 7.9. The CRA cuts the object iteratively from the start point, incrementally deepening until reaching the destination depth zdst learned from the previous operation. For example, one extra cut is generated when the sensed top surface is too


high, as shown in Figure 7.9b. Because the variation stays within this boundary, the expected variation in time consumption is also bounded.

[Figure: incremental cuts from the starting level down to zdst for (a) the ideal case, (b) a measured z that is too high (upper bound), and (c) a measured z that is too low (lower bound).]

Figure 7.9: Uncertainties due to the variation of the starting level for cutting
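The effect of the measureZF error on the number of deepening iterations can be sketched as follows; the 3 mm increment is an assumed value for illustration, chosen to match the ±3 mm sensing bound.

import math

def deepening_cuts(z_sensed_top_mm, z_dst_mm, increment_mm=3.0):
    # number of incremental cuts from the sensed top surface down to z_dst
    return math.ceil((z_sensed_top_mm - z_dst_mm) / increment_mm)

z_top, z_dst = 0.0, -12.0                    # illustrative product coordinates
print(deepening_cuts(z_top, z_dst))          # 4 cuts in the ideal case
print(deepening_cuts(z_top + 3.0, z_dst))    # 5 cuts if measured too high
print(deepening_cuts(z_top - 3.0, z_dst))    # 3 cuts if measured too low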

Second, the uncertainty from the disassembly operation involves imperfectly repeated cuts: an identical action that successfully removed the main component in a previous process may fail in subsequent processes. The problems are caused by factors such as 1) the non-uniform wear rate of the abrasive cutter, resulting in minor differences in the cutting destination, and 2) uncertain physical characteristics of the object after cutting due to different process parameters; for instance, plastic melts and sticks to other components when a high feed speed and a large cutting depth are used. Extra human assistance is then needed to resolve these minor uncertainties, with extra time consumption as a consequence.

7.3.4 Summary

According to the experiment for learning and revision, the system was able to improve its performance. The result is summarised as follows:

• Regarding the completeness of disassembly, Type-I achieved success with a single run. Type-II required the second run only for the first disassembly; afterwards, the screw-cutting strategy taught through human assistance allowed complete disassembly with a single run;
• The efficiency according to the weight comparison was more than 90% for both types;
• Time consumption decreased by about 50% after the third revision and remained at that level, with small fluctuations, for both types. The time reduced from 47.9 to 25.7 minutes for Type-I and from 55.1 to 25.1 minutes for Type-II;
• A significant amount of human assistance was needed only in the first revision for Type-I, and in the first two revisions for Type-II, to resolve the second-run issue by incorporating the screw-cutting strategy;
• No or minimal human assistance was required after the initial significant assistance had been given; and,
• Minor increases in time and human assistance after reaching the lowest values were caused by uncertainties in the vision system and the cutting process.

In conclusion, the performance of the system improved to a certain level as more disassembly processes were conducted, in accordance with the implementation of learning and revision. According to the trend exhibited in the experimental results, the time consumption of the disassembly process decreased dramatically to a certain level after the first few revisions. In addition, the process was expected to be performed autonomously, with no or only minimal human assistance required, once the revised KB had stabilised after the first few revisions. Minor uncertainties in the process can cause the performance to fluctuate within a small boundary, which is expected to be further suppressed by improving the accuracy of each individual operating module.

In comparison with traditional manual disassembly, the time consumption needs to be further reduced to 6.2 minutes/screen on average (Kernbaum et al. 2009), so the process must be further optimised. Possible improvements are as follows. First, the disassembly time could be reduced to around 9 – 11 minutes if the cutting tool could approach the destination depth directly in one operation cycle instead of by incremental deepening cuts. Second, all hardware movements should be sped up, e.g. feed speed, robot movement, and the flipping table. However, more reliable and powerful hardware is needed to improve both aspects and overcome this benchmark.

7.4 Life cycle assessment (LCA) perspective

This section considers the practical implementation of the proposed disassembly process from the LCA perspective: the disassembly cost, toxicity, and EOL treatment are described.

7.4.1 Disassembly cost

The cost constraint in disassembly is closely linked to the time consumption. From the experiments, the proposed system is expected to disassemble any unknown model of


LCD screen in about 50 minutes with some human assistance involved. The process improves after learning and revision: the time consumption of the optimised process for an individual model is expected to be halved, to around 25 minutes, with the system fully automatic in that case. However, the system in this research is only a prototype, for which further improvement is essential in order to compete with manual disassembly, which takes 6.2 minutes on average. Eventually, by optimising the hardware and process parameters as discussed in the previous section, there is high potential for the system to achieve the manual time consumption benchmark in a more economical way. However, since this system is at a prototype stage and was tested under a controlled environment, the actual cost in an industrial scenario cannot be estimated precisely at this moment (see the preliminary estimation in Appendix E).

7.4.2 Toxicity

In regard to toxicity, hazardous material results from cutting the components of an LCD screen; the materials are listed in (Franke et al. 2006). The major hazardous material in the proposed process is commonly found in three forms: 1) fumes from halogen-free plastic, 2) dust from cutting various materials, and 3) dust from the abrasive cut-off disc. In addition, a small amount of mercury can contaminate the environment if a CCFL is broken. However, in the experimental setup, these hazards were addressed by the following strategies:

• The selected dust extractor can filter dust and smoke down to 0.4 micron;
• The disassembly rig is located in an isolated room, and the user controls it and gives demonstrations remotely via the GUI; and,
• Cutting of the CCFLs and the LCD glass is avoided by the proposed operation plans.

Therefore, toxic substances are discharged in the form of dust within a confined space where cleaning can be carried out; the possibility of toxic exposure to the human user and the environment is minimal. Currently, the user can still be exposed to leftover hazardous substances while loading and unloading the sample, which is done manually; however, this process is expected to be automated in the future.

7.4.3 Disassembly for recycling

In regard to the disassembly of EOL products, the proposed disassembly process is able to satisfy the requirements of recycling, for which the components may be damaged. The majority of


the components in this process turned out as lumps, according to the cutting efficiency results in Section 7.2.3. Therefore, disassembly has an advantage over shredding in this respect.

7.5 Conclusion

The validation of the methodology of the entire system has been explained in this chapter. The performance is measured by three KPIs, namely completeness of disassembly, time consumption, and need for human assistance. The tests were carried out in two experiments.

First, the flexibility test aims to validate the capability of the system to deal with the uncertainties found in a variety of models of LCD screen; in this case, 24 different models were disassembled. Overall, the operation for Type-I samples was simpler than for Type-II due to the accessibility issue. The autonomous process performed the majority of the essential cutting operations, but human assistance was still needed to finish up in most cases. Regarding the time consumption, the process was quite long, around 48 minutes on average, due to the trial-and-error process, which resulted in numerous redundant operations. However, these operations are expected to be optimised later by the learning and revision; the time is then expected to decrease and human assistance to become unnecessary.

Second, the learning and revision test was conducted to validate that the performance of the system can be improved by these two cognitive functions. The test was done on a selected model from each type. The disassembly was repeated for each model, and the CRA kept acquiring and revising the knowledge. The performance increased and reached stability within the first few revisions. Eventually, the system could operate autonomously without human intervention, with the time significantly reduced from the initial process: the time reduced by about 50%, and the process took around 25 minutes.

Lastly, the LCA perspective of the system was explained. In terms of disassembly cost, the system is still unable to beat manual disassembly in time spent. By conducting (semi-)destructive disassembly, a higher level of toxic substances is released; however, this can be controlled within a closed space, preventing humans from being exposed to the hazardous substances, in this case the mercury in the CCFLs. Destructive disassembly can achieve a high success rate of disassembly while the outcome satisfies the desired EOL treatment option, which is recycling.


8 CONCLUSION

This chapter presents the final conclusions of this research. It is organised as follows. First, a comprehensive summary and the findings of each operating module are given in Section 8.1. Second, the major research contributions of this thesis, and a discussion with respect to the proposed scope and objectives, are given in Section 8.2. Lastly, future work is discussed in Section 8.3.

8.1 Summary and findings of each module

In this research, the principles of cognitive robotics are implemented in disassembly automation in order to resolve the problems regarding uncertainties in the products and the process. The behaviour expressed by a human expert in a manual disassembly process is emulated by the cognitive functions, which are used to control the behaviour of the system. The cognitive robotic module (CRM) performs the disassembly process through the vision system module (VSM) and the disassembly operation unit module (DOM). The system also interacts with humans in order to resolve complex problems. These three operating modules are comprehensively summarised and discussed in this section.

8.1.1 Disassembly operation module

The DOM deals with the physical operations implemented in the disassembly process of an LCD screen. The uncertainty in the End-of-Life (EOL) product condition and the details of the disassembly operation are addressed. The observation was done on 37 different models of LCD screen, ranging from 15” to 19” and manufactured in 1999 - 2011 by 15 manufacturers. Under the selective disassembly approach used, the product consists of 6 types of main component and 3 types of connective component. A liaison diagram is used to represent each sample’s detailed structure, which is unique to each model. The hardware and the operation plans are designed to be cost effective and reliable by considering these variations. The significant aspects of the DOM are summarised as follows.


8.1.1.1 Product analysis

A broad classification scheme for the main structure of LCD screens is proposed in this research to eliminate the need for prior knowledge of the specific product structure. The main structure is classified into two types, Type-I and Type-II, according to the arrangement of the main components: the PCB cover, the PCBs, and the carrier. The main structure needs to be identified, since the different structures lead to different execution routes of disassembly with respect to the accessibility of the components. The disassembly process was found to be more complex for the Type-II samples, where the PCBs and a number of connective components are non-detectable and inaccessible. In the actual operations, given the structure definition, the cognitive robotic agent (CRA) can autonomously identify the structure type by using visual recognition and the execution outcome. As a result, the CRA can decide the execution route during the process without being supplied specific information regarding the product structure.

8.1.1.2 Hardware design

First, a small 6-DOF industrial robot arm equipped with an angle grinder is used to perform the cutting operations for the (semi-)destructive disassembly; the need for tool changes has been eliminated. According to the experiments, the grinder operated in the upright orientation is able to achieve all operations. However, the small robot’s workspace limits the accessibility of certain desired cutting orientations and positions, reducing the effectiveness of the operation plans in certain cases. For example, when the cut-off disc cannot be reoriented perpendicular to the desired cutting plane, the cutting target may be missed due to the disc bending and excessive side friction. MotionSupervision, a facility of the robot using the built-in force-torque sensor, may consider this excessive external force a collision. As a result, alternative cutting methods are tried, which leads to extra time consumption and damage to the sample.

Second, the FlippingTable was developed for removing the detached parts and components. This device eliminates the need for a flexible gripper, which is expensive in terms of hardware cost and computational resources. The device holds the sample on the LCD module side, which successfully protects it from damage by the cutting operations. This fixture system worked effectively for every model during the experiments. However, a problem occurs when a component is incompletely detached; for example, the carrier may hang from the LCD module by hidden attached cables. This is commonly found

243

Chapter 8 – Conclusion in Type-II rather than Type-I. This problem can be addressed by human assistance in order to cut the remaining connections. The extra cuts will be learned and implemented in the later process.

8.1.1.3 Operation plans and process parameters

General operation plans for each type of component, based on statistical information about the possible locations of the corresponding connective components, are proposed in this research. Instead of destroying the connective components (the semi-destructive approach), the operations tend to cut near the border of the main component (the destructive approach) to detach its major part. A major benefit is to compensate for the limitation of the vision system regarding non-detectable connective components, e.g. hidden snap-fits around a back cover, or screws and cables around a PCB. The limitation is that it is effective only for connective components lying along the border of the main component; extra cuts given by human assistance are needed for disestablishing special connections in the middle area, e.g. plastic rivets on PCBs. In addition, this strategy is suitable only for the recycling purpose, since the component is partially damaged.

The critical conditions of the operation plans and process parameters that lead to the main component removal are obtained by a trial-and-error process conducted by the CRA. The plans have different levels of success rate for removal, directly related to the impact of the damage on the components. The process parameters, i.e. cutting location and cutting method, are tried under the predefined constraints. By defining only the broad operation scheme, the CRA is able to autonomously find the set of plans and parameters that leads to successful disassembly. The trial-and-error process to identify the depth of cut is essential, since the cutting target is an unobservable location underneath the xy-cutting path on the surface. The depth constraint is assigned according to approximate values obtained from the preliminarily observed samples. In addition, the ability to sense collisions gives more chances to solve the accessibility issue.

From the experiments, these general plans contribute to the removal by cutting the majority of the connections autonomously. However, the component was sometimes not detached, due to an inaccurate xy-cutting path location and a too-shallow depth of cut. Identifying a sufficient cutting depth is very problematic, since cutting too deep potentially damages other components underneath. The possible threats are, for example, disassembly failure because a group of connected components falls down together, and breaking the cold-cathode


fluorescent lamps (CCFLs) in the LCD module. Therefore, the depth constraint is assigned to be quite shallow, which results in a safe cut but with a lower success rate as a drawback. Eventually, primitive cutting operations are given by the user to resolve the unsuccessful detachment, and the CRA learns these for each individual model. As a future improvement, the effective xy-cutting location is expected to be identified by improving the visual localisation accuracy, and the depth of cut could be identified accurately if the component underneath could be sensed; hence, a force sensor should be incorporated.

8.1.2 Vision system module

The VSM is the main sensing facility of the system. The uncertainties in the physical appearance, quantity, and location of the components are addressed. Sensing is essential for perceiving information that is revealed during the disassembly process. The detection (recognition and localisation) is based on colour images and depth images. This module performs detection of the components, detection of state change, and other utilities in the disassembly process.

8.1.2.1 Hardware capability and calibration

The experiments were conducted under a controlled lighting environment, which significantly reduces the complexity of the visual detection algorithms by eliminating ambient uncertainties. The camera system consists of two cameras: a high-resolution colour camera and a medium-resolution depth camera. For the calibration process, the 2.5D colour-depth map is constructed by using an affine transformation to align both images. The resolution is 0.57 mm/pixel in the x-y directions and 4.39 mm/bit in the z direction. The selected depth camera (MS Kinect) is more cost effective, faster in processing, and lower in computational resource requirements in comparison with other 2.5-3D positioning techniques. However, a limitation in accuracy results from the infrared-based sensing technique: data loss occurs at reflective surfaces perpendicular to the infrared emitting direction, and the accuracy reduces at the edges of objects. These problems are partly eliminated by a filtering algorithm that disregards the irrelevant data, and proper cutting offsets in the operation plans are also taken into account.
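A minimal sketch of the alignment step with OpenCV, assuming three manually picked corresponding points; the control points and frame sizes below are placeholders, not the thesis calibration data.

import cv2
import numpy as np

src = np.float32([[100, 100], [400, 120], [120, 380]])   # depth-image pixels
dst = np.float32([[160, 150], [620, 180], [190, 560]])   # colour-image pixels
M = cv2.getAffineTransform(src, dst)                     # 2x3 affine matrix

depth = np.zeros((480, 640), np.uint16)                  # stand-in depth frame
aligned = cv2.warpAffine(depth, M, (1920, 1080))         # depth layer of the 2.5D map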

8.1.2.2 Detection of the components

The component detectors are a major contribution of the vision system module in this research. Rule-based recognition of the components using the concept of common


features is developed in this research. The predefined rules are developed according to the common physical appearances of each type of component, observed from the aforementioned 37 models. The developed algorithms are able to recognise the type of a component and its location, which are supplied to the CRA. The key benefit of this method is its flexibility to the variations and uncertainties in physical appearance that may be found in LCD screens. However, the limitation of the component detector is the possibility of misclassification among the types of components if the related rules are too broad. This can be reduced by selectively running a detector only in the disassembly states that potentially contain the particular component.
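As an illustration of a common-feature rule, a PCB candidate might be detected as a sufficiently large green region; the hue range and area threshold below are assumptions standing in for the thesis's actual rules.

import cv2

def detect_pcb_candidates(bgr_image, min_area_px=5000):
    # Green solder-mask regions above a minimum area are PCB candidates.
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (35, 60, 40), (85, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area_px]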

From the experimental results, the recognition sensitivity was over 90% for all main components except the carrier. The carrier had 60% sensitivity since the major part of the carrier was cut off during the state of treating the PCB cover in Type-II samples. The precision of localisation was approximately within 6 mm. For screw detection, the detection rate was 60% and the localisation precision was within ±0.5 mm. However, the detection performance can vary according to the damage condition of the component.

8.1.2.3 Detection of state change

The detection of state change is used for determining the accomplishment of the removal of a main component, which is part of execution monitoring. None of the reviewed prior research works has this execution monitoring capability. The measurement is designed to support destructive disassembly, where the removal is considered achieved when a significant part of the component (not the entire component) has been removed. The detector was developed based on similarity measurements of both colour and depth images over the area belonging to the component to be removed. This algorithm achieves a detection rate higher than 95%. A limitation is that false detection occurs when the height of the component is lower than the depth resolution, e.g. a PCB.
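One way to phrase such a similarity test is sketched below; the weights and threshold are illustrative, not the thesis's exact formulation.

    S = w_c\,\rho(C_{\mathrm{before}}, C_{\mathrm{after}})
      + w_d\,\rho(D_{\mathrm{before}}, D_{\mathrm{after}}),
    \qquad w_c + w_d = 1

where \rho is a normalised similarity (e.g. correlation) computed over the region of the component to be removed, C and D are the colour and depth images before and after the operation, and a state change is declared when S falls below a threshold S_{th}.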

8.1.2.4 Other utilities

The vision system serves three other functions needed by the CRA. First, model detection is a classifier used to match the sample against the previously seen models stored in the knowledge base (KB). It is based on the Speeded-Up Robust Features (SURF) of the back cover, and the detector achieves higher than 95% accuracy. Second, the vertical distance-z at a corresponding location-xy is measured with a precision of 2.84 mm. This function is needed to acknowledge the current location of the component when disassembling a known model. Third, the size of the grinder disc needs to be measured, since the disc is continuously worn during disassembly. The disc size determines the actual location of the cutting path, which needs to be precisely learned as one of the process parameters. The precision of this measurement is 1.3 mm.
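A sketch of how the measured disc size could feed back into the process parameters is given below; the predicate names are invented for illustration.

    % Keep the most recent disc measurement in the KB.
    :- dynamic disc_diameter_mm/1.

    update_disc_diameter(Measured) :-
        retractall(disc_diameter_mm(_)),
        assertz(disc_diameter_mm(Measured)).

    % The effective cutting path is offset by the current disc radius.
    cutting_path_offset(Offset) :-
        disc_diameter_mm(D),
        Offset is D / 2.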

8.1.3 Cognitive robotics module

The CRM controls the system using four cognitive functions: 1) rule-based heuristic reasoning, 2) execution monitoring, 3) learning, and 4) revision. The uncertainties regarding product structure and process are addressed by this module. The CRM comprises the CRA and the knowledge base (KB): the CRA controls the behaviour of the system, and the KB contains the model-specific knowledge obtained from previous disassembly processes. The key benefit of using cognitive robotics in this research is the ability to make decisions throughout the search space by taking the actual execution outcome into account. The system is also able to improve from previous experience. These features give the system the flexibility to deal with various models of products by addressing the uncertainties at both the planning and operational levels, which have been a hindrance in the reviewed existing research works.

8.1.3.1 Architecture and language platform

The cognitive robotic architecture is based on the closed perception-action loop, which expresses the key features of the behaviour, i.e. perception, action, reasoning, learning, planning, behaviour control, and human interaction. The CRA is programmed in the action programming language IndiGolog, which is based on the situation calculus. The key benefit to this research is online execution, which supports sensing and exogenous actions and thereby allows the CRA to respond effectively to the external dynamic world. The Golog family of languages also benefits the development process, since the behaviour of the system can be clearly described by actions, preconditions, and effects. The KB is separated from the CRA and is modelled in Prolog. The CRA interacts with the VSM via sensing actions, with the DOM via primitive actions, and with human assistance via exogenous actions. The parameters and variables are represented in the form of fluents.
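In the Prolog embedding used by IndiGolog interpreters, such a description might look like the sketch below; the action, fluent, and condition names are invented for illustration rather than taken from the thesis's program.

    prim_action(cut(Comp, Depth)).      % primitive action executed by the DOM
    exog_action(userAssist(Plan)).      % exogenous action raised by the human user
    prim_fluent(removed(Comp)).         % fluent describing the disassembly state
    senses(checkStateChange(Comp), removed(Comp)).  % sensing action served by the VSM

    % Precondition axiom: a component may only be cut while still attached.
    poss(cut(Comp, _Depth), neg(removed(Comp))).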

8.1.3.2 Reasoning and execution monitoring

The CRA schedules the actions by reasoning about the current condition of the disassembly state, the disassembly domain, and the execution outcome. In this research, the proposed disassembly domain represents: 1) the broad structure definitions, 2) the broad operation plans, and 3) the process parameters with constraints. The existing model-specific knowledge is also taken into account when disassembling known models. In addition, the CRA can decide to switch to user assistance when the autonomous operations have failed too many times.

For an unknown model, the key feature of reasoning is to select the operation plans and parameters according to the current main component. These are treated as choice points to be pruned, along with the two main structure definitions. The input is obtained from the component detectors and from the execution outcome, which is determined by the execution monitoring that examines the change of disassembly state. This input is used in the trial-and-error process to find the critical plans and parameters that lead to the state change. As a result, this eliminates the need for a disassembly sequence plan (DSP) and a disassembly process plan (DPP) to be supplied a priori; the CRA also learns the generated DSP and DPP. This likewise addresses the uncertainties due to variation in the quantity of components, e.g. the number of PCBs. In the case of a known model, as recognised by the model detector, reasoning is used to execute the operations according to the previously learned knowledge. The sensing input regarding the component type and location is less significant in this process, since the information is already known.
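As a sketch, the trial-and-error behaviour can be written as an IndiGolog procedure; the plan and test names below are invented for illustration.

    % Keep choosing untried operation plans until the monitored state changes.
    proc(removeComponent(Comp),
         while(neg(removed(Comp)),
               pi(p, [ ?(untriedPlan(Comp, p)),       % choice point over plans/parameters
                       executePlan(p),                % primitive cutting operations via the DOM
                       checkStateChange(Comp) ]))).   % sensing via the VSM updates removed(Comp)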

The CRA performs the disassembly according to the order of the states defined by the given main structure. These predefined structures benefit the reliability of the process by reducing the effects of misclassifying the main component when components are damaged in the actual experiments. These effects are infinite execution loops, redundant physical damage, and time consumption. Redundant damage needs to be minimised, since it is irreversible and complicates the learning process. However, a major drawback of the given broad structures arises if the product structure changes significantly from the given definition; the decision rules for selecting the proper operation plans would then need to be expanded. This situation may happen in other product families whose structures are more complex, but it has not been found in the observed case-study products.

8.1.3.3 Knowledge base

The KB contains model-specific knowledge with respect to the product details and the disassembly processes of the LCD screens that have been disassembled. The KB keeps the critical values of the selected significant parameters in the form of Prolog facts, which the CRA uses to reproduce the same cuts in subsequent disassemblies of those models. The KB is continuously revised when the same model is repeatedly disassembled.
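A few facts of the kind the KB might hold are sketched below; the predicate names, coordinates, and depths are purely illustrative (the model name merely echoes one of the case-study samples).

    model(samsung_943bw).
    critical_param(samsung_943bw, back_cover, cut_depth_mm, 8).
    critical_param(samsung_943bw, pcb_cover, cut_depth_mm, 6).
    custom_plan(samsung_943bw, pcb,
                [ moveTo(312, 204), plungeCut(12), lineCut(312, 260) ]).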

In this research, a different learning strategy using Golog is proposed: the knowledge is learned into the separate KB instead of generalising the rules and modifying the CRA's Golog program. A major advantage of the separate KB is that the size and complexity of the Golog program remain the same after learning from a large number of disassembled samples, which is expected to happen in an actual disassembly industry scenario. The model-specific knowledge is also version-managed and can be merged with KBs created by other CRAs. In addition, since the specific features are arbitrary, there is no explicit relation to other properties of the parent main component, e.g. no relation between the location of a non-detectable plastic rivet in the central area of a PCB and the location of the PCB itself. The rules cannot be generalised in such cases. Therefore, the model-specific knowledge is more suitable, since it can accurately provide the location and operation for treating those features.

8.1.3.4 Learning

Learning occurs in two forms. First, in learning by reasoning, the CRA learns the parameters of the predefined general operation plans that were executed prior to the successful component removal. The critical values of all executed operations need to be recorded even if the state change did not occur straightaway, since some cuts may contribute passively to the detachment. Second, in learning by demonstration, the CRA learns from the human assistance given to overcome unresolved problems. The assistance is used to change an original belief caused by inaccurate visual detection, e.g. a mistaken belief about the existence of a main component or about the state change. In addition, assistance is given in the form of additional sequences of primitive cutting operations (custom plans). These are used to disestablish the remaining connections that are non-detectable or need deeper cuts. From the experiments, most of the assistance deepened the existing cutting paths initially cut by the general operation plans, as discussed in the DOM section.
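Both forms reduce to updates of the separate KB, as sketched below with invented predicate names.

    % Learning by reasoning: record every executed plan with its critical value,
    % even when the state change did not follow immediately.
    learn_executed(Model, Comp, Plan, Value) :-
        assertz(executed_plan(Model, Comp, Plan, Value)).

    % Learning by demonstration: store the user's custom cutting sequence.
    learn_demonstrated(Model, Comp, Ops) :-
        assertz(custom_plan(Model, Comp, Ops)).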

A major benefit of learning is the reduced need for human assistance when disassembling a known model. The time consumption is also marginally reduced by skipping some redundant steps, e.g. flipping the table and visual sensing. A disadvantage of this strategy is that the knowledge cannot be adapted between different models. Therefore, the specific information may need to be supplied by the human user for each individual model the first time it is disassembled. However, the experimental results prove that the CRA is able to disassemble a previously unseen model autonomously afterwards.

8.1.3.5 Revision

The revision process optimises the disassembly of a known model by retracting the redundant general operation plans that were learned previously. The redundant operations, which do not contribute to the removal of the main component, are found by executing the operation plans in reversed order (from high-impact to low-impact plans). The recorded add-on custom operations are performed first, since they are considered very precise and of high impact on the component removal. It has been proved that this process improves the process efficiency in terms of time consumption and the need for human assistance. From the experimental results, the time consumption was reduced by more than 50%, and the process could be carried out without human assistance after the first few revisions. However, small fluctuations remained due to uncertainties in the visual localisation and the physical operation.

A limitation of this strategy is that retraction is applied only to the general operation plans, which are the ones likely to be redundant. The add-on custom operations are not retracted, on the assumption that they are always significant, since the decision is carefully made by the human user.
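The retraction step can be pictured as a failure-driven loop over the learned general plans; the test predicate below stands in for the reversed-order replay and is invented for illustration.

    % Drop every learned general plan that the replay shows to be redundant.
    revise(Model, Comp) :-
        executed_plan(Model, Comp, Plan, Value),
        \+ contributes_to_removal(Model, Comp, Plan),
        retract(executed_plan(Model, Comp, Plan, Value)),
        fail.                       % backtrack over all remaining candidates
    revise(_, _).                   % succeed once all plans have been examined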

8.2 Conclusion and discussion

This section presents the outcomes and achievements of this research according to the objectives stated in the scope of research in Chapter 1 and the research gaps identified in Chapter 2. The major research contributions of this thesis are as follows.

8.2.1 Flexibility to deal with uncertainties

The system has been proved to be flexible enough to deal with various models of LCD screens without specific information being supplied. Only broad schemes of the main product structure and of the operations for the product family need to be specified a priori. The uncertainties are typically addressed autonomously by the integration of the operating modules, with human assistance involved in unresolvable cases. The primary uncertainties stated in Table 3.1 are discussed as follows.

First, the uncertainties in EOL condition are addressed by the DOM. These uncertainties concern conditions that cannot be observed by the vision system. The general operation plans cut the main component at estimated locations that are expected to disestablish the connections. MotionSupervision is also incorporated to acknowledge collisions and find an alternative cutting method to achieve the operation.

Second, the variety in the supplied products concerns the product structure and the detailed properties of the components. Components with variation in their properties, i.e. physical appearance, quantity, and location, are effectively addressed by the proposed component detectors. The main structure of each sample can be identified by the CRA from the given broad structure and the conditions observed during the disassembly process. Therefore, it can be concluded that the success in addressing these uncertainties depends on the performance of the vision system.

Third, the complexity in process planning and operations is typically addressed by the CRA, which goes through the states of disassembly using the basic behaviour functions, namely reasoning and execution monitoring. An effective sequence plan, operation plans, and process parameters can be obtained by the trial-and-error process; no further specific information needs to be supplied. Regarding non-detectable objects, the DOM addresses the uncertainties in the same way as the uncertainties in EOL condition. Therefore, the success in addressing these uncertainties depends on the accuracy of the predefined schemes and the constraints on the process parameters. In addition, the uncertainties in the disassembly rig are treated as process parameters that can also be compensated by the trial-and-error strategy or by human assistance.

From the validation on numerous different models of LCD screens, the general uncertainties at both the planning and operational levels can be addressed by these strategies. However, the uncertainties regarding the operation are usually problematic, and human assistance becomes unavoidable. The success rate of the automatic operation can be increased if less strict constraints are assigned, e.g. a deeper maximum cutting depth. The trial-and-error process then has a higher potential to complete the task, but increased time consumption is the drawback. Nevertheless, the number of these uncertainties and the need for human assistance are expected to reduce significantly after the model-specific knowledge has been learned from the first disassembly.

In conclusion, this strategy allows the system to handle any model in a product family flexibly by supplying only broad schemes of the process. It successfully addresses the problem in the existing research works that detailed information regarding a particular model needs to be supplied. In particular, the broad general operation plans for treating non-detectable connections and the execution monitoring for determining the success of the operation have not been found in existing research works.

8.2.2 Performance improvement by learning and revision

It has been proved that learning and revision are able to improve the process performance for previously seen models. During disassembly, learning occurs autonomously from the knowledge obtained during the automatic operation as well as from the user input. This knowledge concerns the product and the process, and is comparable to the DSP and DPP. It is used and revised by the system in later processes. The improvement according to the three key performance indices (KPIs) is summarised and discussed as follows.

First, the completeness of disassembly is improved for the Type-II structure, where the PCBs are initially inaccessible; therefore, a second run is essential the first time such a model is disassembled. This circumstance is recognised by the CRA and is resolved with human assistance in a later revision, after which the disassembly is completed in a single run. For the Type-I structure, this improvement is unnecessary, since the disassembly can be completed from the first run.

Second, the disassembly time is typically reduced by optimising the physical movements of the robot and the FlippingTable. The cutting operation approaches the destination depth more quickly by adjusting the size of the cutting step, and all unnecessary flipping operations are skipped. In addition, the revision process can retract the redundant general operation plans, which account for a large amount of time. Eventually, from the experiments, the time consumption was reduced by about 50% for both structure types. However, a small fluctuation occurred, caused by inaccurate visual detection of the location to be cut; this can be further resolved by improving the localisation method.

Third, the need for human assistance is an indirect measurement of the degree of autonomy the system can achieve. Assistance is essential in most cases the first time an unknown model is disassembled, in order to deal with unresolved remaining connections. However, the amount of human assistance reduces significantly afterwards, and the system is able to operate fully autonomously after the first few revisions. A small fluctuation remained due to the limited repeatability of the disassembly operations; this is expected to be resolved if the disassembly rig becomes more physically reliable.

In conclusion, the performance is significantly improved in every perspective after the first few disassemblies of a particular model. The learning and revision strategy is able to find an optimised process with respect to the uncertainties in the actual operations. Eventually, the process is conducted autonomously in a robust and efficient way. This strategy has not been demonstrated in other existing research works.

8.2.3 Toxicity

The destructive disassembly approach is able to address the uncertainties in EOL products with a high success rate. However, a major drawback is the toxic substances released into the working environment. The substances that may be released by cutting hazardous materials are halogen-free plastic fumes, metal dust, polymer dust, dust from the abrasive cut-off disc, and mercury from CCFLs. Mercury from broken CCFLs is the major concern in this case. Therefore, to prevent leakage, the operation plans and the limitation of the cutting depth are designed to avoid cutting too close to the CCFLs and the LCD glass. The fixture that holds the samples from the LCD-glass side is also designed for this purpose. As a result, although the LCD modules are damaged on the back side, none of the CCFLs were broken during the experiments.

In conclusion, a major advantage of this automated system is that it avoids exposing a human operator to toxic substances. The system achieves this by setting up the disassembly rig in a confined space that is isolated from the human user, who can monitor the system and give assistance via a computer console in a separate room.

8.2.4 Economic feasibility

Economic feasibility is considered from two perspectives: the cost of the automation platform and the operating cost. First, a low-cost disassembly automation platform that is flexible enough to deal with a wide range of LCD screen models has been successfully designed in this research. With the destructive approach and specially designed tools, the system achieves a high success rate of disassembly without needing additional expensive sensors. The condition of the disassembly outcome is also suitable for recycling. Second, time consumption is one of the key concerns in an economic feasibility analysis, and the current prototype system still needs further improvement in this respect.

In comparison with traditional manual disassembly, a comparable manual process took 6.2 minutes per screen on average (Kernbaum et al. 2009). The proposed system took 48 minutes on average to disassemble a sample of an unknown model. This is expected to reduce to around 24 minutes in the optimised process after a few revisions, when the majority of the redundant operations have been eliminated. Improvements in the physical operation and hardware are needed to overcome this limitation. First, from the case study in the learning and revision experiments, the disassembly time would reduce to around 10 minutes if the cutting tool could approach the destination depth in one operation cycle for known models; an incremental cut of 2-5 mm per cycle is necessary in the current system to prevent force overload due to the limited power of the cutting tools. Second, the time can be further improved by optimising all movements, e.g. the feed speed of the cutting operations, the robot motion, the flipping table, and the grinder-checking routine. However, a more reliable rig and sensors are needed to finely adjust the related process parameters.
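The effect of the incremental cut on the cycle count is simple to quantify. Assuming, for illustration only, a connection that requires a total cut depth D (the 20 mm below is an assumed figure, not one measured in this thesis):

    n = \left\lceil \frac{D}{s} \right\rceil,
    \qquad D = 20~\text{mm}:\quad
    n = 10 \ \text{passes at}\ s = 2~\text{mm},\quad
    n = 4 \ \text{passes at}\ s = 5~\text{mm},\quad
    n = 1 \ \text{pass for a single full-depth cut}

so a tool powerful enough to reach the destination depth in one pass removes most of the repeated approach, cut, and retract cycles.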

In conclusion, the proposed system has a high potential to overcome traditional manual disassembly if the operation is optimised and the hardware is improved. The flexibility to deal with various models of products is crucial for actual industrial application, and this research proves that the principle of cognitive robotics, together with the other operating modules, can achieve this goal. In addition, learning and revision are the key concepts that allow the system to improve its process performance from previous experience. Even though humans need to be involved in the first stages, the system becomes autonomous afterwards.

8.3 Future works

8.3.1 Approach for applying to other product families

In this research, the proposed concept of cognitive robotic disassembly automation has been proved using LCD screens as a case-study product. To avoid the complexity arising from variations among different product families, it was unavoidable that the system foundation and the supporting modules were built specifically around this individual product. In the next stage, the concept is expected to be generalised to handle other product families. The general concepts explained at the beginning of each methodology chapter (Chapters 3-6) must then be reconsidered.

Overall, the cognitive robotic architecture, i.e. the system framework and the behaviour control, is the persistent core structure. Most modifications are needed at the detailed level of the supporting modules; however, the corresponding parts of the Golog program will need to be modified accordingly. Based on the case of LCD screen disassembly, modifications are needed in the following respects.

- Mechanical units (i.e. fixtures, robot workspace, and disassembly tools) that are suitable for the geometry of the selected products and the dismantling techniques required;
- Operational constraints regarding the parts or components to be conserved, e.g. LCD modules should not be damaged;
- Preliminary study of the possible variations in the main product structures and the components used;
- Visual detection functions for the main and connective components that are commonly found in the selected products;
- Modification of the disassembly domain (i.e. component treatment strategy, operation plans, and process parameters) according to the physical connections of the components in the selected products; and,
- Modification of the corresponding actions, fluents, axioms, and procedures.

In conclusion, a flexible system that is capable of handling multiple product families can be developed according to these proposed modifications. However, the complexity of the system will increase greatly, not only in the cognitive behaviour but also in the complicated supporting modules.

8.3.2 Advanced learning and revision strategy

First, for learning, the limitation of the current strategy is that the knowledge is specific to individual models; no relation is established between different models. Therefore, a strategy that allows the robot to adapt the existing knowledge of one model for disassembling another model should be developed. The implementation can take the form of generalised rules that the CRA automatically generates during disassembly, with the CRA expected to self-modify its own program to learn these new rules. This will enhance the flexibility to deal with new product structures that are completely different from the predefined broad structure types. Ultimately, the system is expected to carry out the process completely autonomously, without human intervention, from the first time it disassembles an unknown model.

Second, for revision, the current strategy is limited to revising the general operation plans, since there is a higher possibility that redundant operations are executed there. The operations taught by user demonstration are currently assumed to be correct and not redundant; this is not true when cutting operations overlap. Therefore, a strategy to find redundant user-demonstrated operations should be developed in order to increase the efficiency of the process.

8.3.3 Hardware improvement and non-destructive disassembly

Non-destructive disassembly should be pursued, since it can serve other purposes, e.g. maintenance, reuse, and remanufacturing. Additional hardware is needed for this extremely complicated task. For the sensing facility, force-torque sensors and a higher-quality vision system should be used; for the physical operation, a wide range of disassembly tools and a versatile gripper should be used. In addition, the possibility of extending to other electrical and electronic product families should be considered.

For the current disassembly rig, (semi-)destructive disassembly remains a good option that allows a high success rate of disassembly. The rig should be modified to be more reliable and powerful in order to achieve higher process efficiency. Moreover, the process parameters regarding the cutting operation should be improved.

REFERENCES ______

ABB (2004). Product specification - Articulated robot IRB 140, IRB 140 - F, IRB 140 - CW, IRB 140 - CR M2004/M2000 Rev.6.

ABB (2004). RAPID overview.

Babu, B. R., Parande, A. K. and Basha, C. A. (2007). "Electrical and electronic waste: a global environmental problem." Waste Management and Research 25(4): 307-318.

Baier, J. A. and Pinto, J. A. (2003). "Planning under uncertainty as GOLOG programs." Journal of Experimental and Theoretical Artificial Intelligence 15(4): 383-405.

Bailey-Van Kuren, M. (2005). "A demanufacturing projector–vision system for combined manual and automated processing of used electronics." Computers in Industry 56(8-9): 894-904.

Bailey-Van Kuren, M. (2006). "Flexible robotic demanufacturing using real time tool path generation." Robotics and Computer-Integrated Manufacturing 22(1): 17-24.

Bannat, A., Bautze, T., Beetz, M., Blume, J., Diepold, K., Ertelt, C., Geiger, F., Gmeiner, T., Gyger, T., Knoll, A., Lau, C., Lenz, C., Ostgathe, M., Reinhart, G., Roesel, W., Ruehr, T., Schuboe, A., Shea, K., Stork Genannt Wersborg, I., Stork, S., Tekouo, W., Wallhoff, F., Wiesbeck, M. and Zaeh, M. F. (2011). "Artificial cognition in production systems." IEEE Transactions on Automation Science and Engineering 8(1): 148-174.

Bay, H., Ess, A., Tuytelaars, T. and Van Gool, L. (2008). "Speeded-Up Robust Features (SURF)." Computer Vision and Image Understanding 110 (2): 346-359.

Bayer, B. E. (1976). Color imaging array, Eastman Kodak Company.

Beetz, M., Buss, M. and Wollherr, D. (2007). "Cognitive technical systems - What is the role of artificial intelligence?" Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 4667 LNAI: 19-42.

Berger, U. and Schmidt, A. (1995). Active vision system for planning and programming of industrial robots in one-of-a-kind manufacturing. SPIE - The International Society for Optical Engineering.

Bogdanski, G. (2009). Entwicklung und Analyse von Handlungsoptionen zur Umsetzung von Recyclingkonzepten für Flüssigkristallbildschirme (LCD) für ein Unternehmen der Elektro(nik)altgeräterecyclingbranche Diploma-thesis, Technische Universität Braunschweig.

Bradski, G. (2010, 08/03/2010). "OpenCV." from http://opencv.willowgarage.com/wiki/.

Bradski, G. and Kaebler, A. (2008). Learning OpenCV - Computer Vision with the OpenCV Library, O' Reilly Media, Inc.

Braun, A. (2011). Programming by demonstration using the high-level programming language Golog. Diploma thesis, RWTH Aachen University / The University of New South Wales.

Braunschweig, A. (2004). Automatic Disassembly of Snap-in Joints in Electro-mechanical Devices. The 4th International Congress Mechanical Engineering Technologies’04, Varna.

Büker, U., Drüe, S., Götze, N., Hartmann, G., Kalkreuter, B., Stemmer, R. and Trapp, R. (1999). "Active object recognition system for disassembly tasks." IEEE Symposium on Emerging Technologies and Factory Automation, ETFA 1: 79-88.

Büker, U., Drüe, S., Götze, N., Hartmann, G., Kalkreuter, B., Stemmer, R. and Trapp, R. (2001). "Vision-based control of an autonomous disassembly station." Robotics and Autonomous Systems 35(3-4): 179-189.

Büker, U. and Hartmann, G. (1996). Knowledge based view control of a neural 3-D object recognition system. The 13th International Conference on Pattern Recognition: 24-29.

Burgard, W., Cremers, A. B., Fox, D., Hähnel, D., Lakemeyer, G., Schulz, D., Steiner, W. and Thrun, S. (1999). "Experiences with an interactive museum tour-guide robot." Artificial Intelligence 114(1-2): 3-55.

CCRL (2007). "Research area F: demonstration scenarios, cognitive factory." Retrieved August 2009, from http://www.cotesys.de/research/demonstration-scenarios.html.

Chang, F., Chen, C.-J. and Lu, C.-J. (2004). "A linear-time component-labeling algorithm using contour tracing technique." Computer Vision and Image Understanding 93 (2): 206-220.

Chen, K. Z. (2001). "Development of integrated design for disassembly and recycling in concurrent engineering." Integrated Manufacturing Systems 12(1): 67-79.

Covington, M., Nute, D. and Vellino, A. (1996). Prolog Programming in Depth. Upper Saddle River, N.J., Prentice Hall

Craig, J. J. (2005). Introduction to robotics: mechanics and control (3rd Edition), Prentice Hall.

De Giacomo, G., Lespérance, Y. and Levesque, H. J. (1997). Reasoning about concurrent execution, prioritized interrupts, and exogenous actions in the situation calculus. International Joint Conference on Artificial Intelligence.

De Giacomo, G., Lespérance, Y., Levesque, H. J. and Reiter, R. (2001). "IndiGolog-OAA Interface Documentation." Retrieved 16 July 2011, from http://www.cs.toronto.edu/~alexei/ig- oaa/index.htm.

De Giacomo, G. and Levesque, H. J. (1999). "An incremental interpreter for high-level programs with sensing." Logical Foundations for Cognitive Agents: 86-102.

Desai, A. and Mital, A. (2003). "Evaluation of disassemblability to enable design for disassembly in mass production." International Journal of Industrial Ergonomics 32(4): 265 -281.

Diftler, M. A., Ahlstrom, T. D., Ambrose, R. O., Radford, N. A., Joyce, C. A., De La Pena, N., Parsons, A. H. and Noblitt, A. L. (2012). Robonaut 2 - Initial activities on-board the ISS. IEEE Aerospace Conference.

Duflou, J. R., Seliger, G., Kara, S., Umeda, Y., Ometto, A. and Willems, B. (2008). "Efficiency and feasibility of product disassembly: A case-based study." CIRP Annals - Manufacturing Technology 57(2): 583-600.

ElSayed, A., Kongar, E., Gupta, S. M. and Sobh, T. (2012). "A Robotic-Driven Disassembly Sequence Generator for End-Of-Life Electronic Products." Journal of Intelligent and Robotic Systems: Theory and Applications 68(1): 43-52.

Evan, C. (2009). "Notes on the OpenSURF Library." from http://www.chrisevansdev.com.

Ewers, H.-J., Schatz, M., Fleischer, G. and Dose, J. (2001). Disassembly factories: economic and environmental options. IEEE International Symposium on Assembly and Task Planning.

De Fazio, T. L. and Whitney, D. E. (1987). "Simplified generation of all mechanical assembly sequences." IEEE Journal of Robotics and Automation RA-3(6): 640-658.

Fei-Fei, L., Fergus, R. and Perona, P. (2004). "Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories." IEEE CVPR 2004 Workshop on Generative-Model Based Vision.

Feldmann, K., Trautner, S. and Meedt, O. (1996). "Innovative disassembly strategies based on flexible partial destructive tools." Annual Reviews in Control 23: 159-164.

Ferrein, A. and Lakemeyer, G. (2008). "Logic-based robot control in highly dynamic domains." Robotics and Autonomous Systems 56(11): 980-991.

Franke, C., Kernbaum, S. and Seliger, G. (2006). Remanufacturing of flat screen monitors. In: Brissaud, D., Tichkiewitch, S. and Zwolinski, P. (eds), Innovation in Life Cycle Engineering and Sustainable Development: 139-152.

Gao, M., Zhou, M. C. and Tang, Y. (2005). "Intelligent decision making in disassembly process based on fuzzy reasoning Petri nets." IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 34(5): 2029-2034.

Gengenbach, V., Nagel, H.-H., Tonko, M. and Schaefer, K. (1996). Automatic dismantling integrating optical flow into a machine vision-controlled robot system. IEEE International Conference on Robotics and Automation.

Gil, P., Pomares, J., Puente, S. V. T., Diaz, C., Candelas, F. and Torres, F. (2007). "Flexible multi-sensorial system for automatic disassembly using cooperative robots." International Journal of Computer Integrated Manufacturing 20(8): 757-772.

Grochowski, D. E. and Tang, Y. (2009). "A machine learning approach for optimal disassembly planning." International Journal of Computer Integrated Manufacturing 22(4): 374 - 383.

Gungor, A. and Gupta, S. M. (1997). "An evaluation methodology for disassembly processes." Computers and Industrial Engineering 33(1-2): 329-332.

Gungor, A. and Gupta, S. M. (1998). "Disassembly sequence planning for products with defective parts in product recovery." Computers and Industrial Engineering 35(1-4): 161-164.

Gungor, A. and Gupta, S. M. (1999). "Issues in environmentally conscious manufacturing and product recovery: a survey." Computers and Industrial Engineering 36: 811-853.

Gungor, A. and Gupta, S. M. (2002). "Disassembly line in product recovery." International Journal of Production Research 40(11): 2567-2589.

Gupta, M. and McLean, C. R. (1996). Disassembly of products. 19th International Conference on Computers and Industrial and Engineering, Computer Industrial Engineering. 31: 225-228.

Heilala, J. and Sallinen, M. (2008). "Concept for an industrial ubiquitous assembly robot." IFIP International Federation for Information Processing 260: 405-413.

Hohm, K., Hofstede, H. M. and Tolle, H. (2000). Robot assisted disassembly of electronic devices. IEEE International Conference on Intelligent Robots and Systems. 2: 1273-1278.

Homem De Mello, L. S. and Sanderson, A. C. (1990). "AND/OR graph representation of assembly plans." IEEE Transactions on Robotics and Automation 6(2): 188-189.

Iigo-Blasco, P., Diaz-Del-Rio, F., Romero-Ternero, M. C., Cagigas-Muiz, D. and Vicente-Diaz, S. (2012). "Robotics software frameworks for multi-agent robotic systems development." Robotics and Autonomous Systems 60(6): 803-821.

Huisman, J., Magalini, F., Kuehr, R., Maurer, C., Delgado, C., Artim, E., Szlezak, J. and Stevels, A. (2008). Final Report. Review of Directive 2002/96 on Waste Electrical and Electronic Equipment (WEEE), United Nations University.

Jorgensen, T. M., Andersen, A. W. and Christensen, S. S. (1996). Shape recognition system for automatic disassembly of TV-sets. IEEE International Conference on Image Processing.

Kaebernick, H., Ibbotson, S. and Kara, S. (2007). Cradle-to-Cradle Manufacturing. Transitions: Pathways towards Sustainable Urban Development in Australia, CSIRO Press: 521-536.

Kaebernick, H., O'Shea, B. and Grewal, S. S. (2000). "A method for sequencing the disassembly of products." CIRP Annals - Manufacturing Technology 49(1): 13-16.

Kara, S., Pornprasitpol, P. and Kaebernick, H. (2005). "A selective disassembly methodology for end-of-life products." Assembly Automation 25(2): 124-134.

Kara, S., Pornprasitpol, P. and Kaebernick, H. (2006). "Selective disassembly sequencing: a methodology for the disassembly of end-of-life products." CIRP Annals - Manufacturing Technology 55(1): 37-40.

Karlsson, B. and Järrhed, J.-O. (2000). "Recycling of electrical motors by automatic disassembly." Measurement Science and Technology 11(4): 350-357.

Kernbaum, S., Franke, D. and Seliger, G. (2009). "Flat screen monitor disassembly and testing for remanufacturing." International Journal of Sustainable Manufacturing 1(3): 347-360.

Khoshelham, K. and Elberink, S. O. (2012). "Accuracy and Resolution of Kinect Depth Data for Indoor Mapping Applications." Sensors 12: 1437-1454.

Kim, H.-J., Harms, R. and Seliger, G. (2007). "Automatic control sequence generation for a hybrid disassembly system." IEEE Transactions on Automation Science and Engineering 4 (2): 194-205.

Kim, H. J., Kernbaum, S. and Seliger, G. (2009). "Emulation-based control of a disassembly system for LCD monitors." International Journal of Advanced Manufacturing Technology 40 (3- 4).

Knoth, R., Brandstötter, M., Kopacek, B. and Kopacek, P. (2002). "Automated disassembly of electr(on)ic equipment." IEEE International Symposium on Electronics and the Environment.

Koenderink, N. J. J. P., Top, J. L. and van Vliet, L. J. (2006). "Supporting knowledge-intensive inspection tasks with application ontologies." International Journal of Human-Computer Studies 64(10): 974-983.

Kopacek, B. and Kopacek, P. (2001). Semi-automatised disassembly. The 10th international workshop on robotics, Alpe Adria Danube region, RAAD 01,Vienna.

Kopacek, P. and Kopacek, B. (2006). "Intelligent, flexible disassembly." International Journal of Advanced Manufacturing Technology 30(5-6): 554-560.

Kovac, J., Peer, P. and Solina, F. (2003). Human skin colour clustering for face detection. International Conference on Computer as a Tool. The IEEE Region 8.

Kroll, E., Beardsley, B. and Parulian, A. (1996). "A methodology to evaluate ease of disassembly for product recycling." IIE Transactions (Institute of Industrial Engineers) 28(10): 837-845.

Kyrnin, M. (2010). "LCD monitor buyer's guide: How to compare LCD monitors based on specifications to find the right one." from http://compreviews.about.com/od/monitors/a/LCD- Monitor-Buyers-Guide.htm.

Lambert, A. J. D. (1999). "Linear programming in disassembly/clustering sequence generation." Computers and Industrial Engineering 36(4): 723-738.

Lambert, A. J. D. (2003). "Disassembly sequencing: a survey." International Journal of Production Research 41(16): 3721-3759.

Lambert, A. J. D. and Gupta, M. (2005). Disassembly modeling for assembly, maintenance, reuse, and recycling, Boca Raton, Fla.: CRC Press.

Lambert, A. J. D. and Gupta, S. M. (2008). "Methods for optimum and near optimum disassembly sequencing." International Journal of Production Research 46(11): 2845-2865.

Lee, K.-M. and Bailey-Van Kuren, M. M. (2000). "Modeling and supervisory control of a disassembly automation workcell based on blocking topology." IEEE Transactions on Robotics and Automation 16(1): 67-77.

Lespérance, Y., Levesque, H. J., Lin, F., Marcu, D., Reiter, R. and Scherl, R. B. (1994). A logical approach to high level robot programming - A progress report. Proc. AAAI Fall Symposium of Control of the Physical World by Intelligent Systems Lésperance.

Lespérance, Y., Tam, K. and Jenkin, M. (2000). "Reactivity in a logic-based robot programming framework." Intelligent Agents VI: Agent Theories, Architectures, and Languages: 173-187.

Levesque, H. and Lakemeyer, G. (2007). Cognitive robotics. Handbook of knowledge representation. Amsterdam, Elsevier: 869-882.

Levesque, H. J., Reiter, R., Lesperance, Y., Lin, F. and Scherl, R. B. (1997). "GOLOG: A logic programming language for dynamic domains." Journal of Logic Programming 31(1-3): 59-83.

Li, J., Gao, S., Duan, H. and Liu, L. (2009). "Recovery of valuable materials from waste liquid crystal display panel." Waste Management 29 (7): 2033-2039.

Li, W., Zhang, C., Wang, H. P. B. and Awoniyi, S. A. (1995). Design for disassembly analysis for environmentally conscious design and manufacturing. ASME International Mechanical Engineering Congress and Exposition.

Lin, F. (2007). Situation calculus. Handbook of knowledge representation. Amsterdam, Elsevier: 649-669.

Liñán, C. C. (2010). "cvblob: Blob library for OpenCV." Retrieved June, 2010, from http://code.google.com/p/cvblob/.

Lowe, D. (1998, November 2009). "The computer vision industry." Retrieved December, 2009, from http://people.cs.ubc.ca/~lowe/vision.html.

Martinez, M., Pham, V.-H. and Favrel, J. (1997). "Dynamic generation of disassembly sequences." IEEE Symposium on Emerging Technologies & Factory Automation, ETFA: 177- 182.

MathWorks. (2009). "Image Processing ToolboxTM." Retrieved June, 2009, from http://www.mathworks.com.au/help/toolbox/images/.

MathWorks. (2011). "Kmeans : K-Means clustering." R2011a Documentation: Statistics Toolbox, from http://www.mathworks.com/help/toolbox/stats/kmeans.html.

McCarthy, J. (1963). Situations, Actions, and Causal Laws. Technical Report Memo 2, Stanford Artificial Intelligence Project, Stanford University.

Merdan, M., Lepuschitz, W., Meurer, T. and Vincze, M. (2010). Towards ontology-based automated disassembly systems. Industrial Electronics Conference (IECON).

Microsoft Corporation. (2008). "Visual Studio 2008." from http://www.microsoft.com/visualstudio/en-us/products/2008-editions.

Microsoft Corporation. (2011). "XBox 360 - Kinect." from www.xbox.com/Kinect.

Mok, H. S., Kim, H. J. and Moon, K. S. (1997). "Disassemblability of mechanical parts in automobiles for recycling." Computers and Industrial Engineering 33(3-4): 621-624.

Moreno, R. A. (2007, 26 July 2007). "Cognitive robotics." Retrieved October, 2009, from http://www.conscious-robots.com/en/conscious-machines/the-field-of-machine- consciousness/cognitive-rob.html.

MSDN, M. (2011). "Windows Sockets 2." from http://msdn.microsoft.com/en- us/library/ms740673(v=vs.85).aspx.

Müller, V. C. (2012). "Autonomous Cognitive Systems in Real-World Environments: Less Control, More Flexibility and Better Interaction." Cognitive Computation: 1-4.

NASA. (2008). "Robonaut." from http://robonaut.jsc.nasa.gov/.

Nevins, J. L. and Whitney, D. E. (1989). Concurrent design of products & processes: a strategy for the next generation in manufacturing. New York, McGraw-Hill.

OpenCV. (2010). "OpenCV v2.1 documentation, Histogram." Retrieved 10 March 2011, from http://opencv.willowgarage.com/documentation/cpp/histograms.html.

OpenKinect. (2010). "libfreenect." from https://github.com/OpenKinect/libfreenect.

OpenKinect. (2011). "Imaging Information." from http://openkinect.org/wiki/Imaging_Information.

Parliament (2003). "Directive 2002/96/EC of the European Parliament and of the Council on waste electrical and electronic equipment (WEEE) of 27 January 2003."

Pornprasitpol, P. (2006). Selective Disassembly for Re-use of Industrial Products. Master by Research, University of New South Wales.

Rasband, W. (2012). "ImageJ - Image Processing and Analysis in Java." Retrieved June, 2012, from http://rsbweb.nih.gov/ij/.

Reap, J. and Bras, B. (2002). Design for disassembly and the value of robotic semi-destructive disassembly. ASME Design Engineering Technical Conference.

Reese, G. (2000). Distributed Application Architecture Database Programming with JDBC and Java, O'Reilly & Associates.

Reiter, R. (2001). Knowledge in action : logical foundations for specifying and implementing dynamical systems. Cambridge, The MIT Press.

Russell, S. J. and Norvig, P. (1995). Artificial intelligence: a modern approach. Englewood Cliffs, N.J. , Prentice Hall

Ryan, A., O'Donoghue, L. and Lewis, H. (2011). "Characterising components of liquid crystal displays to facilitate disassembly." Journal of Cleaner Production 19(9-10): 1066-1071.

Salomonski, N. and Zussman, E. (1999). "On-line predictive model for disassembly process planning adaptation." Robotics and Computer-Integrated Manufacturing 5(3): 211-220.

Sangveraphunsiri, V. (2003). Control of Dynamic Systems. Bangkok, Chulalongkorn University Press.

Sardina, S. (2004). "Cognitive robotics at university of Toronto." Retrieved January, 2011, from http://goanna.cs.rmit.edu.au/~ssardina/papers/slides/cogrobouoft-uns02.pdf.

Sardina, S., De Giacomo, G., Lespérance, Y. and Levesque, H. (2004). On ability to autonomously execute agent programs with sensing. The 4th International Workshop on Cognitive Robotics (CoRobo-04).

Schmitt, J., Haupt, H., Kurrat, M. and Raatz, A. (2011). Disassembly automation for lithium-ion battery systems using a flexible gripper. IEEE 15th International Conference on Advanced Robotics: New Boundaries for Robotics, ICAR 2011 291-297.

Seliger, G., Keil, T., Rebafka, U. and Stenzel, A. (2001). "Flexible disassembly tools." IEEE International Symposium on Electronics and the Environment: 30-35.

Shan, H., Li, S., Huang, J., Gao, Z. and Li, W. (2007). Ant colony optimization algorithm-based disassembly sequence planning. IEEE International Conference on Mechatronics and Automation.

Shapiro, L. G. and Stockman, G. C. (2001). Computer Vision. New Jersey, Prentice-Hall: 279-325.

Shih, L.-H., Chang, Y.-S. and Lin, Y.-T. (2006). "Intelligent evaluation approach for electronic product recycling via case-based reasoning." Advanced Engineering Informatics 20(2): 137-145.

Siciliano, B., Sciavicco, L., Villani, L. and Oriolo, G. (2009). Vision Sensors. Robotics: modelling, planning and control. London, Springer: 230-255.

Soutchanski, M. (2001). An on-line decision-theoretic Golog interpreter. The 17th International Joint Conference on Artificial Intelligence (IJCAI 2001).

SWI-Prolog. (2010). "SWI-Prolog." from http://www.swi-prolog.org.

Tang, Y. (2009). "Learning-based disassembly process planner for uncertainty management." IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans 39(1).

Tonko, M., Gengenbach, V., Nagel, H.-H., Schäfer, K., Picard, S., Horaud, R. and Mohr, R. (2009). Towards the integration of object recognition and visual servoing for disassembly of used cars.

Tonko, M. and Nagel, H.-H. (2000). "Model-based stereo-tracking of non-polyhedral objects for automatic disassembly experiments." International Journal of Computer Vision 37(1): 99-118.

Torres, F., Gil, P., Puente, S. T., Pomares, J. and Aracil, R. (2004). "Automatic PC disassembly for component recovery." International Journal of Advanced Manufacturing Technology 23(1-2): 39-46.

Torres, F., Puente, S. and Díaz, C. (2009). "Automatic cooperative disassembly robotic system: Task planner to distribute tasks among robots." Control Engineering Practice 17(1): 112-121.

Torres, F., Puente, S. T. and Aracil, R. (2003). "Disassembly planning based on precedence relations among assemblies." International Journal of Advanced Manufacturing Technology 21(5): 317-327.

Turowski, M., Morgan, M. and Tang, Y. (2005). Disassembly line design with uncertainty. IEEE International Conference on Systems, Man and Cybernetics.

Uhlmann, E., Spur, G. and Elbing, F. (2001). Development of flexible automatic disassembly processes and cleaning technologies for the recycling of consumer goods. IEEE International Symposium on Assembly and Task Planning.

Veerakamolmal, P. and Gupta, S. M. (2002). "A case-based reasoning approach for automating disassembly process planning." Journal of Intelligent Manufacturing 13(1): 47-60.

Vezhnevets, V., Sazonov, V. and Andreeva, A. (2003). A survey on pixel-based skin color detection techniques. GraphiCon 2003.

Viganò, F., Consonni, S., Grosso, M. and Rigamonti, L. (2010). "Material and energy recovery from Automotive Shredded Residues (ASR) via sequential gasification and combustion." Waste Management 30(1): 145-153.

Viggiano, J. A. S. (2004). Comparison of the accuracy of different white balancing options as quantified by their color constancy. SPIE - The International Society for Optical Engineering

Viola, P. and Jones, M. (2001). Rapid object detection using a boosted cascade of simple features. IEEE Computer Society Conference on Computer Vision and Pattern Recognition.

Vongbunyong, S., Kara, S. and Pagnucco, M. (2012). "A framework for using cognitive robotics in disassembly of products." Leveraging Technology for a Sustainable World - Proceedings of the 19th CIRP Conference on Life Cycle Engineering: 173-178.

Vongbunyong, S., Kara, S. and Pagnucco, M. (2013). "Basic behaviour control of the vision- based cognitive robotic disassembly automation." Assembly Automation 33(1): 38-56.

Vongbunyong, S., Kara, S. and Pagnucco, M. (2013). "Application of cognitive robotics in disassembly of products." CIRP Annals - Manufacturing Technology 62(1): 31-34.

Wang, Y., Li, F., Li, J., Chen, J., Jiang, F. and Wang, W. (2006). Hybrid graph disassembly model and sequence planning for product maintenance. IET Conference Publications. 524.

Woller, J. D. (1992). "A combinatorial analysis of enumerative data structures for assembly planning." Journal of Design and Manufacturing 2(2): 93-104.

Zaeh, M., Lau, C., Wiesbeck, M., Ostgathe, M. and Vogl, W. (2007). Towards the cognitive factory. International Conference on Changeable, Agile, Reconfigurable and Virtual Production (CARV). Toronto, Canada.

Zäh, M. F. (2009). Cognitive Factory: demonstration scenario. CoTeSys Spring Workshop.

Zebedin, H., Daichendt, K. and Kopacek, P. (2001). "A new strategy for a flexible semi-automatic disassembling cell of printed circuit boards." IEEE International Symposium on Industrial Electronics 3: 1742-1746.

Zussman, E., Zhou, M. and Caudill, R. (1998). "Disassembly Petri net approach to modeling and planning disassembly processes of electronic products." IEEE International Symposium on Electronics and the Environment: 331-336.

APPENDIX A LCD SCREEN SAMPLES ______

Index  Index*  Code name  Brand        Model series    Model     Type  Sub-class  Diag. size (inch)  Year
1      T2-22   AC1        ACER         -               AL1916WA  2     -          19                 2007
2      T1-10   AC2        ACER         -               AL1916Ws  1     a          19                 2005
3      T1-3    AO1        AOC          -               LM726     1     a          17                 2005
4      -       BQ2        BENQ         -               Q7T3      2     -          17                 2004
5      T1-7    BQ3        BENQ         -               FP71G+    1     a          17                 2006
6      T1-7    BQ4        BENQ         -               FP71      1     b          18                 2005
7      T1-11   BQ5        BENQ         -               FP91G+    1     a          19                 2006
8      T2-24   BQ6        BENQ         -               FP91E     2     -          20                 2005
9      -       CP1        COMPAQ       -               TPT1501   1     a          16                 2002
10     T1-8    DD1        DMD_Dt       -               DV171JB   1     a          18                 2006
11     -       DL1        DELL         -               E151FP    1     a          18                 2002
12     T2-16   DL2        DELL         -               E173FPb   2     -          17                 2005
13     -       DL3        DELL         -               2001FP    2     -          20                 2004
14     T1-12   DL4        DELL         -               E193FPp   1     b          19                 2004
15     T2-17   DL5        DELL         -               1707FPt   2     -          17                 2008
16     T2-18   DL6        DELL         -               1706FPVt  2     -          17                 2008
17     -       DV1        DiamondView  -               DV172     1     a          18                 2003
18     -       HP1        HP           L1530           PE1236    2     -          16                 2003
19     T1-5    HP2        HP           -               HPL1706   1     a          17                 2005
20     T1-1    HP3        HP           -               HP1502    1     a          15                 2005
21     -       IB1        IBM          ThinkVision     6734AC1   1     b          18                 2005
22     -       IB2        IBM          ThinkVision     9205AB6   2     -          15                 2006
23     T1-9    IB3        IBM          ThinkVision     6734AB1   1     b          18                 2004
24     -       JC1        JOYCOM       -               HL-1510A  1     a          17                 2008
25     -       LG1        LG           Flatron         L1730s    2     -          18                 2005
26     T1-2    LG2        LG           Flatron         L1511SK   1     b          16                 2003
27     T1-13   NC1        NEC          MultiSyncLCD    1960Nxi   1     b          20                 2004
28     T2-19   NC4        NEC          AccuSync        LCD17V    2     -          18                 2005
29     T1-6    OP1        OPTIMA       Multimedia LCD  L705AD    1     a          17                 2008
30     -       SS1        SAMSUNG      SyncMaster      153Vs     1     a          15                 2004
31     -       SS2        SAMSUNG      SyncMaster      151N      2     -          15                 2003
32     T1-14   SS3        SAMSUNG      SyncMaster      531TFT    1     a          19                 1999
33     T1-15   SS4        SAMSUNG      SyncMaster      204B      1     b          20                 2006
34     T2-20   SS5        SAMSUNG      -               943BW     2     -          18                 2011
35     T2-21   SS6        SAMSUNG      SyncMaster      151BM     2     -          18                 2001
36     T2-23   SS7        SAMSUNG      SyncMaster      913VN     1     a          19                 2005
37     -       TG1        TARGA        -               TD154     1     a          17                 2006

NOTE:
- The dark highlight at the index represents the models used in the final experiment.
- Index* represents the samples sorted by type (T1 or T2) and screen size.

APPENDIX B HARDWARE ______

B.1 Robot specification

The information shown in this section is taken from the ABB documentation.

B.1.1 Robot motion

B.1.2 Performance

Description                                                       Value
Unidirectional pose repeatability                                 RP = 0.03 mm
Linear path accuracy                                              AT = 1.0 mm
Linear path repeatability                                         RT = 0.15 mm
Minimum positioning time to within 0.5 mm of the position         0.2 sec (on 35 mm linear path)

B.2 FlippingTable

B.2.1 Fixture plate

B.2.2 DC geared motor

B.3 Camera

B.3.1 Colour camera –IMPERX IPX-1M48-L

B.3.2 Depth camera – MS Kinect

APPENDIX C VISION SYSTEM EXPERIMENTS ______

This section presents the raw data, in pixels, for the detection of the main components. The error is measured in pixels; the conversion to mm uses a factor of 0.58 mm/pixel. The error measurement is described in Section 5.4.1.1 according to Figure 5.2. Highlighted values of "0" mean that the data in that column are not valid; therefore, false positives and false negatives can be observed.

[Figure C.1 illustrates the detected and actual component borders; the distance error is measured at each border, outward (δ+) and inward (δ-).]

Figure C.1: Measurement direction of the distance error

Sections C.1-C.4 show the detection results in pixels. The measurement is taken in four directions according to Figure C.1: left (L), right (R), down (D), and up (U). The "detected" result is obtained from the detector, and the "actual" result is physically measured from the input images. The position error relative to each side of the border is also provided, together with basic statistics in both pixels and mm.

Section C.5 shows the classification capability of the model detector; the confusion matrix is constructed among pairs of models.

C.1 Detection of back cover

Model    Detected (L R D U)      Actual (L R D U)        Error at border (out+, in-) (XL XR YD YU)
1        126 914 188 704        124 920 181 708         -2  -6  -7  -4
2        127 925 193 720        126 924 192 718         -1   1  -1   2
3        222 873 160 724        216 886 154 732         -6 -13  -6  -8
4        238 902 183 739        234 905 178 746         -4  -3  -5  -7
5        215 881 190 750        215 886 188 749          0  -5  -2   1
6        219 878 157 768        217 885 151 765         -2  -7  -6   3
7        205 950 161 773        204 951 160 777         -1  -1  -1  -4
8        205 926 127 788        203 932 121 792         -2  -6  -6  -4
9        258 858 175 687        253 863 167 694         -5  -5  -8  -7
10       219 876 157 760        212 884 154 761         -7  -8  -3  -1
11       194 885 211 782        195 892 211 783          1  -7   0  -1
12       234 894 188 749        227 901 184 753         -7  -7  -4  -4
13       135 929 168 792        131 932 161 794         -4  -3  -7  -2
14       226 949 161 776        214 952 157 774        -12  -3  -4   2
15       229 867 191 728        213 880 188 731        -16 -13  -3  -3
16       204 868 246 792        203 875 241 793         -1  -7  -5  -1
17       216 894 163 786        217 893 164 786          1   1   1   0
18       233 842 244 780        231 843 242 780         -2  -1  -2   0
19       222 876 212 785        222 880 213 787          0  -4   1  -2
20       214 802 287 778        204 806 281 788        -10  -4  -6 -10
21       225 921 198 788        221 922 193 788         -4  -1  -5   0
22       248 846 245 717        240 840 243 730         -8   6  -2 -13
23       227 921 197 788        220 925 191 789         -7  -4  -6  -1
24       213 886 186 744        211 894 187 743         -2  -8   1   1
25       214 880 206 760        199 895 194 761        -15 -15 -12  -1
26       228 839 165 656        218 844 156 665        -10  -5  -9  -9
27       133 878 168 792        128 878 165 794         -5   0  -3  -2
28       214 887 150 753        212 893 149 756         -2  -6  -1  -3
29       214 891 199 750        214 896 201 752          0  -5   2  -2
30       188 801 269 743        174 798 264 756        -14   3  -5 -13
31       241 847 246 714        240 845 245 720         -1   2  -1  -6
32       168 881 190 779        167 891 189 779         -1 -10  -1   0
33       132 917 187 790        131 923 184 791         -1  -6  -3  -1
34       142 913 285 787        137 911 281 787         -5   2  -4   0
35       211 941 270 788        200 941 266 793        -11   0  -4  -5
36       213 949 170 782        206 956 165 788         -7  -7  -5  -6
37       220 887 192 751        221 895 191 752          1  -8  -1  -1

Error statistics (pixel): mean -3.68, stdev 3.67, rms 5.19, max 3.00, min -13.00
Error statistics (mm):    mean -2.10, stdev 2.10, rms 2.97, max 1.71, min -7.43

Table C1: Detection result of back covers (unit: pixel)

C.2 Detection of PCB cover

Model    Detected (L R D U)      Actual (L R D U)        Error (out+, in-) (XL XR YD YU)
1        252 765 174 412        248 771 173 414         -1 -2 -248 771
2        295 761 235 480        0 0 0 0
3        283 814 127 415        283 809 138 423         11 -8 -283 809
4        318 811 173 541        325 802 172 544         -1 -3 -325 802
5        304 792 208 447        304 806 213 483          5 -36 -304 806
6        277 696 125 504        285 697 124 509         -1 -5 -285 697
7        289 843 224 524        281 846 224 527          0 -3 -281 846
8        273 716 104 501        290 718 103 501         -1 0 -290 718
9        342 784 208 447        342 784 204 450         -4 -3 -342 784
10       268 815 145 449        264 816 154 471          9 -22 -264 816
11       283 813 167 559        282 821 166 570         -1 -11 -282 821
12       296 785 202 503        306 782 200 507         -2 -4 -306 782
13       200 797 231 533        221 795 231 535          0 -2 -221 795
14       295 800 216 515        307 801 215 513         -1 2 -307 801
15       301 761 179 499        286 764 183 503          4 -4 -286 764
16       268 766 240 563        263 768 239 563         -1 0 -263 768
17       266 852 110 456        264 852 108 482         -2 -26 -264 852
18       299 753 205 559        298 754 203 567         -2 -8 -298 754
19       302 772 260 547        308 778 260 548          0 -1 -308 778
20       263 741 302 535        263 734 310 533          8 2 -263 734
21       310 831 212 535        320 831 221 535          9 0 -320 831
22       344 773 213 471        341 771 214 474          1 -3 -341 771
23       310 830 211 535        319 830 221 535         10 0 -319 830
24       265 832 159 581        264 840 154 574         -5 7 -264 840
25       301 815 171 473        301 814 170 480         -1 -7 -301 814
26       282 733 141 447        283 733 140 448         -1 -1 -283 733
27       179 829 112 498        180 827 113 498          1 0 -180 827
28       284 811 149 457        286 814 150 460          1 -3 -286 814
29       293 814 190 515        301 814 197 512          7 3 -301 814
30       245 768 188 534        248 716 249 525         61 9 -248 716
31       329 752 265 515        316 759 271 525          6 -10 -316 759
32       294 710 153 447        293 709 157 356          4 91 -293 709
33       212 817 222 579        220 827 220 582         -2 -3 -220 827
34       268 770 305 545        275 769 304 546         -1 -1 -275 769
35       325 806 277 551        325 808 289 549         12 2 -325 808
36       302 863 185 503        304 860 183 503         -2 0 -304 860
37       320 803 166 529        320 806 189 525         23 4 -320 806

Error statistics (pixel): mean 126.16, stdev 403.38, rms 421.23, max 860.00, min -342.0
Error statistics (mm):    mean 72.09, stdev 230.50, rms 240.70, max 491.43, min -195.4

Table C2: Detection result of PCB covers


C.3 Detection of PCBs

PCB Model detect actual Error (out+, in-) pixel index L R D U L R D U XL XR YD YU mean 7.95 1 636 800 268 406 643 784 268 403 7 16 0 3 stdev 25.46 1 215 590 266 511 234 588 265 507 19 2 -1 4 rms 26.58 2 289 569 220 471 288 570 221 469 -1 -1 1 2 max 120.00 2 521 765 277 475 571 761 293 472 50 4 16 3 min -110.00 2 171 890 115 195 169 888 116 195 -2 2 1 0 3 301 558 158 428 307 557 163 423 6 1 5 5 mm 3 622 787 261 422 626 786 275 419 4 1 14 3 mean 4.54 4 276 625 180 555 275 622 190 551 -1 3 10 4 stdev 14.55 4 625 783 164 419 625 781 193 421 0 2 29 -2 rms 15.19 5 322 644 180 479 318 642 231 480 -4 2 51 -1 max 68.57 5 628 798 248 482 644 797 281 480 16 1 33 2 min -62.86 6 287 537 140 510 285 536 138 507 -2 1 -2 3 6 517 696 186 503 537 695 306 502 20 1 120 1 6 427 699 582 666 427 698 607 665 0 1 25 1 7 298 617 272 527 296 617 271 527 -2 0 -1 0 7 688 843 393 524 687 845 394 524 -1 -2 1 0 8 288 681 195 616 293 726 201 591 5 -45 6 25 8 0000568 723 203 399 8 0000571 720 429 572 9 362 454 217 435 361 454 216 436 -1 0 -1 -1 9 481 773 215 438 483 775 215 436 2 -2 0 2 10 275 546 201 468 285 545 200 467 10 1 -1 1 10 604 792 315 467 606 792 314 467 2 0 -1 0 10 400 709 600 662 401 708 605 659 1 1 5 3 11 302 633 199 552 301 632 221 544 -1 1 22 8 11 615 809 319 567 660 803 352 564 45 6 33 3 12 270 683 213 523 299 650 214 514 29 33 1 9 12 681 812 216 346 650 807 215 346 -31 5 -1 0 13 302 848 199 566 258 842 245 534 -44 6 46 32 13 260 374 192 269 0000 13 0000264 699 327 532 14 310 635 178 512 312 586 241 509 2 49 63 3 14 550 790 278 514 620 789 376 512 70 1 98 2 15 257 633 220 545 256 521 219 544 -1 112 -1 1 15 633 769 218 385 523 769 217 385 ### 0 -1 0 16 303 557 328 557 301 571 338 536 -2 -14 10 21 16 557 811 200 559 564 808 211 535 7 3 11 24 17 306 744 117 481 267 666 120 483 -39 78 3 -2 17 610 848 193 484 667 840 227 481 57 8 34 3 17 429 825 595 684 0000 18 379 756 184 582 464 752 184 549 85 4 0 33 18 0000340 453 411 547 19 330 636 277 545 331 637 277 545 1 -1 0 0

Table C3: Detection result of PCBs


PCB (cont) Model detect actual Error (out+, in-) index L R D U L R D U XL XR YD YU 19 637 777 367 545 636 777 366 546 -1 0 -1 -1 20 250 516 310 537 262 515 310 536 12 1 0 1 20 483 732 307 549 513 731 308 531 30 1 1 18 21 308 620 219 535 321 622 219 534 13 -2 0 1 21 601 839 303 533 619 841 304 532 18 -2 1 1 22 379 782 215 478 379 780 233 473 0 2 18 5 23 306 623 218 534 316 621 217 533 10 2 -1 1 23 602 841 305 531 617 841 304 533 15 0 -1 -2 24 325 686 160 562 320 690 161 464 -5 -4 1 98 24 0000271 317 190 511 25 342 537 332 461 341 538 325 460 -1 -1 -7 1 25 596 801 163 485 597 802 163 486 1 -1 0 -1 26 308 483 186 447 295 483 185 449 -13 0 -1 -2 26 504 650 316 455 503 651 319 449 -1 -1 3 6 27 195 279 123 569 194 271 146 553 -1 8 23 16 27 305 547 216 478 303 546 285 477 -2 1 69 1 27 467 824 125 490 613 823 126 488 146 1 1 2 28 332 848 208 519 606 835 211 505 274 13 3 14 28 0000334 549 344 504 29 310 562 268 530 310 561 269 530 0 1 1 0 29 590 656 353 520 588 656 350 525 -2 0 -3 -5 29 611 804 272 530 658 804 306 528 47 0 34 2 30 241 485 266 529 240 479 268 525 -1 6 2 4 30 405 709 382 527 502 706 394 533 97 3 12 -6 31 302 753 269 525 300 746 267 523 -2 7 -2 2 31 391 747 255 406 552 747 262 397 161 0 7 9 31 547 770 210 267 0000 31 0000302 376 279 521 32 295 727 158 361 295 729 160 359 0 -2 2 2 32 266 357 619 678 266 356 619 675 0 1 0 3 32 417 644 617 681 418 645 617 678 1 -1 0 3 33 208 502 247 581 215 502 252 572 7 0 5 9 33 480 797 396 601 517 793 418 577 37 4 22 24 34 254 735 273 547 255 546 294 524 1 189 21 23 34 587 766 292 456 615 761 293 405 28 5 1 51 35 327 761 217 500 524 761 226 481 197 0 9 19 35 264 779 479 556 0000 35 0000292 498 279 474 36 294 546 144 499 294 540 218 495 0 6 74 4 36 288 946 91 186 287 940 87 186 -1 6 -4 0 36 624 842 291 472 627 840 341 463 3 2 50 9 37 324 666 205 526 332 660 208 521 8 6 3 5 37 487 656 452 648 490 637 588 649 3 19 136 -1 37 651 793 477 529 659 794 478 527 8 -1 1 2

Table C3: Detection result of PCBs (continued)


C.4 Detection of carriers

Carrier Model detect actual Error (out+, in-) indexLRDULRDUXLXRYDYU 1 131 919 80 588 139 910 99 585 8 9 19 3 pixel 2 00000000 mean 6.69 3 0000228 872 65 597 stdev 14.36 4 244 892 105 614 242 892 105 614 -2 0 0 0 rms 15.78 5 216 881 90 624 225 874 110 600 9 7 20 24 max 79.00 6 217 875 69 576 219 874 69 576 2 1 0 0 min -8.00 7 206 950 72 526 222 934 117 524 16 16 45 2 8 206 922 28 582 209 925 29 575 3 -3 1 7 mm 9 262 854 93 561 262 851 91 555 0 3 -2 6 mean 3.82 10 226 871 69 603 225 870 69 600 -1 1 0 3 stdev 8.21 11 237 846 149 676 238 843 151 597 1 3 2 79 rms 9.02 12 237 887 143 537 237 883 145 536 0 4 2 1 max 45.14 13 141 921 190 603 141 922 190 562 0 -1 0 41 min -4.57 14 0000232 949 179 553 15 0000230 863 151 539 16 216 861 164 582 215 862 164 579 -1 -1 0 3 17 220 884 98 482 220 886 96 481 0 -2 -2 1 18 235 837 165 606 236 824 164 604 1 13 -1 2 19 222 877 114 681 230 870 123 676 8 7 9 5 20 0000211 798 201 651 21 252 895 107 653 251 892 121 651 -1 3 14 2 22 266 863 132 590 267 852 133 589 1 11 1 1 23 0000251 897 121 651 24 219 883 103 617 217 882 112 611 -2 1 9 6 25 238 883 100 645 237 879 105 635 -1 4 5 10 26 0000233 828 73 537 27 139 867 76 664 139 866 75 669 0 1 -1 -5 28 225 874 62 605 223 874 63 603 -2 0 1 2 29 241 890 101 647 241 889 112 645 0 1 11 2 30 189 800 170 641 190 785 180 637 1 15 10 4 31 240 832 169 609 248 830 187 547 8 2 18 62 32 172 868 109 616 193 867 109 615 21 1 0 1 33 140 915 88 678 140 912 90 686 0 3 2 -8 34 0000256 785 295 566 35 0000268 881 265 572 36 0000217 940 254 544 37 223 887 93 650 254 866 113 572 31 21 20 78

Table C4: Detection result of carriers


C.5 Detection of models

NOTE:
• Headers on the topmost row and leftmost column are the model indices.
• Headers on the bottommost row and rightmost column are the screen sizes (inches).
• Blue highlighting marks self-matching, which always results in a 100% match.
• Yellow highlighting marks a successful match according to the matching criteria (a sketch of applying such a criterion follows this note).
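For illustration only, the following is a minimal C++ sketch of how a matching criterion of this kind could be applied to the similarity matrix of Table C5. The threshold value, function name, and data layout are assumptions; the exact criterion used in the thesis is not restated in this appendix.

#include <cstddef>
#include <utility>
#include <vector>

// Return the (1-based) model pairs whose match percentage meets the
// acceptance threshold; the diagonal (self-match = 100%) is skipped.
std::vector<std::pair<int, int>> matchedPairs(
    const std::vector<std::vector<double>>& matchPct,  // similarity matrix as in Table C5
    double thresholdPct)                               // hypothetical acceptance threshold
{
    std::vector<std::pair<int, int>> pairs;
    for (std::size_t i = 0; i < matchPct.size(); ++i)
        for (std::size_t j = i + 1; j < matchPct[i].size(); ++j)
            if (matchPct[i][j] >= thresholdPct)
                pairs.emplace_back(static_cast<int>(i) + 1, static_cast<int>(j) + 1);
    return pairs;
}

Pairs flagged this way would correspond to the yellow-highlighted cells in Table C5, provided the assumed threshold matches the thesis criterion.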

model123456789101112131415161718 1 100 3.96 1.85 1.66 5.08 2.68 4.95 2.12 1.20 1.13 5.66 1.48 1.45 2.03 0.00 1.60 0.28 5.60 2 3.96 100 5.62 2.02 5.89 4.72 5.14 6.27 4.87 2.75 2.75 2.99 1.98 3.79 0.25 3.70 0.23 4.34 3 1.85 5.62 100 1.64 3.50 1.56 1.89 3.37 2.42 1.29 1.30 0.76 2.06 3.29 0.47 1.72 0.00 2.34 4 1.66 2.02 1.64 100 8.80 6.10 3.34 5.82 1.70 0.95 2.95 1.55 0.91 2.31 0.27 1.18 0.00 2.51 5 5.08 5.89 3.50 8.80 100 5.54 15.00 5.36 6.29 3.57 4.46 1.63 1.23 3.78 0.00 5.79 0.45 2.79 6 2.68 4.72 1.56 6.10 5.54 100 6.08 13.88 2.15 2.20 2.02 1.26 2.49 1.43 0.00 2.24 0.14 2.62 7 4.95 5.14 1.89 3.34 15.00 6.08 100 7.80 7.20 5.73 7.03 0.89 2.74 2.67 0.00 6.68 0.00 4.71 8 2.12 6.27 3.37 5.82 5.36 13.88 7.80 100 0.65 1.23 2.19 2.01 2.48 3.20 0.00 1.74 0.16 1.44 9 1.20 4.87 2.42 1.70 6.29 2.15 7.20 0.65 100 2.32 2.88 0.76 0.66 0.56 0.17 1.27 0.00 2.96 10 1.13 2.75 1.29 0.95 3.57 2.20 5.73 1.23 2.32 100 2.25 1.42 1.53 3.01 0.16 3.40 0.00 0.87 11 5.66 2.75 1.30 2.95 4.46 2.02 7.03 2.19 2.88 2.25 100 1.23 2.66 2.94 0.00 3.10 0.00 3.42 12 1.48 2.99 0.76 1.55 1.63 1.26 0.89 2.01 0.76 1.42 1.23 100 6.74 9.18 1.54 4.83 0.00 5.32 13 1.45 1.98 2.06 0.91 1.23 2.49 2.74 2.48 0.66 1.53 2.66 6.74 100 3.87 0.91 4.69 0.00 2.64 14 2.03 3.79 3.29 2.31 3.78 1.43 2.67 3.20 0.56 3.01 2.94 9.18 3.87 100 0.15 5.79 0.00 4.66 15 0.00 0.25 0.47 0.27 0.00 0.00 0.00 0.00 0.17 0.16 0.00 1.54 0.91 0.15 100 0.44 0.33 2.12 16 1.60 3.70 1.72 1.18 5.79 2.24 6.68 1.74 1.27 3.40 3.10 4.83 4.69 5.79 0.44 100 0.00 3.90 17 0.28 0.23 0.00 0.00 0.45 0.14 0.00 0.16 0.00 0.00 0.00 0.00 0.00 0.00 0.33 0.00 100 0.17 18 5.60 4.34 2.34 2.51 2.79 2.62 4.71 1.44 2.96 0.87 3.42 5.32 2.64 4.66 2.12 3.90 0.17 100 19 14.42 3.39 1.29 1.56 2.37 1.49 1.83 2.15 0.77 1.23 2.51 3.14 2.61 4.58 0.00 2.81 0.20 3.08 20 16.03 11.15 2.70 3.53 6.68 4.19 5.84 7.31 4.75 6.96 4.26 5.98 6.97 9.64 0.88 8.60 0.42 8.98 21 4.19 8.41 4.04 4.90 6.06 5.26 10.86 10.71 3.46 5.23 12.81 10.72 3.45 3.32 2.10 8.33 2.60 4.08 22 8.22 6.64 1.99 5.07 7.46 4.20 4.86 4.48 2.33 3.85 6.23 2.86 2.34 1.58 0.00 5.27 0.00 4.98 23 13.95 3.17 3.05 4.37 5.88 4.70 9.60 3.83 2.60 3.96 9.42 6.05 1.99 2.48 0.34 5.22 2.49 6.31 24 0.00 0.00 0.00 0.33 0.00 0.27 0.00 0.00 0.00 0.13 0.00 0.00 0.42 0.00 0.11 0.00 0.25 0.00 25 2.76 3.29 1.97 2.00 2.68 1.04 2.28 0.69 1.95 1.85 2.74 2.62 0.71 0.60 1.09 1.36 0.79 1.68 26 1.89 2.58 0.48 0.99 2.06 1.67 1.99 2.26 1.06 1.32 1.86 1.77 1.37 1.28 0.00 2.45 0.47 1.60 27 5.30 4.40 4.22 7.75 5.27 3.83 6.99 5.54 4.29 2.77 4.97 8.21 9.85 3.07 0.00 7.36 0.30 6.47 28 3.85 5.69 5.83 3.77 3.65 1.26 5.20 1.68 3.24 1.56 2.67 5.88 3.13 3.08 2.60 5.43 1.06 1.39 29 1.58 3.93 1.71 2.18 3.14 2.22 6.64 2.30 1.08 1.85 2.37 2.97 2.67 3.77 0.72 4.09 0.60 1.22 30 4.85 8.01 2.47 2.27 4.84 2.12 9.15 1.55 1.77 4.47 4.57 7.56 4.84 7.65 2.26 6.06 0.20 1.47 31 3.35 2.41 1.96 1.82 5.84 2.78 8.90 4.11 0.79 3.26 5.10 5.68 1.78 2.46 0.23 4.78 0.61 4.28 32 2.03 2.17 1.01 1.23 3.51 1.44 3.27 1.20 0.94 1.24 3.93 1.91 1.19 2.07 0.15 1.49 0.00 1.27 33 3.01 7.68 4.62 1.42 4.92 2.74 5.11 1.43 1.73 2.53 2.99 13.27 10.52 15.65 5.02 8.63 0.27 7.54 34 1.44 2.66 1.74 2.39 2.39 1.40 2.63 1.57 0.74 1.03 1.20 1.52 0.87 1.17 0.15 2.26 0.25 2.49 35 4.72 7.15 2.93 2.33 7.14 3.29 9.79 2.65 3.20 5.00 6.00 6.41 2.27 3.19 3.10 5.36 0.00 2.77 36 2.62 3.71 2.14 1.49 5.54 1.40 8.26 1.13 0.90 1.38 3.10 2.15 2.31 2.13 0.91 2.91 0.00 0.69 37 4.10 2.05 1.95 2.03 3.76 1.50 7.37 1.63 3.12 1.74 1.58 4.16 2.42 2.93 0.22 2.37 0.20 3.12 38 0.00 0.25 1.66 0.55 0.00 0.16 0.84 0.00 0.00 0.16 0.00 1.57 1.20 3.12 22.51 0.45 1.13 
0.20 39 1.52 2.80 0.24 0.69 2.29 1.96 6.75 0.37 0.34 0.80 2.52 5.66 2.39 2.34 24.45 3.14 2.49 0.59 40 0.00 0.25 0.46 0.00 0.25 0.31 0.27 0.00 0.00 0.61 0.22 0.15 0.37 0.59 23.29 0.14 0.00 0.19 41 0.17 1.01 0.00 0.54 0.25 0.00 0.28 0.18 0.51 0.48 0.23 1.87 0.52 0.77 21.95 1.04 0.33 0.58 42 1.16 5.05 0.23 0.81 0.25 0.32 1.95 0.00 0.17 0.79 0.68 2.02 1.18 0.77 22.42 2.06 1.33 0.97 43 0.43 1.21 0.29 1.30 2.42 1.48 6.26 1.16 0.22 1.26 2.50 4.53 1.27 0.82 0.37 1.98 0.00 0.98 44 2.25 2.62 1.37 1.74 2.61 1.40 2.22 0.88 1.25 0.59 2.92 5.22 2.03 1.54 0.69 2.04 0.00 1.16 45 1.80 3.12 1.18 2.34 1.56 0.66 3.72 2.17 1.15 0.65 2.86 7.29 1.90 3.20 1.17 2.89 0.34 2.28 46 1.76 2.76 0.58 1.90 2.45 1.72 3.00 0.94 1.57 0.85 2.81 1.05 1.66 1.46 0.38 2.22 0.99 1.98 47 9.09 2.46 2.03 2.10 2.46 1.30 4.68 1.66 1.81 1.29 3.11 3.79 4.84 2.72 0.38 4.67 0.83 5.48 size 19 19 17 17 17 18 19 2 0 16 18 18 17 2 0 19 17 17 18 16

Table C5: Classification of the model of LCD screens


model 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 1 14.42 16.03 4.19 8.22 13.95 0.00 2.76 1.89 5.30 3.85 1.58 4.85 3.35 2.03 3.01 1.44 4.72 2.62 4.10 2 3.39 11.15 8.41 6.64 3.17 0.00 3.29 2.58 4.40 5.69 3.93 8.01 2.41 2.17 7.68 2.66 7.15 3.71 2.05 3 1.29 2.70 4.04 1.99 3.05 0.00 1.97 0.48 4.22 5.83 1.71 2.47 1.96 1.01 4.62 1.74 2.93 2.14 1.95 4 1.56 3.53 4.90 5.07 4.37 0.33 2.00 0.99 7.75 3.77 2.18 2.27 1.82 1.23 1.42 2.39 2.33 1.49 2.03 5 2.37 6.68 6.06 7.46 5.88 0.00 2.68 2.06 5.27 3.65 3.14 4.84 5.84 3.51 4.92 2.39 7.14 5.54 3.76 6 1.49 4.19 5.26 4.20 4.70 0.27 1.04 1.67 3.83 1.26 2.22 2.12 2.78 1.44 2.74 1.40 3.29 1.40 1.50 7 1.83 5.84 10.86 4.86 9.60 0.00 2.28 1.99 6.99 5.20 6.64 9.15 8.90 3.27 5.11 2.63 9.79 8.26 7.37 8 2.15 7.31 10.71 4.48 3.83 0.00 0.69 2.26 5.54 1.68 2.30 1.55 4.11 1.20 1.43 1.57 2.65 1.13 1.63 9 0.77 4.75 3.46 2.33 2.60 0.00 1.95 1.06 4.29 3.24 1.08 1.77 0.79 0.94 1.73 0.74 3.20 0.90 3.12 10 1.23 6.96 5.23 3.85 3.96 0.13 1.85 1.32 2.77 1.56 1.85 4.47 3.26 1.24 2.53 1.03 5.00 1.38 1.74 11 2.51 4.26 12.81 6.23 9.42 0.00 2.74 1.86 4.97 2.67 2.37 4.57 5.10 3.93 2.99 1.20 6.00 3.10 1.58 12 3.14 5.98 10.72 2.86 6.05 0.00 2.62 1.77 8.21 5.88 2.97 7.56 5.68 1.91 13.27 1.52 6.41 2.15 4.16 13 2.61 6.97 3.45 2.34 1.99 0.42 0.71 1.37 9.85 3.13 2.67 4.84 1.78 1.19 10.52 0.87 2.27 2.31 2.42 14 4.58 9.64 3.32 1.58 2.48 0.00 0.60 1.28 3.07 3.08 3.77 7.65 2.46 2.07 15.65 1.17 3.19 2.13 2.93 15 0.00 0.88 2.10 0.00 0.34 0.11 1.09 0.00 0.00 2.60 0.72 2.26 0.23 0.15 5.02 0.15 3.10 0.91 0.22 16 2.81 8.60 8.33 5.27 5.22 0.00 1.36 2.45 7.36 5.43 4.09 6.06 4.78 1.49 8.63 2.26 5.36 2.91 2.37 17 0.20 0.42 2.60 0.00 2.49 0.25 0.79 0.47 0.30 1.06 0.60 0.20 0.61 0.00 0.27 0.25 0.00 0.00 0.20 18 3.08 8.98 4.08 4.98 6.31 0.00 1.68 1.60 6.47 1.39 1.22 1.47 4.28 1.27 7.54 2.49 2.77 0.69 3.12 19 100 11.09 5.25 3.85 5.08 0.00 1.62 1.83 7.79 4.90 2.32 4.69 4.40 1.93 3.91 3.31 4.50 1.62 2.19 20 11.09 100 7.87 5.43 5.14 0.00 4.88 3.58 9.42 5.36 3.61 3.88 4.26 3.22 6.02 3.18 6.50 3.36 3.72 21 5.25 7.87 100 6.15 63.95 0.00 3.57 2.14 3.76 2.52 4.68 5.11 4.42 1.11 6.33 4.01 6.95 2.97 4.40 22 3.85 5.43 6.15 100 8.49 0.00 2.07 1.83 8.25 4.07 3.39 7.79 5.07 3.17 7.36 4.37 7.85 3.33 3.88 23 5.08 5.14 63.95 8.49 100 0.00 2.68 1.37 4.72 2.46 4.50 8.18 6.43 2.49 8.25 3.15 10.68 4.11 3.84 24 0.00 0.00 0.00 0.00 0.00 100 2.04 0.00 0.00 3.97 0.48 1.31 0.00 0.90 0.53 0.00 0.67 0.00 0.00 25 1.62 4.88 3.57 2.07 2.68 2.04 100 1.13 2.22 4.04 5.19 4.66 3.84 1.81 1.84 2.35 7.47 1.45 2.72 26 1.83 3.58 2.14 1.83 1.37 0.00 1.13 100 5.26 2.96 3.18 1.72 3.27 1.12 1.96 1.09 1.80 1.33 1.62 27 7.79 9.42 3.76 8.25 4.72 0.00 2.22 5.26 100 4.77 2.66 3.13 4.98 1.37 6.21 3.37 4.96 2.12 3.72 28 4.90 5.36 2.52 4.07 2.46 3.97 4.04 2.96 4.77 100 2.10 6.17 2.67 1.54 8.11 3.34 2.44 2.15 4.94 29 2.32 3.61 4.68 3.39 4.50 0.48 5.19 3.18 2.66 2.10 100 0.27 3.80 0.66 1.34 0.80 2.45 1.62 3.53 30 4.69 3.88 5.11 7.79 8.18 1.31 4.66 1.72 3.13 6.17 0.27 100 1.53 1.01 1.92 0.98 2.03 3.16 6.05 31 4.40 4.26 4.42 5.07 6.43 0.00 3.84 3.27 4.98 2.67 3.80 1.53 100 2.71 3.49 1.93 6.87 2.07 4.76 32 1.93 3.22 1.11 3.17 2.49 0.90 1.81 1.12 1.37 1.54 0.66 1.01 2.71 100 2.47 1.35 4.69 3.07 2.44 33 3.91 6.02 6.33 7.36 8.25 0.53 1.84 1.96 6.21 8.11 1.34 1.92 3.49 2.47 100 1.54 3.03 0.96 1.24 34 3.31 3.18 4.01 4.37 3.15 0.00 2.35 1.09 3.37 3.34 0.80 0.98 1.93 1.35 1.54 100 2.92 1.67 1.68 35 4.50 6.50 6.95 7.85 10.68 0.67 7.47 1.80 4.96 2.44 2.45 2.03 6.87 4.69 3.03 2.92 100 2.95 2.84 36 1.62 3.36 2.97 3.33 4.11 0.00 1.45 1.33 2.12 2.15 1.62 3.16 2.07 
3.07 0.96 1.67 2.95 100 2.05 37 2.19 3.72 4.40 3.88 3.84 0.00 2.72 1.62 3.72 4.94 3.53 6.05 4.76 2.44 1.24 1.68 2.84 2.05 100 38 0.00 0.00 1.41 0.00 1.02 0.22 0.74 0.43 0.00 0.00 0.74 0.39 0.46 0.63 0.16 0.15 0.00 0.47 0.23 39 0.90 0.44 0.35 3.61 2.71 0.00 0.37 2.02 1.30 0.88 0.30 0.26 2.07 3.13 0.32 0.30 0.20 0.59 3.42 40 0.43 0.00 1.72 0.00 1.32 0.31 1.06 0.00 0.00 0.00 0.97 0.60 0.22 0.15 0.00 0.29 0.38 0.54 0.00 41 0.67 0.00 0.70 0.90 1.01 0.00 0.37 0.43 0.00 0.87 0.00 6.97 0.23 0.31 0.32 0.15 0.19 13.75 0.90 42 0.67 1.32 1.40 2.98 1.68 0.66 0.91 0.28 1.62 3.19 0.00 0.13 0.23 1.70 0.00 0.60 0.78 0.46 2.48 43 28.45 2.95 0.80 2.44 1.55 0.00 1.40 1.54 1.87 0.34 3.34 0.71 2.79 1.64 1.04 0.60 0.49 1.00 2.76 44 16.72 2.40 1.94 2.02 1.50 0.00 1.32 1.08 2.17 2.30 2.02 1.48 2.93 1.93 0.59 0.94 1.86 0.77 0.79 45 29.89 3.50 3.28 3.58 3.17 2.54 1.21 1.81 3.06 2.80 2.46 1.86 2.58 2.56 0.87 1.67 1.78 0.52 4.56 46 18.27 4.95 2.83 1.76 2.74 0.00 1.18 1.96 2.64 2.41 1.60 5.59 2.82 1.88 1.48 2.04 1.74 10.31 2.52 47 25.88 5.46 3.65 4.60 2.35 0.17 3.09 1.58 3.41 4.84 5.03 1.82 3.96 4.20 1.71 3.69 3.25 1.02 3.94 size 17 15 18 15 18 17 18 16 2 0 18 17 15 15 19 2 0 18 18 19 17

Table C5: Classification of the model of LCD screens (continued)


model 38 39 40 41 42 43 44 45 46 47 1 0.00 1.52 0.00 0.17 1.16 0.43 2.25 1.80 1.76 9.09 19 2 0.25 2.80 0.25 1.01 5.05 1.21 2.62 3.12 2.76 2.46 19 3 1.66 0.24 0.46 0.00 0.23 0.29 1.37 1.18 0.58 2.03 17 4 0.55 0.69 0.00 0.54 0.81 1.30 1.74 2.34 1.90 2.10 17 5 0.00 2.29 0.25 0.25 0.25 2.42 2.61 1.56 2.45 2.46 17 6 0.16 1.96 0.31 0.00 0.32 1.48 1.40 0.66 1.72 1.30 18 7 0.84 6.75 0.27 0.28 1.95 6.26 2.22 3.72 3.00 4.68 19 8 0.00 0.37 0.00 0.18 0.00 1.16 0.88 2.17 0.94 1.66 20 9 0.00 0.34 0.00 0.51 0.17 0.22 1.25 1.15 1.57 1.81 16 10 0.16 0.80 0.61 0.48 0.79 1.26 0.59 0.65 0.85 1.29 18 11 0.00 2.52 0.22 0.23 0.68 2.50 2.92 2.86 2.81 3.11 18 12 1.57 5.66 0.15 1.87 2.02 4.53 5.22 7.29 1.05 3.79 17 13 1.20 2.39 0.37 0.52 1.18 1.27 2.03 1.90 1.66 4.84 20 14 3.12 2.34 0.59 0.77 0.77 0.82 1.54 3.20 1.46 2.72 19 15 22.51 24.45 23.29 21.95 22.42 0.37 0.69 1.17 0.38 0.38 17 16 0.45 3.14 0.14 1.04 2.06 1.98 2.04 2.89 2.22 4.67 17 17 1.13 2.49 0.00 0.33 1.33 0.00 0.00 0.34 0.99 0.83 18 18 0.20 0.59 0.19 0.58 0.97 0.98 1.16 2.28 1.98 5.48 16 19 0.00 0.90 0.43 0.67 0.67 28.45 16.72 29.89 18.27 25.88 17 20 0.00 0.44 0.00 0.00 1.32 2.95 2.40 3.50 4.95 5.46 15 21 1.41 0.35 1.72 0.70 1.40 0.80 1.94 3.28 2.83 3.65 18 22 0.00 3.61 0.00 0.90 2.98 2.44 2.02 3.58 1.76 4.60 15 23 1. 0 2 2 . 71 1. 3 2 1. 0 1 1. 6 8 1. 55 1. 50 3 . 17 2 . 74 2 . 3 5 18 24 0.22 0.00 0.31 0.00 0.66 0.00 0.00 2.54 0.00 0.17 17 25 0.74 0.37 1.06 0.37 0.91 1.40 1.32 1.21 1.18 3.09 18 26 0.43 2.02 0.00 0.43 0.28 1.54 1.08 1.81 1.96 1.58 16 27 0.00 1.30 0.00 0.00 1.62 1.87 2.17 3.06 2.64 3.41 20 28 0.00 0.88 0.00 0.87 3.19 0.34 2.30 2.80 2.41 4.84 18 29 0.74 0.30 0.97 0.00 0.00 3.34 2.02 2.46 1.60 5.03 17 30 0.39 0.26 0.60 6.97 0.13 0.71 1.48 1.86 5.59 1.82 15 31 0.46 2.07 0.22 0.23 0.23 2.79 2.93 2.58 2.82 3.96 15 32 0.63 3.13 0.15 0.31 1.70 1.64 1.93 2.56 1.88 4.20 19 33 0.16 0.32 0.00 0.32 0.00 1.04 0.59 0.87 1.48 1.71 20 34 0.15 0.30 0.29 0.15 0.60 0.60 0.94 1.67 2.04 3.69 18 35 0.00 0.20 0.38 0.19 0.78 0.49 1.86 1.78 1.74 3.25 18 36 0.47 0.59 0.54 13.75 0.46 1.00 0.77 0.52 10.31 1.02 19 37 0.23 3.42 0.00 0.90 2.48 2.76 0.79 4.56 2.52 3.94 17 38 100 31.44 30.03 30.20 32.17 2.08 0.88 2.17 0.96 0.77 17 39 31.44 100 25.83 32.58 45.45 0.76 0.71 0.99 0.38 2.13 17 40 30.03 25.83 100 28.98 30.18 0.00 0.34 0.19 0.37 0.00 17 41 30.20 32.58 28.98 100 40.08 0.75 0.52 0.39 0.57 2.49 17 42 32.17 45.45 30.18 40.08 100 1.12 0.70 1.95 0.38 2.49 17 43 2.08 0.76 0.00 0.75 1.12 100 27.92 38.87 23.11 40.69 17 44 0.88 0.71 0.34 0.52 0.70 27.92 100 28.73 19.41 31.26 17 45 2.17 0.99 0.19 0.39 1.95 38.87 28.73 100 23.17 38.10 17 46 0.96 0.38 0.37 0.57 0.38 23.11 19.41 23.17 100 28.96 17 47 0.77 2.13 0.00 2.49 2.49 40.69 31.26 38.10 28.96 100 17 size 17 17 17 17 17 17 17 17 17 17

Table C5: Classification of the model of LCD screens (continued)


APPENDIX D GRAPHIC USER INTERFACE

Figure D1: Graphical user interface console. The labelled areas are the graphic display area, configuration, data log, operation commands, and process control.

The user interacts with the system through the graphic user interface (GUI) for process control and for demonstrations in the learning process. For demonstration, the GUI is designed for intuitive interaction that allows the user to precisely demonstrate the commands and primitive cutting operations. The user's 2D and 3D perception of the scene is taken into account in the design. The GUI is developed in C/C++ under Visual Studio 2008 and consists of five main areas: 1) graphic display area, 2) operation commands, 3) configuration, 4) data log, and 5) process control.

• Graphic display area: Snapshots of the colour and depth images captured during the process are rendered in this area. The display can be switched between the input images and the output image of the detection process.


• Process control: The user starts/pauses/stops the process using this panel. In addition, the system can run in one of five operation modes specified by the user, although only three of them are available: 1) Automatic, 2) Manual, and 3) Configuration. In the automatic mode, the system performs disassembly autonomously; this mode is used in the performance tests in Chapter 7. The manual mode is used for concept and preliminary tests of individual functions, e.g. the vision system's detection, before the actual operation. The configuration mode is explained next.

• Configuration: In the configuration mode, the user is allowed to make minor adjustments to some parameters for calibration purposes, e.g. depth image compensation.

• Data log: The data flows among the three operating modules are shown in this console as text, mostly appearing as Actions and Fluents according to the cognitive robotic module's commands. Timestamps in milliseconds are used for data recording, and the data are written directly to file as the process runs (a minimal logging sketch follows this list).

• Operation commands: The user sends commands through this panel. The commands available on the panel correspond to the sensing actions and primitive actions. Every command is activated by pressing a button, which helps the user provide error-free input; only the model name needs to be specified as text input.
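To make the data-log format concrete, the following is a minimal C++ sketch of timestamped Action/Fluent logging of the kind described above. The function name, message strings, and file name are illustrative assumptions, not the thesis implementation.

#include <chrono>
#include <fstream>
#include <string>

// Append one timestamped entry to the data log (hypothetical helper).
void logEntry(std::ofstream& logFile, const std::string& kind, const std::string& msg)
{
    using namespace std::chrono;
    // Millisecond timestamp, as used for data recording in the data log console.
    auto ms = duration_cast<milliseconds>(
                  steady_clock::now().time_since_epoch()).count();
    // Entries are written straight to file as the process runs; endl flushes immediately.
    logFile << ms << " [" << kind << "] " << msg << std::endl;
}

int main()
{
    std::ofstream logFile("datalog.txt", std::ios::app);
    logEntry(logFile, "Action", "cutOperation(backCover, borderTop)");   // example Action
    logEntry(logFile, "Fluent", "componentRemoved(backCover) = true");   // example Fluent
}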

Figure D2: Graphical user interface – operation part


APPENDIX E EXPERIMENTAL RESULTS

E.1 Time consumption

Time (sec) Count Time (min) Index Model Autonomous Human assist Autonomous Human Total AI Auto Opn Vision 1 AC1 10.92 1608.33 18.83 505.72 19 27.30 8.43 35.73 2 AC2 18.63 2518.17 163.25 447.89 18 45.00 7.46 52.47 3 AO1 20.55 2205.68 28.89 1079.73 34 37.59 18.00 55.58 5 BQ3 46.99 2695.34 33.69 496.00 22 46.27 8.27 54.53 6 BQ4 22.70 1712.88 32.50 878.47 26 29.47 14.64 44.11 7 BQ5 33.41 1094.39 27.27 954.48 35 19.25 15.91 35.16 BQ6 8.88 1421.03 16.16 382.01 13 24.10 6.37 30.47 8 BQ6_pcb 9.31 270.85 10.50 0.00 0 4.84 0.00 4.84 10 DD1 20.13 1969.22 33.72 691.16 27 33.72 11.52 45.24 12 DL2 15.31 1407.04 18.52 1368.67 52 24.01 22.81 46.83 DL4 16.51 1653.38 18.28 328.15 8 28.14 5.47 33.61 14 DL4_pcb 12.44 634.27 12.17 396.69 21 10.98 6.61 17.59 DL5 (r1) 12.67 1504.37 18.06 150.11 9 25.59 2.50 28.09 15 DL5_pcb 10.45 504.47 11.94 793.42 30 8.78 13.22 22.00 16 DL6 45.65 1701.47 18.20 1507.61 42 29.42 25.13 54.55 19 HP2 (r1) 23.17 1607.62 29.64 1214.61 38 27.67 20.24 47.92 20 HP3 17.31 2286.09 27.87 1147.45 38 38.85 19.12 57.98 23 IB3 17.16 1937.21 29.56 1165.62 46 33.07 19.43 52.49 LG2 9.47 1534.79 17.92 226.22 4 26.04 3.77 29.81 26 LG2_pcb 16.45 537.91 14.73 362.52 16 9.48 6.04 15.53 27 NC1 11.23 951.57 23.24 1668.44 56 16.43 27.81 44.24 28 NC4 11.49 964.89 24.72 945.52 33 16.68 15.76 32.44 OP1 11.06 1550.39 18.34 685.92 21 26.33 11.43 37.76 29 OP1_pcb 12.92 874.63 14.75 433.97 18 15.04 7.23 22.27 32 SS3 12.19 864.86 25.97 1596.11 45 15.05 26.60 41.65 33 SS4 18.09 1974.75 31.78 231.76 12 33.74 3.86 37.61 SS5 9.81 1387.23 23.53 295.19 11 23.68 4.92 28.60 34 SS5_pcb 28.69 481.25 12.11 99.67 6 8.70 1.66 10.36 35 SS6 29.99 1633.03 21.30 1466.23 48 28.07 24.44 52.51 SS7 10.02 1925.08 18.50 77.33 4 32.56 1.29 33.85 36 SS7_pcb 11.81 550.20 11.94 346.02 18 9.57 5.77 15.33

Learning & Revision Test DL5 (r1) 12.67 1504.37 18.06 150.11 9 25.59 2.50 28.09 DL5_pcb 10.45 504.47 11.94 793.42 30 8.78 13.22 22.00 DL5 (r2) 38.28 796.58 9.95 1570.52 42 14.08 26.18 40.26 15 DL5 (r3) 18.91 1076.05 9.67 43.58 3 18.41 0.73 19.14 DL5 (r4) 10.02 1372.75 9.62 201.45 7 23.21 3.36 26.56 DL5 (r5) 18.61 1356.09 9.66 15.00 3 23.07 0.25 23.32 HP2 (r1) 23.17 1607.62 29.64 1214.61 38 27.67 20.24 47.92 HP2 (r2) 22.39 2312.69 14.23 171.24 8 39.16 2.85 42.01 19 HP2 (r3) 39.31 1211.33 14.17 56.09 6 21.08 0.93 22.02 HP2 (r4) 22.08 1205.50 14.28 134.25 9 20.70 2.24 22.94 HP2 (r5) 20.53 1477.13 14.19 10.00 2 25.20 0.17 25.36 NOTE: 5 minutes penalty for the second run must be added on the PCB related process Table E.1: Time consumption


E.2 Time consumption by operations

Time (minutes)
Index  Model     AI    VS    Flip   Cutting  Human D  Total
1      AC1       0.18  0.31   7.09  26.57    1.58     35.73
2      AC2       0.31  0.62   9.45  40.58    1.50     52.47
3      AO1       0.34  0.48  10.17  41.75    2.83     55.58
5      BQ3       0.78  0.56   9.81  41.55    1.83     54.53
6      BQ4       0.38  0.54   7.09  33.94    2.17     44.11
7      BQ5       0.56  0.45   9.45  21.78    2.92     35.16
8      BQ6       0.30  0.44   5.63  27.85    1.08     40.31
10     DD1       0.34  0.56   8.36  33.73    2.25     45.24
12     DL2       0.26  0.31   6.54  39.18    0.55     46.83
14     DL4       0.48  0.51   9.27  40.17    0.77     56.20
15     DL5 (r1)  0.39  0.50   8.54  39.95    0.71     55.09
16     DL6       0.76  0.30   7.99  44.83    0.67     54.55
19     HP2 (r1)  0.39  0.49  10.90  35.23    0.91     47.92
20     HP3       0.29  0.46  10.72  45.61    0.89     57.98
23     IB3       0.29  0.49  10.54  40.30    0.88     52.49
26     LG2       0.43  0.54   9.99  33.53    0.83     50.33
27     NC1       0.19  0.39   7.27  35.79    0.61     44.24
28     NC4       0.19  0.41   5.81  25.54    0.48     32.44
29     OP1       0.40  0.55  11.81  46.29    0.98     65.03
32     SS3       0.20  0.43   8.90  31.37    0.74     41.65
33     SS4       0.30  0.53   6.54  29.69    0.55     37.61
34     SS5       0.64  0.59   6.90  30.25    0.58     43.96
35     SS6       0.50  0.35   8.90  42.01    0.74     52.51
36     SS7       0.36  0.51   8.54  39.06    0.71     54.18

Average          0.39  0.47   8.59  36.11    1.16     48.17
Min              0.18  0.30   5.63  21.78    0.48     32.44
Max              0.78  0.62  11.81  46.29    2.92     65.03

Table E.2: Time consumption by operations

NOTE: Description of operation types

• AI = artificial intelligence
• VS = vision system
• Flip = flipping the table and subsequently checking the state change
• Cutting = cutting operation and checking the grinder size
• Human D = decision making in the human-assistance session


E.3 Weight comparison – Residue and efficiency

E.3.1 Flexibility test

before cut (Ideal undamage) Plas tic ( g) PCBs ( g) Steel (g) LCD Model Back Front Ctrl Pw r etc PCB-c Carrier Mod (g) AC1 402.3 112.9 489 490 2125.7 AC2 441.7 122.5 53.3 261.8 2418.2 AO1 530.1 133.2 55.8 259 19.5 160.9 691.5 1950.2 BQ3 417.1 89 63.9 266.5 16.8 166 496.8 1866.8 BQ4 431 162.8 294.6 66.9 27.1 474.2 340.5 1964.5 BQ5 569.2 134.6 45.5 258.4 16.8 182.3 638.9 2369.1 BQ6 477.2 289.7 49.9 313.1 33.2 32.8 915.6 1976.8 DD1 474.9 178.2 50 214.1 17.7 206.2 430.2 1818.8 DL2 382.3 123.6 41.4 310.7 640.4 1972.2 DL4 573.2 137.2 43.7 256.6 5.7 268.4 669.6 2498.2 DL5 404.2 57.5 93.6 316.6 21.4 686.5 2299.3 DL6 596 128 133.5 314.5 686.5 1514.4 HP2 554.5 108.8 42.3 259.4 14.1 153.9 510.9 1890.6 HP3 464 113.1 69.7 216.9 44.7 110.5 500 913.5 IB3 502.9 209.3 86.2 249.5 511.4 765.6 1235.1 LG2 420.1 128.1 47.9 173.7 10.7 396.7 377.3 989.2 NC1 486.6 190.5 107.2 285.8 99.8 691.8 1008.6 2806.4 NC4 391.3 193 76.7 283.3 21.3 1003.2 1835.6 OP1 445.3 78.3 72.8 263.1 20 499.8 735.9 1520.5 SS3 663 266.6 188.6 51.6 171.7 897 1691.7 SS4 565.3 69.6 78.7 275.5 1055.9 500.4 2050 SS5 442.1 80.2 208.5 40.1 59.3 458.2 2071.8 SS6 499.2 264.2 356.5 51.9 455.5 1235.9 SS7 564.9 144 48.9 233.2 8.3 1100.8 491.2 2367.1

After cut (actual damage) Plas tic ( g) PCBs ( g) Steel (g) LCD Model Back Front Ctrl Pwr etc PCB-c Carrier Mod (g) AC1 445.5 105.1 481.6 480.1 1762.3 AC2 427.3 129.2 32.7 251.4 2388.8 AO1 520.3 150.3 47.1 251.6 137.7 564.7 2118.1 BQ3 413.4 104.5 38.9 233.4 140.1 469.5 1913.1 BQ4 410.3 190.6 261.5 40.1 427.6 415.2 1874.7 BQ5 540 154.8 30.4 237 199.7 554.1 2444.9 BQ6 450.7 89.7 85.3 278.1 599.6 2135.2 DD1 458.7 202.7 36.7 198.8 115.3 668.9 1640.4 DL2 381.2 131.4 38.4 319.1 10.6 545 2016 DL4 562.1 144.6 37.5 228.3 446.5 614.5 2252.3 DL5 387.4 72.5 119.8 332.4 517.6 2049.8 DL6 578.8 128.7 127.2 363.8 770.9 1350.2 HP2 561.1 111 27.2 219.9 135.7 447.6 2028.4 HP3 450.4 125.8 54 204.5 84.9 479.3 1043.4 IB3 386.3 332.1 66.4 226.1 445.2 641.7 1428.6 LG2 419.7 139.3 30.2 145.8 393.4 436.1 951.1 NC1 427.5 287.3 71.9 246.2 80.9 377 1145 2900.5 NC4 350.5 251.7 64.4 264.6 761.6 2065.6 OP1 430.2 98.2 50.6 251.2 447.2 606.3 1726 SS3 646.4 453.4 126.1 73.4 821.7 1728.7 SS4 545.5 102.7 66.9 237.8 926.7 287.6 2385.2 SS5 421.6 77 203.5 30.3 56.8 437.8 2055.1 SS6 490.4 305.5 310.5 54.5 432.3 1235.6 SS7 555.5 158.1 41.4 207.9 1001.6 487.8 2419.2 Table E.3: Weight comparison in ideal and actual cases


Effiiecny and residue Residue (%) Efficiency (%) Model Plastic PCB Steel Comp Total Plastic PCB Steel Comp Total AC1 -6.87 1.51 2.02 17.10 9.54 93.13 98.49 97.98 82.90 90.46 AC2 1.36 9.84 1.22 2.07 98.64 90.16 98.78 97.93 AO1 -1.10 10.65 17.60 -8.61 0.27 98.90 89.35 82.40 91.39 99.73 BQ3 -2.33 21.57 8.03 -2.48 2.07 97.67 78.43 91.97 97.52 97.93 BQ4 -1.20 22.39 -3.45 4.57 3.76 98.80 77.61 96.55 95.43 96.24 BQ5 1.28 16.62 8.21 -3.20 1.28 98.72 83.38 91.79 96.80 98.72 BQ6 29.53 8.28 36.78 -8.01 11.00 70.47 91.72 63.22 91.99 89.00 DD1 -1.27 16.43 -23.22 9.81 2.02 98.73 83.57 76.78 90.19 97.98 DL2 -1.32 -1.53 13.24 -2.22 0.83 98.68 98.47 86.76 97.78 99.17 DL4 0.52 13.14 -13.11 9.84 3.75 99.48 86.86 86.89 90.16 96.25 DL5 0.39 26.88 10.85 10.30 99.61 73.12 89.15 89.70 DL6 2.28 -9.60 -12.29 10.84 1.58 97.72 90.40 87.71 89.16 98.42 HP2 -1.33 21.75 12.26 -7.29 0.10 98.67 78.25 87.74 92.71 99.90 HP3 0.16 21.97 7.58 -14.22 -0.41 99.84 78.03 92.42 85.78 99.59 IB3 -0.87 12.87 14.89 -15.67 0.94 99.13 87.13 85.11 84.33 99.06 LG2 -1.97 24.24 -7.17 3.85 1.10 98.03 75.76 92.83 96.15 98.90 NC1 -5.57 1.70 10.49 -3.35 0.96 94.43 98.30 89.51 96.65 99.04 NC4 -3.06 13.72 24.08 -12.53 1.21 96.94 86.28 75.92 87.47 98.79 OP1 -0.92 15.20 14.74 -13.52 0.72 99.08 84.80 85.26 86.48 99.28 SS3 -18.31 47.50 16.24 -2.19 2.05 81.69 52.50 83.76 97.81 97.95 SS4 -2.09 13.98 21.98 -16.35 0.94 97.91 86.02 78.02 83.65 99.06 SS5 4.54 5.95 4.43 0.81 2.32 95.46 94.05 95.57 99.19 97.68 SS6 -4.26 10.63 5.09 0.02 1.20 95.74 89.37 94.91 99.98 98.80 SS7 -0.66 14.15 6.44 -2.20 1.75 99.34 85.85 93.56 97.80 98.25 Average w rt. total w eight of al samples 2.64 97.36 Average -0.54 13.61 8.34 -1.79 2.56 96.12 85.43 86.51 92.47 97.41 SD 7.69 11.15 13.81 9.26 3.14 6.61 9.79 8.55 5.46 3.11 Min -18.31 -9.60 -23.22 -16.35 -0.41 70.47 52.50 63.22 82.90 89.00 Max 29.53 47.50 36.78 17.10 11.00 99.84 98.49 97.98 99.98 99.90

Table E.4: Efficiency and residue in actual cases
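For readability, the percentages in Table E.4 appear to follow from the before/after weights in Table E.3 as below; this is a reconstruction inferred from the tabulated values rather than an equation stated in this appendix.

\[
\text{residue} = \frac{w_{\text{before}} - w_{\text{after}}}{w_{\text{before}}} \times 100\%,
\qquad
\text{efficiency} = 100\% - \left|\text{residue}\right|
\]

For example, for the AC1 plastic: \(w_{\text{before}} = 402.3 + 112.9 = 515.2\,\text{g}\) and \(w_{\text{after}} = 445.5 + 105.1 = 550.6\,\text{g}\), giving a residue of \(-6.87\%\) and an efficiency of \(93.13\%\), in agreement with Table E.4.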


E.3.2 Learning and revision: HP2 (Type-I) and DL5 (Type-II)

Learning and Revision - Weight comparison Model Version Plas tic ( g) PCBs ( g) Steel (g) LCD Back Front Ctrl Pwr etc PCB-c Carrier Mod (g) HP2 Ideal 554.5 108.8 42.3 259.4 14.1 153.9 510.9 1890.6 r1 551.1 106.7 29.6 210.6 135 430.3 2077.7 r2 529.5 105.4 44 215.7 141.8 421.5 2073.4 r3 556 105.5 23.5 215.3 131 474.7 2100.3 r4 563.1 110.8 31.4 222.2 143.9 450.1 2305.7 r5 566.9 106.4 28.7 221 133.3 454.7 2074.9 DL5 Ideal 404.2 57.5 93.6 316.6 686.5 2299.3 r1 429.1 72.1 53.9 185.9 726.1 2025.2 r2 381.7 72.7 107.45 344.8 530.7 2010.5 r3 414.7 72.1 111.3 335.3 510.9 2072.5 r4 365.2 72.4 107.8 336.1 505.3 2003.4 r5 419.6 69.7 99.2 339.6 512.7 2065.6 Table E.5: Weight comparison in learning and revision tests

Learning and revision – efficiency and residue – HP2 (Type-I)

         Residue (%)                               Efficiency (%)
Run      Plastic   PCB     Steel   Comp    Total   Plastic   PCB     Steel   Comp    Total
r1        0.83    23.94    14.97   -9.90   -0.18    99.17    76.06   85.03   90.10   99.82
r2        4.28    17.76    15.27   -9.67    0.09    95.72    82.24   84.73   90.33   99.91
r3        0.27    24.38     8.89  -11.09   -2.03    99.73    75.62   91.11   88.91   97.97
r4       -1.60    19.70    10.65  -21.96   -8.28    98.40    80.30   89.35   78.04   91.72
r5       -1.51    20.93    11.55   -9.75   -1.45    98.49    79.07   88.45   90.25   98.55
Average   0.46    21.34    12.27  -12.47   -2.37    98.30    78.66   87.73   87.53   97.59
SD        2.39     2.81     2.78    5.33    3.42     1.54     2.81    2.78    5.33    3.39
Min      -1.60    17.76     8.89  -21.96   -8.28    95.72    75.62   84.73   78.04   91.72
Max       4.28    24.38    15.27   -9.67    0.09    99.73    82.24   91.11   90.33   99.91

Learning and revision – efficiency and residue – DL5 (Type-II)

         Residue (%)                               Efficiency (%)
Run      Plastic   PCB     Steel   Comp    Total   Plastic   PCB     Steel   Comp    Total
r1       -8.56    41.54    -5.77   11.92    9.47    91.44    58.46   94.23   88.08   90.53
r2        1.79   -10.25    22.69   12.56   10.65    98.21    89.75   77.31   87.44   89.35
r3       -4.98    -8.87    25.58    9.86    8.88    95.02    91.13   74.42   90.14   91.12
r4        5.83    -8.22    26.39   12.87   12.19    94.17    91.78   73.61   87.13   87.81
r5       -5.07    -6.97    25.32   10.16    9.20    94.93    93.03   74.68   89.84   90.80
Average  -2.20    -8.58    25.00   11.48   10.08    94.75    91.42   75.00   88.52   89.92
SD        5.85     1.37     1.60    1.38    1.35     2.41     1.37    1.60    1.38    1.51
Min      -8.56   -10.25    22.69    9.86    8.88    91.44    89.75   73.61   87.13   87.81
Max       5.83    -6.97    26.39   12.87   12.19    98.21    93.03   77.31   90.14   91.12

NOTE: DL5 r1: PCB and steel are not taken into account due to incomplete disassembly.

Table E.6: Efficiency and residue in learning and revision tests


E.4 Energy consumption and disassembly cost

The energy consumption is calculated from the rated power of the equipment that physically performs the disassembly and from the time consumed. The vision system processing and the data manipulation of the CRA consume minimal energy and are therefore ignored. The specifications of the equipment and the corresponding operations are given in Table E.7; the percentage of time consumption is approximated from the experiments. The flipping operation is not considered during human assistance, since the chance of its execution there is trivial. Based on the time consumption presented in Section 7.1.2.3 and Table E.8, the average energy consumption is 0.78 kWh/screen, which corresponds to $0.39/screen.

Moreover, the cost of abrasive cut-off discs, at $2.76 per screen, was taken into account. Therefore, the average disassembly cost was $3.15/screen in total, excluding the setup cost of approximately $60,000. In comparison, a comparable selective non-destructive manual disassembly took 6.2 minutes/screen on average (Kernbaum et al. 2009), corresponding to approximately $2.5/screen (based on a technician's base salary of $30/hr). Overall, the major difference between the proposed system and manual disassembly is the setup cost. The cost per screen is slightly higher for the proposed system; however, it is expected to decrease as the same model is processed repeatedly, as a result of the learning and revision strategy.
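Putting the figures above together, the average per-screen cost decomposes as:

\[
C_{\text{screen}} \approx \underbrace{0.78\,\text{kWh} \times \$0.50/\text{kWh}}_{\text{electricity: } \$0.39} + \underbrace{\$2.76}_{\text{cut-off disc}} = \$3.15
\]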

Operation   Equipment            Rated power (W)   Time consumption
Flipping    DC motor             186               20% of autonomous operation
Cutting     Robot (IRB-140)      400               80% of autonomous operation
            Controller (IRC-5)   250                 and human assistance
            Angle grinder        850

Table E.7: Energy consumption of the disassembly process


Time (sec) Energy consumption (KWh) Electricity Index Model Flip op Cut op FlipTable Robot + grinder Cost ($) Total 20% Auto 80% Auto+H 186W 1500W $0.5/KWh 1 AC1 321.67 1792.38 0.0166 0.7468 0.76 0.38 2 AC2 503.63 2462.43 0.0260 1.0260 1.05 0.53 3 AO1 441.14 2844.27 0.0228 1.1851 1.21 0.60 5 BQ3 539.07 2652.27 0.0279 1.1051 1.13 0.57 6 BQ4 342.58 2248.77 0.0177 0.9370 0.95 0.48 7 BQ5 218.88 1830.00 0.0113 0.7625 0.77 0.39 BQ6 284.21 1518.83 0.0147 0.6328 0.65 0.32 8 BQ6_pcb 54.17 216.68 0.0028 0.0903 0.09 0.05 10 DD1 393.84 2266.53 0.0203 0.9444 0.96 0.48 12 DL2 281.41 2494.30 0.0145 1.0393 1.05 0.53 DL4 330.68 1650.86 0.0171 0.6879 0.70 0.35 14 DL4_pcb 126.85 904.10 0.0066 0.3767 0.38 0.19 DL5 (r1) 300.87 1353.60 0.0155 0.5640 0.58 0.29 15 DL5_pcb 100.89 1197.00 0.0052 0.4988 0.50 0.25 16 DL6 340.29 2868.79 0.0176 1.1953 1.21 0.61 19 HP2 (r1) 321.52 2500.71 0.0166 1.0420 1.06 0.53 20 HP3 457.22 2976.32 0.0236 1.2401 1.26 0.63 23 IB3 387.44 2715.38 0.0200 1.1314 1.15 0.58 LG2 306.96 1454.05 0.0159 0.6059 0.62 0.31 26 LG2_pcb 107.58 792.84 0.0056 0.3304 0.34 0.17 27 NC1 190.31 2429.70 0.0098 1.0124 1.02 0.51 28 NC4 192.98 1717.43 0.0100 0.7156 0.73 0.36 OP1 310.08 1926.23 0.0160 0.8026 0.82 0.41 29 OP1_pcb 174.93 1133.68 0.0090 0.4724 0.48 0.24 32 SS3 172.97 2288.00 0.0089 0.9533 0.96 0.48 33 SS4 394.95 1811.56 0.0204 0.7548 0.78 0.39 SS5 277.45 1404.98 0.0143 0.5854 0.60 0.30 34 SS5_pcb 96.25 484.67 0.0050 0.2019 0.21 0.10 35 SS6 326.61 2772.65 0.0169 1.1553 1.17 0.59 SS7 385.02 1617.39 0.0199 0.6739 0.69 0.35 36 SS7_pcb 110.04 786.18 0.0057 0.3276 0.33 0.17

Average 0.78 0.39 Learning & Revision Test DL5 (r1) 300.87 1353.60 0.0155 0.5640 0.58 0.29 DL5_pcb 100.89 1197.00 0.0052 0.4988 0.50 0.25 DL5 (r2) 159.32 2207.78 0.0082 0.9199 0.93 0.46 15 DL5 (r3) 215.21 904.41 0.0111 0.3768 0.39 0.19 DL5 (r4) 274.55 1299.65 0.0142 0.5415 0.56 0.28 DL5 (r5) 271.22 1099.88 0.0140 0.4583 0.47 0.24 HP2 (r1) 321.52 2500.71 0.0166 1.0420 1.06 0.53 HP2 (r2) 462.54 2021.39 0.0239 0.8422 0.87 0.43 19 HP2 (r3) 242.27 1025.16 0.0125 0.4271 0.44 0.22 HP2 (r4) 241.10 1098.65 0.0125 0.4578 0.47 0.24 HP2 (r5) 295.43 1191.70 0.0153 0.4965 0.51 0.26 Table E.8: Energy consumption and cost

NOTES:
• Average time consumption is estimated from the nominal rates observed in the autonomous process, excluding the vision system and cognitive robotics operating time.
• Power used by each operation: FlipTable = motor (186 W); Cutting = robot (400 W) + controller (250 W) + grinder (850 W). (A sketch of this calculation follows.)
• The energy cost was estimated from the peak-time electricity price, $0.5/kWh, at the time this research was conducted.
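As an illustration of the second and third notes, the following is a minimal C++ sketch of the per-screen energy and cost calculation. The function and variable names are hypothetical; the power ratings, times, and tariff are taken from Tables E.7 and E.8.

#include <cstdio>

// Rated powers from Table E.7 (watts).
const double P_FLIP = 186.0;                  // flipping table DC motor
const double P_CUT  = 400.0 + 250.0 + 850.0;  // robot + controller + grinder = 1500 W

// Energy (kWh) for one screen, given flip and cutting times in seconds,
// following the note above: E = (t_flip * 186 + t_cut * 1500) / 3.6e6.
double energyKWh(double tFlipSec, double tCutSec)
{
    return (tFlipSec * P_FLIP + tCutSec * P_CUT) / 3.6e6;  // W*s -> kWh
}

int main()
{
    // Model AC1 from Table E.8: 321.67 s flipping, 1792.38 s cutting.
    double e    = energyKWh(321.67, 1792.38);
    double cost = e * 0.5;                    // $0.5/kWh peak-time tariff
    std::printf("AC1: %.2f kWh, $%.2f\n", e, cost);  // ~0.76 kWh, ~$0.38
}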


E.5 Vision system results

(Data presented as Appendix C) Backcover Model detect actual Error (out+, in-) index L R D U L R D U XL XR YD YU 1 146 927 148 659 138 931 146 663 -8 -4 -2 -4 pixel 2 132 931 132 653 129 931 130 655 -3 0 -2 -2 mean -4.96 3 214 864 120 685 203 874 113 691 -11 -10 -7 -6 stdev 3.68 4 213 872 126 674 204 875 119 687 -9 -3 -7 -13 rms 6.17 5 206 860 79 685 200 868 74 689 -6 -8 -5 -4 max 7.00 6 193 937 66 678 187 938 64 679 -6 -1 -2 -1 min -15.00 7 192 914 25 689 190 918 23 693 -2 -4 -2 -4 8 209 861 87 685 201 869 81 691 -8 -8 -6 -6 mm 9 192 851 127 684 190 859 124 688 -2 -8 -3 -4 mean -0.88 10 195 921 76 683 187 927 71 685 -8 -6 -5 -2 stdev 4.26 11 220 878 86 627 218 871 85 634 -2 7 -1 -7 rms 4.04 12 199 856 107 648 190 860 98 654 -9 -4 -9 -6 max 4.00 13 224 879 95 664 220 881 93 665 -4 -2 -2 -1 min -8.57 14 207 795 132 623 204 803 128 631 -3 -8 -4 -8 15 202 885 106 680 193 892 93 688 -9 -7 -13 -8 16 212 824 136 617 205 832 126 632 -7 -8 -10 -15 17 188 929 63 691 187 933 62 693 -1 -4 -1 -2 18 189 863 94 693 187 865 91 697 -2 -2 -3 -4 19 202 876 129 679 201 882 128 684 -1 -6 -1 -5 20 190 903 83 676 185 912 83 674 -5 -9 0 2 21 134 916 66 663 132 924 59 668 -2 -8 -7 -5 22 141 906 149 645 136 910 147 650 -5 -4 -2 -5 23 201 924 136 639 188 925 125 651 -13 -1 -11 -12 24 196 940 69 683 191 941 66 689 -5 -1 -3 -6 Table E.9: Actual result of back cover

PCB Cov er Model detect actual Error (out+, in-) index L R D U L R D U XL XR YD YU 1 259 782 234 482 261 782 233 483 -277 656 0 0 pixel 2 138 764 277 656 0000 mean - 0.49 3 271 809 189 426 273 795 189 486 -1 0 -300 788 stdev 3.97 4 298 782 239 510 299 785 238 510 0 -2 -272 719 rms 3.98 5 265 683 146 527 277 683 146 529 3 1 -273 837 max 14.00 6 279 833 218 515 272 832 221 514 0 -1 -282 713 min -15.00 7 270 884 105 498 282 710 105 499 -2 -104 -242 796 8 243 796 184 400 241 796 182 504 -1 -1 -272 747 mm 9 268 742 232 541 268 744 231 542 -1 -2 -272 792 mean -0.28 10 257 770 216 514 281 770 215 516 0 0 -294 779 stdev 2.27 11 308 768 178 498 293 768 178 498 -1 0 -259 758 rms 2.27 12 263 751 194 522 257 754 193 522 -1 -1 -302 778 max 8.00 13 301 776 239 521 302 776 238 522 -2 1 -255 740 min -8.57 14 255 742 242 476 254 742 240 475 2 -1 -279 800 15 281 794 217 533 283 796 219 534 -6 -4 -273 721 16 272 715 210 511 272 721 204 515 -4 -2 -212 928 17 213 899 89 490 210 902 85 492 1 -3 -264 795 18 260 786 190 499 262 787 191 502 6 0 -286 796 19 280 797 233 556 286 795 239 556 154 -354 -314 734 20 0000322 736 154 354 21 213 818 190 554 217 829 189 557 -154 640 1 3 22 144 899 154 640 0000 23 314 788 236 507 313 791 236 509 -2 1 -286 850 24 280 856 182 506 281 857 180 505 -1 -7 -302 821 Table E.10: Actual result of PCB covers


PCB Model detect actual Error (out+, in-) pixel index L R D U L R D U XL XR YD YU mean 6.97 1 291 574 269 510 293 574 270 511 2 0 1 -1 stdev 66.10 1 574 751 286 512 573 749 328 512 -1 2 42 0 rms 54.42 2 175 892 152 229 175 890 152 230 0 2 0 -1 max 68.57 2 269 616 202 491 291 539 223 479 22 77 21 12 min -62.86 2 576 775 316 483 626 774 330 480 50 1 14 3 3 314 636 243 516 314 625 245 507 0 11 2 9 mm 3 618 784 313 507 632 783 312 504 14 1 -1 3 mean 3.98 4 314 583 243 672 0000 stdev 37.77 4 504 678 220 537 528 677 327 522 24 1 107 15 rms 31.10 5 411 684 606 677 412 683 625 676 1 1 19 1 max 39.18 5 341 417 594 681 0000 min - 35.92 6 292 603 150 515 294 604 264 518 2 -1 114 -3 6 673 831 383 507 674 830 383 506 1 1 0 1 6 327 766 216 556 328 764 217 556 1 2 1 0 7 0000328 537 371 535 7 235 536 232 506 256 534 232 500 21 2 0 6 8 535 701 383 512 593 773 348 496 58 -72 -35 16 8 0000374 689 639 674 8 297 575 256 532 299 568 258 527 2 7 2 5 9 550 765 288 527 594 768 388 528 44 -3 100 -1 9 325 729 198 536 307 734 198 538 -18 -5 0 -2 10 0000293 530 372 536 10 332 634 258 522 330 634 256 523 -2 0 -2 -1 10 607 780 330 524 649 784 348 525 42 -4 18 -1 11 250 514 249 495 265 509 250 477 15 5 1 18 11 502 732 251 482 505 737 250 474 3 -5 -1 8 12 288 593 233 530 300 587 309 535 12 6 76 -5 12 563 797 306 540 587 795 301 529 24 2 -5 11 13 388 509 351 502 385 511 353 505 -3 -2 2 -3 13 0000381 641 155 339 13 270 455 203 446 274 418 224 448 4 37 21 -2 14 417 486 270 402 422 485 294 405 5 1 24 -3 14 463 763 220 481 507 765 223 480 44 -2 3 1 15 501 663 611 677 499 663 613 674 -2 0 2 3 15 210 511 224 552 208 509 223 552 -2 2 -1 0 16 486 797 387 555 513 798 388 555 27 -1 1 0 16 289 455 364 479 286 449 363 479 -3 6 -1 0 Table E.11: Actual result of PCBs


Carrier Model detect actual Error (out+, in-) index L R D U L R D U XL XR YD YU 1 146 925 149 657 146 925 151 656 8 9 19 3 pixel 2 0000156 909 166 640 mean 8.22 3 0000212 847 139 656 stdev 16.19 4 0000218 867 141 653 -2 0 0 0 rms 17.95 5 236 853 78 593 233 828 81 596 9 7 20 24 max 79.00 6 0000203 920 104 624 2 1 0 0 min -2.00 7 193 912 29 629 229 869 45 571 16 16 45 2 8 0000227 851 105 631 3 -3 1 7 mm 9 203 849 187 582 206 847 190 583 0 3 -2 6 mean 4.70 10 207 907 183 620 205 906 183 550 -1 1 0 3 stdev 9.25 11 235 862 110 533 237 863 108 541 1 3 2 79 rms 10.26 12 212 840 102 517 214 839 108 521 0 4 2 1 max 45.14 13 229 878 97 632 260 847 132 618 0 -1 0 41 min -1.14 14 0000209 786 133 594 15 0000227 865 133 654 16 212 866 105 669 215 862 108 657 -1 -1 0 3 17 226 815 135 598 226 808 142 595 0 -2 -2 1 18 228 888 108 474 229 886 110 472 1 13 -1 2 19 199 849 99 643 201 849 100 645 8 7 9 5 20 217 865 130 658 219 863 131 659 21 0000203 886 96 612 -1 3 14 2 22 0000144 905 85 558 1 11 1 1 23 146 912 70 661 145 908 68 644 24 0000241 860 227 536 -2 1 9 6 25 206 935 72 674 203 937 70 670 -1 4 5 10 Table E.12: Actual result of carriers
