
Pellet-size estimation of a ferrochrome pelletizer circuit using Computer Vision techniques

by

Johannes Kasselman Rabie

Thesis presented in partial fulfilment of the requirements for the Degree

of MASTER OF ENGINEERING (EXTRACTIVE METALLURGICAL ENGINEERING)

in the Faculty of Engineering at Stellenbosch University

Supervisor: Dr Lidia Auret

March 2018

Stellenbosch University https://scholar.sun.ac.za

DECLARATION

By submitting this thesis electronically, I declare that the entirety of the work contained therein is my own, original work, that I am the sole author thereof (save to the extent explicitly otherwise stated), that reproduction and publication thereof by Stellenbosch University will not infringe any third party rights and that I have not previously in its entirety or in part submitted it for obtaining any qualification.

Date: March 2018

Copyright © 2018 Stellenbosch University All rights reserved


ABSTRACT

Pellet-size estimation of a ferrochrome pelletizer circuit using Computer Vision techniques

Rabie J.K., Auret L.

Department of Process Engineering, University of Stellenbosch, Private Bag X1, Matieland 7602, South Africa.

Thesis: M.Eng (Extractive Metallurgical Engineering)

March 2018

Agglomerate pellet size plays an integral part in the safe and stable operation of a submerged arc furnace (SAF), and in the efficiencies and yields achieved within the ferrochrome refining process. For effective process control that ensures constant and optimal pellet size production, continuous monitoring of the pellet size distribution produced by the agglomeration circuit is imperative. Traditional size estimation methods tend to be labour-intensive and time-consuming, and cannot provide feedback in real time. The need therefore exists for automated, real-time, non-intrusive industrial size estimation systems. Despite major advances in the field and proven advantages with regard to object identification and analysis, image analysis-based size estimation systems have not seen widespread implementation in the mineral processing environment; cost and problem-specific implications have been cited as the main factors inhibiting implementation. In an attempt to prove its viability, this study aimed to develop a Digital Image Processing (DIP) and Digital Image Analysis (DIA) based particle size estimation algorithm suitable for implementation as part of a conceptual particle size distribution estimation sensor at a FeCr pelletizing plant, specifically Glencore Plc's Bokamoso Pelletizing Plant. Additionally, it had to explore the viability of implementing the sensor as part of a continuous monitoring and control system for a FeCr pelletizing process. This was done through the development of a conceptual control framework for a FeCr pelletizing circuit. The algorithm was tested and validated on both simulated pelletizer circuit footage and actual process footage, the former being made possible by the construction of a lab-scale set-up of a section of a FeCr pelletizing circuit. Estimated pellet size distributions were compared to sieve size distributions by means of a pixel-to-mm ratio.
Results of analyses of both types of footage showed that the algorithm was capable of accurately estimating the particle size distribution of moving FeCr pellets. In terms of industry application, the results point to a solution that analyses sintered pellet-on-conveyor footage rather than pellet-on-roller footage. Furthermore, the conceptual control framework suggests that the output of the algorithm can be successfully utilised in a control system aimed at controlling the size of FeCr pellets produced by a FeCr pelletizing circuit.


The use of problem-specific filters improved the accuracy of the algorithm in identifying and delineating objects of interest, and thus ensured its applicability. It is, however, recommended that future studies investigate methods by which the size estimation error associated with irregularly shaped particles can be mitigated. It is furthermore recommended to investigate and apply methods that would enable the algorithm to correctly interpret surface pellet data in pellet-on-conveyor footage, and subsequently correctly infer information regarding the entire population.
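The abstract describes blob-style particle delineation and pixel-to-mm calibration only at a high level. As a minimal illustration of the underlying idea (not the thesis's actual MATLAB/Simulink implementation), the sketch below labels connected objects in a hypothetical binary image and converts their bounding-box sizes to millimetres using an assumed calibration ratio; the image and the 0.5 mm/pixel ratio are invented for illustration.

```python
# Minimal sketch of blob-based particle size estimation. NOT the thesis's
# MATLAB/Simulink algorithm; the binary image and the 0.5 mm/pixel
# calibration ratio are hypothetical, for illustration only.
from collections import deque

def label_blobs(grid):
    """Delineate 4-connected blobs in a 0/1 grid; return their bounding boxes."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                q = deque([(r, c)])          # flood-fill one blob
                seen[r][c] = True
                r0 = r1 = r
                c0 = c1 = c
                while q:
                    y, x = q.popleft()
                    r0, r1 = min(r0, y), max(r1, y)
                    c0, c1 = min(c0, x), max(c1, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and grid[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                boxes.append((r0, r1, c0, c1))
    return boxes

def estimate_sizes_mm(grid, mm_per_pixel):
    """Longest bounding-box side of each blob, scaled by a pixel-to-mm ratio."""
    return [max(r1 - r0 + 1, c1 - c0 + 1) * mm_per_pixel
            for r0, r1, c0, c1 in label_blobs(grid)]

# Toy image: two square "pellets" on a 20 x 20 grid, at 0.5 mm per pixel.
img = [[0] * 20 for _ in range(20)]
for r in range(2, 6):
    for c in range(2, 6):
        img[r][c] = 1                        # 4 px wide -> 2.0 mm
for r in range(10, 18):
    for c in range(10, 18):
        img[r][c] = 1                        # 8 px wide -> 4.0 mm
print(estimate_sizes_mm(img, 0.5))           # -> [2.0, 4.0]
```

In the thesis, the delineation step is far more involved (median filtering, contrast enhancement, watershed separation of touching pellets, problem-specific shape filters), but the final size conversion rests on the same pixel-to-mm calibration idea.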


UITTREKSEL

Korrelgrootteberaming van 'n ferrochroom verkorrelingaanleg deur middel van Rekenaarvisie tegnieke

(“Pellet-size estimation of a ferrochrome pelletizer circuit using Computer Vision techniques”)

Rabie J.K., Auret L.

Departement Prosesingenieurswese, Universiteit Stellenbosch, Privaatsak X1, Matieland 7602, Suid-Afrika.

Tesis: M.Ing (Ekstraktiewe Metallurgiese Ingenieurswese)

Maart 2018

Agglomeraat-korrelgrootte speel 'n integrale rol in die veilige en stabiele werking van 'n boogoond, asook in die doeltreffendheid en opbrengste wat behaal word in die ferrochroom-ontginningsproses. Vir doeltreffende prosesbeheer wat die produksie van konstante en optimale korrelgrootte verseker, word die deurlopende monitering van die korrelgrootteverspreiding, wat deur die verkorrelingaanleg geproduseer word, genoodsaak. Tradisionele grootteberamingstegnieke is tipies baie arbeidsintensief en tydrowend, en kan nie in reële tyd terugvoer verskaf nie. Gevolglik bestaan daar 'n behoefte aan geoutomatiseerde, reële-tyd en nie-indringende, industriële grootteberamingsisteme. Ten spyte van die groot vordering in die veld en die beproefde voordele met betrekking tot objekidentifikasie en -analise, het grootteberamingsisteme wat op beeld-analise gebaseer is nog nie grootskaalse implementering in die mineraalprosesseringindustrie ondergaan nie. Koste en probleem-spesifieke implikasies is geoormerk as die hooffaktore wat hierdie implementering onderdruk. In 'n poging om die lewensvatbaarheid van so 'n proses te bewys, het hierdie studie gepoog om 'n partikelgrootteberamingalgoritme te ontwikkel wat op digitale beeldprosessering en -analise gebaseer is. Hierdie algoritme moes verder ook geskik wees om geïmplementeer te word as deel van 'n konseptuele partikelgrootteverspreidingberamingsensor by 'n FeCr-verkorrelingaanleg, spesifiek by Glencore Plc se Bokamoso Verkorrelingaanleg. Daarbenewens moes die studie ook ondersoek instel rakende die lewensvatbaarheid om hierdie sensor te implementeer as deel van 'n stelsel wat deurlopende monitering en beheer toepas op 'n FeCr-verkorrelingaanleg. Hierdie lewensvatbaarheid sou bewys word deur die ontwikkeling van 'n konseptuele beheerraamwerk vir 'n FeCr-verkorrelingaanleg. Die algoritme is getoets en geldig verklaar op beelde van beide 'n nagebootste verkorrelingproses en 'n werklike verkorrelingproses.
Eersgenoemde is moontlik gemaak deur die bou van 'n laboratoriumskaal-nabootsing van 'n gedeelte van 'n FeCr-verkorrelingaanleg. Beraamde korrelgrootteverspreidings is toe met sifgrootte-verspreidings vergelyk, asook met 'n "pixel"-tot-mm-verhouding.


Die resultate van die ontledings van beide tipes beelde het bevestig dat die algoritme in staat is om die korrelgrootteverspreiding van bewegende FeCr-korrels akkuraat te beraam. Met betrekking tot die industriële toepassing van die algoritme dui die resultate op 'n oplossing wat beelde analiseer van gebakte korrels wat op vervoerbande vervoer word, eerder as beelde van korrels wat oor rollers beweeg. Verder dui die konseptuele beheerraamwerk daarop dat die afvoer van die algoritme suksesvol gebruik sal kan word in 'n beheerstelsel wat daarop gemik is om die grootte van FeCr-korrels te beheer wat deur 'n verkorrelingaanleg geproduseer word. Deur gebruik te maak van probleemspesifieke filters kon die akkuraatheid van die algoritme, in terme van die identifisering en afsondering van partikels, verhoog word, en gevolglik die toepaslikheid van die algoritme verseker word. Dit word egter aanbeveel dat toekomstige studies ondersoek instel rakende metodes wat die fout kan verminder of uitskakel wat met die grootteberaming van onreëlmatig gevormde partikels gepaardgaan. Dit word verder ook aanbeveel dat ondersoek ingestel word rakende metodes wat die algoritme in staat sal stel om data rakende die korrels op die oppervlak van 'n vervoerband korrek te interpreteer, en gevolglik die korrekte aannames oor die hele populasie te maak.


ACKNOWLEDGEMENTS

I would like to express my sincere gratitude, first and foremost, to my Lord, who has granted me the abilities I possess in life. Secondly, my deepest gratitude is extended to my supervisor, Dr L. Auret, for her unwavering support, guidance, and belief in my abilities, and for the friendship that has developed over the duration of the study. A big thank you is extended to Glencore, who made this study possible. A special thank you is extended to the personnel at Glencore's Wonderkop Smelter, from the administrative to the technical, including, but not limited to, Mr H. Potgieter, Mr J. Botha and Mr A. Gloy. Thank you for never hesitating to accommodate me and for going the extra mile in assisting with whatever was needed in the execution of this study. I also thank the workshop personnel of the Department of Process Engineering at Stellenbosch University for their great contribution with regard to the lab-scale roller feeder built for the purposes of this study. Thank you to Mr M. Kotzé for his dogged motivation throughout the course of this study. And finally, thank you to my parents and my fiancée, all my family and friends, PM&S colleagues and Process Engineering Department colleagues, fathers of friends and friends of my father, Dagbreek and Dagbreek Aftreeoord residents, and random shop assistants. Thank you for any and all inputs, insights, words of encouragement and shared emotion, whether positive or negative. Without all of these, this journey would have been far less remarkable.


DEDICATIONS

To G. Snyman & J.K. Rabie


TABLE OF CONTENTS

Declaration ...... i
Abstract ...... ii
Uittreksel ...... iv
Acknowledgements ...... vi
Dedications ...... vii
Table of Contents ...... viii
List of Tables ...... xi
List of Figures ...... xii
List of Equations ...... xv
Nomenclature ...... xvii
Definitions ...... xviii
1 Introduction ...... 1
1.1 Problem Background ...... 1
1.1.1 Ferrochrome industry in South Africa ...... 1
1.1.2 Ferrochrome (FeCr) liberation process ...... 1
1.1.3 The importance of SAF feed size estimation in FeCr beneficiation processes ...... 1
1.1.4 Manual vs automatic agglomerate size estimation ...... 2
1.1.5 Current trends in agglomerate size estimation ...... 2
1.2 Project Specifics ...... 3
1.2.1 Specific need for research project ...... 3
1.2.2 Problem statement ...... 3
1.2.3 Project objectives ...... 4
1.3 Proposed Methods of the Investigation ...... 5
1.4 Thesis Structure ...... 6
2 Fundamentals: FeCr Pelletizing, Digital Image Processing (DIP) and Particle Size Estimation (PSE), Process Control ...... 8
2.1 Overview of the FeCr Pelletizing Process ...... 9
2.2 An Introduction to DIP and PSE ...... 10
2.2.1 Typical PSE methods used in industry ...... 10
2.2.2 Relevance of DIP PSE research ...... 12
2.2.3 Background of computerised procedures executed on digital images for PSE ...... 13
2.2.4 Overview of the fundamental algorithms associated with DIP ...... 15
2.2.5 DIP used for size estimation of objects ...... 34
2.3 Process Control for the Mineral Beneficiation Environment ...... 36


2.3.1 Background on need for process control in mineral beneficiation environment ...... 36
2.3.2 General control objectives and key considerations ...... 36
2.3.3 Control methods and strategies ...... 37
3 Critical Literature Review: DIP for PSE in Industry, Process Control ...... 43
3.1 Significant Considerations for Developing PSES that are Based on DIP and DIA ...... 43
3.1.1 Software considerations ...... 43
3.1.2 Hardware considerations ...... 56
3.1.3 Experimental procedure ...... 60
3.1.4 Validation of particle size estimation results ...... 61
3.1.5 Sources of error influencing size estimation accuracy and efficiency ...... 63
3.2 Control Framework for a Ferrochrome Pelletizing Plant ...... 71
3.2.1 Analysis of the FeCr pelletizing process and process variables ...... 71
4 Materials and Methods ...... 73
4.1 Project Plan ...... 73
4.2 General Experimental Procedure ...... 74
4.3 Software Used for Experimental Procedures ...... 75
4.4 Hardware Used for Data Analysis and Data Processing ...... 76
5 Phase 1 – Development of Conceptual Pellet Size Distribution Estimation Algorithm ...... 77
5.1 Outline of Experimental Procedure ...... 77
5.2 Hardware and Material Considerations ...... 77
5.2.1 Imaging hardware requirements ...... 78
5.2.2 Additional hardware requirements ...... 78
5.2.3 Experimental set-up ...... 78
5.2.4 Test material considerations and preparations ...... 78
5.3 Experimental Procedure for Acquiring Test Footage ...... 79
5.3.1 Execution of test runs ...... 79
5.4 Algorithm Development ...... 79
5.4.1 Development and application of algorithm ...... 79
5.4.2 DIP techniques employed in algorithm ...... 79
5.4.3 Validation and representation of results ...... 82
5.5 Data Analysis ...... 82
5.5.1 Discussion of results ...... 83
5.5.2 Important considerations for subsequent project phase ...... 89
5.6 Site Visit 1 ...... 91
5.6.1 Choosing suitable pellet-in-process footage ...... 91
5.6.2 Hardware used to capture footage ...... 92


5.6.3 Technical specifications regarding capturing of footage ...... 92
5.6.4 Data analysis ...... 93
5.7 Conclusions Drawn from Phase 1 ...... 94
6 Phase 2 – Experimental (lab-scale) Validation and Refinement of Pellet Size Distribution Estimation Algorithm ...... 96
6.1 Outline of Experimental Procedure ...... 96
6.2 Hardware and Material Considerations ...... 96
6.2.1 Simulated pellet-in-process footage ...... 96
6.2.2 Actual process footage ...... 106
6.3 Experimental Procedure for Acquiring Test Footage ...... 107
6.3.1 Execution of test runs ...... 107
6.3.2 Imaging hardware parameters ...... 109
6.4 Algorithm Development and Data Analysis ...... 109
6.4.1 Round 1 of analyses ...... 110
6.4.2 Round 2 of analyses ...... 119
7 Phase 3 – Conceptual Control Framework Development ...... 146
7.1 Roadmap for Conceptual Control Framework Design ...... 146
7.2 Process Mapping ...... 147
7.2.1 Critical measured variables ...... 147
7.2.2 Establishing the focus area ...... 148
7.2.3 Fixed variables ...... 150
7.2.4 Disturbance variables ...... 150
7.3 Analysis of Critical Measured Variables ...... 150
7.4 Development of Conceptual Control Framework ...... 151
8 Conclusions and Recommendations ...... 155
8.1 Development of a Particle Size Estimation Algorithm ...... 155
8.1.1 Literature study ...... 155
8.1.2 Development of a PSE algorithm ...... 155
8.2 Development of a Conceptual Pelletizing-Circuit Control Framework ...... 161
List of References ...... 164


LIST OF TABLES

Table 1: Properties of first- and second-order derivatives in determining edges in images ...... 26
Table 2: Significant image characteristics regarding DIP ...... 35
Table 3: Summary of the structure of typical algorithms used in DIP applications ...... 43
Table 4: Summary of various edge detection techniques used in literature ...... 47
Table 5: Various techniques utilised to apply thresholds to images ...... 48
Table 6: Region classifiers used in literature ...... 55
Table 7: Advantages and disadvantages of IA studies on pellets in various forms of movement ...... 61
Table 8: Various methods of validation of experimental results for different PSE procedures ...... 62
Table 9: Sources of error associated with particle size estimation systems ...... 63
Table 10: Operating environment factors that cause errors in PSE ...... 68
Table 11: Hardware-related methods to increase PSE accuracy ...... 68
Table 12: Software-related methods to increase PSE accuracy ...... 69
Table 13: SES operation-related methods to improve PSES accuracy ...... 70
Table 14: A summary of the techniques used for algorithm development ...... 80
Table 15: The image processing and analysis categories and techniques tested and implemented during Phase 1 of algorithm development ...... 81
Table 16: Results of manual and algorithm analysis of sample footage ...... 88
Table 17: Actual Roller Feeder specifications (adapted from Gloy, 2015) ...... 98
Table 18: Comparison between Roller Feeder footage and pellets-on-conveyor footage ...... 108
Table 19: Summary of actual dimensions of sample image pellets ...... 112
Table 20: Results of pellet size estimation using the Blob Analysis method ...... 116
Table 21: Results of pellet size estimation using the Hough Transform method ...... 118
Table 22: Consolidated results of the three pellet size fractions – Mean ...... 130
Table 23: Consolidated results of the three pellet size fractions – RMSE ...... 130
Table 24: Size distribution of the mixed size distribution pellet sample ...... 131
Table 25: Data used for the Chi² test conducted on the mixed size distribution analysis ...... 134
Table 26: Data used for the Chi² test conducted on the mixed size distribution analysis ...... 136
Table 27: The effect of various critical measured variables on the controlled variable ...... 150
Table 28: Sensors used for variable measurements ...... 151
Table 29: Process variables chosen for use in conceptual control framework ...... 151
Table 30: Effect of chosen variables on controlled variable and the respective final control elements ...... 151
Table 31: Control actions regarding the two primary manipulated variables with a small pellet size process state ...... 152
Table 32: Control actions regarding the two primary manipulated variables with a large pellet size process state ...... 152
Table 33: Preliminary control measures for incorporation into the conceptual control framework for the pelletizing circuit at Bokamoso ...... 153


LIST OF FIGURES

Figure 1: Process flow-diagram for a typical pelletizing-sintering plant ...... 1
Figure 2: Representation of a section of a pelletizing circuit: proportioning to sintering (adapted from Oikarinen & Pelttari, 2007) ...... 10
Figure 3: Sieve analysis equipment set-up ...... 11
Figure 4: The three levels of computerised operations performed on digital images ...... 14
Figure 5: Schematic of the typical image processing and analysis procedure ...... 15
Figure 6: Example of a digital image represented as a 2-D numerical array (0, 0.5, and 1 represent black, grey, and white, respectively) ...... 16
Figure 7: A spatial domain representation of a 3 x 3 neighbourhood centred on a point (x, y) in an image f ...... 18
Figure 8: Typical form of a transformation function used for contrast stretching ...... 20
Figure 9: A grey-scale image and its intensity histogram ...... 21
Figure 10: Image containing ramp and step edges ...... 25
Figure 11: 3 x 3 region of an image, with intensity values represented by z ...... 28
Figure 12: Prewitt operator ...... 28
Figure 13: Sobel operator ...... 28
Figure 14: x-y plane (left); a-b plane or parameter space (right) ...... 34
Figure 15: Schematic of a cascade control system ...... 38
Figure 16: Block diagram of a feedforward-feedback control system with sensors and final element ...... 41
Figure 17: (left) Original image; (middle) edges detected without prior filtering; (right) edges detected after filtering with a median filter of size 3 x 3 ...... 83
Figure 18: The effect of contrast stretching: (left) median filtered image; (right) contrast-enhanced image. The bottom row contains the histograms associated with the intensity images directly above them ...... 84
Figure 19: Contrast stretching (adjustment) over only the highest intensities (low intensities eliminated): (top left) the median filtered intensity image; (top right) the contrast-stretched intensity image, with stretching done over the intensity range [0.1 1] ...... 84
Figure 20: (top left) Median filtered image; (top middle) histogram equalisation used to increase image contrast; (top right) contrast stretching used to increase image contrast. The images in the bottom row are the image histograms associated with the intensity image directly above it ...... 85
Figure 21: The Simulink model (block diagram) used to obtain the results shown in Figure 15 ...... 85
Figure 22: Histogram of a histogram-equalised image with y-axis maximum limit at 150 000 ...... 86
Figure 23: Example of a Simulink model implementing Blob Analysis ...... 87
Figure 24: Estimated pellet size of the simulated pellets. Estimated pellet width is plotted in this case ...... 89
Figure 25: Camera and lighting configuration used to obtain footage during Site Visit 1 ...... 93
Figure 26: Uneven illumination present in onsite footage obtained during Site Visit 1 ...... 94
Figure 27: The completed roller system and lighting arrangement ...... 102
Figure 28: The lighting arrangement, with the camera mounted on the square plate visible in the centre of the figure ...... 102
Figure 29: Lighting arrangement with the lights switched on ...... 103
Figure 30: Procedural representation of the sieving operation to grade sintered pellets into various size fractions ...... 105
Figure 31: Sieve analysis equipment set-up ...... 105


Figure 32: Three different size fractions of the pellet sample obtained from Bokamoso ...... 106
Figure 33: Approximated pellet size distribution of the ferrochrome sample obtained from Bokamoso ...... 106
Figure 34: The construction of a lighting mounting frame at Bokamoso Pelletizing Plant, during Site Visit 2 ...... 107
Figure 35: Capturing of actual pellets-on-rollers footage at Bokamoso Pelletizing Plant ...... 108
Figure 36: Roller width measurement using MATLAB's Imdistline function ...... 111
Figure 37: Actual pellet size measurements using MATLAB's Imdistline function ...... 112
Figure 38: Cropped intensity image obtained from lab-scale roller feeder operation ...... 113
Figure 39: Contrast in image intensity highlighted ...... 113
Figure 40: High-intensity region of single roller isolated within an image ...... 113
Figure 41: Pre-processed image of a single isolated roller ...... 113
Figure 42: Flow diagram of algorithm using Blob Analysis to determine pellet size ...... 114
Figure 43: Graphical representation of the first five steps of pellet size estimation using Blob Analysis ...... 115
Figure 44: Objects identified using Blob Analysis ...... 115
Figure 45: Objects with associated bounding boxes superimposed ...... 115
Figure 46: Flow diagram of algorithm using the Hough Transform to determine pellet size ...... 116
Figure 47: Graphical representation of pellet size estimation using the Hough Transform ...... 117
Figure 48: Result of applying edge detection and a filling function to the image in Figure 14 ...... 118
Figure 49: The result of applying the circular Hough transform to the image in Figure 23 ...... 118
Figure 50: Encompassing circles indicated on the objects identified using the Hough Transform method ...... 118
Figure 51: Three different types of footage analysed during Round 2 of data analyses in Phase 2: lab-scale Roller Feeder (left), actual Roller Feeder (middle), actual process sintered pellet-on-conveyor (right) ...... 119
Figure 52: Sample of the above procedure applied to simulated pellet-on-roller footage ...... 120
Figure 53: Example of the application of the Watershed transform to separate touching OOI (pellets) ...... 120
Figure 54: Illumination strips incorrectly identified as OOI, but filtered using the Eccentricity filter ...... 122
Figure 55: Incorrectly identified objects, filtered by the Extent filter ...... 122
Figure 56: Establishing a reference to estimate particle size from Site Visit 2 footage ...... 124
Figure 57: Sample output of the analyses done on simulated pellet-in-process footage ...... 125
Figure 58: Estimated pellet size for the 9-13.2 mm size fraction ...... 126
Figure 59: Estimated pellet size distribution for the 9-13.2 mm size fraction ...... 127
Figure 60: Estimated pellet size for the 13.2-16 mm size fraction ...... 128
Figure 61: Estimated pellet size distribution for the 13.2-16 mm size fraction ...... 128
Figure 62: Estimated pellet size for the 16-19 mm size fraction ...... 129
Figure 63: Estimated pellet size distribution for the 16-19 mm size fraction ...... 129
Figure 64: RMSE plotted for the values displayed in Table 23 ...... 130
Figure 65: Estimated pellet size for the mixed size distribution with shutter speed at 1/1000 ...... 132
Figure 66: Estimated pellet size distribution for the mixed size distribution ...... 132
Figure 67: Estimated pellet size distribution of the components of the mixed size distribution ...... 133
Figure 68: Sieve (expected) size distribution vs estimated size distribution ...... 133
Figure 69: Estimated pellet size for the mixed size distribution with shutter speed at 1/2000 ...... 135
Figure 70: Estimated pellet size distribution for the mixed size distribution ...... 135


Figure 71: Estimated pellet size distribution of the components of the mixed size distribution ...... 136
Figure 72: Sieve (expected) size distribution vs estimated size distribution ...... 136
Figure 73: Sample output of the analyses done on actual pellet-on-roller footage of the Roller Feeder at Bokamoso ...... 138
Figure 74: Estimated pellet size for the actual pellet-on-roller footage of the Roller Feeder at Bokamoso ...... 139
Figure 75: Estimated pellet size distribution for the actual pellet-on-roller footage of the Roller Feeder at Bokamoso ...... 139
Figure 76: Parts of the pellets are frequently omitted from the estimated pellet profiles ...... 140
Figure 77: The tendency of the algorithm to identify smaller pellets and omit (miss) larger pellets ...... 140
Figure 78: Sample output of the analyses done on actual pellet-on-conveyor footage at Bokamoso ...... 141
Figure 79: Estimated pellet size for the actual pellet-on-conveyor footage at Bokamoso ...... 142
Figure 80: Estimated pellet size distribution for the actual pellet-on-conveyor footage at Bokamoso ...... 142
Figure 81: Estimated pellet size distribution for the actual pellet-on-conveyor footage at Bokamoso (emphasis added) ...... 143
Figure 82: Highlighting the presence of fines as part of the IO ...... 143
Figure 83: Estimated pellet size for the actual pellet-on-conveyor footage at Bokamoso – fines of <4 mm omitted ...... 144
Figure 84: Estimated pellet size distribution for the actual pellet-on-conveyor footage at Bokamoso – fines of <4 mm omitted ...... 144
Figure 85: Process variables that have a significant effect on pellet size produced by the pelletizing system ...... 148
Figure 86: Current state of the plant – proportioning, mixing, pelletizing and sintering ...... 149
Figure 87: The proposed conceptual control framework for the pelletizing circuit at Bokamoso. The proposed conceptual control measures are incorporated into the current control system and clearly shown ...... 154


LIST OF EQUATIONS

Equation 1: Digital image f of size M x N in equation form ...... 16
Equation 2: Conversion used to convert colour images to grey-scale images ...... 17
Equation 3: Colour space to intensity conversion represented in matrix form ...... 17
Equation 4: General expression for spatial domain operations ...... 18
Equation 5: Normalised histogram representation (Gonzalez & Woods, 2008) ...... 20
Equation 6: Standard intensity mapping transformation function ...... 21
Equation 7: Inverse intensity mapping transformation function ...... 21
Equation 8: Probability mass function of the occurrence of an intensity rk ...... 22
Equation 9: Histogram equalisation transformation ...... 22
Equation 10: Transformation function for r ...... 22
Equation 11: Transformation function for z ...... 22
Equation 12: Inverse transformation function to obtain zq ...... 23
Equation 13: A digital approximation of a first-order derivative at point x ...... 25
Equation 14: A digital approximation of a second-order derivative at point x + 1 ...... 25
Equation 15: Approximation of the second-order derivative at point x ...... 25
Equation 16: The response of a 3 x 3 spatial filter at a specified pixel location; this response is calculated for each pixel in the image ...... 26
Equation 17: The definition of the image gradient ...... 26
Equation 18: The magnitude of the image gradient ...... 27
Equation 19: Approximation of the magnitude of the image gradient ...... 27
Equation 20: The direction of the gradient vector ...... 27
Equation 21: Partial derivative at point (x, y) in terms of x ...... 27
Equation 22: Partial derivative at point (x, y) in terms of y ...... 27
Equation 23: Digital approximation of the partial derivative at point (x, y) in terms of x ...... 28
Equation 24: Digital approximation of the partial derivative at point (x, y) in terms of y ...... 28
Equation 25: 2-D circular Gaussian function used to smooth the input image in the Canny edge detector algorithm ...... 29
Equation 26: Convolving G with f to obtain a smoothed image fs ...... 29
Equation 27: Definition of gNH(x, y) ...... 30
Equation 28: Definition of gNL(x, y) ...... 30
Equation 29: Operation to eliminate from gNL(x, y) all the nonzero elements contained in gNH(x, y) ...... 30
Equation 30: Basic procedure for segmenting an image with a thresholding value T ...... 30
Equation 31: Cumulative sum of the intensity probabilities in an image ...... 31
Equation 32: Cumulative sum of intensities in C1 ...... 32
Equation 33: Cumulative sum of intensities in C2 ...... 32
Equation 34: Calculation of the mean intensity values of pixels in C1 and C2 ...... 32
Equation 35: Average intensity of the entire image, mG ...... 32
Equation 36: Threshold quality metric ...... 32
Equation 37: Global variance calculation ...... 32
Equation 38: Calculation of between-class variance ...... 33
Equation 39: Alternative formulation of between-class variance ...... 33
Equation 40: The maximisation of σB² with the optimum threshold, k* ...... 33
Equation 41: General equation of a straight line in the x-y plane ...... 33
Equation 42: Equation of a straight line translated to the a-b plane (parameter space) ...... 33
Equation 43: Normal form equation for a line ...... 34
Equation 44: Vector function for a vector of coordinates and a vector of coefficients ...... 34
Equation 45: Equation of a circle ...... 34


Equation 46: The formula that relates real-world quantities to the digital quantities of images ...... 82
Equation 47: Pixel-to-mm ratio used in the pellet size estimations reported on in this thesis ...... 110


NOMENCLATURE

DIP Digital image processing
DIA Digital image analysis
IA Image analysis
IO Identified object(s): an object that has been identified, delineated and distinguished from other objects visible in an image by the particle size estimation algorithm
IP Image processing
MV Measured variable
OOI Objects of interest: the objects (of the same kind) whose size has to be estimated by a particle size estimation algorithm. The objects of interest for this study are (sintered or green) ferrochrome pellets, and candy-coated peanuts
P&ID Piping & instrumentation diagram
PM&S Process Monitoring and Systems research group, based within the Process Engineering Department at Stellenbosch University
PSD Particle size distribution
PSE Particle size estimation
PSES Particle size estimation system
SAF Submerged arc furnace
VSD Variable speed drive


DEFINITIONS

Algorithm: Combination of IP and IA techniques used to solve a DIP problem.
Bokamoso: Refers to Bokamoso Pelletizing Plant at Glencore's Wonderkop Smelter, the site on which the study is based.
Department: Department of Process Engineering at Stellenbosch University.
Method: Procedure (series of actions) for accomplishing a desired IP or IA task.
Particle Delineation: The process of identifying and marking out (outlining) the boundaries of an object in an image.
Particle Identification: The process of recognizing and concluding or establishing that a delineated object is an object of interest.
Particle Isolation: Follows particle delineation by clearly marking and separating (distinguishing) one object from another.
Site Visit 1: The first of two visits to Bokamoso Pelletizing Plant, aimed at familiarising the author with the actual FeCr pelletizing system and collecting sample process footage.
Site Visit 2: The second of two visits to Bokamoso Pelletizing Plant, aimed at collecting sample process footage for refinement of the pellet size estimation algorithm.

In terms of different types of Particle Size Estimation:
Automatic: Size estimation done by a size estimation system upon an order to do so, without human effort required.
Manual: Size estimation done by human operators.
Offline: The size estimation system and/or procedure is disconnected from the particle stream it does size estimation on. Size estimation is done on particle samples which are taken from the particle stream and typically analysed in a different location from the particle stream or the system producing it.
Online (also inline): The size estimation system is connected to the particle stream and/or the system producing it. Size estimation is done on the actual particle stream, and results are produced in a timely manner. However, the system has no limits on latency associated with the size estimation process.
Real-time: An online size estimation system that delivers results of analysis instantaneously or near instantaneously, therefore having a limit in terms of latency associated with the size estimation process.


1 INTRODUCTION

1.1 PROBLEM BACKGROUND

1.1.1 Ferrochrome Industry in South Africa
South Africa (SA) has the world's largest viable deposits of chromite-containing ore, amounting to around 75% of the world's viable chromite reserves (Cowey 1994; Riekkola-Vanhanen 1999; Beukes et al. 2010). Chromite ore is the only commercially exploited source of chromium (Beukes et al. 2010). Based on 2014 statistics, SA is the world's largest producer of conventional chrome ore (excluding UG2 concentrate), producing around 12 MT of the world's 29.4 MT production (Richard 2015). Furthermore, SA is the world's second largest producer of ferrochrome (FeCr), with its 14 smelters producing around 33% of the 11.4 MT global production (Richard 2015; Glastonbury et al. 2015; Jones 2015).

1.1.2 Ferrochrome (FeCr) liberation process
As part of the FeCr refining process, chromite ore is smelted in submerged arc furnaces (SAFs). Natural chromite recovered from resources located in SA is typically classified as approximately 12% lumpy ore (6-150 mm sieve size), 10% chip/pebble ore (6-25 mm sieve size), and 78% fines (Glastonbury et al. 2010). The use of fine chromite ore particles (typically <6 mm) in the smelting process increases the chance that the SAF surface layer will sinter, preventing process gases from escaping the SAF. This in turn could lead to dangerous bed turnovers or blowing of the furnace (Beukes et al. 2010; Riekkola-Vanhanen 1999; Glastonbury et al. 2015). These fine ores therefore need to be agglomerated before being fed to a SAF (Beukes et al. 2010). Typically, these fines are agglomerated into spherical pellets in a pelletizing-sintering plant (Beukes et al. 2010). Figure 1 illustrates the typical process flow in such a plant.

Raw materials (chromite concentrates and fines) → Particle size reduction (wet/dry milling) → Filtration (capillary/drum filters) → Pelletization (drum/disc pelletiser) → Sintering (steel belt/shaft furnace)

Figure 1: Process flow-diagram for a typical pelletizing-sintering plant.

1.1.3 The importance of SAF feed size estimation in FeCr beneficiation processes
Along with reducing the risk associated with operating a SAF, it has been proven that agglomerate pellet size plays an integral part in the effective and stable operation of the furnace (Harayama & Uesugi 1992; Riekkola-Vanhanen 1999; Beukes et al. 2010; Glastonbury et al. 2015). Furthermore, higher efficiencies and yields can be achieved in these reduction processes by using a constant and optimal pellet size (Beukes et al. 2010; Riekkola-Vanhanen 1999; Montenegro Rios et al. 2011; Ren et al. 2011). To ensure constant and optimal pellet size production, the preceding agglomeration circuit that produces the pellets must be monitored continuously. By monitoring and analysing size deviations of the pellets, system parameters can be altered to produce the desired pellet size (Chen et al. 2014; Hamzeloo et al. 2014). Rao (1994) notes that the success of metallurgical processes depends greatly on the particle size of the materials used in the process. For agglomeration processes that incorporate pelletizing of raw material fines, pellet sizes generally vary from 5-25 mm.


Pandey, Lobo and Kumar (2012) note that the optimal pellet size for Corex and blast furnace feed ranges between 8 and 12 mm. Harman & Rama Rao (2007) note that for FeCr production in SAFs, the optimal pellet size ranges from 12 to 15 mm. Makela & Krogerus (2015) specify that the most preferable pellet size for the production of FeCr using an Outotec process is 12 mm. This was confirmed through discussion with Gloy (Insights into the operation of Bokamoso Pelletising Plant, Wonderkop Smelter, Glencore Plc., 2014).
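The target bands quoted above lend themselves to a simple quality check on any estimated size distribution. The sketch below is illustrative only: the sample diameters are invented, and the 12-15 mm band is merely one of the targets cited above, not a measurement from this study.

```python
# Illustrative sketch: fraction of estimated pellet diameters inside a target
# size band (here the 12-15 mm range cited for SAF FeCr feed). The sample
# diameters are invented for demonstration.

def fraction_in_band(diameters_mm, low=12.0, high=15.0):
    """Return the fraction of diameters lying within [low, high] mm."""
    if not diameters_mm:
        return 0.0
    in_band = sum(1 for d in diameters_mm if low <= d <= high)
    return in_band / len(diameters_mm)

sample = [9.8, 12.4, 13.1, 14.9, 16.2, 13.7, 11.9, 14.2]  # hypothetical estimates
print(fraction_in_band(sample))  # 0.625
```

A monitoring system could track this fraction over time and flag drift away from the optimal band.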

1.1.4 Manual vs Automatic agglomerate size estimation
Conventional manual methods used for aggregate size estimation, such as sieving, are labour intensive and time consuming (Wang 2008; Montenegro Rios et al. 2011). In addition, some methods are intrusive and require processes to shut down, resulting in losses due to system down-time (Hamzeloo et al. 2014). Time delays resulting from sample handling, manual size estimation, and sampling intervals spanning entire shifts lead to long and costly response times before size distribution estimates are available to plant operators for control purposes (Andersson & Thurley 2011; Wang 2008). Furthermore, these time delays mean that the results from manual size estimation techniques are typically not indicative of current system properties, leading to incorrect conclusions about the current state of the process. These characteristics of manual aggregate sizing methods have pushed industry and academia to investigate alternative, fast, online methods for aggregate size estimation that are automatic and non-intrusive and produce real-time, consistent results. The fields of image analysis and computer vision have seen a major increase in research and application to aggregate size estimation since the first image analysis system for rock particle size estimation was developed in 1976 (Montenegro Rios et al. 2011; Wang 2008). Continued research in this area can be attributed to the rapid development of computers and imaging equipment, and the fast, inexpensive, less cumbersome, consistent and non-intrusive nature of such systems (Wang 2008; Hamzeloo et al. 2014; Montenegro Rios et al. 2011).
Furthermore, literature suggests that online image analysis, used for monitoring, control and optimization of agglomeration operations with feedback control, can and already has greatly improved productivity and efficiency through fast feedback and efficient control (Andersson & Thurley 2011; Thurley 2013; Montenegro Rios et al. 2011). A commercial system that has seen great success in the FeCr pelletizing industry is the Outotec® Pellet Size measurement system (Outotec Oyj 2013). This system determines pellet size by applying computer vision principles to camera footage of FeCr pellets produced in a pelletizing-sintering plant.
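The core idea behind such camera-based systems can be illustrated with a pixel-to-mm calibration sketch. The function names and all numeric values below are the author's illustrative assumptions (this thesis reports its own ratio later, in Equation 47); they do not describe the internals of any commercial system.

```python
# Hedged sketch of pixel-to-mm calibration for image-based size estimation.
# A reference object of known physical size fixes the scale of the image
# plane; pellet diameters measured in pixels are then converted to mm.
# All values are illustrative.

def pixel_to_mm_ratio(reference_mm, reference_px):
    """mm represented by one pixel, from an object of known size."""
    return reference_mm / reference_px

def estimate_diameter_mm(diameter_px, mm_per_px):
    """Convert a measured pixel diameter to a physical diameter."""
    return diameter_px * mm_per_px

ratio = pixel_to_mm_ratio(50.0, 400.0)     # 50 mm disc spans 400 px -> 0.125 mm/px
print(estimate_diameter_mm(104.0, ratio))  # 13.0 (mm)
```

This linear scaling assumes the camera views the pellets at a fixed, known distance; perspective and lens distortion are ignored in the sketch.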

1.1.5 Current trends in agglomerate size estimation
Traditional, manual methods for pellet size estimation and monitoring are currently considered the industry standard in mining and mineral beneficiation industries (Andersson & Thurley 2011; Montenegro Rios et al. 2011). A similar scenario has been confirmed for the ferrochrome beneficiation industry in SA (Gloy, 2014). In addition, Sommer (1992) provided reasons for the reluctance of the mineral processing industry of the time to implement automation systems. The factors mentioned included fixed expenditures that enjoyed higher priority, such as high employee salaries and high energy costs in the form of electricity costs. These expenditures forced new technology investments, such as the process monitoring system under consideration, to be postponed or disregarded. Additional factors include losses in revenue due to relatively low metal prices, and a lack of expertise to implement and maintain these automation systems.


Richard (2015) states that the SA FeCr industry still faces similar difficulties, highlighting labour issues regarding wages and energy price hikes as contributors to increasing production costs. These factors may explain the reluctance of SA FeCr producers to implement automation systems. However, considering the drawbacks of traditional size estimation techniques and the possible benefits of the online image analysis-based techniques mentioned in the section above, an opportunity and a need exist in these industries to re-evaluate the development and implementation of industry-specific, affordable, automatic, accurate and real-time pellet size estimation systems.

1.2 PROJECT SPECIFICS

1.2.1 Specific need for research project
The project was conducted in collaboration with Glencore Plc's FeCr beneficiation plant, Wonderkop Smelter. More specifically, the context of the work is defined within the operations of Wonderkop Smelter's pelletizing plant, named Bokamoso. At Wonderkop Smelter, FeCr is refined from various ores by reduction in DC SAFs. Traditionally, the economically preferred feedstocks for FeCr recovery processes are the Bushveld Complex's Lower Group 6 (LG6) and Middle Group 1 and 2 (MG1 and MG2) ores, due to their high chromium content (Glastonbury et al. 2015). With ore grades deteriorating over time and the need to extend plant life, the need developed to process finer and lower grade ore, as well as different ore types. These include the chromite-containing tailings from platinum group minerals (PGMs) recovery circuits that use the Bushveld Complex's Upper Group 2 (UG2) ore as feedstock. Technical innovations and the increasing availability of UG2 ore have enabled SA FeCr producers to increase their utilisation of UG2 as feed to their processes (Glastonbury et al. 2015). The status quo of the SA FeCr industry, as explained in Section 1.1.2 and the foregoing paragraph, also applies to the operations at Wonderkop Smelter (Gloy, 2014). As stated in Section 1.1.2, the operational requirements of the furnace plant make material fines unsuitable for reduction in the SAFs. Therefore, a pelletizer plant was built to turn material fines into suitably sized aggregate particles for use in the SAFs at Wonderkop Smelter's furnace plant. Bokamoso incorporates Outotec FeCr pelletizing and sintering technology as part of the Outotec® Ferrochrome Process to produce sintered FeCr ore pellets (Outotec Oyj 2015a). Pellet size estimation at Bokamoso is still done using manual methods (Gloy, 2014), and these estimations are only carried out once every 8-hour shift.
With the additional time delay due to manual handling and processing of the samples, size estimation results do not give an accurate real-time account of production output. Glencore Plc's Wonderkop Smelter therefore communicated the need to investigate an alternative, online, real-time pellet size estimation system for use in the Bokamoso pelletizing plant.

1.2.2 Problem statement
Considering the needs and opportunities defined in Sections 1.1 and 1.2.1, the following two-part problem statement can be defined, in terms of the needs and opportunities associated with the SA FeCr producing industry and specifically addressing the need of Glencore Plc's Wonderkop Smelter:

1. The need for automatic, real-time and continuous pellet size distribution measurement
Considering the availability of commercial object size estimation systems, and the lack of implementation of such systems, conduct a proof-of-concept feasibility study that aims to establish the viability of applying PSE in the South African FeCr producing industry. Through the proof-of-concept study, the need for an industry-specific, low-cost, off-the-shelf pellet size distribution estimation solution for a pelletizing circuit needs to be addressed. The size estimation algorithm and associated hardware system should be simple and easy to implement, and should use minimal equipment. However, the outputs of the size estimation algorithm and system should be accurate and reliable enough to fulfil industry-specific size estimation requirements, making the system a viable alternative to existing costly commercial offerings.

2. The need for improved control of the pelletizing process
Assuming accurate estimation of the size distribution of pellets produced by the pelletizing plant, investigate a control framework incorporating an estimated pellet size distribution into a control system to improve automatic process control of the pelletizing circuit. The control system should be specifically aimed at controlling the size distribution of the pellets produced by the pelletizing plant. Improved pelletizing process control would result in a safer, more efficient and more profitable FeCr reduction operation in the SAFs of the plant.

1.2.3 Project objectives
Considering the problem statement in the above section and the opportunities mentioned in Section 1.1.4, the following two objectives have been defined for this study:

1. Development of an accurate and reliable particle size estimation algorithm suitable for implementation as part of an alternative low-cost conceptual particle size distribution estimation sensor
Develop a particle size estimation algorithm capable of accurately and reliably estimating the size distribution of objects (particles) that resemble the FeCr pellets produced in a FeCr pelletizing plant and used in a SAF FeCr reduction process (as is the case at Glencore's Wonderkop Smelter), in order to prove the concept that particle size estimation can be done on FeCr pellets using digital image processing and computer vision techniques. This should be done by investigating, evaluating and testing existing elementary Image Processing (IP), Image Analysis (IA) and Computer Vision (CV) techniques (algorithms) and software applied to digital images and related to particle size estimation. Development should also entail analyses of sample footage consisting of both simulated process footage and actual process footage. During these analyses the algorithm should be validated, refined and optimised in terms of accuracy and reliability suitable for viable implementation on low-cost, off-the-shelf imaging and algorithm processing hardware. This is required to make the algorithm suitable for implementation as part of an industry-specific proof-of-concept (conceptual) particle size distribution estimation sensor to be implemented and tested at Glencore's Bokamoso Pelletizing Plant's pelletizing circuit. The results of the testing at Bokamoso should point to the viability of low-cost alternatives to existing costly commercial systems.
An analysis will be done and conclusions drawn on the suitability of the hardware used during the study for application in object size estimation. However, the conclusions and recommendations provided are not intended to constitute a full study on viable hardware options for such applications; they serve mainly as supplementary to achieving the main objective of the study. The output of the solution should also be viable for further implementation in a control system controlling the pellet size distribution produced in a FeCr ore pelletizing plant.


2. Development of a conceptual pelletizing-circuit control framework
To complement Objective 1, develop a conceptual control framework that would indicate how the output of a particle size sensor system similar to the one described in Objective 1, i.e. the estimated size distribution of the pellets produced by the FeCr pelletizing plant, could be incorporated and effectively used as part of an existing control system of a FeCr pelletizing plant. The focus of the conceptual control framework should be on controlling and maintaining the produced pellet size distribution at a desired and optimal pellet size distribution, in order to improve the overall pelletizing process and provide a more suitable quality pellet feed for the subsequent reduction process in SAFs.

1.3 PROPOSED METHODS OF THE INVESTIGATION
The objectives in Section 1.2.3 are expanded into the following sub-objectives along with proposed methods of investigation to achieve the objectives:

1. Development of a particle size estimation algorithm suitable for implementation as part of a conceptual particle size distribution estimation sensor
1.1. Conduct critical literature review
1.1.1. Investigate and identify typical image analysis algorithms used for appropriate image processing and analysis in order to detect objects in digital images.
1.1.2. Investigate applicable methods capable of estimating the size of detected objects.
1.1.3. Identify suitable hardware for the size estimation system, capable of capturing process images and processing the images using the developed size estimation algorithm.
1.2. Develop pellet size estimation algorithm
1.2.1. Algorithms identified as suitable in Objective 1 should be developed in a readily available and well-known software platform for ease of implementation on the proposed hardware, and for future alterations.
1.2.2. Various algorithms should be compared, using adequate sample footage, in order to determine a problem-specific solution.
1.3. Test and validate pellet size estimation algorithm and hardware set-up
1.3.1. The developed size estimation algorithm should be tested and validated on both simulated and actual process footage in order to modify and develop an industry-suitable solution.
1.3.2. Various performance indicators should be employed to establish the algorithm's accuracy, reliability and suitability as a pellet size estimation algorithm.

2. Development of a conceptual pelletizing-circuit control framework
2.1. Conduct critical literature review
2.1.1. Identify and investigate various control structures and control strategies used in the mineral beneficiation environment, specifically for controlling agglomerate size distribution in an agglomeration circuit.
2.1.2. Investigate the process dynamics and control of a typical FeCr ore pelletizing process.
2.2. Develop a conceptual control framework that aims to optimise the size distribution of FeCr pellets produced in a FeCr pelletizing plant
2.2.1. Gather process specific information at an actual FeCr pelletizing plant, in order to assist in the development of a process specific control solution.


2.2.2. Develop a plant specific conceptual control framework aimed at controlling the size distribution of the pellets produced by the FeCr pelletizing plant. It should be developed through the application of appropriate process control design methodology and suitable control structures and strategies.

1.4 THESIS STRUCTURE
In order to portray the systematic execution of the objectives and methods of investigation, as outlined above, the content of the thesis is set out in the following structure:

Chapter 2 Fundamentals: FeCr Pelletizing, Digital Image Processing (DIP) and Particle Size Estimation (PSE), Process Control
A general background study that states the fundamentals regarding the FeCr pelletizing process, digital image processing (DIP), particle size estimation (PSE) with DIP, and process control within the mineral beneficiation environment.

Chapter 3 Critical Literature Review: DIP for PSE in Industry, Process Control
A critical literature review regarding the use of DIP and DIA for PSE in industry, and process control principles applied to a FeCr pelletizing circuit. The chapter also includes a short analysis of the FeCr pelletizing process and the associated process variables, with specific reference to the existing pelletizing process on which the study is based.

Chapter 4 Materials and Methods
An outline and discussion of the general experimental methodology developed, along with the materials, software and general hardware used to execute this methodology, in order to achieve the first objective of the study, i.e. development of a particle size estimation algorithm suitable for implementation as part of a conceptual particle size distribution estimation sensor.

Chapter 5 Phase 1 – Development of Conceptual Pellet Size Distribution Estimation Algorithm
A detailed discussion of the familiarisation with DIP and PSE and the consequent development of an elementary algorithm capable of determining the particle size of a spherical object. The discussion includes the experimental procedure used, hardware and material considerations, the process of acquiring test footage, conceptual algorithm development, experimental data analysis, and feedback on the first site visit conducted during the study.
Chapter 6 Phase 2 – Experimental (lab-scale) Validation and Refinement of Pellet Size Distribution Estimation Algorithm
A detailed discussion of the validation and refinement of a pellet size distribution estimation algorithm suitable for implementation as part of a conceptual particle size distribution estimation sensor. The discussion includes the experimental procedure used, hardware and material considerations, the process of acquiring test footage, and details of algorithm development and associated data analyses based on various types of sample footage, conducted in the course of two distinct rounds of analyses.


Chapter 7 Conceptual Control Framework
An exposition of the development of a Conceptual Control Framework for a FeCr pelletizing circuit, in particular the pelletizing process at Bokamoso Pelletizing Plant.

Chapter 8 Conclusions and Recommendations
A summary of the outcomes of the study with accompanying comments on its success in achieving the objectives set out in Chapter 1. The most significant findings made during the execution of the study are also discussed, accompanied by recommendations for future studies of a similar nature to improve on the results achieved herein.


2 FUNDAMENTALS: FECR PELLETIZING, DIGITAL IMAGE PROCESSING (DIP) AND PARTICLE SIZE ESTIMATION (PSE), PROCESS CONTROL

In Chapter 1 of this thesis, two main objectives for the study were formulated:
1. Development of a particle size estimation algorithm suitable for implementation as part of a conceptual particle size distribution estimation sensor
2. Development of a conceptual pelletizing-circuit control framework

In the light of the abovementioned objectives, a literature study is presented that consists of two main parts:

1. A general background study that states the fundamentals regarding the FeCr pelletizing process, digital image processing (DIP), particle size estimation (PSE) with DIP, and process control within the mineral beneficiation environment.
2. A critical literature review regarding the use of DIP for PSE in industry, and process control principles applied to a FeCr pelletizing circuit.

The first part is discussed in Chapter 2 and the second part in Chapter 3 of this thesis.

Chapter 2 is divided into the following four subsections and provides general background on:

1. An overview of a generic FeCr pelletizing plant and the need for and use of particle size estimation.
2. Research being done on size estimation systems in literature, and industry applications using size estimation systems.
3. The use of digital image processing and analysis techniques to determine the size distribution of particles, with emphasis on moving, spherical particles:
a. An overview of digital image processing (DIP) and standard techniques associated with DIP.
b. An overview of DIP techniques used for size estimation of objects in images. This includes acquisition, so-called pre-processing of images in order to prepare the images for analysis, and the delineation and isolation of particles in the images. Pre-processing typically includes removing noise and enhancing image features necessary for the analysis procedure. Finally, an overview of methods used to determine the size of particles identified and isolated in the above steps.
4. Various control structures and strategies which are viable for use within the mineral processing industry, and especially within pelletizing plant operations:
a. Key considerations and methods in designing control strategies.
b. Basic control structures and strategies applied in the mineral beneficiation industry, especially processes involving agglomeration or pelletizing circuits.


2.1 OVERVIEW OF THE FECR PELLETIZING PROCESS
The majority of natural chromite recovered from sources in SA, roughly 78% of recovered chromite, consists of fines (typically <6 mm sieve size). In order for South African chromite liberation industries to be sustainable, chromite fines are increasingly being used as input into chromite liberation processes using SAFs. However, the use of chromite fines in SAFs increases the possibility of bed turnovers, thus posing a significant safety risk. From a safety point of view, but also a commercial sustainability point of view, the need therefore exists for chromite fines to be agglomerated before being fed into a SAF. By definition, pelletizing, in a FeCr beneficiation process sense, is the agglomeration of fine (<100 µm) raw materials into spherically shaped elements of roughly 10-20 mm in diameter. These pellets are created to have specific properties suited for transportation and additional processing in blast furnaces or reduction furnaces (Outotec, 2009). A pelletizing plant typically includes four processes, as indicated by Barati (2008):

• raw material receiving
• pre-treatment
• pelletizing or balling
• induration

Many pelletizing plants are located near ore mines, because these plants were developed to pelletize the raw materials that are beneficiated at these mines; transport costs are thereby kept low. The pre-treatment process usually entails grinding the metal-containing ore into fines. These fines are required to have specific qualities suited for the subsequent balling process. In addition, pre-treatment includes processes such as concentrating, dewatering, grinding, drying and pre-wetting. Pre-wetting is the process of preparing pre-wetted material suited for balling, and involves homogeneously adding an adequate amount of water to the dry-ground material. It serves the purpose of adjusting material characteristics that greatly affect the quality of the pellets, such as pellet size, shape and wet and dry compressive strength (Oikarinen & Pelttari 2007). Furthermore, the process can involve altering the chemical composition of the pellets, leading to the production of higher quality pellets. A binder material, typically bentonite, may also be added in this step. Finally, a preferable pellet chemical composition requires the addition of lime and/or dolomite to the ore (Advanced Explorations 2008; Yamaguchi et al. 2010). The pelletizing process involves balling equipment producing 'green' pellets (otherwise known as wet pellets) from the pre-wetted material prepared in the previous process. Two main types of balling equipment are used to create green balls: a balling drum or a balling pan (disc). A belt conveyor continuously feeds fine chrome ores into the drums or trays. In some cases additional water is sprayed over the ore. In both units, the effects of centrifugal forces and the capillary attraction of water are used to form spheroids from the fine materials. However, the green balls produced by these two units are not uniform in diameter. About 70% of the discharge is smaller than the target size. This significant portion of the discharge is returned to the pelletizing unit after screening (Montenegro Rios et al. 2011; Yamaguchi et al. 2010). A representation of a typical pelletizing drum system is shown in Figure 2.


[Figure 2 is a process schematic: proportioning bins (filter cake raw-material slurry, bentonite, fine plant dust, coke dust) feed a mixer; with water addition the mixture passes to the pelletizing drum; roller screens separate undersize and crushed green pellets (recycled) and oversize green pellets (crushed) from desired-size green pellets, which a roller feeder places, together with a sintered-pellet bottom layer, onto the steel belt of the sintering furnace.]

Figure 2: Representation of a section of a pelletizing circuit: Proportioning to Sintering (adapted from Oikarinen & Pelttari 2007).
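The ~70% undersize figure quoted above implies a large circulating load around the drum. As a back-of-the-envelope sketch (the 30 t/h feed rate is a made-up number, and the steady-state balance assumes all recycled material reports back to the drum), the drum throughput T satisfies T = F + r*T for fresh feed F and recycled fraction r, so T = F/(1-r).

```python
# Steady-state balance for a drum with recycle: throughput T = F + r*T,
# where F is the fresh feed rate and r the recycled fraction of discharge,
# giving T = F / (1 - r). With r = 0.7 the drum handles roughly 3.3x the
# fresh feed. The 30 t/h feed rate is an illustrative assumption.

def drum_throughput(fresh_feed_tph, recycle_fraction):
    """Steady-state drum throughput for a given recycled fraction."""
    return fresh_feed_tph / (1.0 - recycle_fraction)

print(round(drum_throughput(30.0, 0.7), 1))  # 100.0
```

This is why pellet size control matters economically: reducing the undersize fraction directly shrinks the circulating load the drum and screens must carry.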

Wet pellets are then fed into a multistage sintering process. This process is intended to render the otherwise easily deformable green pellets into hard, fixed-form spherical pellets, capable of being transported without deformation or breakage. The size of the pellets produced by the pelletizing drum, which is greatly affected by the preceding processes and process inputs, contributes significantly to the efficient operation of subsequent chromite liberation processes, in particular the sintering process and the reduction process. If the pellets do not conform to the required size, they are returned to the agglomeration (pelletizing) plant for rework. This rework operation and the ineffective operation of the blast furnaces in the downstream processes are costly to a pelletizing plant and a smelter plant. It is for this reason that pelletizing plants strive towards implementing control systems capable of monitoring, adjusting and improving pellet production processes (Heydari et al. 2013).

2.2 AN INTRODUCTION TO DIP AND PSE

2.2.1 Typical PSE methods used in industry
The particle size estimation systems discussed in the following sections fall into the following five categories:

• Manual: Size estimation done by human operators.
• Automatic: Size estimation done by a size estimation system upon an order to do so, without human effort required.
• Offline: The size estimation system and/or procedure is disconnected from the particle stream it does size estimation on. Size estimation is done on particle samples which are taken from the particle stream and typically analysed in a different location from the particle stream or the system producing it.
• Online (inline): The size estimation system is connected to the particle stream and/or the system producing it. Size estimation is done on the actual particle stream, and results are produced in a timely

10

Stellenbosch University https://scholar.sun.ac.za

manner. However, the system has no limits on latency associated with the size estimation process.  Real-time: An online size estimation system that delivers results of analysis instantaneously or near instantaneously, therefore having a limit in terms of latency associated with the size estimation process.

2.2.1.1 Manual and Offline size estimation methods and systems

The most common manual size estimation methods are mechanical sieving and size estimation using a calliper. A sieve analysis can typically be executed on any kind of granular material and is used to assess the particle size distribution of such a material. A sieving operation can take anything from 20 minutes (Al-Thyabat & Miles 2006) up to two hours (Mora et al. 1998). During the procedure, sieves of incremental sizes are stacked on top of each other, with the smallest grade at the bottom. Pellets are placed into the top sieve, and the sieve stack is then placed on a mechanical vibrator or shaker. The vibrations from the mechanical shaker cause the pellets to change orientation and position within a particular sieve. This movement increases the chance that pellets smaller than the sieve grade fall through the openings into the next, smaller grade sieve. Figure 3 shows the equipment set-up used to conduct a sieve analysis. Here three sieves are stacked on top of one another and placed on the mechanical vibrator.

Figure 3: Sieve analysis equipment set-up.

As mentioned, the other manual size estimation method used is sizing of particles with a sliding calliper. As an example, this method was used to determine the reference size of samples by Van Dalen (2004). It must also be mentioned that in some cases, especially during the period when DIP PSE was still a relatively new concept, typical IP and IA procedures (normally automatic) were done offline and manually in order to overcome the inherent difficulties associated with the specific operation. Scope restrictions and time constraints of the studies also contributed to PSE being done in this manner. In such cases, the separation and filling of digital images of particles (typically operations subsequent to the initial image pre-processing operations in a DIP algorithm), as well as particle counting and measurement, were done manually in an applicable computer program (Mertens & Elsen 2006).

2.2.1.2 Automatic Offline systems, Online systems, and Real-time systems

Systems that fall into these categories are differentiated in two main ways: firstly by the hardware and methods used to acquire the images, and secondly by the algorithms and DIP procedures used to process the obtained images. In both cases, the diversity of technologies used suggests that the solutions are application dependent. However, it also points to the fact that this fairly new research field is still developing, with many technologies being viable for implementation. The scope of this dissertation entails the development of a PSE algorithm suitable for implementation as part of a PSES that meets the criteria of the categories stated in the section heading. In the light of this scope, the following sections discuss and give an overview of various systems classified in any of these three categories. These sections aim to evaluate different applications of such systems within a variety of industries and application fields, discussing the problems faced and the associated solutions, and to analyse trends in DIP and IA. Aspects that will be reviewed include: hardware and software, the experimental processes and system validation procedures, and sources of error.

2.2.2 Relevance of DIP PSE research

Literature provides substantial examples where Digital Image Processing (DIP) techniques have been utilised to conduct size estimation of various objects in numerous different processes and applications. These include size estimations of soya beans (Shahin et al. 2006), pharmaceutical pellets (Podczeck & Newton 1995), industrial feeds (Ljungqvist et al. 2011), and various agglomerates produced within the mining industry. The mining industry applications include coal particles (Zelin et al. 2012), building material aggregates (Itoh et al. 2008), gravel and other aggregate particles (Kinnunen & Mäkynen 2011), mineral slurries (Haavisto & Hyötyniemi 2011), and ore pellets (Perez et al. 2011; Thurley & Andersson 2008). The application of PSES using DIP, such as those listed above, has proven that systems incorporating IP and IA are very useful and competent tools for analysing and determining various particulate material parameters such as size and shape. Especially concerning shape characteristics, the research conducted suggests that these systems are capable of deducing far more information about the observed particles than traditional sieving methods. In comparison to traditional manual methods such as sieving, direct measuring or counting, and dielectric methods, which are typically performed by plant operators, IA is far more efficient and less time consuming (Andersson & Thurley 2011). The speed, convenience and versatility of its application make DIP size estimation systems an attractive option for particle size estimation in any industry (Mora et al. 1998). When considering process control and the associated operational and financial benefits, it is noted that online systems can provide online and real-time feedback, or near real-time feedback, to process operators or automatic control systems.
The considerable reduction in reaction time enables control systems to react quicker to system faults or out-of-specification production, make the necessary adjustments, and rectify system performance (Treffer et al. 2014). In all of the above-mentioned cases, DIP methods also eliminate the need for a human operator to produce an input to the control system. This has a process operating cost reduction advantage, and mitigates the possible negative implications associated with human error. In this manner, waste in various forms can be minimised or eradicated (Treffer et al. 2014). However, despite these advantages, offline methods are still widely used to determine size distributions in various mineral processing operations (Montenegro Rios et al. 2011). Therefore, the opportunity exists to apply IA to these operations to increase their efficiency and productivity, and reduce their energy consumption. As an example, Montenegro Rios et al. (2011) state that, in general, fifteen to thirty percent of the pellets produced by pelletizing discs do not have the required size distribution. These pellets have to be returned to the pelletizer, resulting in a large recirculating load. Stable and safe operation of downstream processes such as the SAF is promoted by producing a constant and optimal pellet size distribution (Glastonbury et al. 2010; Riekkola-Vanhanen 1999). The aforementioned advantages associated with conducting particle size estimation with DIP PSES, especially when compared to typical manual methods, make DIP PSES a very attractive alternative to these manual methods. The versatility of such systems furthermore renders their application scope seemingly endless. Finally, it is clear that the opportunity for the implementation of such systems exists, given the lack of such systems currently operational in various industries, including the mineral beneficiation industry. These reasons are amongst the main factors that form part of the justification and motivation to investigate DIP PSES for the estimation of FeCr pellet size, as stated in the objective of the study.

2.2.3 Background of computerised procedures executed on digital images for PSE

2.2.3.1 Scope of computerised procedures executed on digital images

Digital Image Processing, Image Analysis and Computer Vision are three related fields that collectively entail a wide range of computerised operations aimed at rendering digital images into a more useful form for solving a specified problem. These three fields together form part of a continuum with image processing at one end and computer vision at the other. In the middle lies IA, which can also be referred to as the field of Image Understanding. The continuum can be divided into three levels of operations performed on digital images: low-, mid-, and high-level operations. Figure 4, adapted from Gonzalez & Woods (2008), summarises the various operations that make up the three levels, and illustrates the scope of the three fields described above.


1. Low-level Operations (Digital Image Processing)
   a. Image Acquisition
      i. Colour model selection, e.g. Red-Green-Blue (RGB), Cyan-Magenta-Yellow-Key (CMYK) or other colour formats (hardware dependent)
      ii. Grey-scale/Intensity conversion
   b. Image Pre-Processing
      i. Noise reduction
      ii. Contrast enhancement
      iii. Image sharpening
2. Mid-level Operations (Digital Image Processing / Image Analysis)
   a. Image Processing (problem specific)
      i. Segmentation (object/region partitioning)
      ii. Object description (feature selection)
         1. Suitable computer-processable form
         2. Attributes extracted: contours, edges, individual object identity
      iii. Object recognition/classification
3. High-level Operations (Image Analysis / Computer Vision)
   a. Image Analysis/Understanding (object/region analysis)
      i. Making sense of recognised objects
         1. Size determination
         2. Counting
         3. Object tracking
      ii. Cognitive functions

Figure 4: The three levels of computerised operations performed on digital images.

From literature (Liao & Tarng 2009; Chen et al. 2014), the typical steps (groups of techniques) executed to conduct these computerised operations, when used specifically for PSE, can be summarised as indicated by Figure 5:


Image Acquisition
→ Pre-processing (noise removal, enhancement)
→ Particle Extraction (thresholding, boundary-based, region-based, hybrid techniques)
→ Particle Analysis (chord size, equivalent circle diameter, maximum size, simple Feret box)
→ Particle Grading (size distribution representation)
→ Data Report

Figure 5: Schematic of the typical Image Processing and Analysis procedure.

The typical algorithms that form part of this procedure are explained in more detail in Section 2.2.4. The primary objective of this study is to develop a PSE algorithm capable of determining the size distribution of FeCr pellets. With reference to Figure 4, it is clear that the scope of this study points to a solution involving all three major fields as indicated in the figure.

2.2.3.2 Short history of PSE using DIP

The concept of analysing and extracting information from digital images dates back to as early as the 1970s (Wang 2008). In terms of identifying and analysing objects captured in digital images, the late nineties and the turn of the millennium saw many techniques being developed and investigated for these purposes (Thurley 2013). In subsequent years, the value of possible automation and control applications of digital image processing (DIP) led to considerable research into methods to extract and analyse information in an accurate, efficient and fast manner (Wang 2008). Consequently, the beginning of the new millennium saw large growth in DIP applications in various production and processing systems (Zhang et al. 2013).

2.2.4 Overview of the fundamental algorithms associated with DIP

2.2.4.1 Digital image basics

In terms of colour images, the most commonly used image processing colour system is the RGB colour system. Using this system, an image is made up of separate red, green, and blue channels or 'images'. When displaying an image, this system combines the three channels, defining the colour it displays as the percentages of the red, green, and blue hues mixed together. Depending on the application, various colour systems and representations of images can be used to display the information captured in digital images. Various transformations between these colour systems highlight different aspects of images that can be used for specific and different applications. Another representation of an image important in DIP is the grey-scale representation discussed in the Pre-processing section.


An image, such as the different individual channels of a colour image (RGB, CMYK, etc.) or a grey-scale representation of an image, can be defined as a two-dimensional function, f(x, y). In this definition, x and y are referred to as being spatial or plane coordinates, while the amplitude of f at any coordinate pair (x, y) is referred to as the intensity of the image at that coordinate point. When the values of x, y and f as discussed above are all finite, discrete quantities, the image is referred to as a digital image. Therefore it can be said that f(x, y) is the value of the digital image at a discrete coordinate position, (x, y), with x and y being integers (Gonzalez & Woods 2008). Furthermore a digital image consists of a finite number of elements of the form f(x, y). These elements are referred to as pixels. Each of these elements, having a specific location and value, can be represented in a 2D array having M rows and N columns. Figure 6 gives a graphical representation of the numerical values of a digital image displayed as a 2D numerical array (matrix). This numerical array representation of a digital image is very important in image processing, since most of the processing and algorithm development that are done on images, are done on images represented in this form. Equation 1 is a representation of an M x N numerical array in equation form. Both sides of Equation 1 are equivalent ways to express a digital image quantitatively, with the right side being a matrix of real numbers. Convention holds that the origin of an image is defined at the coordinate position (0, 0).

Figure 6: Example of a Digital Image represented as a 2-D numerical array (0, 0.5, and 1 represent black, grey, and white, respectively).

f(x, y) = | f(0, 0)       f(0, 1)       ...   f(0, N-1)     |
          | f(1, 0)       f(1, 1)       ...   f(1, N-1)     |
          | ...           ...           ...   ...           |
          | f(M-1, 0)     f(M-1, 1)     ...   f(M-1, N-1)   |

Equation 1: Digital image f of size M x N in equation form.
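Viewed this way, a digital image maps directly onto a numerical array object. A minimal sketch in Python with NumPy (the 3 x 3 values are illustrative, following the black/grey/white convention of Figure 6):

```python
import numpy as np

# A 3 x 3 digital image f(x, y): 0 = black, 0.5 = grey, 1 = white
f = np.array([[0.0, 0.5, 1.0],
              [0.5, 1.0, 0.5],
              [1.0, 0.5, 0.0]])

M, N = f.shape      # M rows, N columns, as in Equation 1
origin = f[0, 0]    # by convention the origin is at coordinate (0, 0)
```

Indexing `f[x, y]` then returns the intensity at the discrete coordinate (x, y), which is the array operation that all subsequent processing steps build on.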

2.2.4.2 Image Acquisition

The image acquisition or image capturing process is greatly influenced by its application. This typically refers to the process on which IA will be performed and the desired outputs of the system. Hardware and software need to be chosen according to their imaging capability and quality, accuracy, speed of calculation and robustness in terms of the environment in which they operate, as well as their ability to produce images with the previously mentioned characteristics. During image acquisition, image noise can be introduced into the image. This noise can be the result of poor hardware systems (causing distortion of the signal), interference of other signals with the captured signal, adverse operating environments, or even noise added to the image during acquisition for processing. The following types of noise are common:

 Salt and pepper noise. This term describes the occurrence of pixels in the image that differ greatly in colour or intensity compared to their surrounding pixels. In addition, the value of a noisy pixel does not relate to the colour of surrounding pixels in any way. Salt and pepper noise generally only affects a few pixels in an image. Images containing salt and pepper noise contain dark and white dots, resembling salt and pepper. Typically, this type of noise is caused by flecks of dust inside the camera and overheated or faulty CCD elements.
 Gaussian noise refers to the event where each image pixel value is changed by a (usually) small amount. A histogram can be used to represent the distribution of the noise, comparing the amount of distortion of a pixel value against the frequency of occurrence. Due to the central limit theorem, the Gaussian (normal) distribution is usually a good model for this kind of noise.
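The two noise models above can be simulated to generate test data for filter development. A hypothetical NumPy sketch (the 5 % corruption rate and 0.02 standard deviation are arbitrary illustrative choices, not values from this study):

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.full((64, 64), 0.5)            # uniform mid-grey test image

# Salt and pepper: a small fraction of pixels forced to black (0) or white (1)
noisy_sp = clean.copy()
mask = rng.random(clean.shape) < 0.05     # ~5 % of pixels affected
noisy_sp[mask] = rng.choice([0.0, 1.0], size=mask.sum())

# Gaussian: every pixel perturbed by a small, normally distributed amount
noisy_g = clean + rng.normal(0.0, 0.02, clean.shape)
```

Note the qualitative difference: the salt-and-pepper image still consists mostly of untouched pixels with a few extreme outliers, whereas the Gaussian-noise image has every pixel slightly perturbed.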

2.2.4.3 Pre-processing

With any IA procedure, the images from which information needs to be extracted need to undergo application-specific pre-processing steps to prepare them for information extraction. For procedures aimed at identifying and estimating the size of objects in images, converting a colour image to an intensity or grey-scale representation of the image is usually the first step in the DIP procedure. This conversion transforms the image into one consisting of a single intensity channel by highlighting the luminance plane information. The luminance plane carries most of the information required to extract important image features such as edges and object boundaries. These features are specifically important for the purposes of this study, since object size estimation relies on the accurate recognition and extraction of objects from an image. Therefore, extracting the luminance plane information is of critical importance. In addition, this conversion reduces the amount of information to be processed, reducing the processing time and storage requirements of a typical DIP system. A grey-scale image can be seen as a special case in the colour system. It is a monochrome image, and is typically encoded using 8-bit integer encoding. Consequently, such an image has 256 intensity levels. In the case of such encoding, an 8-bit byte is used to represent each pixel in the image, and the intensity value of each pixel ranges between black (0) and white (255). Grey-scale images are good representations of the degree of brightness (luminance) in an image. Equation 2 is a representation of the general transformation used to convert an RGB colour image into a single grey-scale band image. The equation is applied to each pixel in the image to calculate an intensity value associated with every pixel location in the image. In this equation, adapted from Gonzalez & Woods (2008), R, G and B represent the intensities associated with the red, green and blue bands respectively, and v represents the intensity value at a given pixel location (x, y).

v(x, y) = 0.299 R(x, y) + 0.587 G(x, y) + 0.114 B(x, y)

Equation 2: Conversion used to convert colour images to grey-scale images.

Equation 2 can be used to generate an intensity or grey-scale image V, represented in matrix form as:

V = 0.299 R + 0.587 G + 0.114 B

Equation 3: Colour space to intensity conversion represented in matrix form, where R, G and B are the M x N matrices of the red, green and blue channels.
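The per-pixel weighted sum can be sketched as follows in Python with NumPy, assuming the standard ITU-R BT.601 luminance weights (0.299, 0.587, 0.114); the `rgb_to_grey` name and the 2 x 2 test image are illustrative:

```python
import numpy as np

def rgb_to_grey(rgb):
    """Weighted luminance conversion: v = 0.299 R + 0.587 G + 0.114 B,
    applied over the last (channel) axis of an M x N x 3 array."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights

rgb = np.zeros((2, 2, 3))        # all-black image with three channels
rgb[0, 0] = [1.0, 1.0, 1.0]      # one white pixel
grey = rgb_to_grey(rgb)          # single-channel M x N result
```

Because the three weights sum to one, a white pixel maps to full intensity and the output stays within the input intensity range.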

Noise removal

The next steps involve removing unwanted distortion such as image noise, followed by the enhancement of image features of interest. Before considering noise removal, it is important to understand the concept of spatial domain operations performed on an image. Here, spatial domain refers to the image plane. Processing methods that fall into this category are based on direct manipulations of pixels in the image. Most filters used in DIP make use of spatial domain operations to remove image noise by altering image pixel intensity values. Within spatial domain processing there are two primary categories of methods: intensity transformations and spatial filtering. Intensity transformations operate on single pixels of an image, with the primary goals being contrast manipulation and image thresholding. Spatial filtering, on the other hand, performs tasks such as image sharpening by working in a neighbourhood of every pixel in an image (Gonzalez & Woods 2008). In general, spatial domain operations can be described by the expression contained in Equation 4, adapted from Gonzalez & Woods (2008):

v(x, y) = T[f(x, y)]

Equation 4: General expression for spatial domain operations.

Here, f(x, y) is the input image and v(x, y) is the output image. T is some specified operation on the input f conducted over a neighbourhood of the point with coordinates (x, y). During neighbourhood processing, the operation T is applied to each neighbourhood as it is moved from pixel to pixel in the image, in order to produce an output image. The neighbourhood typically has a rectangular shape, is much smaller than the image, and is centred on the pixel (x, y). Figure 7 shows the basic implementation of neighbourhood processing.

Figure 7: A spatial domain representation of a 3x3 neighbourhood centred on a point (x, y) in an image f.

Median filtering forms part of the spatial domain operations performed on an image, specifically the spatial filtering category. A median filter utilises neighbourhood operations to replace the central value of an M-by-N neighbourhood of pixels with the median value of all the pixels in the neighbourhood. If the neighbourhood does not have an exact centre, the algorithm biases towards the upper left corner. Because the median is less sensitive to extreme values than the mean, this filter is effective at removing salt and pepper noise from an image without a significant reduction in image sharpness. Nevertheless, median filters do cause some blurring of edges. For this reason, these filters are often used in computer vision applications where preserving edges is not of critical importance, as is the case for size estimation of spherical-shaped pellets, but where a region of intensities in an image should rather be smoothed.
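A minimal median filter along these lines can be sketched in plain NumPy (an illustrative implementation, not the one used in this study; `size` is the square neighbourhood width, and borders are handled by replicating edge pixels):

```python
import numpy as np

def median_filter(img, size=3):
    """Replace each pixel with the median of its size x size neighbourhood
    (edges handled by replicating border pixels)."""
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

# A single 'salt' outlier is removed: eight 0.5 neighbours outvote the 1.0
img = np.full((5, 5), 0.5)
img[2, 2] = 1.0
filtered = median_filter(img)
```

The explicit double loop keeps the neighbourhood operation of Equation 4 visible; a production implementation would typically use an optimised library routine instead.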

18

Stellenbosch University https://scholar.sun.ac.za

Another method often used to remove noise is to convolve the original image with a mask that represents a low-pass filter. This is often termed a smoothing operation. An example of such a mask is a Gaussian mask, which comprises elements determined by a Gaussian function. By applying this convolution, the value of each pixel is brought into closer harmony with the values of its neighbours: each pixel is set to a weighted average of itself and its nearby neighbours, with the Gaussian filter representing one possible set of weights. As with median filters, smoothing filters such as the Gaussian mask also tend to blur an image. This is because pixel intensity values that are significantly higher or lower than the surrounding neighbourhood "smear" across the area.
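A sketch of such a smoothing operation (assumed helper names; since the Gaussian mask is symmetric, sliding correlation and convolution coincide here):

```python
import numpy as np

def gaussian_kernel(size=3, sigma=1.0):
    """2-D Gaussian mask, normalised so the weights sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def smooth(img, kernel):
    """Set each pixel to the kernel-weighted average of its neighbourhood
    (borders replicated)."""
    pad = kernel.shape[0] // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + kernel.shape[0], j:j + kernel.shape[1]]
            out[i, j] = np.sum(window * kernel)
    return out

k = gaussian_kernel(3, 1.0)
spike = np.zeros((5, 5))
spike[2, 2] = 1.0
blurred = smooth(spike, k)    # the spike energy spreads over its neighbours
```

Normalising the mask to sum to one ensures that a uniform region passes through unchanged, while isolated spikes are spread out, which is exactly the "smearing" behaviour described above.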

Contrast enhancement

The objective of the enhancement step is typically to clarify objects of interest within the image. This could refer to highlighting particles in the foreground so as to eliminate dim and blurred particles in the background and avoid errors in size estimation. Furthermore, Gonzalez & Woods (2008) describe enhancement as any process that manipulates an image so that the output image is more useful than the original image for a specified application. This definition implies that enhancement techniques are problem specific.

Contrast stretching

This technique forms part of the piecewise-linear transformation functions conducted on digital images to increase overall image contrast. The technique increases contrast by expanding the spectrum of intensity levels in an image so that it spans the whole intensity range of the recording medium or display device (Gonzalez & Woods 2008). Low contrast in images can be the result of poor illumination, incorrect aperture settings during image acquisition, or a lack of dynamic range in the imaging sensor. Figure 8 shows a typical transformation function used for contrast stretching. In this function, r and s denote the pixel intensities of the input and output images respectively. The positions of the two points (r1, s1) and (r2, s2) control the shape of the transformation function. The special case of r1 = r2, s1 = 0 and s2 = L - 1 produces a thresholding function. This kind of function produces a binary image, and will be discussed later in this section. By changing the positions of (r1, s1) and (r2, s2), various degrees of contrast can be achieved in the output image due to the varying spread of the intensity levels. In order to achieve a function that is single valued and increases monotonically, general practice requires r1 ≤ r2 and s1 ≤ s2. Enforcing this norm ensures that the order of intensity levels is preserved.


Figure 8: Typical form of a transformation function used for contrast stretching (the curve T(r) maps input intensity r to output intensity s, with control points (r1, s1) and (r2, s2); both axes span [0, L - 1]).

Maximum contrast stretching of the entire range of input intensities can be achieved by setting (r1, s1) = (rmin, 0) and (r2, s2) = (rmax, L - 1), where rmin and rmax represent the minimum and maximum intensity levels in the image. A transformation function with these parameters stretches the input intensity levels linearly from their original range to the full range of [0, L - 1].

Histogram processing

Histogram processing refers to a group of techniques that aim to increase contrast in an image by performing operations using the histogram of an image. Gonzalez & Woods (2008) describe the histogram of a digital image with intensities in the range [0, L - 1] as the discrete function h(rk) = nk, where rk is the kth intensity value and nk is the number of pixels in the image having an intensity equal to rk. Gonzalez & Woods (2008) state that it is also common practice to normalise the histogram of an image. This is done by dividing each of the components of the histogram by the total number of pixels in the image. This total is equal to the product M x N where, as mentioned earlier, M and N are the number of rows and columns present in the image, respectively. From this procedure, a normalised histogram can be represented by Equation 5:

p(rk) = h(rk) / MN = nk / MN, for k = 0, 1, 2, ..., L - 1

Equation 5: Normalised histogram representation (Gonzalez & Woods 2008).

From this equation it can be derived that for a given image, p(rk) is an estimation of the probability of occurrence of the intensity level rk in the image. Furthermore, it can be stated that the sum of the components of a normalised histogram is equal to 1. Figure 9 provides a graphical example of an image and its associated histogram. The horizontal axis of the histogram corresponds to the values of rk. These are the intensity values of the image. The vertical axis corresponds to the values of h(rk) = nk.


Figure 9: A grey-scale image and its intensity histogram.

It can be inferred that an image with low contrast will typically have a narrow histogram located near the middle of the intensity scale. A dark image will typically have a histogram with intensities clustered around the low end of the intensity scale. Similarly, a light image will have a histogram containing intensities that are clustered around the high side of the intensity spectrum. High contrast images will typically have a histogram with intensities spread uniformly over the entire range of possible intensities, leading to an image with high grey-scale detail.
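Computing the normalised histogram of Equation 5 amounts to counting pixels per intensity level and dividing by the pixel total. A minimal sketch (the `normalised_histogram` name and tiny test image are illustrative):

```python
import numpy as np

def normalised_histogram(img, L=256):
    """p(r_k) = n_k / (M*N) for k = 0 .. L-1, as in Equation 5."""
    counts = np.bincount(img.ravel(), minlength=L)   # n_k per intensity
    return counts / img.size                          # divide by M*N

# 2 x 2 image: two black pixels, one near-black, one white
img = np.array([[0, 0], [1, 255]], dtype=np.uint8)
p = normalised_histogram(img)
```

As stated above, the components of p sum to 1, so p(rk) can be read as an estimate of the probability of intensity rk occurring in the image.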

Histogram equalization

Histogram Equalization, like Contrast Stretching, is an intensity transformation operation used to enhance contrast within an image. It is an intensity mapping operation of the form depicted in Equation 6. Considering continuous intensities, recall that r represents the intensities of an image, within the range [0, L - 1], with r = 0 representing black and r = L - 1 representing white (Gonzalez & Woods 2008).

s = T(r), 0 ≤ r ≤ L - 1

Equation 6: Standard intensity mapping transformation function.

Gonzalez & Woods (2008) provide the following conditions for effective mapping operations:
1. T(r) is strictly monotonically increasing over the interval 0 ≤ r ≤ L - 1
2. 0 ≤ T(r) ≤ L - 1

Condition 1 guarantees that the output intensity values will not be less than the corresponding input intensity values. It also prevents ambiguities from occurring during the application of the inverse transform, by requiring that the mapping from s to r be strictly one-to-one. Condition 2 guarantees that the input and output intensity ranges are the same. Other image enhancement techniques that utilise the histogram of images require the use of the inverse transformation, depicted in Equation 7.

r = T^(-1)(s), 0 ≤ s ≤ L - 1

Equation 7: Inverse intensity mapping transformation function.

From previous discussions it can be recalled that the intensity levels in an image can be regarded as random variables over the interval [0, L - 1]. This permits the use of a fundamental descriptor of a random variable, its probability density function. When considering discrete intensities, this function is referred to as a probability mass function.

The probability of the occurrence of an intensity level rk in a digital image is presented by Equation 8.


p(rk) = nk / MN, for k = 0, 1, 2, ..., L - 1

Equation 8: Probability Mass Function of the occurrence of an intensity rk.

Following the definition of Equation 8 as the probability mass function of the intensities in a given image, a transformation T(rk) can be defined that will map an input image pixel with intensity rk into a corresponding output (processed) image pixel with intensity level sk. This transformation, given in Equation 9, is referred to as a Histogram Equalization transformation.

sk = T(rk) = (L - 1) ∑ p(rj), with the sum taken over j = 0, 1, ..., k, for k = 0, 1, 2, ..., L - 1

Equation 9: Histogram Equalization transformation.

The application of Equation 9 will not necessarily result in a uniform histogram. It does, however, tend to produce an equalized image with intensity values that span a wider range of the intensity scale, resulting in a contrast enhanced image.
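The cumulative sum in Equation 9 makes the transformation straightforward to implement as a 256-entry lookup table. A sketch assuming 8-bit images (the `equalise` name and the small dark test image are illustrative):

```python
import numpy as np

def equalise(img, L=256):
    """s_k = (L-1) * cumulative sum of p(r_j), as in Equation 9,
    rounded to the nearest integer intensity and applied as a lookup table."""
    p = np.bincount(img.ravel(), minlength=L) / img.size   # Equation 8
    T = np.round((L - 1) * np.cumsum(p)).astype(np.uint8)  # Equation 9
    return T[img]           # map every input intensity through T

# A dark image clustered at the low end of the scale
dark = np.array([[10, 10], [20, 30]], dtype=np.uint8)
bright = equalise(dark)     # intensities spread over a much wider range
```

Consistent with the remark above, the output histogram is not exactly uniform (intensities are discrete and must be rounded), but the values are spread over a far wider portion of the [0, L - 1] scale.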

Histogram matching

When automatic enhancement is desired, Histogram Equalization provides good and predictable results. However, it is not always the best approach to enhancement. In some cases it is desirable to map the histogram of an input image to an output histogram with a specified shape, in order to produce a desired output (processed) image. The method used to accomplish this is called Histogram Matching or Histogram Specification. To describe the procedure, let r and z denote the intensity levels of the input and output images, respectively. From these variables, the probability density function of r, pr(r), can be obtained from the given input image, while pz(z) is the specified probability density function required of the output image. By using Equation 8, and defining a random variable s, it is possible to define transformation functions for both r and z. These are given by Equation 10 and Equation 11 respectively.

These equations are defined so that for a specific k and q, T(rk) = G(zq) = sk. Since pr(r) can be determined from the input image, the values of s can be obtained.

sk = T(rk) = (L - 1) ∑ pr(rj) = ((L - 1) / MN) ∑ nj, with both sums taken over j = 0, 1, ..., k

Equation 10: Transformation function for r.

G(zq) = (L - 1) ∑ pz(zi) = sk, with the sum taken over i = 0, 1, ..., q

Equation 11: Transformation function for z.

In Equation 10, MN is the total number of pixels in an image, nj is the number of pixels that have an intensity value equal to rj, and L represents the total number of intensities in the image. It is also important to note that pz(zi) is equal to the ith value of the specified (desired) output histogram.

By using the specified (desired) probability mass function, pz(z), the transformation function G(zq) can be obtained. It follows that the desired value zq can be determined by applying the inverse transformation depicted in Equation 12. Since z is obtained from s, this process is said to be a mapping from s to z, where z contains the desired intensity values of the required output image.

$$z_q = G^{-1}(s_k)$$

Equation 12: Inverse transformation function to obtain zq.
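The mapping from r to z described above can be sketched in a few lines of pure Python. This is an illustrative sketch, not part of the thesis method: the 8-level intensity range, the toy histograms, and the round-half-up rule are all assumptions made for the example.

```python
# Illustrative sketch of discrete histogram matching. The 8-level range and
# toy histograms are assumptions for this example; p_r is the normalised
# histogram of the input image, p_z the specified (desired) histogram.

L = 8
p_r = [0.25, 0.25, 0.20, 0.10, 0.10, 0.05, 0.03, 0.02]  # input PDF, p_r(r_j)
p_z = [0.02, 0.03, 0.05, 0.10, 0.10, 0.20, 0.25, 0.25]  # desired PDF, p_z(z_i)

def cdf_transform(p):
    """Scaled CDF: (L - 1) * sum_{j<=k} p(j), rounded half-up to a level."""
    out, c = [], 0.0
    for pj in p:
        c += pj
        out.append(int((L - 1) * c + 0.5))
    return out

s = cdf_transform(p_r)   # s_k = T(r_k), Equation 10
G = cdf_transform(p_z)   # G(z_q),      Equation 11

# Inverse mapping of Equation 12: smallest z_q with G(z_q) >= s_k.
mapping = [min(q for q in range(L) if G[q] >= sk) for sk in s]
print(mapping)
```

Each input level r is replaced by mapping[r]; applying this lookup to every pixel produces the histogram-matched output image.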

2.2.4.4 Image segmentation aimed at particle extraction

Following noise removal and image enhancement, extraction of the target information can take place. For the purpose of this study, the goal of the image processing algorithm is to determine the sizes of the objects (pellets) captured in the images. The information that needs to be extracted is therefore the pellets themselves. This points to the need to distinguish pellets from the background and other objects in the image, as well as to isolate individual pellets. Literature points to image segmentation as the preferred tool for this task. Although pre-processing and extraction appear to be two distinct and sequential operations, many practical implementations combine techniques from both areas to reduce overall processing time.

Segmentation refers to any operation conducted on an image with the aim of subdividing or partitioning the image into sub-regions or objects. Image segmentation can also be described as the process of assigning a label to every pixel in an image, such that pixels with the same label share certain characteristics (Shapiro & Stockman 2001). The typical goal of image segmentation is to locate objects and boundaries in images, which are represented by lines, curves and sharp intensity changes. Considering this description, the output of an image segmentation algorithm is a set of segments that collectively cover the entire image, or a set of contours extracted from the image. Pixels in the same region are similar with respect to some characteristic or computed property, such as colour, intensity, or texture. In contrast, adjacent segments differ significantly with respect to the same property or group of properties (Shapiro & Stockman 2001). The extent and level of this partitioning is problem dependent.
However, the basic requirement remains that the resulting image should be more meaningful and easier to analyse for a specific application, such as size estimation of objects. Segmentation is also viewed as one of the most challenging tasks in any computerised IA procedure, and since the eventual success of most image processing procedures that involve segmentation is determined by the accuracy of the segmentation step, increased segmentation accuracy carries considerable weight.

Most segmentation algorithms can be divided into two main categories, based on two fundamental properties of intensity values: discontinuity and similarity (Gonzalez & Woods 2008). The first category (edge-based segmentation) aims to partition images into regions based on sudden and definite changes in intensity. The points (boundaries) where these discontinuities in intensity occur are referred to as edges. In the second category (region-based segmentation), image partitioning is based on similarities within regions according to a predefined set of criteria. Techniques such as


thresholding, region growing, and region splitting and merging fall into the second category. When practically applying segmentation techniques, it has been shown that segmentation performance can be drastically improved by combining various techniques from both categories.

Gonzalez & Woods (2008) suggest five conditions that need to be satisfied for effective segmentation. Considering R as the spatial region covered by an image, successful segmentation of R into n sub-regions, R1, R2, …, Rn, should conform to:

1. Segmentation must be complete, with every pixel included in a region.
2. The points in a segmented region must be spatially connected in some way.
3. Segmented regions must be disjoint.
4. The pixels belonging to the same segmented region should all share the same properties in terms of intensity and connectedness.
5. Two neighbouring regions must differ in terms of these properties.

Edge Detection

Basic concepts of edges

Edge detection refers to detecting sharp, local changes in intensity in an image, and is the most common method used for segmenting an image based on such changes. An edge detector is a local image processing technique designed to identify edge pixels, i.e. pixels at which these sharp changes in intensity occur. An edge or edge segment is then defined as a set of connected edge pixels.

Edges can be divided into two types, ramp edges and step edges, according to their intensity profiles. Step edges are associated with extreme changes in intensity occurring over a one-pixel distance (between two adjacent pixels); these changes are referred to as sharp and abrupt. Ramp edges are indicative of a smooth transition between extreme intensities and typically occur over a few pixels. In developing algorithms to detect edges, mathematical expressions are developed that represent these two types or models of edges. The effectiveness of an edge detection algorithm depends on the differences between these mathematical models and the actual edges in the image. Figure 10 shows an image containing intensity changes typically associated with ramp and step edges: point A shows an area associated with a ramp edge, and point B an area associated with a step edge.


Figure 10: Image containing ramp (point A) and step (point B) edges.

First- and second-order derivatives are widely used and well suited for detecting sharp, local changes in intensity values. Considering a one dimensional digital function f(x), an approximation of a first-order derivative at the point x can be obtained by expanding the function f(x + ∆x) into a Taylor series around the point x. In this expansion ∆x = 1, and only the linear terms are kept. The result is referred to as a digital difference, as shown in Equation 13:

$$\frac{\partial f}{\partial x} = f'(x) = f(x+1) - f(x)$$

Equation 13: A digital approximation of a first-order derivative at point x.

An equation for a second-order derivative can be similarly obtained by differentiating Equation 13 with respect to x:

$$\frac{\partial^2 f}{\partial x^2} = \frac{\partial f'(x)}{\partial x} = f'(x+1) - f'(x)$$

$$\frac{\partial^2 f}{\partial x^2} = [f(x+2) - f(x+1)] - [f(x+1) - f(x)]$$

$$\frac{\partial^2 f}{\partial x^2} = f(x+2) - 2f(x+1) + f(x)$$

Equation 14: A digital approximation of a second-order derivative at point x + 1.

This expansion is, however, about the point x + 1. In order to obtain a second-order derivative at point x, 1 is subtracted from the arguments in Equation 14 to form Equation 15:

$$\frac{\partial^2 f}{\partial x^2} = f''(x) = f(x+1) - 2f(x) + f(x-1)$$

Equation 15: Approximation of second-order derivative at point x.
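As a quick numerical check of Equations 13 and 15, the two difference formulas can be applied to a 1-D intensity profile containing a step edge. The profile values below are an assumption made for this illustration.

```python
# Digital first- and second-order differences (Equations 13 and 15) applied
# to a 1-D intensity profile with a step edge between indices 3 and 4.

f = [10, 10, 10, 10, 50, 50, 50, 50]

def first_diff(f, x):
    # Equation 13: f'(x) ~ f(x + 1) - f(x)
    return f[x + 1] - f[x]

def second_diff(f, x):
    # Equation 15: f''(x) ~ f(x + 1) - 2 f(x) + f(x - 1)
    return f[x + 1] - 2 * f[x] + f[x - 1]

d1 = [first_diff(f, x) for x in range(len(f) - 1)]
d2 = [second_diff(f, x) for x in range(1, len(f) - 1)]
print(d1)  # single strong response at the step
print(d2)  # double response with a sign change
```

The first difference responds once at the step, while the second difference produces a double response with a sign change, which is the behaviour summarised for second-order derivatives in the text.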

Table 1 provides a summary of the general properties of first- and second-order derivatives when used to locate edges in images, adapted from Gonzalez & Woods (2008).


Table 1: Properties of first- and second-order derivatives in determining edges in images.

First-order derivative:
- The magnitude can be used to determine the presence of an edge at a specific pixel in an image.
- Generally produces thicker edges than the second-order derivative.
- The sign is often used to determine whether an edge transition is from a dark to a light region, or vice versa.

Second-order derivative:
- Responds more strongly to fine detail, such as thin lines, isolated points, and noise.
- Produces a double-edge response at ramp and step intensity transitions.

The use of spatial filters is the most common method used for calculating the first- and second-order derivatives at every pixel location in an image. The method entails defining a discrete formulation of a first- or second-order derivative, and then constructing a filter mask based on the discrete formulation. The filter mask can be considered the digital representation or implementation of the derivative equation. These filter masks are often referred to as gradient operators, difference operators, edge operators, or edge detectors. Furthermore, it is important that for a given derivative mask, the coefficients of the mask sum to zero (refer to Figure 12 for an example of a Prewitt edge detector mask with coefficients summing to zero). This ensures that the response of the mask in areas of constant intensity will be zero. To recap on spatial filtering operations, the response of a given filter mask of size 3 x 3 applied over a region of pixels is presented in Equation 16. The response of the mask at a given pixel location is computed by calculating the sum of the products of the mask coefficients with the intensity values of the pixels in the region. This leads to the response E, with its expression given in Equation 16:

$$E = \sum_{k=1}^{9} w_k z_k$$

Equation 16: The response of a 3 x 3 spatial filter at a specified pixel location. This response is calculated for each pixel in the image.

In this equation, wk represents the mask coefficient of the kth mask element, and zk represents the intensity value of the kth pixel encompassed by the mask. In order to determine the strength and direction of an edge at location (x, y) of an image f, the image gradient is utilised. It is a vector denoted by ∇f and is defined in Equation 17:

$$\nabla f \equiv \operatorname{grad}(f) \equiv \begin{bmatrix} g_x \\ g_y \end{bmatrix} = \begin{bmatrix} \partial f / \partial x \\ \partial f / \partial y \end{bmatrix}$$

Equation 17: The definition of the image gradient.

The most significant property of the image gradient is that it points in the direction of the greatest rate of change of the function f at the specific point (x, y).


The magnitude or length of the image gradient is denoted by M(x, y) and given by Equation 18:

$$M(x, y) = \lVert \nabla f \rVert = \sqrt{g_x^2 + g_y^2}$$

Equation 18: The magnitude of the image gradient.

This value is equal to the rate of change in the direction of the gradient vector ∇f. An approximation to this equation is given in Equation 19, utilising the absolute values of gx and gy. This approximation is favoured because its computational burden is lower, whilst it preserves most changes in intensity levels.

$$M(x, y) \approx |g_x| + |g_y|$$

Equation 19: Approximation of the magnitude of the image gradient.

It is also important to note that gx, gy and M(x, y) are all images of the same size as the original image f(x, y), obtained by varying x and y across all pixel locations in f. M(x, y) is referred to as the gradient image or simply the gradient. Furthermore, the direction of the gradient vector is given by the angle α(x, y), as defined in Equation 20:

$$\alpha(x, y) = \tan^{-1}\!\left[\frac{g_y}{g_x}\right]$$

Equation 20: The direction of the gradient vector.

This angle is measured with respect to the x-axis and, as in the cases above, α(x, y) is also an image with size equal to that of the original image f. Finally, it must be noted that the direction of an edge at (x, y) is orthogonal to the direction of the gradient vector at (x, y).

In order to obtain the gradient, the partial derivatives at each point (x, y) in the image need to be calculated. As stated earlier, digital approximations of the partial derivatives over specified neighbourhoods around a point (x, y) are used. From Equation 13, the following two equations represent digital approximations of the partial derivatives gx and gy:

$$g_x = \frac{\partial f(x, y)}{\partial x} = f(x+1, y) - f(x, y)$$

Equation 21: Partial derivative at point (x, y) in terms of x.

$$g_y = \frac{\partial f(x, y)}{\partial y} = f(x, y+1) - f(x, y)$$

Equation 22: Partial derivative at point (x, y) in terms of y.

In terms of calculating edge direction, these equations are most effectively implemented using masks that are symmetric about the mask centre point. From this statement, it is clear that the smallest of these masks are of size 3 x 3. Due to the ability to capture information on opposite sides of the point of interest, these types of masks are able to determine edge direction more accurately. Considering a 3 x 3 region of an image such as the region displayed in Figure 11, the simplest digital approximations of the partial derivatives of Equation 21 and Equation 22 utilising masks with size 3 x 3, are given in Equation 23 and Equation 24.


z1 z2 z3

z4 z5 z6

z7 z8 z9

Figure 11: 3 x 3 region of an image, with intensity values represented by z.

$$g_x = \frac{\partial f}{\partial x} = (z_7 + z_8 + z_9) - (z_1 + z_2 + z_3)$$

Equation 23: Digital approximation of the partial derivative at point (x, y) in terms of x.

$$g_y = \frac{\partial f}{\partial y} = (z_3 + z_6 + z_9) - (z_1 + z_4 + z_7)$$

Equation 24: Digital approximation of the partial derivative at point (x, y) in terms of y.
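Evaluating Equations 23 and 24 on a concrete 3 x 3 region makes their behaviour visible. In this sketch (the intensity values are assumptions for the example) the region of Figure 11 contains a vertical step, darker on the left and brighter on the right:

```python
# Equations 23 and 24 evaluated on the 3 x 3 region of Figure 11, filled
# with a vertical intensity step (dark left columns, bright right column).

z1, z2, z3 = 10, 10, 90
z4, z5, z6 = 10, 10, 90
z7, z8, z9 = 10, 10, 90

gx = (z7 + z8 + z9) - (z1 + z2 + z3)   # Equation 23: bottom row minus top row
gy = (z3 + z6 + z9) - (z1 + z4 + z7)   # Equation 24: right column minus left column

print(gx, gy)  # the vertical edge produces no row difference, only a column one
```

The vertical edge leaves gx at zero while gy responds strongly, illustrating how the two partial derivatives separate edge orientations.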

Various masks have been developed based on the above approximations of the partial derivatives. Most notable are the Prewitt and Sobel operators. Examples of these masks are given in Figure 12 and Figure 13 respectively. The Sobel operator differs from the Prewitt operator by using a weight of two for the coefficients nearest the mask centre, as is visible in Figure 13. This weighting provides image smoothing (noise suppression), aiding accurate edge detection given the sensitivity of derivatives to image noise.

-1 -1 -1

0 0 0

1 1 1

Figure 12: Prewitt Operator.

-1 -2 -1

0 0 0

1 2 1

Figure 13: Sobel Operator.

In addition to smoothing, thresholding the gradient image can make edge detection more selective. A threshold value can be chosen that eliminates edges caused by noise, whilst highlighting the true edges and making them appear sharper. In most cases, smoothing is used in conjunction with thresholding to highlight the principal edges.
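The combination of a derivative mask and gradient thresholding can be sketched as follows. The synthetic image, the mask orientation convention, and the threshold value are assumptions made for this illustration, using the |gx| + |gy| approximation of Equation 19:

```python
# Sobel gradient magnitude followed by thresholding on a synthetic image:
# a 3 x 3 bright square (intensity 100) on a dark background.

img = [[0] * 7 for _ in range(7)]
for i in range(2, 5):
    for j in range(2, 5):
        img[i][j] = 100

SX = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # Sobel mask for g_x
SY = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # Sobel mask for g_y

def grad_mag(i, j):
    gx = sum(SX[a][b] * img[i - 1 + a][j - 1 + b]
             for a in range(3) for b in range(3))
    gy = sum(SY[a][b] * img[i - 1 + a][j - 1 + b]
             for a in range(3) for b in range(3))
    return abs(gx) + abs(gy)           # Equation 19 approximation

T = 150                                # gradient threshold (assumed)
edges = [[1 if grad_mag(i, j) > T else 0 for j in range(1, 6)]
         for i in range(1, 6)]
for row in edges:
    print(row)
```

The boundary of the square is flagged while its flat interior is not; lowering T admits weaker (possibly noisy) edges, which is why smoothing and thresholding are usually combined.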

The Canny Edge Detector

Along with utilising various masks to conduct edge detection (as described in the previous section), the Canny edge detector also incorporates edge characteristics and noise content in determining the edges present in an image. The basic steps of the Canny edge detector algorithm (Canny 1986) can be summarised in the following four points, as summarised by Gonzalez & Woods (2008):

1. Use a Gaussian filter to smooth the input image.
2. Compute the gradient magnitude and gradient angle images.
3. Apply nonmaxima suppression to the gradient magnitude image.
4. Detect and link edges using double thresholding and connectivity analysis.

Canny (1986) concluded that the first derivative of a Gaussian function is a good approximation of the optimal step edge detector. By smoothing an image with a circular 2-D Gaussian function, detection of the edges from the image gradient can be greatly enhanced. Let f(x, y) represent the input image, and let G(x, y) denote the Gaussian function used to smooth the input image, given by Equation 25:

$$G(x, y) = e^{-\frac{x^2 + y^2}{2\sigma^2}}$$

Equation 25: 2-D circular Gaussian function used to smooth the input image in the Canny edge detector algorithm.

The smoothed image, denoted fs(x, y), is obtained by convolving G and f, as given by Equation 26:

$$f_s(x, y) = G(x, y) \star f(x, y)$$

Equation 26: Convolving G with f to obtain a smoothed image fs.
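A discrete kernel sampled from the Gaussian of Equation 25 can be built directly; convolving the image with this kernel yields the smoothed image fs of Equation 26. The kernel size and σ below are assumptions made for the illustration:

```python
import math

# 5 x 5 kernel sampled from G(x, y) = exp(-(x^2 + y^2) / (2 sigma^2)),
# normalised so the coefficients sum to 1 (preserving average intensity).
sigma, size = 1.0, 5
half = size // 2
kernel = [[math.exp(-(x * x + y * y) / (2.0 * sigma ** 2))
           for x in range(-half, half + 1)]
          for y in range(-half, half + 1)]
total = sum(sum(row) for row in kernel)
kernel = [[v / total for v in row] for row in kernel]

print(kernel[half][half])   # centre coefficient is the largest
```

The coefficients fall off circularly from the centre, so convolving with this mask averages each pixel with its neighbours, weighted by distance.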

Following this operation, the gradient magnitude and gradient direction or angle are computed, as depicted in Equation 18 or Equation 19, and Equation 20. M(x, y) typically contains wide ridges located around local maxima. These ridges need to be thinned in order to obtain better defined edges. The Canny edge detector incorporates a technique called nonmaxima suppression to obtain these thin edges. One way to implement this technique is to specify a number of discrete orientations of the gradient vector. The method below demonstrates the implementation of this nonmaxima suppression procedure, as indicated by Gonzales & Woods (2008). For the purposes of demonstration, four discrete orientations (directions) have been specified for the gradient vector:

 In a 3 x 3 region of α(x, y), let d1, d2, d3, and d4 denote four basic directions of an edge, as determined by the gradient vector.

 At every point (x, y), find the direction dk that is closest to the value α(x, y).

 Define gN(x, y) as the nonmaxima-suppressed image.

 If M(x, y) is less than at least one of its two neighbouring values along the direction dk, set gN(x, y) to 0 (this is the suppression step); otherwise, set gN(x, y) = M(x, y).

The result of this procedure is that gN(x, y) consists only of thinned edges: its values are equal to M(x, y), with all the nonmaxima edge points suppressed.

The final step in the Canny edge detector method is to threshold gN(x, y) to reduce false edge points. To accomplish this, the Canny edge detector incorporates hysteresis thresholding, which utilises two threshold values, denoted TH (high threshold) and TL (low threshold). The ratio of the high to the low threshold should be about 2:1 or 3:1 (Canny 1986). This thresholding operation effectively creates two additional images, gNH(x, y) and gNL(x, y), both initially set to zero. Equation 27 and Equation 28 define these images:


$$g_{NH}(x, y) = g_N(x, y) \ge T_H$$

Equation 27: Definition of gNH(x, y).

$$g_{NL}(x, y) = g_N(x, y) \ge T_L$$

Equation 28: Definition of gNL(x, y).

The general outcome of the thresholding procedure is that gNH(x, y) will have fewer nonzero pixels than gNL(x, y). It is, however, also true that all the nonzero pixels in gNH(x, y) are contained in gNL(x, y), since gNL(x, y) is formed using a lower threshold. By subtracting gNH(x, y) from gNL(x, y) (Equation 29), all the nonzero pixels contained in gNH(x, y) can be eliminated from gNL(x, y):

$$g_{NL}(x, y) = g_{NL}(x, y) - g_{NH}(x, y)$$

Equation 29: Operation to eliminate from gNL(x, y) all the nonzero elements contained in gNH(x, y).

After this operation, all nonzero pixels in gNH(x, y) are referred to as strong edge pixels, while all nonzero pixels in gNL(x, y) are referred to as weak edge pixels. Strong edge pixels are considered valid and are marked as such in the output image. However, depending on the value of TH, the edges in gNH(x, y) could contain gaps. The following procedure describes how longer edges are formed:

 Locate the next unvisited edge pixel in gNH(x, y), and arbitrarily label this pixel p.

 Mark as valid edge pixels all the weak pixels in gNL(x, y) that are connected to the pixel p.

 If all the nonzero pixels in gNH(x, y) have been visited, set all pixels in gNL(x, y) that were not marked as valid edge pixels to zero. Otherwise, return to the first step and repeat the process.

If the above procedure has been executed successfully, the final output of the Canny edge detection algorithm is obtained by adding to gNH(x, y) all the nonzero pixels contained in gNL(x, y).
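The double-thresholding and connectivity analysis above can be sketched as follows; the small gradient image and the threshold values are assumptions made for this example, and 8-connectivity is used:

```python
# Hysteresis thresholding on an (assumed) nonmaxima-suppressed gradient
# image gN: strong pixels (>= TH) seed the edges, weak pixels (>= TL) are
# kept only if 8-connected to a strong pixel.

gN = [
    [0, 0, 0, 0, 0],
    [0, 9, 4, 3, 0],
    [0, 0, 0, 4, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 2, 0],
]
TH, TL = 8, 3                       # roughly a 3:1 high-to-low ratio

rows, cols = len(gN), len(gN[0])
strong = {(i, j) for i in range(rows) for j in range(cols) if gN[i][j] >= TH}
weak = {(i, j) for i in range(rows) for j in range(cols)
        if TL <= gN[i][j] < TH}

valid, stack = set(strong), list(strong)
while stack:                        # grow edges outward from strong pixels
    i, j = stack.pop()
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            n = (i + di, j + dj)
            if n in weak and n not in valid:
                valid.add(n)
                stack.append(n)

print(sorted(valid))
```

The weak chain at (1, 2), (1, 3), (2, 3) is linked back to the strong pixel at (1, 1) and kept, while the isolated response at (4, 3) falls below TL and is discarded.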

Thresholding

Thresholding is considered one of the simplest and most popular methods of conducting image segmentation (Gonzalez & Woods 2008). It is a segmentation technique that partitions images directly into sub-regions based on intensity values. For a grey-scale image f(x, y) with a histogram of the form displayed in Figure 9, the intensities in the image are grouped into two dominant groups: one with higher intensities, and one with lower intensities. For a value T selected somewhere between the intensities of the two main groups, it is possible to partition the image into two main regions, for example an object and a background. Equation 30 (Gonzalez & Woods 2008) describes this process of obtaining an output image g(x, y) for a specific threshold value T. The result, g(x, y), is a binary black-and-white image in which white pixels correspond to all points in f(x, y) having an intensity value larger than T (object points), and black pixels correspond to all points in f(x, y) that have a value smaller than or equal to T.

$$g(x, y) = \begin{cases} 1 & \text{if } f(x, y) > T \\ 0 & \text{if } f(x, y) \le T \end{cases}$$

Equation 30: Basic procedure for segmenting an image with a thresholding value T.
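Equation 30 amounts to a single comparison per pixel. A minimal sketch, with assumed toy intensity values:

```python
# Global thresholding (Equation 30): object pixels become 1, background 0.

f = [[12, 15, 200],
     [14, 210, 205],
     [11, 13, 198]]
T = 100   # assumed global threshold between the two intensity groups

g = [[1 if p > T else 0 for p in row] for row in f]
print(g)
```

The bright pixels (the assumed "object") map to 1 and the dark background to 0, producing the binary segmentation described above.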


T is said to be a global threshold if it stays constant over the entire image, and a variable threshold if it changes over the image. A variable threshold is called a local or regional threshold if its value at any point (x, y) depends on the properties of the neighbourhood of (x, y), such as the average intensity of that neighbourhood, and a dynamic or adaptive threshold if it depends on the spatial coordinates (x, y) themselves (Gonzalez & Woods 2008). Multiple thresholding describes thresholding applied to images that have multiple dominant intensity groups, for instance two types of light objects on a darker background. In this case two threshold values need to be determined, along with three intensity values for the regions in the segmented image.

It is clear from the previous discussions that the success of the thresholding operation depends on the breadth and depth of the separations between the intensity regions in the image histogram. Factors affecting these separations or valleys include image noise, the separation distance between peaks, the relative sizes of the objects pertaining to the regions, illumination uniformity, and the reflectance uniformity of the objects and background in the image.

Global thresholding using Otsu's Method

Gonzalez & Woods (2008) suggest that thresholding can be regarded as a statistical decision theory problem, aimed at minimising the average error made when dividing pixels into two or more groups. Finding a solution depends on the probability density function (PDF) of the intensities of each group and the probability of each group occurring. However, estimating these PDFs is difficult, complicating practical applications of such thresholding solutions. Otsu's Method is regarded as an optimal alternative that overcomes these difficulties. It accomplishes effective global thresholding by maximising the between-class variance, aiming to create classes that are distinct with regard to the intensity values of the pixels contained in them. In addition, it is a favoured technique due to its low computational effort, since it is based entirely on computations conducted on the histogram of an image. In order to describe Otsu's Method, the following variables are declared (as in previous examples):

 {0, 1, 2, …, L − 1}: the distinct intensity levels in an image

 ni: the number of pixels with intensity i

 MN = n0 + n1 + n2 + … + nL−1: the total number of pixels in an image of size M x N

 pi = ni/MN: the elements of the normalised histogram

From the definition of pi, Equation 31 follows:

$$\sum_{i=0}^{L-1} p_i = 1, \qquad p_i \ge 0$$

Equation 31: Cumulative sum of the intensity probabilities in an image.

By selecting a threshold T(k) = k for 0 < k < L − 1, an image can be thresholded into two classes C1 and C2 with intensity ranges [0, k] and [k + 1, L − 1] respectively. Following this definition, the probabilities that a pixel is assigned to class C1 or C2 are designated P1(k) and P2(k) respectively, and are given by the cumulative sums in Equation 32 and Equation 33:


$$P_1(k) = \sum_{i=0}^{k} p_i$$

Equation 32: Cumulative sum of intensities in C1.

$$P_2(k) = \sum_{i=k+1}^{L-1} p_i = 1 - P_1(k)$$

Equation 33: Cumulative sum of intensities in C2.

The mean intensity values m1 and m2 of the pixels assigned to C1 and C2 can be calculated using Equation 34; in the latter case (m2) the sum runs over the interval [k + 1, L − 1]:

$$m_1(k) = \sum_{i=0}^{k} i\,P(i \mid C_1) = \frac{1}{P_1(k)} \sum_{i=0}^{k} i\,p_i$$

$$m_2(k) = \sum_{i=k+1}^{L-1} i\,P(i \mid C_2) = \frac{1}{P_2(k)} \sum_{i=k+1}^{L-1} i\,p_i$$

Equation 34: Calculation of the mean intensity values of pixels in C1 and C2.

In Equation 34, P(i|Cj) is the conditional probability of intensity i, given that i belongs to Cj. From the result of Equation 34 it is possible to calculate the average intensity of the entire image, represented by mG and given in Equation 35:

$$m_G = \sum_{i=0}^{L-1} i\,p_i = P_1 m_1 + P_2 m_2$$

Equation 35: Average intensity of the entire image, mG.

The validity or effectiveness of the chosen threshold k can be tested with the dimensionless metric in Equation 36. In this equation, σB² is the between-class variance and σG² is referred to as the global variance, i.e. the variance of the intensities of all the elements in the image, calculated using Equation 37.

$$\eta = \frac{\sigma_B^2}{\sigma_G^2}$$

Equation 36: Threshold quality metric.

$$\sigma_G^2 = \sum_{i=0}^{L-1} (i - m_G)^2 \, p_i$$

Equation 37: Global variance calculation.

The term σB² in Equation 36 is referred to as the between-class variance, and can be calculated using Equation 38. In the following equations the argument k of the probabilities P(k) is left out for notational clarity.


$$\sigma_B^2 = P_1 (m_1 - m_G)^2 + P_2 (m_2 - m_G)^2$$

Equation 38: Calculation of between-class variance.

Alternatively,

$$\sigma_B^2 = P_1 P_2 (m_1 - m_2)^2 = \frac{(m_G P_1 - m)^2}{P_1 (1 - P_1)}$$

Equation 39: Alternative formulation of between-class variance.

Here m = m(k) denotes the cumulative mean intensity up to level k. Observing Equation 39, it can be concluded that as the difference between m1 and m2 increases, the magnitude of σB² also increases. Gonzalez & Woods (2008) therefore describe the between-class variance as a measure of separability between classes. Finally, the objective of Otsu's Method is to determine the value of k that maximises the between-class variance. This maximisation is expressed in Equation 40, with k* being the optimum threshold value:

$$\sigma_B^2(k^*) = \max_{0 \le k \le L-1} \sigma_B^2(k)$$

Equation 40: The maximisation of σB² with the optimum threshold, k*.

Finding the optimum threshold value k* therefore involves evaluating Equation 40 for all values of k for which the condition 0 < P1(k) < 1 holds, and selecting the value of k that produces the maximum. If the maximum is achieved for more than one value of k, the standard procedure is to take the average of all the values of k that produced a maximum value of σB². Once an optimal value of k has been obtained, the image can be segmented using Equation 30 with T = k*.
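Because Otsu's Method works entirely on the histogram, it can be sketched as a short loop that evaluates the between-class variance of Equation 39 for every candidate k. The toy bimodal histogram below is an assumption made for the example:

```python
# Otsu's method on a toy 8-level bimodal histogram: pick the k maximising
# the between-class variance (Equation 39).

hist = [10, 30, 40, 5, 5, 35, 45, 10]        # n_i, pixel counts per level
MN = sum(hist)
p = [n / MN for n in hist]                   # normalised histogram

mG = sum(i * pi for i, pi in enumerate(p))   # global mean (Equation 35)

best_k, best_var = None, -1.0
P1 = 0.0   # cumulative probability P1(k)
m = 0.0    # cumulative mean up to level k
for k in range(len(p) - 1):
    P1 += p[k]
    m += k * p[k]
    if 0.0 < P1 < 1.0:
        # sigma_B^2(k) = (mG * P1 - m)^2 / (P1 * (1 - P1))
        var_b = (mG * P1 - m) ** 2 / (P1 * (1.0 - P1))
        if var_b > best_var:
            best_k, best_var = k, var_b

print(best_k)   # optimum threshold k*, here in the histogram valley
```

The maximiser lands in the valley between the two histogram modes, which is exactly the separability property discussed above.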

2.2.4.5 Alternative method for particle extraction

Many of the methods used for object size estimation (circular, ellipse and polygon fitting), which are discussed in Section 3.1.1.4, are based on the circular Hough transform discussed below. This transform can be used to identify spherical objects and also to determine their size.

The method suggested by Hough (1962) was developed to identify straight lines in images. It does this by translating a line in the x-y plane, described by the general equation stated in Equation 41, to a line in the a-b plane described by Equation 42.

$$y = ax + b$$

Equation 41: General equation of a straight line in the x-y plane.

$$b = -xa + y$$

Equation 42: Equation of a straight line translated to the a-b plane (parameter space).

The a-b plane is also referred to as the parameter space. To understand the result of this translation, note first that within the x-y plane, a fixed pair (a, b) defines a single straight line consisting of all points (xk, yk) that satisfy it. Conversely, through the transformation above, a fixed point (xk, yk) defines a single line in the a-b plane. Otherwise put, all points (xk, yk) on the line yk = axk + b have associated lines in the a-b plane (parameter space), and all these lines intersect at the specific point (a’, b’). This concept is illustrated graphically in Figure 14.


Figure 14: x-y plane containing the points (xi, yi) and (xj, yj) (left); a-b plane or parameter space, in which the associated lines b = -xia + yi and b = -xja + yj intersect at (a’, b’) (right).

From the above illustration it can be seen that a line in the x-y plane can be found by identifying the points in the a-b plane where a large number of lines intersect. Furthermore, to avoid the situation where a approaches infinity (as in the case of vertical lines in the x-y plane), Hough (1962) suggested using the normal representation of a line (also called the Hesse normal form), given by Equation 43. Similarly, by changing the parameter space to the ρ-θ plane, lines in the x-y plane can then be identified by locating the points (ρ’, θ’) where many curves intersect.

$$x \cos\theta + y \sin\theta = \rho$$

Equation 43: Normal form equation for a line.

Gonzalez & Woods (2008) argue that this method, developed to identify straight lines in images, can be applied to any function of the form stated in Equation 44:

$$g(v, c) = 0$$

Equation 44: Function of a vector of coordinates v and a vector of coefficients c.

Here v is a vector of coordinates and c is a vector of coefficients. The difference between applications is the dimensionality of the parameter space: it increases as the number of coefficients increases. From the above explanation it can therefore be deduced that points lying on a circle, with the equation stated in Equation 45, will result in a 3-D parameter space and an accumulation of points of the form A(i, j, k).

$$(x - c_1)^2 + (y - c_2)^2 = c_3^2$$

Equation 45: Equation of a circle.
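The 3-D accumulator A(c1, c2, c3) implied by Equation 45 can be sketched directly. The synthetic edge pixels (a rasterised circle of radius 3 centred at (5, 5)), the search ranges, and the integer-cell resolution are all assumptions made for this illustration:

```python
import math
from collections import defaultdict

# Edge pixels of a circle with centre (5, 5) and radius 3, rasterised by
# rounding points sampled every 10 degrees.
edge_pixels = [(5 + round(3 * math.cos(math.radians(10 * t))),
                5 + round(3 * math.sin(math.radians(10 * t))))
               for t in range(36)]

# Every edge pixel votes for all (c1, c2, c3) cells consistent with it.
acc = defaultdict(int)
for (x, y) in edge_pixels:
    for c1 in range(11):
        for c2 in range(11):
            r = round(math.hypot(x - c1, y - c2))
            if 1 <= r <= 5:
                acc[(c1, c2, r)] += 1

best = max(acc, key=acc.get)
print(best)   # accumulator peak: recovered centre and radius
```

The peak of the accumulator recovers both the centre and the radius, which is the property exploited by the circle-fitting size estimation methods mentioned above.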

2.2.5 DIP used for size estimation of objects

As stated in the objectives for this study, the first main objective is to determine a mean size distribution of the FeCr pellets produced at a FeCr pelletizing plant. This section covers the main considerations proposed in literature that affect the accurate working of a PSES.

2.2.5.1 Fundamentals of a DIP PSE system

Wang (2008) suggests that image analysis (IA) systems aimed at analysing rock particles for size estimation purposes generally consist of three parts: image acquisition, image segmentation, and particle size and shape measurement. Wang (2008) describes these parts as follows:

 Image acquisition: Entails everything concerning the set-up of an application-dependent system to acquire good quality images.

 Image segmentation: The most important and difficult part, in which patterns are recognised in order to identify objects of interest in images. It depends on the characteristics of the images used by the IA system, and greatly affects the accuracy attainable by the size measurement algorithms.

 Particle size and shape measurement: Entails the algorithms that measure and present the desired attributes of the objects identified by the image segmentation algorithm. The most important requirement of these measurements is that they should be reproducible.

Considering the above descriptions, it can be reasoned that the most profound variation between different particle size estimation systems, within and outside of the mining industry, concerns system elements belonging to any of these three categories. It can also be reasoned that image segmentation is the most important of the three parts.

2.2.5.2 Basic considerations regarding input data

In order to effectively solve a given problem involving image processing and analysis, it is important to understand the typical input data to be manipulated and from which the desired image attributes will be extracted. The specific techniques applied in conducting image processing and analysis depend largely on the data available for processing and the desired output of the procedure. Table 2 contains a summary of the image characteristics that typically have the most significant effect on the successful execution of DIP procedures on digital images (Montenegro Rios et al. 2011; Maerz 1998; Thurley & Ng 2005).

Table 2: Significant image characteristics regarding DIP.

Image contrast:
 Image contrast plays an important role in detecting the edges of objects. A distinct and large intensity contrast is desired between objects of interest and both the image background and other objects in the image.

Objects of interest:
 Well-defined, distinct and similarly shaped objects improve the accuracy of sizing algorithms.
 Particles should not overlap, which could lead to false and incorrect identification and size estimation.

Image noise:
 Noise caused by imaging equipment.
 Adverse working environment (sprinklers, dust, vibrations, varying daylight illumination).
 Movement captured in the image.

Illumination:
 Direct, uniform, and high-intensity illumination of the imaging scene greatly aids the execution of DIP procedures on digital images.
 All shadows and reflections should be eliminated in images.
 The illumination source should possess colour rendering ability that increases contrast in the image scene.

Image definition:
 Higher image resolution leads to more distinctly definable edges, but increases processing time. An optimal compromise between these two factors should be investigated.

Typically, image contrast presents the biggest obstacle to overcome. Techniques that typically compensate for low image resolution and image noise include image smoothing filters, such as the median filter, and morphological image processing techniques. A further challenge is to clearly distinguish the objects of interest from the background, to aid in accurately identifying objects. Part of the discussion in Section 3.1.1.3 revolves around the typical algorithms used to mitigate the undesirable effects that the above image characteristics can have on DIP procedures.
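The median filter mentioned above can be sketched in a few lines. The following is a generic illustration using numpy, not the implementation developed in this study; the image values and window size are assumed for demonstration:

```python
import numpy as np

def median_filter_3x3(img):
    """Apply a 3x3 median filter; edges are handled by edge-replication padding."""
    padded = np.pad(img, 1, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            # Replace each pixel by the median of its 3x3 neighbourhood.
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out

# A uniform 5x5 region with two salt-and-pepper outliers.
img = np.full((5, 5), 100, dtype=np.uint8)
img[1, 1] = 255   # "salt"
img[3, 3] = 0     # "pepper"

filtered = median_filter_3x3(img)
print(filtered[1, 1], filtered[3, 3])  # 100 100 -- both outliers removed
```

Because the median of a neighbourhood ignores extreme values, isolated salt-and-pepper pixels are removed while step edges are largely preserved, which is why this filter is favoured over mean filtering for edge detection pipelines.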

2.3 PROCESS CONTROL FOR THE MINERAL BENEFICIATION ENVIRONMENT

2.3.1 Background on the need for process control in the mineral beneficiation environment

In production and processing environments, traditional control methods require manual samples of process outputs to be analysed in order to determine the level of operation of the system. These manual sampling methods are often time consuming, require interruptions in production, and are labour intensive (Andersson & Thurley 2011; Montenegro Rios et al. 2011; Wang 2008; Hamzeloo et al. 2014). In some cases, however, the major drawback of manual sampling is that the results are not true representations of current system performance, due to the feedback delay caused by the analysis procedure and the sampling time intervals (Gloy, 2014). To achieve effective process control, the need therefore exists for real-time system monitoring and analysis (Thurley 2013). Consequently, one of the main advantages of DIP that has been researched extensively is its application as a process monitoring mechanism (Wang 2008; Zhang et al. 2013; Thurley 2013). Prospects of improved efficiencies, quality and yields due to more efficient and effective process control keep researchers refining existing methods and developing new IA algorithms (Zhang et al. 2013). As a result, DIP's ability to analyse process outputs efficiently, accurately and in real time has improved over the last decade.

2.3.2 General control objectives and key considerations

In developing control strategies and systems, Marlin (2000) cites the following seven major categories of objectives and goals applicable to any control system:
1. Safety of people
2. Environmental protection
3. Equipment protection
4. Smooth operation and production rate
5. Product quality to meet required specifications
6. Profit of the operations
7. Monitoring and diagnosis, to be used by operators, supervisors and engineers

Machunter & Pax (2007) describe key considerations regarding plant circuits that should be taken into account when developing plant control strategies. A few relevant to the current study are listed below:


1. Understanding, describing and representing the circuit operations in a concise way.
2. Determining the maximum feed rate and tailings handling capacity. These are generally the main system capacity constraints.
3. To achieve automated control, available actuators and sensors for key parameters must be used appropriately.
4. Simulations of system operation must be done not only at the expected upper and lower bounds of the feed grade, but also at the average or design feed grade. In addition, it is advised to conduct multiple discrete simulations of possible operating scenarios of a circuit. Through this step, the controllability of the circuit and its sensitivity to various disturbances can be further explored.
5. Establishing an operating control philosophy by analysing data on circuit performance at critical operating points.

Bernard (1988) states that constructing a controller generally entails three main steps:
1. The plant process must be understood. For analytical approaches, this knowledge is contained in a mathematical model that describes the plant's dynamics. For rule-based approaches, it is contained in collections of rules that attempt to describe the operation of the plant, with an emphasis on conditional rules that have been obtained empirically. These rules prescribe a certain action in response to specific conditions (in the format of if-then statements).
2. Identification of key parameters. These include: the plant must be able to respond in the minimum amount of time; during transients, the maximum allowed stress levels should not be exceeded; and overshoots should be limited, typically to a certain percentage of the operating level.
3. Development of a control methodology.
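The rule-based approach described in step 1 can be sketched as a small set of if-then rules. The variable names, set-point and deadband below are purely illustrative assumptions made for this sketch, not values from Bernard (1988):

```python
# Hypothetical if-then rules for a pelletizing circuit: adjust the water
# addition according to the measured mean pellet size. The set-point and
# deadband values are illustrative assumptions.
def rule_based_action(mean_size_mm, setpoint_mm=12.5, deadband_mm=0.5):
    error = mean_size_mm - setpoint_mm
    if error > deadband_mm:          # pellets too large -> less water
        return "decrease water flow"
    if error < -deadband_mm:         # pellets too small -> more water
        return "increase water flow"
    return "hold"                    # within tolerance -> no action

print(rule_based_action(13.4))  # decrease water flow
print(rule_based_action(12.3))  # hold
```

Each rule is exactly the empirically obtained "if condition, then action" form that Bernard (1988) describes, which is why rule-based controllers can be built without an explicit dynamic model of the plant.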

2.3.3 Control methods and strategies

In terms of control strategies for industrial plants, single-loop PID feedback control is considered a good option. It is very versatile and easy to use, since the PID algorithm can be applied to nearly all processes without alteration. Considering the small amount of information required for design and tuning, its performance can be judged as good (Marlin 2000). However, its simplicity means that its performance is not always the best possible. Key disadvantages of single-loop feedback control include:

- The process output must change before the corrective feedback action begins.
- Inadequate feedback can result in control instability, and the PID controller does not necessarily provide the best possible control for all processes.
- The performance of a single-loop feedback controller can be poor for some combinations of disturbances and feedback dynamics.
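As a minimal illustration of the single-loop feedback baseline discussed above, the following sketch implements a discrete positional PID controller and applies it to an assumed first-order process. All gains and dynamics are illustrative values chosen for this sketch, not tuning rules from Marlin (2000):

```python
class PID:
    """Minimal discrete PID controller (positional form), for illustration only."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Assumed first-order process dy/dt = (-y + u)/tau, simulated with Euler steps.
pid = PID(kp=2.0, ki=1.0, kd=0.0, dt=0.1)
tau, dt, y = 1.0, 0.1, 0.0
for _ in range(200):                       # 20 s of simulated time
    u = pid.update(setpoint=1.0, measurement=y)
    y += dt * (-y + u) / tau
print(round(y, 3))  # settles close to the set-point of 1.0
```

Note that the controller only reacts after the measured output has already deviated from the set-point, which is exactly the first disadvantage listed above.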

Marlin (2000) states that any modifications to a feedback controller aimed at improving control performance need to utilise additional knowledge about the process dynamics. Additional process insight can be gained by using additional measurements of process outputs and inputs, by using explicit modelling in the control calculation, or by modifying the existing PID algorithm to obtain the new control objective. Therefore, a strategy involving a combination of a feedback controller and some form of control enhancement is typically implemented in industrial control applications. Marlin (2000), however, stresses the importance of retaining feedback control from the controlled variable, regardless of the complexity of the implemented control enhancements.

2.3.3.1 Control enhancements

In terms of control strategies for the pelletizing plant, and in line with the abovementioned need for control enhancement, cascade control and feedforward control have been identified as viable options for investigation. Both strategies are extensions of single-loop PID feedback control, and each offers unique advantages over the already stable, easy-to-use and well-performing single-loop PID feedback strategy. The two enhancements were chosen for their relative ease and wide spectrum of implementation, and for their effective and efficient control performance. Since single-loop PID feedback control can be applied to almost any kind of process, the two suggested enhancements are also very versatile in their application. Their individual strengths do, however, need to be taken into account in order to achieve the maximum control efficiency possible.

Cascade control

Algorithm background

The basic difference between cascade control and single-loop PID feedback control is that cascade control incorporates an additional, 'secondary' measured process variable to assist in control. This extra measurement is selected based on information about the most common or likely disturbances and about the process dynamic responses to them. Put otherwise, the chosen variable indicates the occurrence of an identified key disturbance that strongly degrades system performance. The control system is then tailored to the selected variable, aiming to control this secondary variable effectively and thus negate the negative effect of the disturbance. Proper insight into process operations and dynamics is necessary for proper cascade control design, and the choice of secondary control variable is critical to the success of the cascade controller. Refer to Figure 15 for a schematic representation of a cascade control system.

Figure 15: Schematic of a cascade control system

Advantages
- Control performance can be dramatically improved, through the reduction of both the maximum deviation and the integral error for disturbance responses.
- Like single-loop PID feedback control, it can be implemented on a wide variety of processes.
- It can be implemented with a wide variety of analogue and digital equipment.


A cascade controller essentially uses two feedback controllers, both of which can use the standard PID controller algorithm. The important feature, however, is the way these controllers are connected: the primary controller output is equal to the secondary controller set-point. This connection essentially makes the secondary control loop the manipulated variable of the primary controller. The net feedback result is therefore the same for single-loop control and cascade control; the difference in control performance is realised in the speed at which control happens. In choosing the secondary control variable, the objective is to control a loop that is a much faster process than the primary single-loop control process. The secondary loop typically has a shorter dead time than the primary single-loop control system, and with this shorter dead time the cascade structure can provide better control through faster correction of the process state.

Besides the faster control achieved due to the shorter dead time in the secondary control loop, the continued importance of the primary controller must be noted. The primary controller is still necessary for the following three reasons:

- The secondary variable may not be capable of eliminating the total effect of the disturbance.
- Other disturbances not affected by the cascade can still occur.
- The ability to change the primary set-point must be retained.
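The nesting described above, where the primary controller output serves as the secondary set-point and the inner loop is much faster, can be illustrated with a toy simulation. All process models and tuning values here are assumptions made for illustration only:

```python
def p_only(kp, setpoint, measurement):
    """Proportional-only controller for the fast secondary loop."""
    return kp * (setpoint - measurement)

# Assumed dynamics: a fast secondary process (tau = 0.1 s) nested inside a
# slow primary process (tau = 2 s); all tuning values are illustrative.
dt, y1, y2, sp1, integral = 0.01, 0.0, 0.0, 1.0, 0.0
for _ in range(3000):                        # 30 s of simulated time
    # Primary PI controller: its OUTPUT becomes the secondary set-point.
    e1 = sp1 - y1
    integral += e1 * dt
    sp2 = 1.0 * e1 + 0.5 * integral
    # Secondary P controller corrects the fast inner variable.
    u = p_only(kp=10.0, setpoint=sp2, measurement=y2)
    y2 += dt * (-y2 + u) / 0.1               # fast inner dynamics
    y1 += dt * (-y1 + y2) / 2.0              # slow outer dynamics
print(round(y1, 2))  # primary variable driven to its set-point of 1.0
```

Because the inner loop rejects disturbances on y2 before they propagate to the slow outer process, the cascade achieves the faster correction described in the text.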

Finally, it is also important to note that no additional modelling is necessary to implement the secondary controller; only the model of the process is used to tune the controllers. As a result, cascade controllers are not very sensitive to modelling errors, with the exception that large errors might cause oscillations in one of the controllers.

Design criteria

Marlin (2000) provides the following design criteria for the implementation of a cascade controller. Cascade control is desired when:
1. Single-loop control is not capable of providing satisfactory control performance.
2. A measured secondary variable is available.

The following criteria must be satisfied by a secondary variable:
1. The variable must indicate the occurrence of an important disturbance.
2. There must be a causal relationship between the manipulated variable and the secondary variable.
3. The dynamics of the secondary variable must be faster than the dynamics of the primary variable.

Performance measurement

In terms of the last criterion for the secondary variable, Marlin (2000) promotes a general guideline that the secondary loop must be three times as fast as the primary variable dynamics. This control performance is measured in terms of the relative primary-to-secondary dynamics, denoted by η.

Another way to investigate the control performance of a cascade controller is by use of the amplitude ratio, which is used to investigate the frequency response of a controller to a range of disturbance frequencies. This ratio gives the magnitude of the variation in the controlled variable for a unit sine input; the smaller the amplitude ratio for a disturbance response, the better the control performance of the controller.
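As a numerical illustration of the amplitude ratio, consider an assumed first-order process under proportional feedback; the closed-loop disturbance transfer function then follows directly, and its magnitude at each disturbance frequency is the amplitude ratio. The process model and gain are illustrative assumptions, not values from any cited study:

```python
def amplitude_ratio(omega, tau=1.0, kc=5.0):
    """|CV/D| for a unit sine disturbance acting on an assumed first-order
    process G(s) = 1/(tau*s + 1) under proportional feedback with gain kc.
    The closed-loop disturbance transfer function is 1/(tau*s + 1 + kc),
    evaluated here at s = j*omega."""
    return abs(1.0 / complex(1.0 + kc, tau * omega))

# Smaller AR at a given disturbance frequency means better disturbance rejection.
for w in (0.1, 1.0, 10.0):
    print(f"omega = {w:5.1f}  ->  AR = {amplitude_ratio(w):.3f}")
```

In this sketch, fast disturbances are attenuated more strongly than slow ones, and increasing the controller gain kc lowers the amplitude ratio across all frequencies.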

Cascade controller algorithm and implementation

A cascade controller can use the standard PID control algorithm. The secondary controller must have the proportional mode, but does not strictly require the integral mode. It is nevertheless favourable to include the integral mode, since a proportional-only controller has an offset that can propagate to the primary controller; additionally, the integral mode will aid control if the primary controller is off-line. Likewise, the secondary controller may have a derivative mode, but the dead time in the secondary loop is usually so small that a derivative mode is almost never justified. Since a cascade controller uses the standard PID controller algorithm, standard operator displays do not have to be altered much, making it easy to implement on existing operator displays. Additional hardware adds cost to the implementation; if existing systems cannot be used, the increase in performance, and the necessity of it, must be weighed against the additional input costs (Marlin 2000).

Example of cascade control implementation

Harayama & Uesugi (1992) implemented a two-camera cascade control system to control pellet size, the primary system control variable. The secondary control variable was the water flow rate, which quantified the amount of water added to the fines to produce the pellets. The cameras acted as the sensors of the two control variables. The first camera was used to monitor the pellet-forming procedure; utilising its feedback in the minor control loop, the response of the control system could be improved. The second camera was used to monitor the final pellet size, from which the pellet size index was deduced and utilised as the process value, the average pellet size produced. The hardware used was cutting edge for the time, and included two CCD cameras, a 16 MHz CPU and three series-docked transputers. This system could capture images, process and analyse them, and actuate control in 4 seconds. It achieved very good control results, with a pellet size variation of ±0.5 mm with respect to a desired 12.5 mm diameter pellet size. This was regarded as satisfactory control (Harayama & Uesugi 1992).

Feedforward control

Algorithm background

This type of control is based on the concept of an early warning system. Unlike feedback control, it does not use a measurement of a system output; instead, it uses a measurement of an input disturbance to the plant as additional information to adjust the manipulated variable before the controlled variable deviates from its set-point. Feedforward control is usually used in conjunction with feedback control, to retain the important features of feedback control in the overall control strategy.

In essence, feedforward control is based on cancelling the effect of a disturbance. To achieve this, the manipulated variable must be altered to compensate for the measured disturbance: the objective is to cause the exact mirror image of the disturbance effect, so that the sum of the two equals zero. A model is used to calculate the manipulated-variable adjustment corresponding to the disturbance, aiming to achieve the exact opposite effect on the system. Feedforward control is therefore dependent on models of both the disturbance and the process.

From the above explanation of the feedforward controller mechanics, it is clear that its effective implementation relies on the correct calculation of the mentioned models, and on the measured disturbance being the only one experienced by the process. The control calculation to be implemented must also be realisable. However, it is seldom the case that the first two criteria are satisfied, and it is therefore always advisable to combine feedforward control with feedback control to ensure zero steady-state offset. Refer to Figure 16 below for a block diagram of a feedforward-feedback control system.
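The mirror-image cancellation described above can be shown with a purely static example; the process and disturbance gains below are assumed for illustration only:

```python
# Perfect static feedforward cancels a measured disturbance at steady state.
# Assumed gains (illustrative): process gain Kp, disturbance gain Kd.
Kp, Kd = 2.0, 0.5

def feedforward(d_measured):
    """Manipulated-variable adjustment that mirrors the disturbance,
    so that Kp*u_ff + Kd*d = 0 at steady state."""
    return -(Kd / Kp) * d_measured

d = 4.0                       # measured disturbance
u_ff = feedforward(d)
effect = Kp * u_ff + Kd * d   # net steady-state effect on the output
print(u_ff, effect)  # -1.0 0.0
```

In practice the two gains are never known exactly, which is why the residual effect is non-zero and a feedback controller is still needed to remove it, as the text emphasises.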

Figure 16: Block diagram of a feedforward-feedback control system with sensors and final element

Design criteria

Marlin (2000) also provides design criteria for feedforward control. Feedforward control is desired when:
1. Feedback control is not capable of providing satisfactory control performance.
2. A measured feedforward variable is available.

The following criteria must be satisfied by a feedforward variable:
1. The feedforward variable must indicate the occurrence of an important disturbance.
2. There must not be a causal relationship between the manipulated variable and the feedforward variable.
3. The dynamics of the disturbance must not be significantly faster than the dynamics of the manipulated-output variable. This applies when feedback control is present.

The final feedforward variable criterion tries to prevent the case of a double correction for the same disturbance. If the feedback controller also senses the disturbance and tries to correct it, the double correction could cause overshoot in the controlled variable and thus poor control performance.

Performance measurement

The key factor in feedforward control performance is model accuracy. A measure of this performance is the integral of the absolute value of the error (IAE). Marlin (2000) shows that, even with feedforward modelling errors, the presence of the feedback controller's integral term keeps the IAE much lower than that achieved by feedback control alone. The use of feedforward control in conjunction with feedback control over a large range of process dynamics is therefore justified.
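The IAE measure named above can be approximated from sampled error data by a simple rectangular sum. The two error trajectories below are invented solely to illustrate the comparison:

```python
# Integral of the absolute error (IAE), approximated from sampled error
# values by a rectangular sum over the sampling interval dt.
def iae(errors, dt):
    return sum(abs(e) for e in errors) * dt

# Two hypothetical disturbance responses sampled at dt = 0.5 s:
feedback_only = [0.0, 0.8, 0.5, 0.3, 0.1, 0.0]
with_feedforward = [0.0, 0.2, 0.1, 0.0, 0.0, 0.0]
print(round(iae(feedback_only, 0.5), 2),
      round(iae(with_feedforward, 0.5), 2))  # 0.85 0.15
```

The smaller IAE of the second trajectory reflects the earlier correction that feedforward action provides, consistent with the comparison Marlin (2000) makes.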

Implementation

Once again, there are costs associated with the implementation of a feedforward controller, due to additional hardware and installation. Its implementation also decreases overall system reliability, since the system has more components that are capable of failing. However, the benefits it brings in terms of control performance need to be evaluated against the additional cost and the effort of generating the system and disturbance models.


Both of the above-mentioned control methods lend themselves to implementation in a wide variety of applications.

Examples and considerations of image analysis system control

Montenegro Rios et al. (2011) did not specifically develop a control mechanism. Instead, they set out to prove that their image processing and size distribution determination algorithm could be used in industry, testing the algorithms in both static and dynamic tests. In the static test they used a control algorithm to identify, from a series of images, the image that captured a pellet size distribution closest to a set-point size distribution defined a priori. Their system proved very successful, validating the use of the closed-loop configuration of the system for static process inspection or control tasks. As an example, the control task entails changing the rotational speed of the pelletizer discs or drums; this parameter is often used in controlling the size distributions of pellets in various metal ore concentration operations. From the dynamic test, they identified a few important aspects of a successful dynamic control system:

- Shutter speed plays an integral part in the correct identification of individual pellets, and consequently in the size distribution determination. A thousand frames per second was established as a good capture rate to photograph pellets moving at about 1.2 m/s.
- Spreading of the pellets (due to the dynamic nature of the test) meant that pellets could be identified much more easily than in the static case. Pellets travelling on a conveyor or free-falling are considerably more spread out and loosely packed than pellets accumulated on a pile, making segmentation of individual pellets much easier.
- Particle colour plays a critical role in the efficiency of the segmentation algorithm, since the algorithm depends heavily on the contrast between particles and shaded regions. This contrast is also higher for 'green pellets' than for cooked pellets, which become much darker than the grey of the 'green pellets'.
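The frame-rate figure quoted above can be sanity-checked with simple arithmetic; this back-of-the-envelope calculation is the present author's illustration, not taken from Montenegro Rios et al. (2011):

```python
# At 1.2 m/s, how far does a pellet travel between frames captured at
# 1000 frames per second? This bounds the apparent motion per frame.
velocity_m_s = 1.2
frame_rate_hz = 1000.0
travel_mm = velocity_m_s / frame_rate_hz * 1000.0   # metres -> millimetres
print(round(travel_mm, 3))  # about 1.2 mm of travel per frame interval
```

Roughly 1.2 mm of movement per frame is small relative to a pellet of around 12.5 mm diameter, which helps explain why that capture rate was found adequate for identifying individual pellets.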

Montenegro Rios et al. (2011) selected their equipment on the basis of cost, ease of programming, and adaptation to the hardware and software used for the image acquisition and processing step. An NI USB-6008 interface board was used, to which any USB or FireWire camera can be connected. One prerequisite was that the image data had to be converted to AVI format. Upon completion of the two control tests, Montenegro Rios et al. (2011) described the task of changing the rotational speed of the pelletizing discs as a possible candidate for a secondary manipulated variable in a cascade control system, with the size distribution of pellets in the ore pelletizing operation as the primary controlled variable. Furthermore, Montenegro Rios et al. (2011) state that the pellet velocity in the drums/discs is larger than the speed of the conveyor belt that receives the pellets from the pelletizer and conveys them to the subsequent process. When deciding on the appropriate location to install the machine vision system responsible for determining the pellet size distribution, the speed of the pellets in the image needs to be accounted for. Shutter speed, as stated above, will once again play a role, and so too will the ability of the image analysis software to deal with moving objects.


3 CRITICAL LITERATURE REVIEW: DIP FOR PSE IN INDUSTRY, PROCESS CONTROL

Chapter 3 is divided into the following two subsections:
1. A critical literature review on Particle Size Estimation Systems (PSES) that are based on digital image processing and digital image analysis (DIA). In order to obtain a holistic picture of the development of the field of Particle Size Estimation (PSE), systems based on other technologies will also be examined, but to a lesser extent.
2. A short analysis of the FeCr pelletizing process and the associated process variables. This will be done with reference to an existing pelletizing process on which the study is based.

3.1 SIGNIFICANT CONSIDERATIONS FOR DEVELOPING PSES THAT ARE BASED ON DIP AND DIA

3.1.1 Software Considerations

As applied in IP and IA procedures, all DIP and PSE techniques demonstrate unique strengths, weaknesses and efficiencies, which typically depend on the characteristics of the image and the area of application. Koh et al. (2009) therefore argue that no single technique suits all application purposes. Caution should, however, be applied so that the solution to a problem is not focused solely on the software part of the size estimation system. Particular attention should be given to investigating hardware solutions, which in many cases provide a much easier and more efficient solution to a specific problem (Koh et al. 2009).

3.1.1.1 Typical platforms used for algorithm development and data management

Amongst the software packages used to conduct DIP procedures, MATLAB® with its Image Processing Toolbox™ has been used extensively in various applications, amongst others by Dalen (2004) and Kinnunen & Mäkynen (2011). Free software packages such as OpenCV provide more than adequate alternatives, with much of the same functionality for basic operations. ImageJ, another open-source software package, can also provide the basic and specific functions that form part of a basic DIP procedure; Treffer et al. (2014), for instance, used this software for particle counting after initial processing by other specialised software packages. Examples also exist where captured and analysed data is consolidated and processed in standard spreadsheet software such as Microsoft Excel (Al-Thyabat & Miles 2006).

3.1.1.2 IP and IA algorithms used in DIP applications

With the knowledge of the previous section concerning DIP procedures, the algorithms used by various PSE systems were studied. This was done to obtain a holistic picture of the types of algorithms currently being used and investigated in the field of PSE. Table 3 summarises the basic algorithm structure of various studies that utilised DIP procedures for PSE purposes.

Table 3: Summary of the structure of typical algorithms used in DIP applications

Yao et al. (2010): Binarisation, edge detection, noise removal, size estimation.

Zelin et al. (2012): Grey-scale conversion, thresholding, binarisation, image filling, eliminating background noise (overlapping particles), image filtering to remove unwanted artefacts from previous processing steps, smoothing of identified objects, size estimation.

Lin & Miller (1993): Image capturing, segmentation, object detection, object measuring, output data analysis.

Mkwelo et al. (2005): Estimating particle position, bilateral filtering, segmentation, classification of regions, region size estimation.

W. X. Wang (2006): Edge detection, inspection of gradient image, thresholding, splitting of particles, size and shape measurement through best-fit rectangle.

Mukherjee et al. (2009): Gaussian smoothing and histogram equalisation; obtain feature maps through morphological erosion, dilation, opening, closing and bottom-hat operations on these images and a logistic regression technique; train algorithm with established feature maps; use maps to identify pixel clusters to produce a classifier-driven pre-processed image; segmentation through an energy function; blob boundary detection and blob measurement; classification concerning object area, solidity and eccentricity; second classification concerning maximum edge strength and region homogeneity to establish correctly identified particles; size estimation of regions classified as objects of interest.

Perez et al. (2011): Sub-divide images; extract colour and texture features; assign sub-images to different classes; determine seeds for watershed segmentation through Gaussian low-pass filtering, histogram equalisation and erosion, opening by dilation, closing by reconstruction and regional maxima morphological operations; segmentation using the watershed algorithm; classification correction that assumes all sub-images within a specific blob are of the same class; identify sub-images that form part of the same object through a voting process taking into account rock boundary information; size estimation of regions classified as objects of interest.

Thurley & Ng (2005); Thurley (2006): Application of morphological operations to determine the underlying particle shapes; median filtering; detecting the edges of occluded areas; morphological Laplacian to detect edges between overlapping pellets; morphological black top-hat for general edge detection; combining edge detection techniques; distance transform; calculating the local maxima; forming seed regions; watershed segmentation and filtering; size estimation of segmented regions.

Salinas et al. (2005): Grey-level images; low-pass filtering; three parallel processes developed and run separately to obtain three different binarised images referred to as masks, fused together using a logical 'and' operator; determining seeds; watershed algorithm; background corrections by smoothing an image of only the background present in the image plane, normalising it with the maximum image value, and then dividing rock images by this normalised image.

Montenegro Rios et al. (2011): Image acquisition and conversion to an 8-bit image; image gamma, brightness and contrast level adjustment; image segmentation using an appropriate threshold; circular object identification by morphological detection, discarding underlying partially occluded round particles; object size estimation using vector propagation and scanning of images using circular masks.

Banta et al. (2003): Image filtering; image binarisation (threshold applied); raster-type scanning for object edge and interior point detection; Sobel edge detection; morphological and geometric image processing methods to separate touching and overlapping pellets; measurement of the geometric parameters length and breadth, and radii from the centroid to edge points; volume modelling; evaluation of model accuracy by the coefficient of multiple determination R².

Kinnunen & Mäkynen (2011): Filtering with a Wiener equalising filter and a Gaussian filter; Canny edge detection; morphological closing to combine edges that are close enough; filtering to erase small erroneous edges; image inversion; watershed application; object labelling; convex hull and thinnest square calculations for size estimation; and finally calculation of object volume.

Liao & Tarng (2009): Automatic clustering method to threshold an image; extraction of connected components; computation of gravity points; determination of minimum and maximum radius (distance to the particle contour); relating measurements to the major and minor axis lengths of an ellipse; particle area calculated as equivalent to the area of the ellipse.

The summary in Table 3 provides baseline insight into the various processes that make up the algorithms of PSE systems utilising DIP procedures.
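Many of the pipelines in Table 3 share a common backbone of thresholding, connected-component labelling and per-blob size estimation. The toy sketch below illustrates that backbone only; it is not any cited author's algorithm, and the calibration factor is an assumed value:

```python
import math
from collections import deque

def label_and_size(binary, px_per_mm=4.0):
    """Toy backbone of the Table 3 pipelines: 4-connected blob labelling on a
    binarised image, then an equivalent circular diameter per blob (in mm).
    The px_per_mm calibration factor is an assumed value."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    diameters = []
    for i in range(h):
        for j in range(w):
            if binary[i][j] and not seen[i][j]:
                seen[i][j] = True
                area, queue = 0, deque([(i, j)])
                while queue:                                  # flood fill
                    y, x = queue.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                # circle area = pi*(d/2)^2  ->  d = 2*sqrt(area/pi), then to mm
                diameters.append(2.0 * math.sqrt(area / math.pi) / px_per_mm)
    return sorted(diameters)

binary = [[0, 1, 1, 0, 0],
          [0, 1, 1, 0, 1],
          [0, 0, 0, 0, 1],
          [0, 0, 0, 0, 0]]
print([round(d, 2) for d in label_and_size(binary)])  # two blob diameters, smallest first
```

The published algorithms replace each of these steps with far more robust machinery (watershed segmentation instead of flood fill, shape fitting instead of equivalent diameters), but the overall structure remains the same.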

3.1.1.3 Processes used for image pre-processing, image segmentation, object recognition and post-processing of images

Pre-processing procedures

Grey-scale transformation

Conversion from RGB to grey-scale images is considered one of the first steps in any DIP procedure (Zelin et al. 2012; Chen et al. 2014). During this conversion, an image is typically converted into 8-bit format (Treffer et al. 2014; Montenegro Rios et al. 2011; Salinas et al. 2005; Wang & Bergholm 2005). Both these steps aim to reduce the amount of information to be processed by the PSES, while retaining information vital for the execution of PSE, such as image intensity (luminance).
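The conversion described above can be sketched with commonly used luminance weights. The weights shown follow the ITU-R BT.601 convention; the cited studies do not necessarily use these exact values:

```python
import numpy as np

# Luminance-weighted RGB-to-grey conversion followed by the 8-bit
# quantisation mentioned above. A sketch, not the thesis implementation.
def rgb_to_grey8(rgb):
    weights = np.array([0.299, 0.587, 0.114])   # ITU-R BT.601 luma weights
    grey = rgb @ weights                        # weighted sum over colour axis
    return grey.astype(np.uint8)                # reduce to 8-bit intensity

pixel = np.array([[[255.0, 0.0, 0.0]]])         # a single pure-red pixel
print(rgb_to_grey8(pixel))  # [[76]]
```

Collapsing three colour channels into one 8-bit channel reduces the data volume by roughly a factor of three while preserving the intensity information that edge detection and thresholding rely on.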

Image noise reduction

The concept of image noise refers to anything in the data that is not required to produce the desired results and that, in most cases, complicates the process of producing them. With this concept in mind, the following types of noise present in images are discussed, along with various techniques to eliminate the noise or reduce its effect on the PSE process.

Image smoothing techniques
- Exponential high-pass filtering, using wave filtering of the Fourier transform coefficients in the frequency domain, to smooth and remove unwanted surface textures of identified objects while highlighting actual particle edges (Zelin et al. 2012).
- Smoothing the image through morphological opening (Zelin et al. 2012).
- Bilateral edge-preserving filter (Mkwelo et al. 2005).
- Anisotropic edge-preserving filter, though its iterative nature makes it less suitable for real-time processing (Mkwelo et al. 2005).
- Linear Gaussian smoothing filter to produce more uniform regions (Mkwelo et al. 2005; Kinnunen & Mäkynen 2011).
- Wiener equalising filter (Kinnunen & Mäkynen 2011).
- Elimination of salt-and-pepper noise (typically spots and line segments) and other unwanted and irrelevant information using a constructed normalisation surface, obtained by normalising each pixel value (Wang 2007). This normalisation surface acts as a threshold to retain only relevant image information, by setting the pixel values of irrelevant regions equal to those of the surrounding pixel regions.
- Region intensity level smoothing (homogenising) by eliminating intensity slopes in regions through normalisation (Wang 2007).
- Mean and median low-pass filtering for correction of distortion due to varying light and image noise (Al-Thyabat & Miles 2006; Thurley & Ng 2005; Thurley 2006; Salinas et al. 2005).
- Gaussian filters for image enhancement (Al-Thyabat & Miles 2006; Mukherjee et al. 2009; Perez et al. 2011).
- Top-hat and bottom-hat filters for image enhancement (Al-Thyabat & Miles 2006).
- De-speckling filter to reduce image noise (Mertens & Elsen 2006).
- Erosion and dilation operations for further noise reduction and for reducing the effect of small, unwanted image artefacts (Mertens & Elsen 2006).
- An alternating sequential low-pass filter incorporating morphological dilation and adaptive thresholding (Salinas et al. 2005).
- Morphological dilation and gradient operations (Salinas et al. 2005).

Miscellaneous

- Identifying and classifying unimportant background information in order to remove such regions from the rest of the PSE process (Zelin et al. 2012). For this purpose, the background can be identified using local darkness (local minima) identification, as in Wang (2007).
- Detecting under-lapping edges and removing under-lapping data (Thurley 2006)
- Laplacian and variance filtering to attenuate particle edges (Al-Thyabat & Miles 2006)

Contrast enhancement
Contrast enhancement refers to any procedure which aims to increase the contrast between particles, or between particles and the particle background (Kwan et al. 1999). If this procedure is successful, particle boundaries are obtained more easily, which can increase PSE accuracy. Contrast enhancement has been done using various methods. Mukherjee et al. (2009) applied local histogram equalisation to every 25×25 non-overlapping window of the original image. They found that local histogram-based equalisation outperforms any global contrast enhancement process due to the wide intensity variation in the images analysed in their study environment. Perez et al. (2011) also utilised a form of histogram equalisation and applied it successfully to enhance overall image contrast. By removing the image background with a morphological opening operation, Chen et al. (2014) allowed their PSES to focus on the contrast between the particles of interest. Other background corrections can be made by smoothing an image of only the background present in the image plane, normalising it with the maximum image value, and then dividing rock images by this normalised image (Salinas et al. 2005). If no information is available to specifically denote background, a fast Fourier filter is suggested, maintaining the DC term and eliminating some of the lower frequencies (Salinas et al. 2005).
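The local histogram equalisation of Mukherjee et al. (2009) can be sketched as follows. This is a simplified illustration on synthetic low-contrast data, not their exact implementation; only the 25×25 non-overlapping window size is taken from the text.

```python
import numpy as np

def equalise(tile):
    """Map grey levels through the normalised cumulative histogram."""
    hist, _ = np.histogram(tile, bins=256, range=(0, 256))
    cdf = hist.cumsum() / hist.sum()          # cumulative distribution in [0, 1]
    return (cdf[tile] * 255).astype(np.uint8)

def local_equalise(image, win=25):
    """Equalise each non-overlapping win x win window independently."""
    out = image.copy()
    for r in range(0, image.shape[0], win):
        for c in range(0, image.shape[1], win):
            out[r:r + win, c:c + win] = equalise(image[r:r + win, c:c + win])
    return out

# Synthetic low-contrast image: intensities confined to a narrow band
rng = np.random.default_rng(1)
img = rng.integers(100, 130, size=(50, 50)).astype(np.uint8)
eq = local_equalise(img)
print(int(img.max()) - int(img.min()), int(eq.max()) - int(eq.min()))
```

Equalisation stretches each window's narrow intensity band across the full grey-level range, which is why the local variant copes better with uneven illumination than a single global transform.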


As a final example, Montenegro Rios et al. (2011) used the commonly utilised method of adjusting image gamma, brightness and contrast.
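The gamma, brightness and contrast adjustment mentioned above amounts to a simple per-pixel mapping. A minimal sketch on normalised [0, 1] intensities, with illustrative parameter values only:

```python
import numpy as np

def adjust(image, gamma=1.0, gain=1.0, offset=0.0):
    """Gamma, contrast (gain) and brightness (offset) adjustment on [0, 1] data."""
    out = gain * (image ** gamma) + offset
    return np.clip(out, 0.0, 1.0)

img = np.linspace(0.0, 1.0, 5)
print(adjust(img, gamma=0.5))   # gamma < 1 brightens mid-tones
```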

Image segmentation
Image segmentation forms the backbone of the operations associated with most DIP PSE systems. This delineation of objects is complicated by various factors, including factors associated with the PSES hardware and software, operation of the PSES, and the inherent qualities of the particles being studied, such as shape and image texture features (Wang 2007). Many different procedures, methods and techniques of delineation have been studied since the dawn of DIP and computer vision. A few are highlighted in the sections to follow.

Identifying object position
In various cases, a segmentation algorithm requires a centre point to be established for a particle before segmentation can commence. Mkwelo et al. (2005) located the centroids of rocks by combining an adaptive thresholding algorithm, a distance transform and identification of peak positions of the distance transform. Raster-type scanning was used by Banta et al. (2003) to scan the whole image and mark object edges and interior points. Similarly, Liao & Tarng (2009) computed a gravity point, similar to the centroid of the particle, for each identified particle.
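The distance-transform approach of Mkwelo et al. (2005) can be sketched in a simplified form: threshold to a binary mask, compute the distance transform, and take its peak in each connected component as the particle centre. This uses synthetic rectangular "particles" and SciPy, and omits their adaptive thresholding step.

```python
import numpy as np
from scipy import ndimage

# Synthetic binary image with two disjoint particles (True = particle)
mask = np.zeros((40, 40), dtype=bool)
mask[5:15, 5:15] = True
mask[22:36, 20:34] = True

# Each foreground pixel gets its Euclidean distance to the nearest background pixel
dist = ndimage.distance_transform_edt(mask)

# The peak of the distance transform within each component approximates its centre
labels, n = ndimage.label(mask)
centres = ndimage.maximum_position(dist, labels, index=list(range(1, n + 1)))
print(n, centres)
```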

Edge detection
Edge detection is one of the main concepts utilised in the field of particle size estimation. Wang & Bergholm (2005) emphasise that the effectiveness of techniques aiming to produce an edge map of an image is application dependent. Shen et al. (2001) argue that these types of techniques are better suited than template matching techniques for the analysis of images of overlapping particles, greatly varying particle sizes, and greatly varying grey levels of particles. Table 4 provides a summary from the literature of various edge detection techniques that have been studied and utilised in PSE systems, and attempts to provide insight into the type of edge detection algorithms that have been and can be used in PSE applications.

Table 4: Summary of various edge detection techniques used in literature.

Koh et al. (2009): Illumination used to cast shadows around objects, with the shadows being used to define the positions of edges (edge map).
Zelin et al. (2012); Thurley & Ng (2005); Andersson & Thurley (2007): Morphological edge detection based on the morphological gradient.
Kinnunen & Mäkynen (2011): Canny edge detector applied.
W. X. Wang (2006); Chen et al. (2014); Banta et al. (2003): Sobel edge detection operator used.
Yao et al. (2010): Using concave corner points identified by a sector area, corner point pairs could be identified using shortest Euclidean distance measures between the points. Automatically inserted line segments then completed a particle edge map.
Thurley (2006): Morphological Laplacian to detect edges between overlapping pellets.
Thurley (2006): Morphological black top hat for general edge detection.
Thurley (2006): Counting neighbourhood pixels in sparse areas to establish edges (points with low counts of neighbourhood pixels in a specified area are typically edges), after which all edge points are combined.
Wang & Bergholm (2005): Custom one-pixel-wide edge detector that, like the watershed algorithm, detects edges by locating valley or ridge edges.
Wang (2007): The Roberts operator is used to obtain the edge map.

Malek et al. (2010) argue that boundary-based segmentation techniques, such as the Sobel, Prewitt and Canny edge detectors, as well as some morphological operations, are best suited to images that are simple in terms of intensity variations and free of noise. If edge maps are not as clear as desired, Wang (2007) established that particle edges can be strengthened by subtracting a gradient image, multiplied by a pre-defined factor, from the original.
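The Sobel operator used by several of the authors in Table 4 can be demonstrated with a minimal sketch, assuming SciPy is available and using a synthetic step edge rather than process imagery:

```python
import numpy as np
from scipy import ndimage

# Image with a single vertical step edge between two flat regions
img = np.zeros((20, 20))
img[:, 10:] = 1.0

# Sobel operator: smoothed derivative approximations in each direction
gx = ndimage.sobel(img, axis=1)   # horizontal gradient
gy = ndimage.sobel(img, axis=0)   # vertical gradient
magnitude = np.hypot(gx, gy)

# The gradient magnitude is non-zero only along the step (columns 9 and 10)
edge_cols = np.unique(np.nonzero(magnitude > magnitude.max() / 2)[1])
print(edge_cols)
```

Thresholding the gradient magnitude gives a raw edge map; in practice this is followed by thinning or linking steps such as those Canny's detector performs internally.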

Threshold application
A general practice during edge detection is to apply a threshold to the image before identifying object edges. By applying a threshold, different objects can be isolated from one another (Treffer et al. 2014; Montenegro Rios et al. 2011). Some authors have employed multistage thresholding (Salinas et al. 2005), while others applied adaptive thresholding (Mertens & Elsen 2006; Salinas et al. 2005). It is also common to utilise hybrid methods, with one of the earliest examples by Lin & Miller (1993). Table 5 provides a summary of techniques that various authors have employed to apply a threshold to an image.

Table 5: Various techniques utilised to apply thresholds to images.

Zelin et al. (2012); Lin & Miller (1993); Chen et al. (2014): A number of authors have underlined Otsu's global thresholding operation as well suited for image binarisation.
Lin & Miller (1993): Concept of maximum entropy between pixel values applied to determine threshold values.
W. X. Wang (2006): Repeatedly applied adaptive thresholding, based on local thresholding principles, to compensate for varying illumination and significant grey-level intensity differences. The algorithm assumes high contrast between background and particles, and small variation of intra-particle intensity levels.
Banta et al. (2003): Threshold obtained using an optimal value located on the intensity histogram of the image.
Liao & Tarng (2009): Automatic clustering method to obtain a thresholded image.
Dalen (2004): Threshold selection through manual grey-scale histogram inspection to determine intensities associated with particles. In the same study, Dalen (2004) ensured a uniformly coloured background was present, which allowed a constant background intensity map to be generated; this aided in isolating the background from any further analyses.
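Otsu's global thresholding, cited by several of the authors above, selects the grey level that maximises the between-class variance of the resulting foreground/background split. A self-contained sketch on a synthetic bimodal image:

```python
import numpy as np

def otsu_threshold(image):
    """Otsu's method: choose the grey level maximising between-class variance."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)
    w0 = np.cumsum(p)                  # background class weight
    w1 = 1.0 - w0                      # foreground class weight
    mu = np.cumsum(p * levels)         # cumulative mean
    mu_t = mu[-1]                      # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    return int(np.nanargmax(between))

# Bimodal test image: dark background around 50, bright "particles" around 200
rng = np.random.default_rng(2)
img = rng.normal(50, 10, (40, 40))
img[10:30, 10:30] = rng.normal(200, 10, (20, 20))
img = np.clip(img, 0, 255).astype(np.uint8)
t = otsu_threshold(img)
print(t)   # a value between the two intensity modes
```

Because the method assumes a bimodal histogram, it degrades under uneven illumination, which is exactly the case the adaptive (local) thresholding of W. X. Wang (2006) addresses.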

Region based segmentation
Region-based segmentation operates on sections of an image, called regions, in order to eliminate the shortcomings of global thresholds; specifically, small local minima and maxima being ignored because of the small contribution these areas make to the global minima and maxima. Furthermore, region-based segmentation approaches can overcome the problem of over- and under-segmentation produced by edge detectors, but only if the texture markings form regions that are small enough to be merged with other, larger surrounding regions (Wang 2007). To conduct region-based segmentation on an image, three general steps need to be followed: seed selection, region growing, and region split-and-merge. These three steps are discussed below.

Seed selection
In some applications of a PSES, the nature of the objects of interest allows for easy seed selection. As an example, the tops of bubbles, which have higher brightness levels than the rest of the bubble, are chosen as seeds by Guoying et al. (2011). Where such obvious solutions are not possible, alternatives can become quite complex. Perez et al. (2011) utilise morphological operations, including erosion, opening by dilation, closing by reconstruction and regional maxima, in order to establish coordinates for region seeds. In other work, Thurley & Ng (2005), Andersson & Thurley (2007) and Thurley (2006) use a distance transform and a calculation of local maxima to form seed regions.

Region growing
Rather than operating on the entire image, these algorithms typically segment the objects visible in an image individually (Guoying et al. 2011) or grow regions from seeds, also called seed filling (Lin & Miller 1993). The watershed algorithm is a very popular region growing algorithm due to its ability to provide continuously traced detected edges (Mkwelo et al. 2005; Al-Thyabat & Miles 2006; Perez et al. 2011; Thurley & Ng 2005; Andersson & Thurley 2007; Thurley 2006; Salinas et al. 2005; Kinnunen & Mäkynen 2011). In a flotation set-up, Guoying et al. (2011) used an algorithm similar to the watershed algorithm to approximate the boundaries of the seed regions by four consecutive curves, which are made to grow outward towards the bubble boundary until the conditions for a bubble boundary are met. In some instances, seed region growing techniques have been combined with boundary segmentation techniques (Malek et al. 2010).
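The seed-selection and region-growing steps can be combined in a marker-based watershed sketch. This assumes SciPy is available and uses its IFT watershed (`watershed_ift`, whose markers use negative values for background, per the SciPy documentation) as a stand-in for the watershed variants cited above; the touching-disc image and the 0.8 seed threshold are illustrative choices.

```python
import numpy as np
from scipy import ndimage

# Two touching discs: plain connected-component labelling would merge them
yy, xx = np.mgrid[:40, :60]
mask = ((yy - 20) ** 2 + (xx - 18) ** 2 < 144) | ((yy - 20) ** 2 + (xx - 40) ** 2 < 144)

# Seed selection: peaks of the distance transform give one seed region per disc
dist = ndimage.distance_transform_edt(mask)
seeds, n_seeds = ndimage.label(dist > 0.8 * dist.max())

# Region growing: flood the seeds outward over the inverted distance map
# until the growing regions meet (a marker-based watershed)
elevation = (dist.max() - dist).astype(np.uint16)
markers = seeds.astype(np.int16)
markers[~mask] = -1                    # background marker
segmented = ndimage.watershed_ift(elevation, markers)
print(n_seeds, np.unique(segmented[mask]))
```

The line where the two grown regions meet is the continuously traced edge that makes the watershed attractive for separating touching pellets.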

Region split-and-merge
Wang (2007) used a region split-and-merge approach consisting of the split-and-merge algorithm of Suk & Chung (1983), a two-part merging algorithm, a pre-defined average grey-level threshold to redefine background regions, and a pre-defined mineral shape parameter as decision criterion for merging regions belonging to a single object. In W. X. Wang (2006) a split algorithm incorporating polygonal approximation of particles was employed to delineate touching particles. The algorithm sorts concave points on the object boundary into different classes according to the angle and lengths of 2-vertex lines, and finally determines possible start and end points of a split, accepting a split path based on the outcome of a supplementary cost function.

Morphological image segmentation
Although not as popular as the other techniques listed, examples do exist of morphological operations being used for image segmentation. As such, Montenegro Rios et al. (2011) incorporated morphological operations to detect circular objects, while Dalen (2004) and Zelin et al. (2012) used morphological erosion to separate and identify individual objects.

Object recognition

Template matching
Identifying objects based on template matching has been successfully done by authors such as Shen et al. (2001) and Mukherjee et al. (2009). The method is, however, invalid for images of overlapping particles, greatly varying particle sizes, and greatly varying grey levels of particles, because the training data burden would become excessively large (Shen et al. 2001). The following authors were noted to have used template matching for particle identification:

- Morphological erosion, dilation, opening, closing and bottom-hat operations were used to create feature maps for training in Mukherjee et al. (2009). Objects were then matched to training data of local and global image characteristics learned using a logistic regression technique, via a so-called energy function that depends significantly on gradient differences (such as contours) and region homogeneity (Mukherjee et al. 2009).
- Image matching was done against a database of standard images in Murtagh et al. (2005). Feature selection and multiple discriminant analysis were used to support nearest-neighbour image querying. To obtain a set of features, they incorporated spatial and resolution aspects through a wavelet transform.

Feature extraction
Feature extraction is another method employed across the literature to identify objects of interest. For instance, Itoh et al. (2008) used statistical texture feature extraction to identify particles, and then applied regression models to estimate the size of the identified particles. Perez et al. (2011) specifically used colour and surface texture as the features selected and identified among objects, after which a voting process over all identified objects was used to increase the accuracy of object identification in an image.

Hybrid and other techniques
In almost all particle delineation algorithms, some combination of techniques is employed to accomplish the required delineation. This is mainly due to the large number of greatly differing applications of DIP PSE. Equally significant is the fact that each method has its strengths and weaknesses. Thus, since no single algorithm can be applied effectively to all problems, PSE algorithms are typically hybrids of various delineation techniques. Among the types of algorithms not yet discussed is a genetic algorithm employed by Wang et al. (2006) to segment images. The genetic algorithm was capable of adapting to changing operating environment characteristics. It allocates to each pixel a label describing the segmented cluster to which the pixel belongs, and a label describing its position in the image. A similarity function relates a pixel to a given cluster, and iteration continues until a halting condition is reached, based on the convergence of the total variance calculated from internal cluster variances. Other algorithms include that used by Banta et al. (2003) to separate touching and overlapping pellets, where a combination of morphological and geometric image processing methods was used. Wang (2007) proved that, in order to delineate particles, analysis of void spaces and cusp-like features, representing background and boundaries between particles, can be incorporated.

Post processing procedures
Post processing typically aims at the enhancement (where possible) and labelling of identified objects after successful particle identification and delineation, and before size estimation commences (Kinnunen & Mäkynen 2011). It may also involve correction of noise or errors introduced by the foregoing procedures.

Image enhancement and correction
Post processing image enhancement can entail any or all of the following actions:

- Separating touching particle grains through erosion and dilation operations (Mertens & Elsen 2006)
- Filling 'holes' or missing parts in identified objects (Zelin et al. 2012)
- Post-segmentation filtering to enhance detected objects (Thurley & Ng 2005) and remove unwanted small objects (Thurley 2006)
- Combining edges that are close enough through morphological closing (Kinnunen & Mäkynen 2011)
- Restoring original object size and boundaries through morphological dilation (van Dalen 2004), and linking incomplete boundaries by the same type of procedure (Koh et al. 2009)
- Erasing small erroneous edges through the use of an appropriate filter (Kinnunen & Mäkynen 2011)
- Correcting for the loss of information due to IP procedures through a magnification factor (Zelin et al. 2012)

3.1.1.4 Processes used for particle description and size estimation

Particle descriptors and size estimation
Wang (2006) states that any method aiming to determine aggregate size should meet the following criteria:

- It should be rotationally invariant: the measured size of a particle should be unique, regardless of the particle's orientation in the image.
- The algorithm used to conduct the size measurement should be as simple as possible. Many algorithms are not suited to online particle size measurement because their complexity requires too much computing time. For instance, Fourier and fractal measurement methods are good for characterising single particles, but are not suited to statistical analysis of multiple particles.

Banta et al. (2003) agree with these statements and reiterate what is stated in Wang et al. (2006): particle descriptors or features should be chosen that are translation-, rotation- and scale-invariant, and reproducible. Wang (2006) concludes that a measurement will be stable and fast if the measurement method meets these two criteria, and that size and shape will then also be reproducible. Wang (2006) further concludes that the multiple Feret method best meets these two criteria.


Dimensions

Rectangle fitting
This method is one of the earliest and most commonly used for estimating the size of objects of interest. Typically, a bounding rectangle or bounding box (Feret box) is drawn around each segmented particle, from which length and breadth measurements of the particle can be deduced (Mora et al. 1998; Salinas et al. 2005; Wang 2006b). In this case, the length and breadth of the bounding rectangle are parallel to the length and breadth of the image plane in which the object of interest is captured and displayed. In some instances, the long and short axes of the bounding boxes were used to compute the screen size of the rock fragments. Wang (2006b) combines this measurement with an evaluation of the particle's aspect ratio in order to estimate a size measurement for a given object. An alteration of the abovementioned method is the best-fit rectangle. It aims to eliminate the error associated with fitting a fixed-orientation rectangle to an object whose apparent length and breadth vary as it is rotated in the image plane. The best-fit rectangle is therefore applied after a rotationally invariant orientation for the particle is determined using a least-second-moment method (Wang & Li 2006; Wang et al. 2006; Thurley & Andersson 2008). This enables a much truer length and width of the particle to be established. The result of this measurement provides a maximum and minimum Feret diameter as size estimation (van Dalen 2004; Presles et al. 2012). Some authors describe a method that establishes mean Feret diameters (Al-Thyabat & Miles 2006; Al-Thyabat et al. 2007; Wang 2006b). Mean diameters are obtained by averaging a specific number of Feret box dimensions; Al-Thyabat et al. (2007), for instance, average 16 Feret diameter measurements to obtain a mean Feret diameter.
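The Feret (caliper) diameter at multiple orientations underlies both the best-fit rectangle and the mean Feret diameter. A minimal sketch, using a synthetic rectangular particle and the 16-orientation averaging mentioned for Al-Thyabat et al. (2007); this is an illustration of the measure, not any author's exact implementation:

```python
import numpy as np

def feret_diameters(points, n_angles=16):
    """Caliper diameters of a point set at n_angles orientations over 180 degrees."""
    diameters = []
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        direction = np.array([np.cos(theta), np.sin(theta)])
        projection = points @ direction        # project all points onto this axis
        diameters.append(projection.max() - projection.min())
    return np.array(diameters)

# Pixel coordinates of a 20 x 10 axis-aligned rectangular "particle"
ys, xs = np.mgrid[0:10, 0:20]
pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)

d = feret_diameters(pts)
print(d.min(), d.max(), d.mean())   # min ~ breadth, max ~ near the diagonal
```

The minimum and maximum of `d` correspond to the minimum and maximum Feret diameters, while `d.mean()` is the mean Feret diameter used for size estimation.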

Ellipse and circle fitting
As an alternative to the rectangle fitting discussed previously, analytical solutions exist through the fitting of regular shape models to the particles, such as spheres, cubes and ellipses (Al-Thyabat & Miles 2006; Outal et al. 2008). By applying rotation-invariant anisotropic morphological filtering and ellipse fitting using the second moment of inertia, the maximum and minimum diameters of the best-fit ellipse are used to represent the length and width of a particle (Treffer et al. 2014; van Dalen 2004). Presles et al. (2012) used both the inscribed and circumscribed circle radii to estimate object size. However, comparing physical measurements with estimated measurements, Dalen (2004) found that particle widths were overestimated; since the error was constant, a correction factor was employed. A related method to ellipse fitting aims to establish the diameter of the equivalent projected-area circle (equivalent area diameter), based on the number of pixels in each enclosed region. Like the best-fit rectangle and ellipse, this method has proved very effective and is consequently very popular (Koh et al. 2009; Treffer et al. 2014; Al-Thyabat & Miles 2006; Al-Thyabat et al. 2007; Banta et al. 2003; Wang & Bergholm 2005; Kinnunen & Mäkynen 2011; Wang 2006b).
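The equivalent area diameter follows directly from the pixel count of a segmented region. A minimal sketch on a synthetic disc:

```python
import numpy as np

def equivalent_diameter(mask):
    """Diameter of a circle with the same projected (pixel-count) area as the mask."""
    area = np.count_nonzero(mask)           # projected area in pixels
    return np.sqrt(4.0 * area / np.pi)      # d such that pi * (d/2)^2 = area

# A filled disc of radius 10 px: equivalent diameter should be close to 20 px
yy, xx = np.mgrid[-15:16, -15:16]
disc = yy ** 2 + xx ** 2 <= 100
print(equivalent_diameter(disc))
```

Converting this pixel diameter into millimetres requires the pixel-to-mm calibration ratio referred to elsewhere in this work.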

Polygon fitting
An expansion of the concept of rectangle fitting is polygon fitting. Presles et al. (2012) employed morphological or shape parameters (aspect ratios), determined from normalised ratios of geometrical (size) measurements, to fit 2D shape diagrams of known polygons to identified particles (compact convex sets in a Euclidean 2D plane). These shape diagrams were then used to model 3D objects for estimation of size and mass distributions. Similarly, Kinnunen & Mäkynen (2011) used convex hull and thinnest-square calculations for size estimation, the convex hull being the smallest convex region which contains the object. W. X. Wang (2006) employed polygonal approximation of particles by sorting concave points on object boundaries into classes according to the angle and lengths of 2-vertex lines. Wang et al. (2006) found that an eight-line best-fit polygon approximation was best suited to avoid small-scale fluctuations along particle boundaries.

Other measurements of length and width
Chen et al. (2014) used the 'tracks' of free-falling pellets to estimate particle widths. In the same work, Chen et al. (2014) employed another method whereby widths were estimated along the horizontal image direction: an algorithm scans grey-scale filtered images row by row to obtain curves representing the pellets in the images, and Gaussian process regression (GPR) models are applied to the incomplete curves of overlapping pellets in order to estimate their widths.

Other measures of size
The concept of using a radius to represent particle size was applied by Banta et al. (2003) and Liao & Tarng (2009). The former calculated a set of radii drawn from the centroid of each particle to a subset of edge points. The latter established a minimum and maximum radius (distance to the particle contour) from which particle size was then estimated. Zelin et al. (2012) first determined the area of a particle of interest, along with its perimeter; based on the relation between these two characteristics, an equivalent diameter was established and used for particle size estimation. Wang & Bergholm (2005) used an image edge density measure to estimate particle size. The measure related particle boundary to image area (defined as particle inter-area and the area of void spaces). Approximate diameters of aggregate particles were then used as the specific measure for particle size representation. In terms of overlapped particles, Shen et al. (2001) devised a method to complete partially obscured particles in order to establish an accurate size estimation of a sample. In their study, the sizes of overlapped particles are estimated from single arcs from the edge detection phase, which complete the 'incomplete' particles captured in an image.

Area
Various authors calculated an area, also referred to as a projected area and defined as the count of pixels located within an object's boundary, to estimate particle size (Mora et al. 1998; van Dalen 2004; Koh et al. 2009; Zelin et al. 2012; Presles et al. 2012; Treffer et al. 2014; Lin & Miller 1993; Wang & Li 2006; Mukherjee et al. 2009; Kinnunen & Mäkynen 2011). Liao & Tarng (2009) also calculated particle area, but used the area of an ellipse whose major and minor axis lengths equal the minimum and maximum radii (distance to the particle contour) of the particle.

Volume and mass
As early as Lin & Miller (1993), efforts were made to translate particle descriptors into sieve sizes of particles. In their work, Lin & Miller (1993) developed a kernel function to transform estimated chord-length distributions into sieve-size distributions. PSDs based on mass and volume have been calculated in various ways from 2D area projections of particles. Mass fractions can be estimated by making assumptions on particle thickness and a measure of flakiness (Kwan et al. 1999), or by multiplying screen size by the pixel area of objects (Salinas et al. 2005). Outal et al. (2004) developed a transfer function, established on a calibration sample under controlled conditions, to relate 2D projected areas of aggregates obtained through image analysis to volume proportions measured by sieving. Outal et al. (2008) experimentally matched the two histograms of retained areas and retained volumes, calculated from images and screening respectively, and established two laws for reconstructing a volume-based PSD for crushed aggregate. Banta et al. (2003) established a functional relationship between a particle's volume and its 2D measured features to model particle volume for particle gradation. Salinas et al. (2005) derived particle volume by multiplying the estimated screen size with the basin area determined by the number of pixels contained within an imaged particle. Kinnunen & Mäkynen (2011) estimated volume from object diameter, surface area and shape values, assuming constant gravel density. Similarly, Treffer et al. (2014) estimated particle volume from the maximum and minimum diameters of the best-fit ellipse fitted to particles.
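As a simplified illustration of the 2D-to-volume step, a near-spherical pellet's volume and mass can be estimated from its projected area under a constant-density assumption (in the spirit of Kinnunen & Mäkynen 2011). The area, calibration ratio and density values below are hypothetical, chosen only to make the sketch runnable:

```python
import numpy as np

def pellet_volume_mm3(area_px, mm_per_px):
    """Volume from a 2D projected area, assuming a near-spherical pellet."""
    d_mm = np.sqrt(4.0 * area_px / np.pi) * mm_per_px   # equivalent-area diameter
    return (np.pi / 6.0) * d_mm ** 3                    # sphere volume pi * d^3 / 6

# Illustrative values only: 317 px projected area at 0.5 mm per pixel
DENSITY_G_PER_MM3 = 0.0035   # hypothetical green-pellet density
v = pellet_volume_mm3(area_px=317, mm_per_px=0.5)
print(round(v, 1), round(v * DENSITY_G_PER_MM3, 2))    # volume (mm^3), mass (g)
```

Summing such per-pellet masses over a size class is one way to convert a number-based PSD into the mass-based PSD that sieve analysis reports.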

Other features
As stated previously, a measurement of the perimeter of an object can be used in various size and area calculations, as witnessed in various studies (van Dalen 2004; Zelin et al. 2012; Presles et al. 2012; Lin & Miller 1993; Wang & Li 2006; Banta et al. 2003). Mukherjee et al. (2009) focused specifically on determining the exact object boundary, as opposed to imposing a polygon shape onto an identified object, in order to estimate its size. In establishing a PSD, the particle count in a sample is also a necessary characteristic of an image space to calculate (van Dalen 2004). Treffer et al. (2014) conducted a novel particle count by calculating the area fraction of identified particles relative to the total image area, referred to as the amount of particles captured in the optical sampling space. Other features calculated include:

- Particle orientation (Treffer et al. 2014)
- Particle position: estimated from circle arcs using the least-squares method (Shen et al. 2001), and from a hybrid method involving a distance transform (Mkwelo et al. 2005)
- Particle velocity (Shen et al. 2001)
- Grey-level range of identified objects (Wang & Li 2006)

Classifiers
Classifiers have been defined in many studies to describe specific aspects of interest of identified particles. In other instances, classifiers have been used as a criterion to accept or reject particles as adequately identified and/or delineated (isolated).

Dimensional
An aspect ratio (width to length/height) has been used to describe identified particles (van Dalen 2004) and to aid in calculating particle size (Wang 2006b), while elongation (length divided by width) is of specific importance in the agglomerate industry (Wang et al. 2006).

Shape
Shape values have been widely used to distinguish different particles for size estimation (Kinnunen & Mäkynen 2011; Wang & Li 2006). Several classifiers have been defined to describe the extent to which a particle represents a circle. Mukherjee et al. (2009) define eccentricity as a measure of how much an object deviates from being circular; a value of 0 corresponds to a circle and 1 to a straight line. Heydari et al. (2013) define a circularity classifier as a measure of how circular an identified object is, with a value of 1 corresponding to a perfect circle and 0 to a straight line. In their study, this classifier is used as a criterion: particle elements with a circularity value above 0.8 are classified as single elements. Wang (2007) and Mukherjee et al. (2009) define the convexity of identified particles and apply it as a shape constraint. Both also present a solidity or area ratio as the proportion of pixels in the convex hull that are also in the object (object area divided by convex hull or bounding polygon area). Furthermore, rectangularity (particle area divided by bounding rectangle area) is defined by Wang et al. (2006); it can be used to distinguish between two similar particles, since two particles of the same size and elongation can have different measures of rectangularity. In their work, Mukherjee et al. (2009) defined a regression-based classifier from some of the classifiers mentioned above. This classifier evaluated an identified object's area, solidity and eccentricity against possible ranges for these shape parameters, with the ranges obtained by training the classifier. This reduced processing time and the need for user input in classifying objects as correctly identified particles of interest. Objects with a solidity measure higher than 0.8 and eccentricity between 0.06 and 0.1 were considered highly likely to be correctly identified particles of interest (positive identification).

Particles deemed correctly identified underwent a further and final evaluation, after which an object was classified as a positively identified particle of interest. This final classification incorporated two measures: maximum edge strength and region homogeneity. As another example of a classifier used as an acceptance criterion, Wang & Bergholm (2005) defined an average shape factor and variance of size to assist in size estimation: if the variation of size was not too large, particle size could be estimated from dimensional measurements.
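The circularity and solidity classifiers can be sketched directly from their definitions. The circularity convention below (4*pi*A / P^2, with 1 for a perfect circle) matches Heydari et al. (2013); the elongated-shape perimeter and areas are illustrative values:

```python
import numpy as np

def circularity(area, perimeter):
    """4*pi*A / P^2: exactly 1 for a circle, approaching 0 for elongated shapes."""
    return 4.0 * np.pi * area / perimeter ** 2

def solidity(area, convex_area):
    """Object area divided by convex-hull area (Wang 2007; Mukherjee et al. 2009)."""
    return area / convex_area

# A circle of radius r has area pi*r^2 and perimeter 2*pi*r: circularity is 1
r = 10.0
print(circularity(np.pi * r ** 2, 2.0 * np.pi * r))

# A thin 1 x 20 sliver falls far below the 0.8 single-pellet criterion
print(circularity(20.0, 42.0))
```

Such scalar classifiers are cheap to compute per region, which is why they are attractive as accept/reject criteria in a real-time PSES.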

Region classification: OOIs, non-OOIs and distinguishing different OOIs
Table 6 summarises classifiers that aim to distinguish particles from each other, from the background, and from regions that have been erroneously identified as particles of interest.

Table 6: Region classifiers used in literature.

Mukherjee et al. (2009): Two-class classifier to allocate pixels to either a particle of interest or to background.
Mkwelo et al. (2005): Classification incorporating a proximity-based classifier based on rock shape, edge strength and region intensity characteristics, in order to distinguish particle regions from non-particle regions. Subsets of features are selected using Thornton's separability index.
Mkwelo et al. (2005): K-nearest neighbour classification of identified regions.
Dalen (2004): Non-linear classifiers, such as the k-nearest neighbours classifier, used to distinguish the different types of particles estimated.
Perez et al. (2011): Sub-dividing images and classifying these image parts according to a set of colour and texture features. To improve classification of the blobs, a classification correction is applied to each blob, which assumes that all the sub-images within a specific blob are of the same class.
Perez et al. (2011): Rock boundary information is taken into account during a voting process to identify sub-images that form part of the same object.
Wang (2007): Pre-defined average grey-level threshold to identify background regions.
Wang (2007): Pre-defined particle shape parameters to merge particle regions, including common boundary length, grey-level difference of adjacent regions, and junction point analysis of merged particles.
Thurley & Andersson (2008): Two-feature classification method to distinguish partially occluded green iron ore pellets from completely visible pellets.
Heydari et al. (2013): Classifiers characterising identified particles as constituting single, double, triple or more pellets. Metrics used to distinguish multiple pellet elements from one another: the number of peaks in an image intensity histogram; the summation of image pixels in the same column after rotation of a particle element; the particle element perimeter and area relation; and the width-to-length ratio of a surrounding rectangle.

Representation of particle size estimation results
Treffer et al. (2014) proposed several characteristics that make PSD data more effective when presented after analysis:

- Only a single parameter describing a PSD should be extracted and monitored, to reduce the amount of data being managed by the size estimation system.
- Presenting PSD parameters against time should give an operator a good idea of overall process performance and quality of products produced.
- The representation of the time evolution of the parameter used to represent particle size should be real-time (continuously updated).
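As an illustration of the single-parameter approach above, a median size (d50) can be interpolated from a cumulative PSD and logged against time. A minimal sketch in Python; the function name and the sample distribution are hypothetical:

```python
import numpy as np

def d50_from_psd(sieve_sizes_mm, cumulative_passing):
    """Interpolate the median size (d50) from a cumulative PSD.

    sieve_sizes_mm: ascending sieve sizes (mm).
    cumulative_passing: fraction (0-1) of material passing each sieve.
    """
    return float(np.interp(0.5, cumulative_passing, sieve_sizes_mm))

# Hypothetical cumulative distribution for a pellet sample
sizes = [6.0, 9.0, 12.0, 15.0, 18.0]
passing = [0.05, 0.30, 0.70, 0.95, 1.00]
print(d50_from_psd(sizes, passing))  # median pellet size in mm -> 10.5
```

Monitoring only this one number per analysed frame, plotted against time, follows the recommendation above while keeping the data volume small.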

3.1.2 Hardware considerations

3.1.2.1 Hardware components of a particle size estimation system

Image Capturing Devices
Many studies have used a variety of off-the-shelf, charge-coupled device (CCD) video and still cameras in size estimation systems. These systems use CCD devices to capture digital images of the objects under consideration, after which object identification and size estimation techniques are conducted on the images. The successful use of these devices suggests that specialised alternatives are not essential for experimental purposes (Kwan et al. 1999; Mora et al. 1998; Treffer et al. 2014; Lin & Miller 1993; Wang & Li 2006; Al-Thyabat & Miles 2006; Thurley & Ng 2005; Salinas et al. 2005; Kinnunen & Mäkynen 2011; Liao & Tarng 2009). However, notable limitations of these photographic methods include the effects of inadequate lighting and limited depth perception. Inadequate lighting is discussed in the illumination section. In terms of depth perception, Thurley & Ng (2005) argue that, without an effective way to distinguish depth in images, these photographic methods bias the sample. To overcome the stated problems concerning lighting (overlapping of particles) and depth perception, Thurley & Ng (2005) used a technique that captures 3D surface data with a laser in conjunction with a CCD camera. Considering other characteristics associated with image capturing devices, shutter speed, ISO (sensitivity to light) and aperture are of specific importance, especially for applications where the objects of interest are moving. Finding a balance between these three characteristics enables clear, correctly illuminated images to be captured, which is of utmost importance for DIP operations (Kinnunen & Mäkynen 2011).

Illumination
It is evident from literature that proper lighting is essential for any DIP particle size estimation operation. Specific aspects of illumination that are noted include intensity, uniformity and contrast. Inadequate lighting is listed as one of the most notable limitations of photographic DIP methods. It typically refers to shadows being cast on objects by other objects within the image, as well as poor illumination of objects. Shadows can lead to objects being partly obscured, limiting their visible size, while completely obscuring others. Poor illumination leads to poor colour rendering, which is an obstacle to distinguishing objects from their surroundings. Many size estimation systems are concerned with identifying objects that have similar and uniform texture; overlapping objects can then cause incorrect identification results if objects are grouped together and identified as a single object. Various configurations of lighting equipment have been used in different studies. A fluorescent ring lamp configuration was used by Montenegro Rios et al. (2011) in an attempt to eliminate shadows cast by particles due to point illumination being provided from specific directions only. Liao & Tarng (2009) utilised a backlit image plane to mitigate the effects of the similar and uniform texture of the objects under consideration.

Miscellaneous
Many applications of DIP particle size estimation systems are within adverse operating environments. Dust, water, excessive heat, and vibrations are among the elements these systems have to be protected against. Protective casings for cameras, shielding them from these adverse working environments, have been used by W. X. Wang (2006) and Liao & Tarng (2009). In another instance, W. X. Wang (2006) also implemented a dust-proof cabinet for the computer conducting image analysis.

3.1.2.2 Parameters of hardware components used in particle size estimation systems

Imaging equipment

Type
As mentioned in the section Image Capturing Devices, many off-the-shelf digital cameras have been used to capture images for DIP procedures. For example, a 6 MP Nikon D100 SLR was used by Miles (2006), and a Canon 450D by Kinnunen & Mäkynen (2011).

Image format
Images are typically represented in 8-bit format (van Dalen 2004; Treffer et al. 2014; Lin & Miller 1993; Wang & Li 2006). Due to the need to conduct DIP procedures in real time, image formats are typically chosen that reduce the amount of information that needs to be stored and processed. Compressed image formats such as JPEG have been proven adequate for this purpose (Al-Thyabat et al. 2007).


Resolution
In line with the statements in the preceding section, image resolution plays an important role in two respects. Firstly, it determines how adequately an OOI is represented for accurate size estimation, i.e. the number of pixels an OOI encompasses in the image. Secondly, it influences a system's ability to process images in real time, subject to the computational burden of the amount of data that needs to be processed. In terms of the second point, image sizes that have been used and proven adequate for DIP procedures have varied between 0.26 MP and 1.5 MP, to suit the desired application, algorithms and processing capabilities (Mora et al. 1998; Koh et al. 2009; Lin & Miller 1993; Wang & Li 2006). In terms of the first point, resolutions of between 100 and 400 dpi have been used on rice grains (van Dalen 2004), of which 200 dpi proved adequate for size estimation without an unmanageable processing burden due to image size; 50 dpi was found to be too coarse and inaccurate for size estimation purposes because a single rice grain encompassed too few pixels (van Dalen 2004). In line with this finding, it was also noted that particles should be represented by at least 10 pixels (Mora et al. 1998) to attain acceptable size estimation accuracy. This amount is, however, problem specific. Therefore the number of pixels encompassed by the smallest relevant OOI in the image, which is needed for the accurate estimation of such an OOI, has to be established. This depends on the ability and requirements of the size estimation algorithm to achieve a desired estimation accuracy, and can be taken from literature or determined through problem-specific experimental work. It then follows that the actual area percentage that the OOI takes up in the image or region of interest (ROI) needs to be established. This percentage is then used, along with the previously established number of pixels required for the smallest relevant OOI, to establish the resolution required for the entire image/ROI. The correct image resolution that satisfies both the abovementioned factors thus has to be determined for the specific problem at hand, subject to the applicable restrictions and parameters in terms of:
- size of a typical OOI
- distance between OOI and imaging equipment
- image quality (amount of data represented by each pixel)
- processing capacity of the hardware and software used for analyses
- desired estimation accuracy
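The reasoning above reduces to a simple ratio: if the smallest relevant OOI must span a minimum number of pixels, and its fractional share of the imaged ROI area is known, the total pixel count of the image follows. A sketch with illustrative numbers; both inputs are problem specific and would be established experimentally:

```python
def required_image_pixels(min_pixels_per_ooi, ooi_area_fraction):
    """Total image pixels needed so that the smallest relevant OOI
    still spans the required pixel count, given the fraction of the
    imaged ROI area that this OOI occupies."""
    return min_pixels_per_ooi / ooi_area_fraction

# Hypothetical example: the smallest pellet must cover at least 10 pixels
# (Mora et al. 1998) and occupies 0.04 % of the ROI area.
print(required_image_pixels(10, 0.0004))  # -> 25000.0 pixels (0.025 MP)
```

The same calculation, run with the actual smallest pellet size and ROI dimensions of a given installation, gives a lower bound on camera resolution before the processing-burden trade-off is considered.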

Exposure

Shutter speed
As has been stated, high shutter speeds are imperative to eliminate image blur when imaging moving particles (Salinas et al. 2005). To gauge the shutter speeds required for imaging particles in various states of movement, the following is taken from literature:

- Stationary particles (pellets): 1/25 s (Koh et al. 2009)
- Particles (pellets) transported on a conveyor: 1/200 s (Al-Thyabat et al. 2007)
- Free-falling particles: 1/4000 s (Wang & Li 2006)

For the specific problem at hand, the travelling speed of the OOI is of importance. Knowing the speed enables the calculation of the distance that the OOI travels during the period that the camera shutter is open. This establishes the amount of blur or elongation that the OOI undergoes, and which is captured in the image used for analysis. The elongation has to be taken into account when considering adequate shutter speeds and when interpreting size estimation results, with desired estimation accuracy ultimately being the main driver in the decision-making process.
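The blur calculation described above can be sketched as follows; the travelling speed, shutter time and pixel scale in the example are hypothetical:

```python
def motion_blur_px(speed_mm_s, exposure_s, px_per_mm):
    """Distance (in pixels) that an OOI travels while the shutter is open,
    i.e. the apparent elongation captured in the image."""
    return speed_mm_s * exposure_s * px_per_mm

# Hypothetical case: a pellet falling at 1 m/s, imaged at 1/4000 s
# with a scale of 5 px/mm.
print(motion_blur_px(1000.0, 1 / 4000, 5.0))  # roughly 1.25 px of elongation
```

If the computed elongation is a significant fraction of the smallest pellet's pixel extent, a faster shutter (with correspondingly stronger illumination) is needed, or the elongation must be compensated for when interpreting size estimates.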

ISO (gain)
W. X. Wang (2006) highlights the importance of a correct ISO setting in order to obtain high-contrast images. However, ISO settings depend on the intensity of the illumination and the contrast it creates between the individual particles captured in an image. This characteristic is very much dependent on the application of the DIP system and has to be established through experimental work.

Magnification
Magnification plays a role when relating displayed particle size to actual particle size. Itoh et al. (2008) maintained a fixed camera magnification in order to keep the area covered by each pixel constant. In their experiment, the distance from the camera lens to the objects of interest was also kept constant to ensure that the size estimations from the images were consistent.

Illumination
Although various illumination levels, measured in lumens, were tested and found to be adequate by Itoh et al. (2008), it is stressed that required illumination levels are application dependent. The importance of adequate illumination and the detrimental effects of a lack thereof are maintained and underlined. As mentioned, colour rendering is crucial to the effective operation of DIP procedures in distinguishing objects of interest from each other. The sources of illumination need to produce a light that is as white as possible in order to produce the most accurate representations of the actual colour of the objects of interest.

3.1.2.3 Set-up of a particle size estimation system
By paying attention to specific aspects of the set-up of a PSES, the size estimation accuracy can be increased greatly. The following are measures applied in various DIP PSE applications:

- Uniform background: a uniform background can significantly improve the effective operation of the IP and IA procedures of a size estimation system (Mora et al. 1998; van Dalen 2004). Colour intensities associated with the background can be filtered out to isolate objects of interest. Varying background intensities also contribute to the total image noise that the SE algorithm needs to process and evaluate.
- Orientation of artificial illumination sources: emphasis has been placed on the orientation and intensity of artificial illumination sources since early work by Mora et al. (1998). This is especially important in order to eliminate image features that negatively affect the IP and IA analyses, such as cast shadows that affect particle delineation.
- Camera orientation: various projections of OOIs have been tested and evaluated. A study by Al-Thyabat et al. (2007) analysed particle images obtained using three different camera positions for particles being transported on a conveyor belt: a top view of the conveyor belt, a profile view at the end of the conveyor, and a view of objects in free-fall. The top-view camera placement produced the best results, with the presence of particle fines in the free-falling stream and particle overlap in the profile view being some of the biggest contributors to the poorer performance of the latter two views. Other studies also point to the top projection of particles as the preferred camera orientation (Itoh et al. 2008; Koh et al. 2009; Murtagh et al. 2005).
- A fixed distance between camera lens and aggregate surface aids the reliability and consistency of size estimations done using DIP procedures (Itoh et al. 2008).

3.1.3 Experimental procedure
The following sections discuss specific aspects of the experimental set-up, the calibration of size estimation systems, and considerations regarding movement of samples used for experiments. These discussions aim to provide insight into critical precautions that need to be taken in order to conduct successful experiments for PSE procedures.

3.1.3.1 Experimental set-up

Imaging equipment
Koh et al. (2009) argued that by measuring and setting up an image plane of a specific size (311 × 207 mm), this could serve as a reference from which to conduct size estimation. Furthermore, Murtagh et al. (2005) argued that the aggregate (objects of interest) should cover the entire field of view in order to eliminate effects of background on the DIP procedures. W. X. Wang (2006) emphasises that camera placement should create maximum contrast between particles and background. Considerations regarding camera configurations include: cameras located above and perpendicular to the particles of interest being transported horizontally on a conveyor, in order to minimise distortion of particle size when viewed in the captured image (Treffer et al. 2014; Murtagh et al. 2005), and cameras located in front of and perpendicular to a free-falling stream of pellets, for the same reason (Treffer et al. 2014; Wang & Li 2006).

Illumination equipment
Treffer et al. (2014) highlight the importance of direct illumination in order to achieve maximum colour rendering and minimal shadows. To provide the correct illumination, placement should be at an optimal distance from the objects of interest, with emphasis on the top layer of pellets in a pile (Montenegro Rios et al. 2011). By using backlighting instead of top lighting of objects, the effects of varying surface texture can be mitigated (Banta et al. 2003). Kinnunen & Mäkynen (2011) furthermore emphasise the importance of a uniform and monotone background, such as a black background. Such a background enhances the ability of the DIP procedure to distinguish between background and objects of interest, minimising the computational burden on the image processor.

Particle transportation
In constructing a particle transportation system, the main goal typically entails replicating the transport scenario of the actual particles being studied. As the focus of this document is PSE in the mineral processing environment, these states of transport typically involve stationary pellets accumulated in a pile; pellets moving on a transport medium such as a conveyor; and free-falling pellets (such as those exiting a chute or pelletizer disc). IP and IA studies have been conducted on particles in all of these forms of movement. In most cases, studies involved building custom-designed systems to replicate the actual transportation of the particles being studied. As an example, Lin & Miller (1993) and Thurley (2006) built custom-designed conveyor systems to replicate the actual transportation of their particles of interest. When designing such systems, key considerations need to be evaluated and incorporated into the design in order to ensure correct simulation of actual operation, and thus give credibility to the experimental results achieved. As an example, a vibrating feed chute was employed by Treffer et al. (2014) to ensure random orientation of particles as they pass through the imaging plane. However, alterations to system operation can become necessary when conducting experiments, due to limitations of custom-designed and constructed systems. In operating a custom-designed conveyor system, Miles & Koh (2007) had to carefully control conveyor speeds to produce the desired flow of material: at slow conveyor speeds the conveyor head drum did not have enough power to drive the conveyor when loaded with material. Such required changes should be taken into account when setting up experiments and evaluating experimental results.

3.1.3.2 Calibration of size estimation system
Many examples exist of reference objects, such as measuring equipment and objects of known size, being placed in the image plane and imaged alongside particles of interest. From these images, pixel-to-millimetre ratios could be established to infer actual object dimensions from their representation in images. A calibrated stage micrometer (Mora et al. 1998) is among the examples of reference objects used for this purpose. To calibrate their system, W. X. Wang (2006) placed scale rulers with markers in the centre of the particle flow to calibrate and estimate scale factors; if these factors were found to be equal, the calibration was accepted and distortion taken to be minimal. Similarly, Salinas et al. (2005) calibrated their imaging system by taking images of calibration objects, thus obtaining a pixel-to-mm ratio for size estimation. Calibrated scaling factors were used to relate pixels to mm by Banta et al. (2003).
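The calibration procedures above all reduce to establishing a pixel-to-mm scale factor from a reference object of known size. A minimal sketch; the reference dimensions and pellet measurement are illustrative:

```python
def px_per_mm(ref_length_px, ref_length_mm):
    """Scale factor derived from a reference object of known size
    imaged in the same plane as the particles of interest."""
    return ref_length_px / ref_length_mm

def object_size_mm(size_px, scale_px_per_mm):
    """Convert a pixel measurement to millimetres using the scale factor."""
    return size_px / scale_px_per_mm

# Hypothetical example: a 100 mm ruler segment spans 450 px in the image,
# giving 4.5 px/mm; a pellet measuring 54 px across is then ~12 mm.
scale = px_per_mm(450, 100)
print(object_size_mm(54, scale))  # -> 12.0
```

As in W. X. Wang (2006), computing the scale factor from several reference markers and checking that the factors agree gives a simple test that lens distortion is negligible over the measurement region.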

3.1.3.3 Experimental samples

State of particle movement
A comparison between stationary, conveyor-transported and free-falling particle streams was conducted by Kinnunen & Mäkynen (2011). From their study, as well as other studies, Table 7 summarises some of the main challenges and advantages of each of the above-mentioned cases, as deduced from these experimental procedures.

Table 7: Advantages and disadvantages of IA studies on pellets in various forms of movement

Stationary pellets. Main advantage: minimal potential of image blurring. Main disadvantage: pellets partially obscuring other pellets.

Pellets on conveyors. Main disadvantages: potential of image blurring; pellets partially obscuring other pellets.

Freefalling pellets. Main advantage: less obscuring of pellets. Main disadvantage: potential of image blurring.

3.1.4 Validation of particle size estimation results
The following section aims to provide insight into the methods used to validate the experimental results obtained from studies conducted using various PSE systems. Specifically, comparison to a reference measure will be discussed, as well as performance indicators that evaluate the accuracy of these results.


3.1.4.1 Comparison to a reference (ground truth) measure
The most common ground truth incorporated by various authors is the result of a mechanical sieve size analysis of a sample. These analyses are used to determine a reference particle mass fraction and sieve size distribution of the samples on which size estimation systems were tested (Mora et al. 1998; Kwan et al. 1999; Koh et al. 2009; Wang & Li 2006; Al-Thyabat & Miles 2006). Table 8 summarises other methods of validation contained in literature.

Table 8: Various methods of validation of experimental results for different PSE procedures.

Dalen (2004): Manual measurement using a sliding calliper.

Koh et al. (2009): Using sample particles of uniform size, regular shape and known dimensions.

Treffer et al. (2014): Reference particle size determined using a commercial size analysis tool, QICPIC, based on particle shadow projections. Reference particle size distributions were obtained from theoretical size estimation ratios of the particles and processes under study.

Lin & Miller (1993): Computer-generated images of particles with known shapes and dimensions used to test the SES.

Outal et al. (2008): Projected area measurement of similar rock fragments, whose cumulated volumetric proportions follow the Rosin-Rammler model for describing PSD in comminution processes.

Salinas et al. (2005): Samples of varying rock sizes were measured using standard Tyler meshes and the results presented as accumulated weight distributions. Sample images were also manually segmented and compared with those obtained with the IA algorithm.

3.1.4.2 Performance indicators

General
Malek et al. (2010) measured the effectiveness of the segmentation operation conducted by the algorithm using a Receiver Operating Characteristic (ROC) curve, with the original Regions of Interest (ROIs) set as the standard to which the segmented images are related. Banta et al. (2003) utilised the coefficient of multiple determination, R2, to determine the correlation between a multiple regression model's estimated PSD and manually measured data.
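As a simplified illustration of the R2 indicator mentioned above (shown here as the ordinary coefficient of determination, not the full multiple-regression form used by Banta et al.; the sample values are hypothetical):

```python
import numpy as np

def r_squared(measured, estimated):
    """Coefficient of determination between measured values and
    model-estimated values: 1 - SS_res / SS_tot."""
    measured = np.asarray(measured, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    ss_res = np.sum((measured - estimated) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical manually measured vs estimated size-class fractions
print(r_squared([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]))
```

Values close to 1 indicate that the estimated PSD tracks the manually measured data closely.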

Accuracy
The following tests have been incorporated to test the accuracy of PSE systems:

- A t-test comparison at the 5% significance level of reference size distributions and SES size distributions (van Dalen 2004).
- The sum of squared residuals, calculated to quantify differences between estimated size distributions and actual size distributions (Al-Thyabat et al. 2007).
- The error% in the estimated values of the size parameters, calculated by dividing the absolute difference between the estimated and measured parameter by the measured parameter (Al-Thyabat et al. 2007).
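The last two accuracy measures can be sketched directly; the parameter values in the examples are hypothetical:

```python
def sum_squared_residuals(actual, estimated):
    """Sum of squared differences between actual and estimated
    size-class fractions (Al-Thyabat et al. 2007 style)."""
    return sum((a - e) ** 2 for a, e in zip(actual, estimated))

def percent_error(measured, estimated):
    """Error% of a size parameter: |estimated - measured| / measured * 100."""
    return abs(estimated - measured) / measured * 100.0

# Hypothetical d50 measured by sieving (12.0 mm) vs SES estimate (11.4 mm):
print(percent_error(12.0, 11.4))  # relative error of about 5 %

# Hypothetical cumulative fractions, actual vs estimated:
print(sum_squared_residuals([0.1, 0.3, 0.6], [0.12, 0.28, 0.61]))
```

Both measures are easy to log per analysed sample, which suits the continuous-monitoring objective of this study.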

Precision
The following tests have been incorporated to test the precision of PSE systems:


- Standard deviation of the repeatability, calculated for both the reference and estimated size distributions. The parameters for this comparison were determined through an analysis of variance (van Dalen 2004).
- Positive absolute error indicator for accuracy of the SES (Zelin et al. 2012).
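A minimal sketch of a repeatability indicator in the spirit of the first bullet, using the sample standard deviation of repeated estimates (the full procedure in van Dalen (2004) uses an analysis of variance; the measurement values below are hypothetical):

```python
import statistics

def repeatability_sd(repeated_measurements):
    """Sample standard deviation of repeated estimates of the same
    size parameter, as a simple precision indicator."""
    return statistics.stdev(repeated_measurements)

# Five hypothetical repeated d50 estimates (mm) of the same sample
print(repeatability_sd([11.8, 12.1, 11.9, 12.2, 12.0]))  # about 0.16 mm
```

Computing this for both the sieve reference and the SES output allows the precision of the two methods to be compared on equal terms.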

3.1.5 Sources of error influencing size estimation accuracy and efficiency
The following section discusses specific sources of error that influence the accuracy and efficiency of particle size estimation systems. It considers factors that are inherent to the PSES and factors external to the size estimation system, and then looks at various methods that have been employed to increase PSE accuracy affected by the factors mentioned in the preceding subsections.

3.1.5.1 Factors associated with the SES
It has been established that various errors are inherent to certain PSES. In some cases these errors are caused by specific characteristics of the PSES, and in other cases by the operation of some of these size estimation systems. These errors are grouped according to their association with the hardware, the software or the sampling procedure used during the PSE process. A summary of these errors is contained in Table 9.

Table 9: Sources of error associated with particle size estimation systems.

Hardware associated:

Mora et al. (1998): Image quality plays an important role in the size estimation system's ability to conduct effective IP and IA procedures.

Mora et al. (1998): Shadows caused by artificial illumination induce difficulties in effectively executing IP and IA procedures.

Mora et al. (1998); Lin & Miller (1993); W. X. Wang (2006): Distortion due to inherent properties of the optical components of various cameras needs to be catered for. Optimal camera placement needs to be investigated to minimise this error.

Mora et al. (1998): Smaller particles cannot be accurately represented in terms of pixel units, especially in low-resolution images.

Treffer et al. (2014): Distance of particles from the camera can cause over- and under-estimation of particle sizes.

Outal & Pirard (2004): Angle of image acquisition can create image distortion and thus an erroneous representation of the particle of interest.

W. X. Wang (2006): Inadequate lighting intensity can complicate the DIP procedure's ability to effectively identify and isolate POIs.

Miles & Koh (2007): Shadows from inadequate lighting.

Gonzalez & Woods (2008); Wang (2007); Banta et al. (2003): Salt-and-pepper noise (bright and dark spots) caused by imaging equipment sensors.

Miles & Koh (2007): Camera position which can distort images, rendering inaccurate representations of POIs.

Miles & Koh (2007); Mukherjee et al. (2009): Fast conveyor/transport medium speeds can induce substantial image blur, complicating the DIP procedure's ability to effectively identify and isolate POIs.

Banta et al. (2003): Shadows cast around aggregates by top lighting.

Miles & Koh (2007): Inadequate contrast between particles, and between particles and background, due to inadequate lighting.

Salinas et al. (2005): Inadequate illumination of particles contributes to overlapping particles being identified as one particle, and to single particles being divided into several pieces.

Software associated:

Koh et al. (2009); Zelin et al. (2012); Wang (2007); Outal & Pirard (2004): Susceptibility to false edges and over-segmentation caused by varying particle surface textures (intensity variations within the surface of each particle), in watershed segmentation and edge detectors such as the Canny edge detector.

Mkwelo et al. (2005): Sensitivity to image noise can also contribute to incorrect segmentation by the watershed algorithm. Segmentation using predefined markers (object centre points) can help mitigate this effect, coupled with a classification process to rectify wrongly identified regions that do not represent particles.

Malek et al. (2010): When applied to complex images containing a lot of image noise, boundary segmentation techniques, including the Sobel, Prewitt and Canny edge detectors, often fail to detect some edges, or detect extra edges that are not present in the image. Cleaning and joining incompletely detected edges are in many cases necessary post-processing procedures for these kinds of edge detectors (Mkwelo et al. 2005).

Zelin et al. (2012); Outal & Pirard (2004): Linking, and consequent over-estimation of particle size, due to touching particles of low granularity or minimally distinguishable grains (edges).

Wang (2007); Miles & Koh (2007): Touching and overlapping can cause under-segmentation of particles when algorithms consider incomplete particles as whole particles.

Zelin et al. (2012): Under-estimation of particles partly obscured due to their position on the image boundary.

Treffer et al. (2014): Multi-sampling of particles by the SES.

Mkwelo et al. (2005): Smoothing or blurring of edges, typically by region smoothing filters such as a linear Gaussian filter. These filters do not discriminate between edges and image regions.

Wang et al. (2006): Geometric measurements relating to size and size characteristics cannot necessarily be correlated to particle shape measurements and characteristics.

Miles (2006): Statistical analysis should be done on a large amount of data. The smaller the data set, the less reliable the estimated size distribution.

Mukherjee et al. (2009): Morphological operations impose a structuring element on the shape, leading to a distorted object boundary.

Outal & Pirard (2004): The size concepts of image analysis measurement systems and traditional sieving measurement methods are different, especially when relating an image analysis PSD to traditional volume and mass fractions obtained by sieve analysis. In sieving, a particle is attributed a size if it passes through a specific mesh, which depends on all the spatial dimensions of the particle. In image analysis, 2D projections are obtained by extracting particle contours, with various sizes for the fragment being possible due to its third dimension.

Salinas et al. (2005): Sampling error due to the rocks not presenting a flat surface, which is assumed by the pixel calibration procedure performed on the 2D images.

Salinas et al. (2005): Overlapping particles being identified as one particle.

Salinas et al. (2005): Single particles being divided into several pieces.

Mora et al. (1998); Wang (2007); Wang et al. (2006); Miles (2006); Outal & Pirard (2004): The varying and irregular shape and random orientation and location of particles (anisotropic particles) are a source of error, because the majority of models used for size estimation are models of best fit (such as a rectangular bounding box), and most size estimation systems use only a 2D projection of the particle in its imaged orientation.

Salinas et al. (2005): Estimation of particle size and volume based on models of best fit (such as a rectangular bounding box model) introduces varying degrees of error.

Wang & Bergholm (2005): The following errors are associated with some well-known edge detectors when applied to their specific set-up: the Roberts edge detector resulted in an image with a lot of white noise; the Laplacian and Canny edge detectors missed too many particle boundaries.

Sampling associated:

Zelin et al. (2012); W. X. Wang (2006): Sample size, and fluctuating size of the sample of evaluated objects, play a role in the accuracy of the SES.

Treffer et al. (2014); Shen et al. (2001): Small samples of particles having a large size distribution (greatly varying sizes of sample particles) can lead to inaccurate estimation of a PSD for the particle population. The larger the PSD of the particle population, the greater the amount of analysed data needed to accurately estimate a PSD.

3.1.5.2 Factors external to the size estimation system
The following section focuses on factors that are not connected to the operation or inherent nature of the PSES. Rather, these errors are associated with factors external to the PSES.

Factors associated with the objects of interest

Shape and texture
Delineation of objects is complicated by particle shape and image texture features (Wang 2007). If particle boundaries are rough and irregular, the ability of the DIP procedure to accurately estimate particle size is affected (Wang et al. 2006). Fitting shapes to such particles to relate size, or finding specific radii or chord lengths, becomes difficult. Where the surface texture of particles is rough and irregular, more sophisticated DIP procedures need to be utilised to overcome effects such as over-segmentation, as stated in the previous subsection. An estimation error is also present when a reference PSD of a particle sample is established through mechanical sieving with sieves that have square apertures. This error is due to the fact that an elongated particle with a length of up to 1.25 times the aperture size can pass through a square sieve aperture (Mora et al. 1998). Therefore, for irregularly shaped particles, a relation drawn between the breadth of a particle and its reference sieve size must be adjusted with a suitable ratio. This ratio can be inferred through trial and error, and is dependent on the type and source of material. Alternatively, sieves with round apertures can be used to establish a reference PSD. Wang et al. (2006) also highlight that particles with similar geometric measurements relating to particle size can differ greatly in terms of shape characteristics. Therefore, if the PSE objective requires both these characteristics to be estimated, provision has to be made for the factors influencing each characteristic.
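The square-aperture correction above can be sketched as a simple ratio, treating the 1.25 factor from Mora et al. (1998) as an upper bound; the actual ratio is material specific and found by trial and error:

```python
def sieve_size_from_length(image_length_mm, pass_ratio=1.25):
    """Smallest square sieve aperture that an elongated particle of the
    given image-measured length could still pass, assuming lengths up to
    pass_ratio times the aperture pass the mesh (illustrative only)."""
    return image_length_mm / pass_ratio

# Hypothetical example: a particle measured at 15 mm long in the image
# may still have passed a 12 mm square-aperture sieve.
print(sieve_size_from_length(15.0))  # -> 12.0
```

This illustrates why image-measured dimensions of irregular particles cannot be mapped one-to-one onto square-aperture sieve sizes without a material-specific correction.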

Illumination and colour rendering

Sources are unanimous that varying surface textures (intensity variations within the surface of each particle) and spectral characteristics of identified particles, both of which can be amplified by inadequate lighting or the inherent colour of the particles, can lead to false edge detection by conventional particle identification and delineation operations (Koh et al. 2009; Zelin et al. 2012; Wang & Li 2006; Wang 2007; Mukherjee et al. 2009; Banta et al. 2003). Treffer et al. (2014) note that particles with good reflective quality, when appropriately illuminated, produce more accurate size estimation results. However, excessive reflectance of light by particles can cause problems for the IP and IA procedures (Al-Thyabat et al. 2007). Also, varying light distribution can cause irregular reflective qualities, which is detrimental to the effective operation of many DIP procedures (Al-Thyabat & Miles 2006). Furthermore, inadequate contrast between particles, and between particles and the background (Wang & Li 2006), shadows cast by particles on other particles due to overlapping (Al-Thyabat et al. 2007), and varying intensity levels (grey-levels) of particles (Shen et al. 2001), all influence the ability of many DIP procedures. As such, more sophisticated algorithms need to be developed and applied to mitigate these effects.


Location and transportation

When conducting size estimation of objects accumulated in a pile, these techniques capture information about the surface objects of the pile. The size distribution of the entire pile can then be inferred from the size distribution of the surface objects. However, the following errors are associated with techniques that focus on the objects on the surface of the pile (Andersson & Thurley 2007; Wang 2006a; Thurley 2006; Rosato et al. 1987; Andersson & Thurley 2011):

1. Segregation and grouping error, more generally known as the brazil nut effect. This effect is caused by motion or vibration of the pellets as they are transported, typically in piles (such as on a conveyor or in a truck). It describes how piled objects tend to separate into groups of similarly sized objects. The particle layer imaged is not necessarily representative of all the layers in the pile (Outal et al. 2004). Thurley & Andersson (2008) note that iron ore green pellets appear to not be affected by this effect.
2. Capturing error. Only the top layer of particles of a population in a pile, or on a conveyor, is imaged (Al-Thyabat & Miles 2006). This error is therefore associated with the size-based probability that a particle will be located on the surface of a pile.
3. Partial profile error. Only one side or profile of an entirely visible particle can be seen and used for size estimation. Also, due to the varying and random orientation of particles, images do not necessarily display the largest projected area of entirely visible particles (Outal et al. 2004). This introduces an error when estimating particle size. However, Wang (2006) has noted that the best-fit-rectangle measure of particle size, based on a single visible profile, correlates well with actual sieve size.
4. Overlapped particle error. This error is incurred when partially occluded particles (due to overlapping) and touching particles are treated as whole pellets. This causes the estimated size distribution to be biased towards smaller size fractions (Treffer et al. 2014; Shen et al. 2001; Wang & Li 2006; Al-Thyabat et al. 2007; Murtagh et al. 2005; Montenegro Rios et al. 2011). Another source of under-estimation of the PSD is particles partly obscured by their position on the image boundary being treated as whole particles (Zelin et al. 2012; Treffer et al. 2014). Thurley (2002) incorporated a classification criterion for surface pellets based on visibility, so that only completely visible pellets were analysed for size estimation purposes. It is noted that this method eliminates the effects of this error.
5. Weight transformation error. This error is introduced because particles in a specific sieve size fraction can have significantly different weights. Elongated particles are an example, typically having larger volumes than other pellets in the same sieve size fraction.
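As a hedged illustration of the boundary-particle correction mentioned under overlapped particle error, regions touching the image border can simply be discarded before size measurement. This is a NumPy sketch of the general idea, not code from the thesis; the function name is invented.

```python
import numpy as np

def remove_border_regions(labels: np.ndarray) -> np.ndarray:
    """Zero out labelled regions touching the image boundary, so that
    particles clipped by the image edge are not measured as whole ones.
    `labels` is an integer label image (0 = background)."""
    # Collect every label value that appears on any of the four edges.
    border_labels = np.unique(np.concatenate(
        [labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]]))
    cleaned = labels.copy()
    cleaned[np.isin(cleaned, border_labels)] = 0
    return cleaned
```

A region fully inside the frame survives; any region with at least one pixel on the boundary is removed along with the (already zero) background.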

Further errors in this category include:

 In general, smaller particles tend not to appear on the surface of a particle pile, but move to the bottom of the pile, or of the heap of particles transported on a conveyor, leading to an over-estimation of the PSD (Outal et al. 2004; Salinas et al. 2005)
 Piling of particles present in conveyor and free-falling aggregate footage can lead to difficult segmentation due to overlapping (Kinnunen & Mäkynen 2011), as well as the difficulties associated with very little difference in the colour rendering of individual particles for delineation purposes.

Factors associated with the operating environment

Table 10 provides a summary of various factors, associated with the environment in which a PSES operates, that may contribute to errors in PSE.

Table 10: Operating environment factors that cause errors in PSE

 Kwan et al. (1999): A uniform background behind the particles under consideration can contribute significantly to the effective execution of IP and IA procedures. Conversely, a varying background can be detrimental to the operation of a PSES.
 Treffer et al. (2014); Miles & Koh (2007); Banta et al. (2003): Dust and water (rain or spray) particles captured in the image can be analysed as small particles of interest, causing inaccurate size estimation.
 Lin & Miller (1993); W. X. Wang (2006); Treffer et al. (2014); Miles & Koh (2007): Fast-travelling particles, typically moving on a transport medium such as a conveyor belt, or free-falling particles, can cause image blur.
 W. X. Wang (2006); Mukherjee et al. (2009): Varying daylight illumination affecting threshold selection.
 Miles & Koh (2007): Vibrations affecting SES hardware, specifically the imaging equipment.

3.1.5.3 Methods employed to increase PSE accuracy

It has been noted that many of the features employed by researchers, some of which are listed below, are dependent on the system set-up, the size estimation algorithm, the operating environment, as well as the particular particles of interest. This should be taken into account when deciding on similar correcting procedures to use in the current study.

Hardware related

Table 11 provides a summary of hardware-related methods and actions that can increase PSE accuracy.

Table 11: Hardware related methods to increase PSE accuracy.

 Banta et al. (2003); W. X. Wang (2006): Backlighting of particles can prevent over-segmentation due to unwanted edges resulting from particle surface texture features. Using backlighting instead of top lighting also eliminates the shadows that top lighting casts around aggregates.
 Murtagh et al. (2005): Use of diffuse and homogeneous lighting to negate the occurrence and effects of shadows.
 Treffer et al. (2014): Use of a camera with a shallow depth of focus to mitigate over- and under-estimation of particle sizes.
 Treffer et al. (2014): Over- and under-estimation of particle sizes can be mitigated by not illuminating particles located at specific distances from the imaging equipment.
 Treffer et al. (2014): Measurement reliability is improved by accurate alignment of the imaging equipment with the particle sample, preventing over- and under-estimation of particle size as well as image distortion.
 Lin & Miller (1993); W. X. Wang (2006); Salinas et al. (2005): Image blur can be mitigated by using high shutter speeds on the imaging equipment.
 W. X. Wang (2006): Image distortion was mitigated by setting up the camera perpendicular to the plane of the particle flow.
 W. X. Wang (2006): Image distortion was calculated and corrected for by calibrating the SES to scale rulers placed in the middle of an aggregate stream and imaged with the aggregate.
 Kinnunen & Mäkynen (2011): To overcome the piling of particles, which can lead to segmentation difficulties, suggestions for improvement include installing an automatic spreader mechanism to spread out particles before they are imaged, or directing a smaller gravel stream from the main stream for size estimation purposes.
 Dalen (2004): Use of a uniform, contrasting colour background (black) to increase contrast between particles and the background.
 Treffer et al. (2014): Delineation of overlapping particles is aided by illumination with different coloured light sources from different angles.

Software related

Table 12 provides a summary of software-related methods and techniques that can be applied to a PSES in order to increase the accuracy of the PSE conducted by the system.

Table 12: Software related methods to increase PSE accuracy.

 Mora et al. (1998): Quality inspection of acquired images can form part of an offline system. Images were viewed and inspected on a monitor to determine whether they were suitable to be processed and analysed by the SES.
 W. X. Wang (2006): An online inspection algorithm was developed and implemented to classify an image as appropriate for IA procedures. This measure uses the average grey-value of the gradient magnitude image and an edge density measurement to establish a threshold distinguishing a good image from an unfitting image for further analysis.
 Koh et al. (2009): Particle identification and delineation results attained with investigated methods were compared to results attained with well-known methods.
 Treffer et al. (2014); W. X. Wang (2006); Andersson & Thurley (2007): Partly visible particles, due to overlapping or their location on the image boundary, are omitted from size estimation analysis. In conducting automatic size estimation of iron ore pellets, Andersson & Thurley (2007) overcome overlapped particle error by distinguishing between entirely visible and partially visible pellets. Size measurements are consequently conducted on entirely visible pellets only. The distinction is made with statistical classification methods, specifically a visibility ratio. The ratio's performance is validated using the holdout method, comparing it to a classifier established on a specific training set.
 Mukherjee et al. (2009): Incorporating shape parameters and ratios along with geometrical size measurements for irregularly shaped, randomly orientated particles can improve the accuracy of the estimated PSD. Utilising this approach, Mukherjee et al. (2009) evaluated an identified object's area, solidity and eccentricity against possible ranges for these shape parameters.
 Wang (2007): Region-based approaches can overcome the over- and under-segmentation produced by edge detectors, but only if the texture markings form regions that are small enough to be merged with larger surrounding regions.
 Wang et al. (2006): Segmentation based on a genetic algorithm capable of adapting to changing operating environment characteristics.
 Miles (2006): Using a joint analysis of a combination of measurements to determine a size distribution, typically a combination of size and shape measurements.
 Andersson & Thurley (2011): By evaluating which size and shape measurements are most efficient at discriminating between different sieve-size classes, Andersson & Thurley (2011) minimise the particle profile error connected to size estimation of iron ore pellets. Using six different size measurements and ordinal logistic regression to estimate pellet size, they determine that the equivalent-area circle measurement of particle size is the most efficient at discriminating between sieve-size classes.
 Montenegro Rios et al. (2011): Using morphological operations to detect circular objects aided in discarding underlying round particles that were partially occluded, eliminating the error of identifying these particles as small particles.
 Wang & Bergholm (2005): To shorten image processing time, Wang & Bergholm (2005) eliminated the image binarisation step (thresholding operation) in a typical IA algorithm and applied a custom one-pixel-wide edge detector to a grey-level image.
 Kinnunen & Mäkynen (2011): 3D and 2D imaging were compared, with 3D imaging producing more accurate results.
 Zelin et al. (2012): Convert background particles to black, so as to mitigate the effect of overlapping particles.
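The equivalent-area circle measurement favoured by Andersson & Thurley (2011) is straightforward to compute from a segmented region's pixel area. The following is an illustrative Python sketch (not the thesis's MATLAB code); the pixel-to-mm scale argument is a hypothetical calibration value.

```python
import math

def equivalent_circle_diameter(area_px: float, mm_per_px: float = 1.0) -> float:
    """Diameter of the circle whose area equals the particle's projected
    pixel area: d = 2 * sqrt(A / pi), scaled from pixels to mm."""
    return 2.0 * math.sqrt(area_px / math.pi) * mm_per_px
```

Because it depends only on area, this measure is insensitive to the orientation of a roughly spherical pellet, which is consistent with its reported efficiency at discriminating sieve-size classes.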

Operation of SES related

Table 13 summarises methods, associated with the operation of a PSES, that can increase the accuracy of the estimated particle size.

Table 13: SES operation related methods to improve PSES accuracy.

 Itoh et al. (2008): When analysing stationary particles, placing sample particles in a manner that eliminates particle overlap and prevents particles from touching one another ensures that better particle delineation and isolation can be produced by the IP and IA processes of the SES.
 Zelin et al. (2012); Treffer et al. (2014): The sample size of the evaluated objects plays a role in the accuracy and reliability of the SES.
 W. X. Wang (2006); Treffer et al. (2014): Reducing the residence time of particles in the optical sampling space could reduce the chances of multi-sampling the same particle. Also, overlapping of particles can be partly overcome by analysing a free-falling pellet stream rather than pellets travelling on a transport medium, such as a conveyor belt, or particles in a pile.
 Miles & Koh (2007): Imaging free-falling objects, as opposed to objects in a pile or on a transport medium such as a conveyor, could reduce the effect of particle overlap.
 Treffer et al. (2014): Only particles with a size above a predefined geometrical size measurement threshold (derived through iteration and observation) are analysed, to mitigate the false identification of dust particles as particles of interest.
 Miles & Koh (2007): The effect of dust is mitigated by washing particles before analysis. This can, however, cause other problems associated with reflectance of light by particles.
 Salinas et al. (2005): Image acquisition and analysis should be done at predefined time intervals rather than in real time, to reduce the burden on the processing unit.
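The minimum-size dust rejection described by Treffer et al. (2014) can be sketched as a simple filter. This is an illustrative Python snippet, not code from the thesis; the 3 mm cutoff is a hypothetical placeholder, whereas Treffer et al. derived theirs through iteration and observation.

```python
def reject_dust(diameters_mm, min_diameter_mm=3.0):
    """Discard measured objects smaller than the dust threshold,
    so dust or spray droplets are not counted as fine particles."""
    return [d for d in diameters_mm if d >= min_diameter_mm]
```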

3.2 CONTROL FRAMEWORK FOR A FERROCHROME PELLETIZING PLANT

3.2.1 Analysis of the FeCr pelletizing process and process variables

From the description of a FeCr pelletizing process in Chapter 2.1, the main input variables to the process can be identified as follows:

 Size of the raw material feed (volume/particle)
 Amount of raw material feed (volumetric flow rate)
 Type of binder added to the agglomerate
 Amount of binder added to the agglomerate (volumetric flow rate)
 Amount of water added to the agglomerate (volumetric flow rate)
 Rotational speed of the drum/disc (rev/min)
 Incline of the drum/disc axis toward the ground (degrees)

Yamaguchi et al. (2010) point out that it is difficult to adjust drum operation for varying raw material conditions. If raw material conditions such as chemical composition, particle size and moisture are uniform, however, the operation is relatively stable. A balling disc, on the other hand, has the ability to classify green balls by itself, which reduces the amount of rework of non-conforming pellets. By changing the disc's revolution rate, inclined angle and pan depth, the disc's operation can easily be adjusted to cater for changes in raw material conditions (Yamaguchi et al. 2010). It is therefore noted that in pelletizing processes utilising pelletizing drums, the last two variables are difficult to change during operation. They are however included in the list of variables, since they can be changed if extreme circumstances necessitate it, such as a fundamental altering of one of the process steps.


As an additional secondary input, energy consumption by the transport media and the pelletizer discs and drums can be measured, controlled and optimised. The output variable is the pellet size distribution of the pellets produced by the pelletizing plant. This is referred to as the primary control variable.


4 MATERIALS AND METHODS

In order to achieve Objective 1, as outlined in the Introduction of this document, specific tests and experiments had to be conducted to develop and refine a problem-specific pellet size estimation algorithm. This chapter outlines the general experimental methodology developed, along with the materials, software and general hardware used to execute it. The chapter starts with an outline of the project plan for this study, specifically designed to achieve the objectives of the study. The project plan is followed by a discussion of the general experimental procedure used to execute the study. The remainder of the chapter discusses the software and hardware that were used for the purposes mentioned above. Since the project plan divided the study into distinct phases, it is considered appropriate to discuss the planning, execution and results of each phase separately, in Chapter 5 and Chapter 6. This is done while building on the information conveyed in Chapter 2 and Chapter 3, as it lays the foundation for what is executed and conveyed in Chapter 5 and Chapter 6.

4.1 PROJECT PLAN

In order to achieve the objectives that have been established for this study, the planned work was divided into the following three phases:

1. Research and development of a conceptual pellet size distribution estimation algorithm
2. Experimental validation and refinement of the pellet size distribution estimation algorithm
3. Design of a conceptual control model for the pelletizing process at Bokamoso Pelletizing Plant, using the outputs of the pellet size distribution estimation system

The above phases can be further expanded into the following sub-tasks:

1. Research and development of a conceptual pellet size distribution estimation algorithm
   a. Literature study on the application and use of DIP and DIA to determine the size distribution of spherical objects
      i. The literature study was conducted, summarised and presented in Chapter 2 and Chapter 3 of this document.
   b. Algorithm development based on procured footage: simulated and actual process footage
      i. Algorithm development based on footage of simulated pellets
         1. Particles of interest: mock pellets (candy-coated peanuts)
         2. Imaging equipment: commercially available imaging equipment of the Process Monitoring and Systems research group (PM&S)
         3. Location: footage captured at Stellenbosch University
      ii. Algorithm development based on footage of actual process pellets from the site (Site Visit 1 footage)
         1. Footage obtained during Site Visit 1
            a. Commercially available imaging equipment of the PM&S
            b. Lighting equipment available at Bokamoso Pelletizing Plant
2. Experimental validation and refinement of the pellet size distribution estimation algorithm using simulated process footage and actual process footage
   a. Development of a lab-scale set-up to replicate the intended operating environment of the pellet size estimation algorithm
      i. Design, develop and build a simulated part of the actual pelletizing process at Bokamoso Pelletizing Plant
      ii. Design and develop a custom system to enable capturing images of pellets moving through the pelletizing process
         1. Mounting frame for imaging equipment
         2. Lighting equipment
   b. Validation and refinement of the algorithm based on footage produced through physical experiments to obtain simulated process footage
      i. Algorithm validation and refinement based on simulated pelletizer process footage using actual pellets produced at Bokamoso Pelletizing Plant
         1. Sintered chrome pellets produced at Bokamoso Pelletizing Plant
         2. Commercially available imaging equipment of the PM&S
         3. Stellenbosch University
   c. Validation and refinement of the algorithm based on actual process footage
      i. Algorithm validation and refinement using actual process footage obtained during Site Visit 2
         1. Footage of pellets on rollers and pellets on a conveyor
         2. Commercially available imaging equipment of the PM&S
         3. Custom-built lighting structure
3. Design of a conceptual control framework for the pelletizing process at Bokamoso Pelletizing Plant, using the outputs of the pellet size distribution estimation system

It should be noted that Phase 3 will not be further discussed in this chapter; it is discussed in a separate chapter, Chapter 7. This chapter only discusses the experimental procedure related to the development and refinement of the pellet size estimation algorithm.

4.2 GENERAL EXPERIMENTAL PROCEDURE

At the start of the study it was important to define the system in which the final solution would operate. Therefore the flow sheet, process units, camera hardware and software, sampling methodology, as well as the available process variables of the pelletizer plant to be investigated, had to be specified. This was done through site visits to the plant, as well as through correspondence with key partners and representatives of the pelletizer plant. After specifying all applicable process variables, parameters and capabilities, suitable pellets-on-conveyor-belt and free-falling pellet footage needed to be procured for test data purposes. Along with this footage, sampling results and pellet samples needed to be procured. The optimal outcome was that the footage and sampling results would be obtained from the site visits; as mitigation, if this had not been possible, pellets would have been produced in the laboratory at Stellenbosch University. For the laboratory test, a key to success was considered to be the ability to simulate the actual pelletizer process at Wonderkop smelter. During the site visits mentioned above, a clear understanding of the system mechanics and dynamics had to be obtained. From this knowledge a scaled version of the pelletizer could be built. The focus was to achieve effective simulation of the falling particle streams, as well as of the pellets transported on the conveyors. Speeds of moving particles, illumination, and work environment factors are some of the most important factors that had to be simulated as accurately as possible.


Laboratory test work with physical and simulated pellets, procured or produced, was to be conducted at Stellenbosch University. MATLAB© was identified as the main software package to be used for the image analysis procedure, with literature proving its effectiveness for this purpose. Other open source software options, as discussed in Chapter 3, were also considered viable for the image analysis procedure, as and if needed. From the literature study, specific applicable image analysis algorithms had to be chosen to implement and test a prototype computer vision online pellet size sensor. The testing would then be based on available footage obtained either from the site visits, or during testing on the lab-scale set-up operated in the laboratory. The Process Monitoring and Systems research group at the Process Engineering Department was identified as a partner to be involved in the design, testing and validation of the prototype computer vision online pellet size sensor algorithm. Following the successful design and testing of the prototype algorithm, the online pellet size sensor had to be altered to include problem-specific modifications to the software. The final step in the study was to develop the conceptual monitoring and control framework to control the pelletizing plant. Once again, it was very important that the process to be controlled was well understood in terms of its parameters and restrictions. Current process capabilities had to be investigated in order to determine whether existing systems could be incorporated within the control strategy. Also, the results of the online pellet size estimation and the available process variables had to be included in the development and testing of the control framework.

4.3 SOFTWARE USED FOR EXPERIMENTAL PROCEDURES

MATLAB© and Simulink®, developed by The MathWorks, Inc., were the two main software packages used for algorithm development and testing in this study. These two packages were chosen due to their wide application and regular use in Computer Vision applications, as noted in the Literature Study. The integration of MATLAB© and Simulink® and the various add-ons associated with these packages provides the user with a wide range of image processing tools and functions to aid in a wide range of Image Processing and Computer Vision applications. Within these two packages, the functionality of the Image Processing Toolbox™ and the Computer Vision System Toolbox™ add-ons was used to develop the algorithms used in this study. These two add-ons were used since they offer a wide range of reference-standard algorithms, functions and applications, and aid in the design and simulation of computer vision and video processing systems for image processing, analysis, visualisation and algorithm development. The second factor considered was the availability of these packages within the Process Engineering Department at Stellenbosch University; adequate processing and computing power, as well as programme support, was available within the Department. In addition to the abovementioned software used to develop and test the algorithms, two other software packages were used during the data analysis part of the study. Microsoft Corporation's Microsoft Excel spreadsheet software was used to store, consolidate, filter and analyse the output data obtained from the development and testing of various algorithms with MATLAB© and Simulink®, as mentioned above. In the case of video footage


being used for test data, Windows Live Movie Maker was used to trim the videos, cutting out unwanted parts of the footage.

4.4 HARDWARE USED FOR DATA ANALYSIS AND DATA PROCESSING

A PC owned by the Department, acquired specifically for this study, was used for all the software functions stated in the previous section. After acquisition or capturing of test footage, the data was transferred from the various image acquisition hardware or storage devices onto the PC. It was then rendered into a suitable form, as stated in the previous section, and analysed and manipulated as required. The PC used for this study had the following basic specifications:

 Windows 7 Professional operating system
 Intel® Core™ i7 processor
 500GB hard drive
 4GB RAM


5 PHASE 1 – DEVELOPMENT OF CONCEPTUAL PELLET SIZE DISTRIBUTION ESTIMATION ALGORITHM

In generating problem specific algorithms it is important to keep in mind the ultimate outcome that has to be achieved. Along with the ultimate goal, problem specific environmental factors and characteristics should be considered continuously to ensure a solution that addresses the problem in its entirety. With these two statements in mind, the following summary was developed as the objectives and aims for preliminary algorithm development, i.e. development of a conceptual pellet size distribution estimation algorithm:

 Ultimate goal:
  o With regards to Phase 1, to develop an algorithm capable of determining the particle size of a spherical object.
 Sequential, secondary objectives:
  o Acquire a sharp, high contrast image.
  o Eliminate any image noise.
  o Enhance image contrast to aid in segmentation.
  o Perform accurate and effective segmentation of the image (partitioning into objects).
  o Determine particle size from segmented objects.
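The secondary objectives amount to a classical segmentation pipeline. The thesis implemented it with MATLAB's Image Processing Toolbox; purely as an illustrative sketch of the same steps (assuming NumPy and SciPy, with a crude global threshold and a hypothetical pixel-to-mm scale), the flow might look as follows:

```python
import numpy as np
from scipy import ndimage

def estimate_sizes(gray: np.ndarray, mm_per_px: float = 1.0) -> list:
    """Estimate equivalent-circle diameters (mm) of dark objects on a
    light background in a greyscale image (values 0-255)."""
    smoothed = ndimage.median_filter(gray, size=3)   # eliminate image noise
    binary = smoothed < smoothed.mean()              # crude global threshold
    labels, n = ndimage.label(binary)                # segment into objects
    areas = ndimage.sum(binary, labels, index=list(range(1, n + 1)))
    # size each object via its equivalent-area circle diameter
    return [2.0 * np.sqrt(a / np.pi) * mm_per_px for a in areas]
```

A production algorithm would replace the global threshold and plain connected-component labelling with the more robust enhancement and delineation operations discussed in Chapter 3, but the sequence of steps is the same.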

The rest of the chapter presents a discussion of the methodology executed to achieve the abovementioned goals.

5.1 OUTLINE OF EXPERIMENTAL PROCEDURE

The first step of the experimental procedure entailed obtaining footage of spherical objects on which to test the effectiveness of the conceptual particle size estimation algorithm. The footage had to have the qualities listed above. As shown in the Literature Study, the various settings of the image capturing device, such as exposure and shutter speed, play a vital role in the adequacy of footage for analysis by a DIP algorithm. Therefore, various runs of the footage gathering procedure, called test runs, were done using various camera settings within the limits discussed in the Literature Study. Each of these runs would then be analysed by the same algorithm in order to establish the effects the various parameters have on the effectiveness of the algorithm. In terms of algorithm development and testing, each test run would be used as input to various algorithms consisting of combinations of different procedures and techniques within the DIP field. Using various performance metrics, the various combinations of procedures could be evaluated in order to develop a suitable conceptual algorithm for particle size estimation. In addition to tests conducted on simulated pellet footage, tests were also to be done on actual process footage obtained from Bokamoso Pelletizing Plant, gathered during Site Visit 1.

5.2 HARDWARE AND MATERIAL CONSIDERATIONS

In terms of the hardware and software used for Phase 1, it is noted that the aim was to familiarise the author with the concepts of computer vision and particle size estimation.


Therefore, the most commonly used and applicable computer vision algorithms were initially utilised to develop a conceptual pellet size distribution estimation algorithm.

In addition, the type of footage on which the algorithm would initially be validated was not intended to exactly resemble the process of interest, i.e. the pelletizing process at Bokamoso Pelletizing Plant. The focus was rather on obtaining images that would allow for elementary particle size distribution estimation, with the images at least containing objects resembling the objects of interest, i.e. ferrochrome pellets. However, the motion of the objects of interest on which the final product of the study was to be implemented had to be replicated. For Bokamoso Pelletizing Plant, initial considerations, stemming from correspondence with personnel at the process of interest (Gloy 2015) and the literature study of Chapter 2, concluded that a free-falling pellet stream would be the most likely scenario for the motion of the imaged pellets. Indeed, literature suggests that this mode of transport of the objects of interest has advantages over other modes, such as a reduced occurrence of particle overlap. It was therefore decided to have free-falling objects of interest, falling in front of a uniformly coloured background, as a replica of the system of interest.

5.2.1 Imaging hardware requirements

For the purposes of Phase 1, it was not considered important to use high-end imaging hardware to acquire test footage. Off-the-shelf equipment that was readily available within the Department was chosen for this purpose. Image acquisition equipment specifications:

- Footage was acquired with a Canon Legria HFM 36 digital video camera
- Optical sensor size of 1/4″ (3.2 x 2.4 mm)

5.2.2 Additional hardware requirements

For the purposes of Phase 1, no additional lighting was utilised. The lighting in the room was considered adequate for initial testing. In terms of computing and processing hardware on which to run the algorithm, the PC mentioned in Section 4.4 was utilised.

5.2.3 Experimental Set-up

The Canon Legria was set up to be stationary in front of a uniform white background. The white background was made using standard white paper set up against a wall. Pellets would be imaged as they were dropped manually out of a container, in front of the white background. The image capturing zone only included about half of the area of the white background, thus ensuring that the only changes in intensity captured would be the dropped pellets. The distance between the camera lens and the uniform background was about 30 cm.

5.2.4 Test material considerations and preparations

Multi-coloured, spherical, candy-coated peanuts with a mean diameter of ±11 mm were used to simulate ferrochrome pellets similar to those produced at Bokamoso Pelletizing Plant. These simulated pellets were considered adequate for initial algorithm development purposes because their shape and size characteristics resemble those of actual ferrochrome pellets: from initial correspondence with Gloy (2015), the actual pellets have a mean diameter of ±12 mm and a spherical shape. In addition, these objects were readily available.


5.3 EXPERIMENTAL PROCEDURE FOR ACQUIRING TEST FOOTAGE

5.3.1 Execution of test runs

To acquire video streams for analysis by the conceptual algorithm, 136 simulated pellets (candy-coated peanuts) were dropped manually and filmed during each run. Pellets were made to free fall from a height of ±1 m above the image capturing zone, dropping in front of the white background. The camera was set to record, after which pellets were manually dropped. After all the pellets were dropped, the camera was stopped. Each test run video was then edited using standard editing software, so that only the footage showing dropping pellets was imported into the algorithm for analysis. Camera settings used during image acquisition:

- Frames per second (fps): 25 fps
- Shutter speed: varied between 120, 250, 500, 1000, and 2000
- Focal length: 4.1 mm
- Frame/image size: 960 x 720 pixels
- Camera zoom: normal zoom, with the camera lens located 30 cm away from the falling objects

Since tracking of pellets was not considered for this study, the frames per second setting was not considered an important feature of the camera, and the default setting was therefore chosen. Shutter speeds were varied because literature findings were not conclusive on which shutter speed would produce adequate results. Since objects would be dropped relatively close to the camera lens, a focal length of 4.1 mm was chosen. Finally, no zoom was applied, due to the close proximity of the objects of interest to the camera lens.

5.4 ALGORITHM DEVELOPMENT

5.4.1 Development and application of algorithm

Simulink®, a graphical programming environment for the modelling, simulation and analysis of dynamic systems (The MathWorks Inc. 2014), provided the block diagramming interface used to construct any desired DIP procedure to be tested. The steps that made up these procedures or algorithms were represented by specific blocks from the Simulink® library of functional blocks, connected within the workspace to form the desired DIP algorithm. The Simulink® workspace allowed video or still images to be imported into the algorithm from the hard drive of the computer. The footage to be analysed would therefore be imported into the algorithm using one of the functional blocks from the Simulink® library, and then analysed by executing the remainder of the steps in the algorithm. The output was then written to an Excel document for further analysis at a later stage.

5.4.2 DIP techniques employed in algorithm

With reference to Figure 5 in Chapter 2 (of the various stages of a DIP procedure), Table 14 gives a summary of the methods, techniques and tools used to develop the algorithms evaluated during Phase 1 of the study. The table is followed by a brief description and explanation of the mathematical principles, procedures and transformations used in the various techniques and tools. In addition, a short description is given of the mechanisms of the Simulink blocks used for each technique.


Table 14: A summary of the techniques used for algorithm development.

Processing stage | Desired effect | Specific technique | Simulink block used
Image Acquisition | Import image data into algorithm | - | From Multimedia File
Pre-Processing | Intensity conversion | R'G'B' to intensity conversion | Color Space Conversion
Pre-Processing | Noise reduction | Median filter | Median Filter
Pre-Processing | Contrast enhancement | Contrast stretching | Contrast Adjustment
Pre-Processing | Contrast enhancement | Histogram equalization | Histogram Equalization
Image Processing | Segmentation | Thresholding | Autothreshold
Image Processing | Segmentation | Edge detection | Edge Detection
Image Analysis (Computer Vision) | Object recognition: object counting | Blob analysis | Blob Analysis
Image Analysis (Computer Vision) | Object analysis: size determination | Blob analysis | Blob Analysis

5.4.2.1 Filtering

From the literature study it was concluded that image noise should be removed or reduced before any other operation is conducted on an image. Noise filtering tests were conducted using the median filter. Tests were run to determine its effectiveness in removing image noise. Furthermore, tests were run to evaluate the influence of the size of the filter mask on particle clarity after subsequent enhancement procedures.
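The median filtering step described above can be sketched in plain NumPy. This is an illustrative re-implementation of the principle only, not the Simulink Median Filter block; the zero padding mirrors the "Constant" padding option with pad value 0 listed in Table 15:

```python
import numpy as np

def median_filter(img, k=3):
    """Apply a k x k median filter by sliding a window over a zero-padded image."""
    pad = k // 2
    padded = np.pad(img, pad, mode="constant", constant_values=0)
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A flat grey patch corrupted by a single impulse-noise ("salt") pixel
noisy = np.full((5, 5), 100, dtype=np.uint8)
noisy[2, 2] = 255                    # impulse noise
clean = median_filter(noisy, k=3)    # the impulse is replaced by the local median
```

As with the Simulink block, the mask size `k` controls the trade-off between noise suppression and the distortion of object edges.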

5.4.2.2 Image enhancement

Contrast Adjustment

Contrast adjustment was generally used to increase the overall image contrast, with the main objective of increasing the contrast between the simulated pellets and the background in the image. Increasing the contrast in this way would aid the subsequent image processing step, segmentation. Tests with this operation were run in conjunction with median filters and edge detectors. Tests were run to determine the effects of contrast adjustment on the image histogram, to compare Contrast Adjustment with Histogram Equalisation, and to determine the effects of Contrast Adjustment on correctly identifying objects in the image.
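The linear contrast stretching operation can be sketched as follows. This is a minimal NumPy illustration of the principle, not the Simulink Contrast Adjustment block; the manual range [0.1 1] corresponds to the option listed in Table 15:

```python
import numpy as np

def contrast_stretch(img, in_range=None, out_range=(0.0, 1.0)):
    """Linearly map intensities in in_range to out_range, clipping values outside."""
    img = img.astype(float)
    lo, hi = in_range if in_range is not None else (img.min(), img.max())
    scaled = (np.clip(img, lo, hi) - lo) / (hi - lo)
    return out_range[0] + scaled * (out_range[1] - out_range[0])

dark = np.array([[0.10, 0.15],
                 [0.20, 0.30]])                       # a low-contrast, dark image
full = contrast_stretch(dark)                         # stretch over the full input range
part = contrast_stretch(dark, in_range=(0.1, 1.0))    # manually selected range [0.1 1]
```

Stretching over the full input range maps the darkest pixel to 0 and the brightest to 1, which is what emphasises small intensity differences between pellet and background.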

Histogram Equalization

Histogram Equalization is another method of increasing the contrast within an image and between the objects in an image. Tests were run to determine the effects of this method on the image histogram and on the overall contrast of the image. Furthermore, tests were run in parallel and in series with the contrast adjustment operation, in order to compare the two operations as well as to determine their influence on one another.
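The classic histogram equalization transform, which maps each grey level through the normalised cumulative histogram, can be sketched as follows. This illustrates the principle rather than reproducing the Simulink Histogram Equalization block:

```python
import numpy as np

def equalize(img):
    """Map each 8-bit grey level through the normalised cumulative histogram (CDF)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()            # smallest non-zero cumulative count
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]

# A dark, low-contrast image: grey levels cluster around 50
img = np.array([[50, 50], [51, 52]], dtype=np.uint8)
eq = equalize(img)   # output grey levels are spread over the full 0-255 range
```

Note that the transform redistributes whatever grey levels are present, which is why it can amplify background/object confusion when the histogram has no distinct object peak.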

5.4.2.3 Segmentation

Segmentation of images was done by two main processes: Thresholding and Edge Detection. The first was implemented by utilising Otsu's method, which calculates the optimal threshold value by maximizing inter-class variance. The second process was mainly done using the edge detection algorithms developed by Sobel and Canny. The main purpose of these tests was to effectively segment the image into the objects contained in it, in order to establish the size distribution of the objects in the images. The effectiveness of the Autothreshold Simulink block was tested by providing as inputs images with varying contrasts and varying object sizes, with the aim of testing the effects of these differences on the results of the thresholding operation.
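Otsu's method, as used by the Autothreshold block, can be sketched as an exhaustive search for the threshold that maximises the between-class variance. This is an illustrative NumPy version, not the block's implementation:

```python
import numpy as np

def otsu_threshold(img):
    """Return the threshold that maximises between-class variance (Otsu's method)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                      # grey-level probabilities
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class weights below/above t
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0          # class means
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal test image: dark background (20) with one bright object (200)
img = np.full((10, 10), 20, dtype=np.uint8)
img[3:7, 3:7] = 200
t = otsu_threshold(img)
binary = img >= t            # binary segmented image
```

For a clearly bimodal histogram the threshold lands between the two modes; the tests described above probed how low contrast (modes close together) degrades this separation.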

5.4.2.4 Determining Object Size

The Blob Analysis block in the Simulink library was used to identify individual objects and calculate the size of the various objects detected. The Blob Analysis block can be used to calculate various statistics for labelled regions in a binary image. The input to the block was a binary thresholded image. The outputs are various matrices containing characteristics of the objects identified, including object area, the bounding box containing an object, the centroid of the object, and the number of objects (blobs) detected in every frame. The blob or object size is given in pixel area, i.e. the number of pixels the object occupies in the image. The dimensions of the bounding box are given in pixel height and pixel width. The accuracy and consistency of the Blob Analysis was tested mathematically and empirically. The first was done using a conversion from image dimensions to real-world dimensions. The second was done by comparing the results of the simulations to the results of a physical analysis of the simulated pellets.

Algorithm-specific parameters tested and evaluated

Table 15 contains a summary of the various image processing and analysis categories and techniques tested and implemented during Phase 1 of the algorithm development. The parameters and the different values tested for each parameter are also given.

Table 15: The image processing and analysis categories and techniques tested and implemented during Phase 1 of algorithm development.

Image processing and analysis area | Method | Parameters and parameter value(s)

Filtering | Median filtering:
- Mask neighbourhood size: 3x3, 4x4, 5x5, 8x8
- Output size: Same as input port
- Padding options: Constant
- Pad value: 0

Image Enhancement | Contrast Stretching:
- Input intensity range [min max]: Full input range; manually selected range [0.1 1]
- Output intensity range: Full data type range [0 1]

Image Enhancement | Histogram Equalization:
- Target histogram: Uniform
- Number of bins: 256, 64, 32, 16

Segmentation | Thresholding:
- No user input (Otsu's method)

Segmentation | Edge Detection (Canny):
- User-defined threshold: Unchecked (computer-generated threshold)
- Approximation percentage of weak edge and non-edge pixels: 70
- Standard deviation of Gaussian filter: 1

Segmentation | Edge Detection (Sobel):
- User-defined threshold: Unchecked (computer-generated threshold)
- Threshold scale factor: 4

Segmentation | Morphological analysis (Opening):
- Structuring element shape: Disk
- Structuring element size: 15, 10, 5, 3

Particle analysis | Blob analysis:
- No user input

5.4.3 Validation and Representation of results

In terms of the outputs of the algorithms developed during Phase 1, two main criteria would be used to determine the effectiveness of the algorithms: firstly, the ability to correctly identify and delineate particles within the test footage, and secondly, the accuracy in estimating the size of the identified particles. In terms of size estimation accuracy, the outputs of the algorithms were to be validated in two ways, mathematically and empirically. Mathematically, Equation 46 was to be used to relate actual particle size and fixed reference dimensions to estimated particle size. Empirical validation referred to the estimated size being validated against actual measurements of the particles under consideration. The results of these analyses were to be represented using suitable graphs that would relate the individual estimations and overall estimation results to the actual size of the particles.

Distance to object (mm) = [Focal length (mm) × Real object height (mm) × Image height (pixels)] / [Object height (pixels) × Sensor height (mm)]

Equation 46: The formula that relates real-world quantities to the digital quantities of images

5.5 DATA ANALYSIS

The design philosophy of the conceptual algorithm was primarily to evaluate basic IA techniques and procedures. These included:

- Image filtering: removing image components that negatively affect the operation of subsequent DIP and IA procedures.
- Image enhancement: altering image components to emphasise components of interest that aid the effective operation of subsequent DIP and IA procedures.
- Image segmentation: partitioning an image according to a defined criterion in order to delineate objects of interest.
- Particle analysis: applying specific procedures in order to extract desired information from the image.

With the above in mind, the following two sections provide the insights gathered from analysing the test footage with the conceptual pellet size estimation algorithm.


5.5.1 Discussion of results

5.5.1.1 Filtering

Filtering with the median filter is an effective way of reducing and/or eliminating noise in an image. Figure 17 shows the results of applying an edge detection algorithm (Sobel edge detector) to an unfiltered and to a filtered version of the same image respectively. The image on the right had a median filter of size 3 x 3 applied to it before edge detection. The difference between the two results is evident: noise is clearly visible in the middle image (white circles), while very little or no noise is visible in the image on the right.

Figure 17: (left) Original image, (middle) Edge detected without prior filtering, (right) Edges detected after filtering with a median filter of size 3 x 3.

Furthermore, it was concluded that for the images processed, a median filter of size 3 x 3 worked best in reducing noise and smoothing the image. The bigger the filter, the bigger the distortion of objects in the image, consequently resulting in sub-optimal edge detection. Smaller filters generally achieved the desired smoothing and noise elimination whilst preserving most of the edge details and some of the image sharpness.
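The Sobel edge detection used in Figure 17 amounts to convolving the image with two 3 x 3 gradient kernels and taking the gradient magnitude. A minimal sketch follows (the thresholding of the magnitude image performed by the Simulink Edge Detection block is omitted, and borders are simply left at zero):

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude computed with the 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient
    ky = kx.T                                                         # vertical gradient
    h, w = img.shape
    gx, gy = np.zeros((h, w)), np.zeros((h, w))
    f = img.astype(float)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = f[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

# A vertical step edge: the gradient magnitude peaks along the edge
step = np.zeros((5, 6))
step[:, 3:] = 100.0
mag = sobel_magnitude(step)
```

Any residual impulse noise produces isolated gradient peaks, which is exactly the effect the preceding median filter suppresses.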

5.5.1.2 Image enhancement

With inadequate lighting being an obstacle to effective image segmentation, techniques such as histogram equalisation and contrast stretching were employed to increase the overall image contrast and so aid image segmentation.

Contrast stretching

Looking at Figure 18, it is evident that in terms of visual appearance, contrast stretching works well on images with fairly low intensity values. The linear 'stretching' of the image intensities emphasises small differences in intensity values very well. The associated image histograms shown below each of the intensity images clearly illustrate the stretching effect. The bottom-right Contrast Stretched histogram contains intensities across the entire spectrum.


Figure 18: The effect of contrast stretching: (left) median filtered image, (right) contrast enhanced image. The bottom row contains the histograms associated with the intensity images directly above them.

However, thresholds applied to images obtained using this method tend to produce unsatisfactory results in terms of segmenting an image. Subsequent tests showed that the thresholding operation can 'miss' objects, sometimes resulting in non-segmented images. This can be explained by the absence of distinct peaks in the image histogram. This result is further elaborated on in Section 5.5.1.3.

One way to combat the abovementioned segmentation problem is by only stretching the contrast of the intensity region of interest. Effectively, the background intensities are filtered out, in this case being the dark, lowest intensities. Figure 19 shows a Contrast Stretched image where such an operation has been applied. In this case, the range of lowest intensities was determined by visually inspecting the image histogram, and the range over which contrast stretching was applied was manipulated manually. Future work could look at ways to automatically filter out these background intensities. It is shown in Section 5.5.1.3 that thresholding applied to images enhanced in the abovementioned manner produces much better segmented images.

Figure 19: Contrast Stretching (adjustment) over only the highest intensities (low intensities eliminated). (top left) The median filtered intensity image, (top right) the Contrast Stretched intensity image, with stretching done over the intensity range [0.1 1].


Histogram equalisation In order to compare the effects of histogram equalisation to contrast stretching, Figure 20 shows the results of enhancing an input image with histogram equalization and contrast enhancement respectively, in order to increase the contrast between the simulated pellet and the background. The histograms of each image are displayed below the intensity images. It should be noted that due to the difference in intensity range present in the intensity images below, the scales of the associated histograms differ accordingly. Figure 21 shows the relevant block diagram, used to obtain the results of Figure 20.

Figure 20: (top left) Median filtered image, (top middle) Histogram Equalisation used to increase image contrast, (top right) Contrast Stretching used to increase image contrast. The images in the bottom row are the image histograms associated with the intensity image directly above it.

Figure 21: The Simulink model (block diagram) used to obtain the results shown in Figure 20.

It is evident from the images that Contrast Adjustment works much better than Histogram Equalisation in achieving the abovementioned objective. Due to poor lighting, the images are dark, and the contrast between pellet and background is very low.


Figure 22 is a representation of the image histogram obtained after applying Histogram Equalisation to the input image.

Figure 22: Histogram of a Histogram Equalised image with y-axis maximum limit at 150 000

The outcomes shown in Figure 20 and Figure 22 are explained by recalling the method each technique uses to enhance contrast. Histogram Equalization aims to distribute pixels evenly over the whole intensity range; the aim is to transform the image so that the output image has a flat histogram. This action, however, amplifies the lack of contrast by not discriminating efficiently between background and object. This result is clearly visible in the discrete pixel groupings in Figure 22. On the other hand, Contrast Adjustment adjusts pixel values linearly, emphasising the small differences in intensity of the objects. This leads to better discrimination between object and background.

5.5.1.3 Image segmentation

Thresholding using Otsu's method

To further illustrate the significance of applying median filtering to an image before any other processing procedures, a test was conducted in which the thresholding of a median-filtered image was compared to the thresholding of the original, unfiltered image. The results showed that the median filter eliminated the effect of random noise that caused irregularities in the objects identified; such noise sometimes led to an object being split and segmented into two separate objects. The median filter ensured that the noise was smoothed out, allowing for more accurate segmentation.

5.5.1.4 Particle analysis Figure 23 provides an example of a Simulink model that incorporates a Blob analysis block.


Figure 23: Example of a Simulink model implementing Blob Analysis.
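The core of such a Blob Analysis step, labelling connected foreground regions and reporting area and bounding-box statistics, can be sketched in plain Python. This is a simplified 4-connected flood-fill labelling for illustration, not the Simulink block's implementation:

```python
import numpy as np

def blob_analysis(binary):
    """Label 4-connected blobs via flood fill; return per-blob area and bounding box."""
    labels = np.zeros(binary.shape, dtype=int)
    blobs, next_label = [], 1
    for si in range(binary.shape[0]):
        for sj in range(binary.shape[1]):
            if binary[si, sj] and labels[si, sj] == 0:
                stack, pixels = [(si, sj)], []
                labels[si, sj] = next_label
                while stack:                       # flood fill one blob
                    i, j = stack.pop()
                    pixels.append((i, j))
                    for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                        if (0 <= ni < binary.shape[0] and 0 <= nj < binary.shape[1]
                                and binary[ni, nj] and labels[ni, nj] == 0):
                            labels[ni, nj] = next_label
                            stack.append((ni, nj))
                rows = [p[0] for p in pixels]
                cols = [p[1] for p in pixels]
                blobs.append({"area": len(pixels),                       # pixel area
                              "bbox_height": max(rows) - min(rows) + 1,  # pixels
                              "bbox_width": max(cols) - min(cols) + 1})  # pixels
                next_label += 1
    return blobs

# Two square "pellets" in a binary frame
frame = np.zeros((10, 10), dtype=bool)
frame[1:4, 1:4] = True      # 3x3 blob
frame[6:8, 6:8] = True      # 2x2 blob
result = blob_analysis(frame)
```

As in the Simulink block, the area is the number of foreground pixels per blob and the bounding-box height is the quantity used as 'Object height' in the validation below.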

A mathematical validation of the Blob Analysis block is done by using a formula that relates the distance to an object to the circumstances under which the image was taken. This equation is given in Section 5.4.3 as Equation 46, and is repeated here to facilitate the discussion in the rest of the section.

Distance to object (mm) = [Focal length (mm) × Real object height (mm) × Image height (pixels)] / [Object height (pixels) × Sensor height (mm)]

Equation 46: A formula that relates real-world quantities to the digital quantities of images

Specific object tracking was not incorporated in the initial tests conducted for this phase of algorithm development. Therefore, objects measured by the Blob Analysis block cannot be related to individual physical simulated pellets. Considering this, the average height of all the objects measured was used for the 'Object height' parameter in Equation 46. The height of an object corresponds to the height of the bounding box (Chapter 2, Section 3.1.1.4) surrounding a blob. It was chosen to validate the Blob Analysis by comparing the calculated distance to object to the actual distance to object, since the latter distance was accurately known. In proving the validity of the Blob Analysis size estimation, the following quantities were used for the variables in the equation:

- Focal length = 4.1 mm
- Real height of the object = 11 mm (mean value)
- Image height = 480 pixels
- Object height = 28 pixels (random sample taken)
- Sensor height = 2.4 mm
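Substituting the listed values into Equation 46 can be checked numerically; the sketch below is an illustrative calculation only, with variable names chosen here for readability:

```python
# Equation 46 evaluated for distance to object, using the values listed above
focal_length_mm = 4.1
real_height_mm = 11.0       # mean simulated-pellet diameter
image_height_px = 480
object_height_px = 28       # sampled blob (bounding-box) height
sensor_height_mm = 2.4

distance_mm = (focal_length_mm * real_height_mm * image_height_px) \
    / (object_height_px * sensor_height_mm)
# distance_mm evaluates to approximately 322 mm
```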

The result of substituting these values into Equation 46 was 322 mm. This number is very close to the actual distance between the camera and the falling objects, as stated earlier. The error can be attributed to the fact that a mean was used for the real height of the objects; randomly picking a physical object as sample would in this case produce a different answer every time. The validity and accuracy of the Blob Analysis block has nevertheless been verified.

With regard to the counting of objects in a sequence of images, the Blob Analysis performed satisfactorily. The results of the counting procedure of the Blob Analysis were compared to a manual, frame-by-frame analysis of the same sample footage. The manual analysis entailed counting the number of objects in each frame and comparing this number to the number segmented and counted by the Blob Analysis. It was also noted where the Blob Analysis failed to count objects. This occurred mainly due to objects not being segmented, or due to overlapping objects being counted as one. As stated before, the total number of simulated pellets used in the tests was 136. The Blob Analysis block produced a count of 123. All cases of failing to count an object can be attributed to incorrect segmentation of images. Two of the counted objects were the result of the thresholding function splitting objects; therefore, the actual number of objects correctly counted is 121. From this result, it is clear that future work should focus on improving the segmentation of images using a thresholding function. The summarised results of the counting procedure of the Blob Analysis, compared to the manual analysis of the footage, can be seen in Table 16.

Table 16: Results of manual and algorithm analysis of sample footage.

# Frames containing objects: 55
# Objects in frames: 137
# Objects identified: 123
# Objects incorrectly counted: 2
Total # missed: 16

In terms of pellet size estimation, the Blob Analysis also performed satisfactorily. To relate estimated size to actual size, actual pellet sizes were manually measured. Due to a lack of control over the orientation of the falling pellets in the tests, the average diameter was calculated by taking the average of the largest and smallest diameters of the simulated pellets. The results of the size estimation of the Blob Analysis can be seen in Figure 24. The figure displays the estimated pellet width, with the actual minimum, maximum and mean pellet values also indicated on the graph. The actual width and height of the objects identified were calculated using Equation 41. For analysis and comparison procedures, the estimated object width was used, because many blobs touched the upper and lower image borders and consequently did not represent a true pellet height estimation. Due to the wide image frame, this never occurred on the left and right image borders, further supporting the use of the blob width measurements in subsequent analyses. From the graph it is clear that the segmentation algorithm disregards darker edge pixels when segmenting objects, producing smaller estimated blob sizes.


Figure 24: Estimated pellet size of the simulated pellets. Estimated pellet width is plotted in this case.

5.5.2 Important considerations for subsequent project phase

5.5.2.1 External factors affecting algorithm performance

Before any conclusions on the effectiveness of individual and grouped techniques are drawn, some factors external to the algorithms need to be considered. The importance of adequate lighting during image acquisition cannot be overstated. From the earliest tests the need existed to simulate an increase in the reflectance of the objects in the image, as well as to increase overall image contrast. Even though techniques such as histogram matching and contrast stretching were utilised, the lack of adequate contrast in images remained a problem in subsequent processing.

Image capturing settings on the image capturing hardware also need to be addressed and optimised in future work. The effects on images of, and the interaction between, concepts such as shutter speed, aperture and ISO need to be understood and managed in order to obtain crisp, high-contrast images. The experience gained with these concepts during Site Visit 1 provides the platform for better image acquisition in future tests. Along with the concepts mentioned above, image pixel density, measured in pixels per inch (ppi), needs to be increased as far as possible, due to its effect on image sharpness.

5.5.2.2 Future algorithm investigations

From the algorithm development conducted during Phase 1, a broad understanding of a variety of image processing techniques was obtained. This included an understanding of the mathematical concepts these techniques are built on, as well as implementation considerations of the individual techniques and of combinations of various techniques. It was found that the effectiveness of the algorithm could be greatly increased by implementing performance criteria for every test conducted, as well as for every technique employed. This would be done in order to determine its effectiveness, but more importantly,


to have common ground for comparing techniques. Examples exist in the Literature Review of specific performance criteria for specific techniques. The right combination of criteria needs to be established to aid in generating an effective, problem-specific solution.

It was also seen during the Phase 1 tests that incorrect thresholding can lead to the splitting of objects. Addressing this in terms of pre-processing and processing, the results in Section 5.5.1.2 proved that more accurate thresholding can be achieved in low contrast images by eliminating the low intensities associated with the background. Further work could thus be done to refine thresholding based on the image histogram: an algorithm needs to be developed that can identify background pixels more effectively and omit these pixels from further analyses. In line with the previous point, it is important to note that object intensity plays an important role in this regard. A possible solution would be to improve the illumination of the OOI; however, alternative solutions exist, such as the application of alternative intensity transformation functions. Therefore, image contrast enhancement remains an area to be investigated, especially when processing low contrast images such as the ones described in this phase.

One such solution that was investigated is the use of morphological image processing, as mentioned in Section 2.2.5, as a pre-processing procedure. Its effectiveness in smoothing an image, removing background intensities and thus increasing image contrast, especially when non-uniform lighting was present, contributed to better results in terms of image segmentation. It was determined, however, that further work should be done on determining the optimal combinations of preceding processing techniques, as well as the different disk sizes used in the opening operation.

It has been proven that various combinations of techniques lead to satisfactory results in terms of object identification and sizing. However, as can be seen from the results in Figure 20, segmentation can be improved in order to produce more accurate sizing results. This statement refers specifically to the way the algorithms process pixels on the edges of the pellets, where pixel intensities tend to be lower. A possible solution could be the watershed segmentation method (Section 3.1.1.2), first described in Beucher & Lantuejoul (1979). This technique 'grows' an area to segment from an initially determined seed region or spot. Typically, this seed region refers to the highest intensity pixels in a given area. The region is grown along declining intensity levels until another watershed area is encountered. With regard to the Blob Analysis function used for size estimation, the following improvements can be considered:

- Investigating ways to increase the contrast between blobs and background, to prevent the algorithm from splitting blobs.
- Using a size filter that ensures that only blobs above or below a certain size are counted. The average size of the blobs present, as well as the average desired blob size, needs to be known for this implementation.
- Uniting broken blobs by using a median 'smoothing' filter to 'colour in' the gaps between pieces of the same blob.
- Using a Gaussian pyramid, another operation aimed at filling the holes or spaces between blob parts in order to unite separated blobs.
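The size-filter suggestion above can be sketched as a simple post-processing step on the blob areas reported by the Blob Analysis; the area limits and blob values below are purely hypothetical:

```python
def filter_blobs_by_area(areas_px, min_area, max_area):
    """Keep only blobs whose pixel area lies within [min_area, max_area]."""
    return [a for a in areas_px if min_area <= a <= max_area]

# Hypothetical blob areas from one frame: noise specks, single pellets,
# and one oversized blob caused by two overlapping pellets
areas = [3, 310, 295, 12, 640, 305]
kept = filter_blobs_by_area(areas, min_area=150, max_area=450)
```

Such a filter rejects both noise specks and merged blobs, but the limits must be chosen from the expected pellet size distribution, which ties this step back to the sieve analyses.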


Finally, in terms of partially visible particles, work can be done on accurate particle delineation in the case of overlapping objects. Literature suggests methods to estimate the size of partially visible particles by completing the expected shape they represent in the captured image. Similarly, a method has to be established for dealing with objects that are partially obscured by the image borders.

5.6 SITE VISIT 1

The aim of conducting a site visit was, firstly, to obtain first-hand experience of the system around which the study is centred, and secondly, to obtain footage representative of actual pellets-in-process conditions. This footage would be used to develop, refine and customise the pellet size estimation algorithm for a pelletizing process such as the one at Bokamoso Pelletizing Plant. Site Visit 1 was conducted in October 2014, during which the abovementioned information and footage was collected.

5.6.1 Choosing suitable pellet-in-process footage

The pelletizing process needed to be evaluated to identify adequate areas from which suitable pellets-on-transport-medium footage could be obtained. Various sections of the pelletizing process between the actual pelletizing step and the sintering step were evaluated for this purpose. The viability was based on two factors: firstly, the availability of a clear view of the pellets moving through the pelletizing process, and secondly, the accessibility of the specific part of the pelletizing system, specifically in terms of the viability of placing imaging equipment near the specific area in order to capture footage of the process. Three areas were identified from which suitable footage could be obtained: the roller screen, the roller feeder, and the sintered pellet discharge area. Footage of the pellets moving through each section was taken with the cameras described in Section 5.6.2. After initial footage analysis, it was determined that all further footage would be obtained from the roller feeder section, which was considered the only viable option for the collection of suitable footage for pellet size estimation purposes. The decision was based on the following:

 Due to the nature of operation of the pelletizer drum and the varying composition of the pellet slurry entering the pelletizer, there is an inconsistent and very low percentage of green pellets exiting the pelletizing drum that conform to the desired size and moisture content specifications. Consequently a large amount of material that is redirected as rework material is being discharged from the pelletiser drum onto the roller screen. This causes a large percentage of the conforming pellets to be obscured and covered by pellet slurry. The conclusion was made that footage from the roller screen would not be suitable to determine the size distribution of the pellets produced by the pelletizing plant.  The physical layout and operating conditions of the sintered pellet discharge section was judged to be unsuitable for the collection of process footage for size estimation purposes. Firstly, the view of the pellets being discharged from the sintering section was partially obscured by the chute enclosures. This complicated efforts to position the imaging hardware so that a clear full shot of the discharge could be obtained. Secondly, the nature of the discharge was not suitable for size estimation purposes. Pellets moved through the sintering ovens on a metal sheet conveyor, stacked 30cm high. When discharged, large bundles of conglomerated pellets would break loose


from the pellet bed in an avalanche fashion and fall down the chute. This meant that many of the individual pellets could not be seen and isolated, inhibiting effective size estimation. Only after reaching the free-falling stage of the chute had the bundles completely broken up. Finally, without adequate heat-resistant enclosures and insulation, the heat radiated from the sintering ovens and discharged pellets posed a risk to the image capturing devices and the associated lighting equipment. Considering these three points, the sintered pellet discharge section was judged to be unsuitable for capturing pellets-in-process footage.
• In terms of the roller feeder, an evenly spread pellet feed and a favourable size distribution were two of the major factors for the roller feeder being chosen as the favoured section for capturing pellet-in-process footage. The favourable size distribution of these pellets is ensured by the screening effect of the roller screen, as well as final screening on the roller feeder itself. Specific reference is made to the fact that the roller screen rollers are spaced 8mm from each other, enforcing a minimum pellet size specification of 8mm. Furthermore, the pellets that move through this section effectively make up the final product of the pelletizing plant, which also contributed to the roller feeder being the favoured section. By monitoring the size distribution of these pellets, a good indication can be obtained of the size distribution of the pellets produced by the pelletizing plant.

5.6.2 Hardware used to capture footage

Two commercially available cameras, belonging to the PM&S research group at Stellenbosch University, were used to capture the footage: a Canon Legria HFM 36 digital video camera and a Canon EOS 500D. It was arranged with Bokamoso that onsite lighting equipment would be used during footage capturing. The lighting provided consisted of two fluorescent T8 lamps with the appropriate fitting and diffuser, and an LED spotlight of unknown power output. The lighting, however, proved to be inadequate for footage on which image analysis operations are to be conducted. Its effects on the footage will be discussed in the following sections.

5.6.3 Technical specifications regarding capturing of footage

5.6.3.1 Imaging equipment

The following camera parameters were varied and paired in all possible combinations:

Canon Legria

• Shutter speed: 1/125s; 1/250s; 1/500s; 1/1000s; 1/2000s
• Aperture: Auto Adjustment
• ISO: Auto Adjustment
• Recording resolution: 1440x1080 (1.5MP); 1920x1080 (2MP)

Canon EOS 500D

• Shutter speed: 1/400s; 1/500s; 1/640s; 1/800s; 1/1000s; 1/1250s; 1/1600s; 1/2000s
• Aperture: Auto Adjustment
• ISO: 1600; 3200


• Recording resolution: 2352x1568 (3.7MP); 3456x2304 (8MP); 4752x3168 (15MP)
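Pairing the camera settings "in all possible combinations" amounts to taking the Cartesian product of the parameter lists. As a small illustration (Python is used here purely for illustration; the values are those listed above for the Canon EOS 500D, with aperture left on auto adjustment):

```python
from itertools import product

# Settings as listed in the text for the Canon EOS 500D.
shutter_speeds = ["1/400", "1/500", "1/640", "1/800",
                  "1/1000", "1/1250", "1/1600", "1/2000"]
iso_values = [1600, 3200]
resolutions = ["2352x1568", "3456x2304", "4752x3168"]

# Every test-run configuration is one (shutter, ISO, resolution) triple.
test_runs = list(product(shutter_speeds, iso_values, resolutions))
print(len(test_runs))  # 8 shutter speeds x 2 ISO values x 3 resolutions = 48 runs
```

Each of the resulting 48 configurations corresponds to one candidate test run for the footage-capture experiments.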

5.6.3.2 Lighting

In terms of lighting, a maximum illuminance of 32 000 lux could be achieved in the centre of the imaged area, with the lighting equipment placed 1m above the objects of interest. Figure 25 shows one of the lighting and camera set-up configurations used to obtain actual process footage. It shows footage being taken at the roller feeder, with the fluorescent T8 lamps and mounting, the LED spotlight, and the Canon Legria digital camera all visible in the figure.

Figure 25: Camera and lighting configuration used to obtain footage during Site Visit 1.

5.6.3.3 Pellet specifications

Pellets at the roller screen have a size distribution ranging from fines to agglomerated chunks of pellet slurry as large as 200mm in diameter. These sizes varied greatly due to high variation in the pellets produced by the pelletizer drum. Factors such as the water and bentonite content in the pellet slurry, as well as pelletizer drum lining and pelletizer drum angle, affect the pellet size produced (Gloy 2015). The pellets in the roller feeder section had a size distribution of 6-25mm. The rollers were turning at 120rpm. With a roller diameter of 130mm (120mm diameter mild steel roller and 5mm polyurethane coating), the maximum speed of the pellets translated to 0.817m/s. This estimate assumes that pellets stick to the rollers and are propelled forward without any slipping. Pellets travel through the sintering ovens on a metal sheet conveyor known as the sinter belt. After breaking apart from the pellet bed that rests on the sinter belt, pellets free-fall down a discharge chute and are visible through openings in the chute enclosure. Sintered pellets therefore accelerate rapidly for the period in which they are visible through the chute enclosures. Pellets at the sintered pellet discharge were not physically measured to obtain a size distribution at the time of footage capturing. However, sintered pellet samples from Bokamoso showed that the sintered pellets also had a size distribution of about 6-25mm. These samples were gathered for analysis on the simulated pelletizer section, as discussed later in this section.
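The 0.817m/s figure follows from the roller circumference and rotation speed under the stated no-slip assumption. A minimal check (Python, for illustration only):

```python
import math

# Maximum pellet speed on the roller feeder, assuming pellets travel at the
# rollers' surface speed with no slip (the assumption stated in the text).
roller_od = 0.120 + 2 * 0.005            # 120mm steel roller + 5mm coating -> 0.130m OD
rpm = 120
surface_speed = math.pi * roller_od * rpm / 60.0  # circumference x revolutions per second
print(round(surface_speed, 3))           # -> 0.817 m/s, matching the quoted value
```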

5.6.4 Data analysis

Initial tests on the footage indicated that the inadequate lighting mentioned in Section 5.6.2 was a significant factor deterring effective analysis of the footage. The


presence and concentration of shadows and darker regions to one side of the image highlights the negative effect that a lack of illumination from all directions can have on the suitability of images for image analysis purposes. These shadows distort the distribution of image intensity values, making intensity range analyses difficult. Figure 26 is an example of an intensity range image of the onsite footage gathered during Site Visit 1. It shows the intensity value distribution distortion due to inadequate lighting. In this case, the illumination from the light source is concentrated on the left of the image, casting shadows to the right of the objects and creating a dark region to the top right of the image.

Figure 26: Uneven illumination present in onsite footage obtained during Site Visit 1

Based on experience and the literature examined, the problem of inadequate illumination indicated in the above image would be solved with far less effort by using adequate lighting than by computational correction. The decision was therefore taken to postpone the analysis of onsite footage until after Site Visit 2. The scope of that site visit included the testing of a concept computer vision size estimation algorithm on actual process footage. It therefore entailed setting up a concept lighting arrangement to provide adequate lighting for capturing actual process footage suitable for object size estimation.

5.7 CONCLUSIONS DRAWN FROM PHASE 1

It can be concluded that the primary goal of the algorithm development phase was achieved to a satisfactory extent, given the use of only simulated pellet footage. It was shown that the median filter is an effective tool to eliminate the effects of random noise in an image, producing a smoother image and image histogram. This reduction in noise prevents the subsequent contrast enhancement processes from enhancing the noise, which otherwise would have reduced the effectiveness of the image segmentation algorithm. It is clear from the preliminary results that contrast stretching is an effective technique to increase overall image contrast. Its effectiveness when applied to low contrast images can be increased dramatically if a critical intensity range can be identified that highlights the intensities of interest. If these intensities are stretched, improved results can be obtained in the subsequent thresholding procedures. Furthermore, it can be concluded that Otsu's thresholding method isolated objects to a satisfactory extent, accurately isolating almost 90% of the objects in the images. The method, however, reduced the size of the objects by not correctly thresholding entire objects. It is clear that work needs to be done to ensure that edge pixels are included in the thresholded objects. Finally, it can be said that the Blob Analysis technique is an accurate method for counting the number of objects isolated by the thresholding procedure, as well as an effective size


estimation technique. The differences between the estimated pellet sizes and actual pellet sizes can be attributed to the incorrect segmentation of the image prior to size estimation. This is an area of concern and must be investigated.
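The Phase 1 pipeline summarised above (median filtering, contrast stretching, Otsu thresholding, blob analysis) can be made concrete with a small sketch. The thesis work used MATLAB's Image Processing Toolbox; the Python/NumPy version below, with an entirely synthetic test image and assumed parameters, is given only to illustrate the sequence of operations:

```python
import numpy as np

# Synthetic 60x60 image: dark "roller" background with three bright "pellets".
rng = np.random.default_rng(0)
img = np.full((60, 60), 60.0)
img[10:20, 10:20] = 180
img[30:42, 35:47] = 180
img[45:55, 5:15] = 180
img += rng.normal(0, 8, img.shape)        # random sensor noise
img = np.clip(img, 0, 255)

# 1. 3x3 median filter to suppress random noise before contrast enhancement.
padded = np.pad(img, 1, mode="edge")
shifts = [padded[i:i + 60, j:j + 60] for i in range(3) for j in range(3)]
img = np.median(np.stack(shifts), axis=0)

# 2. Contrast stretching to the full intensity range.
img = (img - img.min()) / (img.max() - img.min()) * 255

# 3. Otsu's threshold: maximise between-class variance over the histogram.
hist, _ = np.histogram(img, bins=256, range=(0, 255))
p = hist / hist.sum()
w = np.cumsum(p)                          # cumulative class probability
mu = np.cumsum(p * np.arange(256))        # cumulative class mean
mu_t = mu[-1]
with np.errstate(divide="ignore", invalid="ignore"):
    sigma_b = (mu_t * w - mu) ** 2 / (w * (1 - w))
t = np.nanargmax(sigma_b)
binary = img > t

# 4. Blob analysis: count 4-connected components, report equivalent diameters.
labels = np.zeros(binary.shape, dtype=int)
n = 0
for r in range(60):
    for c in range(60):
        if binary[r, c] and labels[r, c] == 0:
            n += 1
            todo = [(r, c)]
            while todo:                   # flood fill one blob
                y, x = todo.pop()
                if 0 <= y < 60 and 0 <= x < 60 and binary[y, x] and labels[y, x] == 0:
                    labels[y, x] = n
                    todo += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
print(n)                                  # number of blobs found
for k in range(1, n + 1):
    area = (labels == k).sum()
    print(round(2 * np.sqrt(area / np.pi), 1))  # equivalent circular diameter, px
```

The Otsu step here uses the standard between-class-variance formulation over a 256-bin histogram; an equivalent circular diameter is then derived from each blob's pixel area, analogous to the blob analysis measure discussed above.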


6 PHASE 2 – EXPERIMENTAL (LAB-SCALE) VALIDATION AND REFINEMENT OF PELLET SIZE DISTRIBUTION ESTIMATION ALGORITHM

6.1 OUTLINE OF EXPERIMENTAL PROCEDURE

Similar to the procedure used during Phase 1 of the study, the first step entailed obtaining footage of actual chrome pellets on which to test the effectiveness of the particle size estimation algorithm. The footage had to have the qualities listed in Chapter 5. Once again, the various settings of the image capturing device, such as exposure and shutter speed, were varied during different test runs. Each of these runs would then be analysed by the same algorithm in order to establish the effects the various parameters have on the effectiveness of the algorithm. In addition to the simulated pelletizing process footage, actual process footage would also be analysed during Phase 2. This footage was gathered during Site Visit 2, conducted in June 2015, following a similar process to that described above. Once again the settings of the image capturing device were varied in order to analyse the suitability of the various parameters for the pellet size estimation done during this study. In terms of algorithm development and testing, each test run would be used as input to various algorithms consisting of combinations of different procedures and techniques within the DIP field. The algorithms developed and tested in Phase 2 would incorporate and assess the suitability of the recommended alterations stemming from the algorithm analysis results obtained during Phase 1. Using various performance metrics, the various combinations of procedures could be evaluated in order to develop a suitable conceptual algorithm for particle size estimation. Finally, the pellet size fraction imaged during the test runs had to be taken into account. Initially, only footage of the size fraction 13.2 mm < x ≤ 16 mm was used for algorithm analyses. This was done in order to establish a baseline algorithm capable of performing satisfactorily in terms of size estimation accuracy and repeatability.
The algorithm would then be validated by testing on other size fractions, including a mixed size fraction sample.

6.2 HARDWARE AND MATERIAL CONSIDERATIONS

6.2.1 Simulated Pellet-in-process Footage

6.2.1.1 Experimental set-up requirements

Analysing simulated pellet-in-process footage would allow for the development, refinement and testing of the size estimation algorithm and system on footage obtained under controlled conditions, with characteristics similar to the actual process at the Bokamoso pelletizer plant. In order to replicate the actual pelletizer environment, a scale model of a section of the pelletizer circuit at Bokamoso would be built. One of the objectives of Site Visit 1 was to identify candidate areas in the plant that at the time were considered suitable for the implementation and operation of an image analysis-


based pellet size-distribution estimation system. Considering this objective, the following characteristics of a suitable location were identified:

• The pellet sample (the pellet layer visible to the imaging system) is an accurate representation of the population of pellets passing through and being processed at the specific point of viewing, i.e. the entire population of pellets produced by the system.
  o Pellets are distributed evenly on the transport medium (roller or conveyor system).
  o In the case of stacked pellets, the pellets form a shallow bed of material, preferably a single layer of pellets.
• Sample pellets are an accurate representation of the actual pellet production in the plant.
  o The pellet samples being viewed are the pellets that have gone through all the screening steps and are the actual pellets being fed into the furnace of the pelletizer plant. These pellets ultimately form the output of the pelletizer plant.
• Adequate positioning of the imaging equipment for higher quality images that help to simplify the image processing procedure.
  o A camera and lighting system could be installed close enough to the pellets that needed to be observed. Vertical height of the imaging system above the pellets, as well as a perpendicular orientation to the pellets, was of specific concern. The distance between the optical equipment and the observed object influences image clarity, object clarity and definition, and the amount of external lighting needed to highlight and adequately expose the objects of interest.
• Ease of access and implementation of the hardware of the size estimation system.
  o For maintenance and any operational alterations that needed to be made after commissioning of the system.

Taking into account the above characteristics, it was concluded that the roller feeder was the most suitable section of the plant at which a size distribution estimation system could be implemented. The following observations were the major factors that led to this conclusion:

• The pellets that pass over the roller feeder are fairly uniform in size, following prior screening by the roller screen.
• The pellets that pass over the roller feeder are intended to be the final production output of the pelletizer plant, since these pellets are fed into the ovens for sintering. Monitoring the size distribution of these pellets therefore gives a good indication of the pellet size distribution being produced at Bokamoso.
• Pellets are fed evenly onto the roller feeder by a running conveyor, and move over the roller feeder in a single layer. In isolated instances a sudden surge in pellet feed from the moving conveyor results in a shallow, multi-layer pellet bed (two to three layers) moving over the roller feeder. Being random and few, these occurrences are considered insignificant in terms of altering the information that can be drawn about the entire pellet population.
• Imaging equipment can easily be placed perpendicularly over the pellets, ensuring minimal distortion of the true shape of the pellets being viewed by cancelling out any source of image distortion due to angle of view.

Following the above conclusion, the decision was made that a roller system had to be designed and built to replicate the roller feeder installed at Bokamoso. Replicating the roller


feeder would provide the opportunity and means to design, test and validate a pellet size-distribution estimation system that could meet the requirements of the project objectives.

6.2.1.2 Lab-scale set-up

Design Considerations

The main objective with building the roller system was to replicate the roller feeder section of the actual pelletizing process at Glencore's Bokamoso pelletizing plant. With this objective in mind, the simulated roller feeder specifications were chosen to be as close as possible to the actual system specifications. A summary of the specifications of the actual roller feeder at the Bokamoso pelletizer plant is given in Table 17, summarised from the Roller Feeder specifications in Gloy (2015).

Table 17: Actual Roller Feeder Specifications (adapted from Gloy (2015))

Component            | Specification         | Value
Rollers              | Outside Diameter (OD) | 120mm
Rollers              | Material Type         | Mild Steel
Rollers              | Colour                | Mild Steel Grey
Rollers              | Length                | 3800mm
Rollers              | Rotation Speed        | 120rpm (adjustable)
Rollers              | Coating Material      | Polyurethane
Rollers              | Coating Thickness     | 5mm
Entire Roller Feeder | Tilt Angle            | 15° (non-adjustable)

The specifications in Table 17 that are of interest are:

• Rollers outside diameter
• Rollers colour
• Rollers rotational speed
• Roller tilt angle

The coating applied to the rollers, as mentioned in Table 17, prevents green pellets from sticking to the rollers and prevents build-up of agglomerate material on the rollers. Since sintered pellets would be used in the tests conducted on the lab-scale version of the Roller Feeder, these problems would not be present. Therefore the polyurethane coating was not considered as a design attribute for the lab-scale set-up. A second factor that influenced the design specifications was the possibility of incorporating entire or partial existing systems within the Department of Process Engineering at Stellenbosch University into the roller system. Such a system existed in the form of a roller and chain drive system with attached motor and variable speed drive (VSD). Ten 25mm diameter round bars with rubber sleeves, spaced 100mm apart, were used as rollers. These round bars were mounted in pillow block bearings and driven by a chain and sprocket system connected to a 500W 3-phase motor. Ball bearings were employed in the chain drive system as chain tensioners to maintain optimal drive tension. The roller and drive system was mounted on


an iron frame with approximate dimensions of 1.2m x 1.3m, constructed from 40x2mm angle iron. Furthermore, the specifications were chosen while considering the characteristics and requirements of the tests intended to be run on the system. These tests had to be designed to evaluate various operational and performance outputs of both the software and hardware involved in the estimation of pellet size. In terms of hardware, these outputs include capabilities such as the various camera settings. In terms of software and algorithm settings, sensitivity settings and operational neighbourhood sizes are among the settings to test. A final factor that played a significant role in determining the design specifications of the roller system was the dimensions and requirements of the proposed imaging system. This imaging system would be installed above the roller system to capture and process images of the roller system in operation. The results of the image processing and analysis procedure could then be used to determine the size of the pellets moving over the rollers. Significant parameters of the imaging system include an adequate illumination level or intensity of the region of interest, the height of the camera above the object of interest, the area of the object or region it is capable of capturing in an image, and the region of interest of the captured image. The Region of Interest (ROI) defines the area of the captured image on which further image analysis procedures will be conducted. It typically includes the part of the object, the entire object, or the area of the observed scene that is to be analysed. Through correspondence with Van der Bijl (2014) it was established that industry size estimation practices, on objects having a similar size distribution to the chrome pellets being studied for this project, conduct object size estimation on ROIs in the range of 0.25m² to 1m².
Specific reference is made to the study of coal particles and construction aggregate materials. The rest of this section gives an overview of each of the components of the lab-scale Roller Feeder system built at Stellenbosch University, stating the design considerations, technical descriptions and final component specifications.
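For context, the pixel-to-mm calibration implied by a given ROI width and recording resolution can be sketched as follows. All numbers here are hypothetical examples for illustration, not measured values from the study:

```python
# Hypothetical pixel-to-mm calibration for a camera imaging a known ROI width.
frame_width_px = 1920          # e.g. a full-HD frame width
roi_width_mm = 1000            # a 1m-wide region of interest filling the frame
mm_per_px = roi_width_mm / frame_width_px

pellet_diameter_px = 25        # an object measured at 25 pixels across
print(round(pellet_diameter_px * mm_per_px, 1))  # -> 13.0 mm estimated diameter
```

Such a ratio is what allows pixel-based size estimates to be compared against sieve size distributions, as mentioned in the abstract.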

Roller and frame size incorporating specific ROI

From correspondence with Van der Bijl (2014) it was concluded that industrial object size estimation systems use an optimal ROI of between 0.25m² and 1m². With this in mind, the decision was made to build a roller system big enough to allow for a ROI of at least 1m². In order to achieve a ROI of these dimensions, the roller area of the simulated roller feeder was designed to have an area of at least 1.2m², with each dimension a minimum of 1m. The rollers at Bokamoso are 120mm OD mild steel round tubing (Table 17), giving a total OD of 130mm when the 5mm polyurethane coating is taken into account. Taking this OD into account, eight rollers could be fitted to the existing roller frame mentioned above. When considering industrially available tubing dimensions, it was noted that ranges included 118mm and 127mm outside diameter (OD) tubing, in a variety of iron alloys and other metal alloys. The 118mm diameter tubing was chosen as adequate for the specific application, despite the 127mm OD being a closer replication of the 130mm OD of the coated rollers. The main reasoning behind the choice was the number of rollers that could be fitted on the existing roller frame: using 118mm OD tubing would allow nine rollers to be fitted on the frame with a maximum spacing of 8mm between the rollers. It was reasoned that the


addition of rollers would aid in the spreading effect of the pellets over the rollers, better replicating the actual roller feeder operational characteristics. Considering the dimensional needs of the simulated roller system stated above, the existing roller system proved to be a viable candidate for use as a base on which to develop the simulated system. After initial consideration by the Department's workshop personnel, the following was decided:

• The existing steel frame would be used as the mounting frame for the simulated Roller Feeder's components. Its dimensions allowed for a roller area producing a ROI of 1.56m². Furthermore, the frame dimensions allowed at least nine rollers of 118mm OD to be mounted on it, separated by 8mm spacing or less.
• The 25mm diameter round bars could serve as axles for larger rollers. The decision was taken to increase the spacing between the round bars and mount round tubing onto the bars to obtain rollers similar to those on the actual Roller Feeder at the Bokamoso pelletizer plant.
• Existing pillow block bearings would be employed as mountings due to their favourable ability to allow axial translations.
• The existing chain, sprockets and chain tensioner bearings would be employed as part of the drive system for the new roller system.
• A motor would be sourced from the motor stock in the workshop. The motor had to be capable of turning the rollers at a minimum of 120rpm under a loading of at least 20kg of pellets.
• The variable speed drive of the existing roller system was capable of driving a 0.55kW motor. Since it was in good working condition, it was considered a viable option, depending on the output capability of the chosen motor.

Drive system

A drive system needed to be sourced and developed that would be capable of producing a power output large enough to drive the roller system described above, along with a pellet load representative of the pellets on the actual Roller Feeder at the Bokamoso pelletizer plant. One option was to achieve the required power output capacity with a direct drive configuration, which refers to the use of a motor with a power output capacity larger than the load required to drive the entire system. Alternatively, the concept of Mechanical Advantage could be utilised. This method entails the use of a gear system attached to the drive motor, and intuitively requires a motor with a smaller power output capacity, since a gear system can increase the output power the drive system is capable of producing through the principles associated with Mechanical Advantage. An evaluation of available motors within the Department was conducted to find a suitable drive for the lab-scale roller feeder. A 0.55 kW motor with attached gearbox was identified as the best suited. Initial trial-and-error tests with this motor attached to the completed frame and roller system, using a sample pellet load of about 20 kg, proved the 0.55 kW motor to be adequate as a drive for the lab-scale roller feeder system. Considering the motivation to avoid unnecessary project expenditure, the decision was taken to use this motor as the drive for the lab-scale roller feeder system.

Lighting

Discussions with Van der Bijl (2014) and Petersen (2015) determined that the layout of the lighting equipment should be such that the region of interest is illuminated from all


angles and as evenly as possible. This typically aids the particle delineation procedure considerably. The intensity of the light that reaches the OOI, measured in lumens (lm), also plays a vital role. This is especially applicable where objects are imaged while in motion: in order to prevent motion blur, moving objects have to be imaged using very short exposure times, which in turn requires large amounts of illumination in order to produce the correct amount of exposure, colour rendering and image contrast. It is cited that for imaging OOI moving at speeds of up to 3m/s, while using exposure times of 1ms and 2ms, industry practice includes utilising illumination sources capable of emitting 10 000 lumens (Van der Bijl 2014). This measure is dependent on the type of lighting (LED, fluorescent, halogen, etc.), as well as the wattage of the lighting equipment. Finally, the Colour Temperature, measured in Kelvin, plays an important role in the lighting equipment's ability to render the colour of the OOI correctly. This is also referred to as the equipment's Colour Rendering Index. It was concluded from the two abovementioned sources that a daylight Colour Temperature, typically around 4 500 Kelvin, is ideal to give an accurate rendering of the colour of the OOI. Discussions with Lamphouse (2014) concluded that high wattage Metal Halide or LED lamps would be best suited to achieve the desired lumen output and colour rendering. However, these two options were very expensive at the time of the construction of the lab-scale testing set-up. A suitable alternative at a fraction of the price was the T5 16mm tube lamp or fluorescent channel. Its Colour Temperature of 6 500 Kelvin (daylight Colour Temperature), which produces a very suitable and almost true colour rendering of the OOI, as well as its output of 4 750lm per tube at a wattage of 54W, was deemed adequate for the purposes of this study.
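As a rough sanity check of such a lighting design, the illuminance over the ROI can be estimated from the total luminous flux. The utilisation factor below is an assumed value, not a measured one; the tube count and per-tube output are those quoted above:

```python
# Rough illuminance estimate for a 4-tube T5 array over a ~1 m^2 ROI.
# Illuminance (lux) = luminous flux reaching the surface (lm) / area (m^2).
tubes = 4
flux_per_tube = 4750            # lm, T5 54W tube output quoted in the text
utilisation = 0.5               # ASSUMED fraction of flux reaching the ROI (reflectors, losses)
area = 1.0                      # m^2, the illuminated region
lux = tubes * flux_per_tube * utilisation / area
print(int(lux))                 # -> 9500 lux under these assumptions
```

Even under this conservative utilisation assumption, the array comfortably exceeds typical room-lighting levels, which is consistent with the short exposure times required for moving objects.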

Imaging hardware

In terms of imaging equipment, two commercially available (off-the-shelf) cameras were used to capture the footage. These cameras belong to the PM&S research group at Stellenbosch University and were thus easily accessible and readily available for conducting tests. The first was a Canon Legria HFM 36 digital video camera, the same camera used during Phase 1, chosen again due to its satisfactory performance. The second was a Canon EOS 500D, chosen for its ability to take images with much shorter exposure times than the Canon Legria HFM 36, and at much higher pixel density and quality in terms of data encoding (less compression).

Building and assembling the roller feeder lab-scale set-up

The lab-scale roller feeder was built in the workshop of the Process Engineering Department at Stellenbosch University, based primarily on the existing roller system described earlier. The completed roller system can be seen in Figure 27. A feeding tray, hinged on the main body of the roller system, was added to ensure an evenly spread and continuous feed of pellets onto the roller system. Guard rails were added to the sides of the roller system to prevent pellets from running off the rollers during operation. A collection chute was added at the end of the roller system to collect all the pellets after operation. In order to illuminate the entire square roller area optimally (evenly from all angles at the desired lumen output and colour temperature), it was concluded that a square array consisting of four T5 fluorescent tube lights would be adequate and in line with industry practices (Van der Bijl 2014). In addition, bi-focal reflector wings were mounted on the


battens, along with the tube lights, in order to control the spreading of the emitted light. These reflectors effectively focus and concentrate all the emitted light onto the ROI. The lighting arrangement was mounted on a square frame supported 1m above the roller system by four square tubing supports. The height enabled an area of almost 1m² to be adequately and evenly illuminated, and an area of 0.5m² to be imaged as the ROI, as suggested by Van der Bijl (2014). The supports slide within square tubing mountings fitted to the square lighting frame. These mountings are fitted at a 15° angle, ensuring that the source of illumination is kept at a right angle to the roller system's incline. Adjustable lock pins enable the height of the lighting arrangement above the roller system to be adjusted. A square plate was mounted in the middle of the square lighting frame, at the same height as the lighting equipment. This plate would be used to mount the camera and/or any other required image capturing or processing hardware. This configuration would ensure that the image capturing region seen and focused on by the imaging equipment is equal to the optimally illuminated ROI. This plate is visible in Figure 28. Figure 29 shows the lighting configuration when illuminated.

Figure 27: The completed roller system and lighting arrangement.

Figure 28: The lighting arrangement, with the camera mounted on the square plate visible in the centre of the figure.


Figure 29: Lighting arrangement with the lights switched on.

Quality test for correct operation

During initial tests of the roller system, the following observations and necessary adjustments (where possible) were made:

• Due to the use of sintered pellets instead of green pellets, pellets would get stuck between rollers because the pellet shape could not deform as the pellets moved over the rollers. In the actual process at Bokamoso, green pellets deform as they move over and between the rollers. In order to prevent this from happening, rollers were spaced only 1mm apart, instead of 8mm apart as is the case with the actual Roller Feeder at Bokamoso Pelletizing Plant. This was not considered a problem, since the main objective of the lab-scale roller feeder was not to screen pellets, but rather to provide an accurate representation of actual pellet-on-roller conditions.
• A VSD output frequency of ±25Hz corresponded to the actual roller feeder's speed at Bokamoso, i.e. 120rpm. However, due to the lab-scale system's roller diameter being smaller than the actual roller feeder's roller diameter (118mm vs 130mm), an increased rpm, and thus VSD output frequency, was needed to reach a linear roller velocity of 0.817m/s. During initial test runs, the optimal speed of the rollers, with a typical pellet sample loading fed onto the rollers, was determined to be equivalent to a VSD output frequency of 30Hz. This was mainly due to the pellets getting stuck or jammed between the rollers at VSD frequencies lower than 30Hz. With this loading, a 30Hz VSD output translated to ±140 rpm. For the roller diameter of 118mm, this translated to roughly 0.865m/s linear roller velocity, a maximum difference of about 0.048m/s compared to the linear velocity of the actual roller feeder's rollers. Assuming the unlikely scenario of zero slip of the particles on the rollers, this also translates to a maximum difference of about 0.048m/s in pellet velocity. This ±6% difference in velocity was deemed insignificant, and thus the operating VSD output frequency for further tests was chosen as 30Hz.
• It was also noted that the VSD has a ramp-up period in its start-up procedure.
This in order to protect the motor against pulling excess current that might damage the motor. Therefore, it was noted to wait for the rollers to be at optimal rpm, before any pellets would be loaded onto the roller system.  Caution also had to be taken not to load all the pellets at once, as this presented the problem of overloading the roller feeder and could also contribute to jamming pellets between rollers. In both cases the roller system would stop, leading to the test run to 103

Stellenbosch University https://scholar.sun.ac.za

being nullified, and having to remove pellets manually from the system. Therefore, when loading the pellets onto the roller system, caution had to be taken to evenly feed the pellets onto the system. In essence, this also corresponded to actual pellet flow on the Roller Feeder at Bokamoso.  It was noted that there exists a limited contrast between the rollers (background) and the pellets (OOI). This was seen in the difficulties it created for the algorithm in delineating the OOI from the background. It was necessary to ensure optimal lighting was provided using the available system, in order to mitigate and minimise this detrimental effect.  Finally, it was noted that the vertical movement of pellets (rolling and bouncing over the rollers) affected the focus of the imaging equipment, and also possibly biased the size estimated. The latter being due to the distance of the pellets to the imaging equipment that would constantly change. This was considered to be an inherent problem to the type and design of the actual and lab-scale roller feeder set-ups. Its effects would be determined after a more thorough analysis of the results from tests conducted on the simulated and actual process footage.
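The rpm-to-velocity comparison above can be reproduced with a short calculation. The sketch below is in Python purely for illustration (the study‘s own algorithms were implemented in MATLAB©/Simulink®); all values are taken from the observations above:

```python
import math

def linear_velocity(diameter_m, rpm):
    """Linear surface speed of a roller: circumference times revolutions per second."""
    return math.pi * diameter_m * rpm / 60.0

# Lab-scale roller feeder: 118 mm rollers at ~140 rpm (30 Hz VSD output)
v_lab = linear_velocity(0.118, 140)

# Actual Bokamoso roller feeder: 130 mm rollers at 120 rpm (~25 Hz VSD output)
v_actual = linear_velocity(0.130, 120)

difference = v_lab - v_actual  # ~0.048 m/s, i.e. roughly 6 % of the actual velocity
```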

6.2.1.3 Test material considerations and preparations

In order to effectively simulate the roller feeder operated at Bokamoso, ferrochrome pellets of the same shape, size and colour as those produced at Bokamoso needed to be fed over the simulated roller feeder. At Bokamoso, green pellets are fed over the Roller Feeder and into the ovens for sintering. Green pellets are very brittle and deform or disintegrate easily if handled too frequently. For this reason, the decision was taken to procure a sample of sintered pellets produced at Bokamoso. In terms of shape and size the sintered pellets correlate satisfactorily with the green pellets. The colour and surface reflectance of the green pellets are, however, different from those of the sintered pellets, mainly due to the moisture content of the green pellets and its absence in the sintered pellets. With reference to the outcomes of Phase 1, object shape and the contrast in colour between object (pellet) and background (transport medium) were considered to be the main attributes by which pellets would be identified. Considering these two criteria, the colour difference between the green pellets and sintered pellets was judged to be negligible in terms of the development and operation of the size estimation algorithm, especially during the initial development phases. Furthermore, the initial development phases were concerned with determining the viability of using basic MATLAB© algorithms contained within the Image Processing Toolbox and the Computer Vision Toolbox for object identification and sizing. Any alterations necessary due to the differences between the sintered and green pellets would be addressed in the final development phases (i.e. Round 2 of Phase 1). The sample pellets obtained from Bokamoso comprised 100 kg of sintered pellets.
In order to conduct tests to estimate pellet size distribution, the pellets needed to be sorted and separated into various size fractions. The sorting operation was done using a traditional agglomerate sieving technique, also known as a sieve analysis. A sieve analysis can typically be executed on any kind of granular material and is used to assess the particle size distribution of such a material. During the procedure, sieves of incremental sizes were stacked on top of each other, with the smallest sieve at the bottom. Pellets were placed into the top sieve, and the sieve stack was then placed on a mechanical vibrator or shaker. The vibrations from the mechanical shaker cause the pellets to change orientation and position within a particular sieve. This movement increases the chances that pellets equal to or smaller than the sieve aperture fall through the openings and into the next, smaller grade sieve. The procedure is demonstrated by the flow diagram in Figure 30. Figure 31 shows the equipment set-up used to conduct a sieve analysis. Here three sieves are stacked on top of one another and placed on the mechanical vibrator.

[Figure 30 flow: mixed size distributed pellet sample loaded onto topmost sieve → stacked sieves of incremental fractions → mechanical vibrations result in different pellet fractions accumulating on different sieves]

Figure 30: Procedural representation of sieving operation to grade sintered pellets into various size fractions

Figure 31: Sieve analysis equipment set-up.

The following sieve size fractions were used to grade and sort the ferrochrome pellet sample:

 6.7 mm
 8 mm
 9 mm
 13.2 mm
 16 mm
 19 mm
 26 mm

The above fractions effectively produced 8 groups of pellets within the following limits, with x denoting the pellet size:
1. x ≤ 6.7 mm
2. 6.7 mm < x ≤ 8 mm
3. 8 mm < x ≤ 9 mm
4. 9 mm < x ≤ 13.2 mm
5. 13.2 mm < x ≤ 16 mm
6. 16 mm < x ≤ 19 mm
7. 19 mm < x ≤ 26 mm
8. 26 mm < x
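The grouping above can be expressed as a small lookup, useful when binning measured or estimated pellet sizes into the same sieve fractions. A Python sketch for illustration only; the function name is an assumption, and fraction numbering follows the list above:

```python
# Sieve aperture sizes (mm), smallest to largest, as listed above
SIEVE_SIZES_MM = [6.7, 8.0, 9.0, 13.2, 16.0, 19.0, 26.0]

def sieve_fraction(pellet_mm):
    """Return the 1-based fraction number (1-8) a pellet of the given
    size would report to after a complete sieve analysis."""
    for i, aperture in enumerate(SIEVE_SIZES_MM):
        if pellet_mm <= aperture:
            return i + 1
    return len(SIEVE_SIZES_MM) + 1  # fraction 8: 26 mm < x
```

For example, a 14.5 mm pellet falls in fraction 5 (13.2 mm < x ≤ 16 mm).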

To obtain different size fractions, the sample pellets were divided into 20 batches of roughly the same size. Each of the batches was then sieved for 5 minutes, or until all pellets stayed stationary within a specific sieve, i.e. no more pellets were falling through any sieves. After each batch was sieved, sieved pellets were added to the respective size fractions.


During the sieving, small amounts of chrome dust accumulated in the fines tray at the bottom of the sieve stack. It was concluded that this was the result of abrasion of the chrome pellets due to the continuous collision with other pellets and the harder metal sieves during the sieving process. It was consequently decided to minimise the usage of the sample pellets (for sieving and for acquiring Simulated Pellet-in-process Footage), in order to minimise the abrasion of the pellets. Sieving would thus only take place once, with the capturing of footage having to be well planned and executed so as to minimise the number of re-runs required to acquire acceptable Simulated Pellet-in-process Footage. These measures would ensure that abrasion of the sample pellets would not influence sieve size distributions and thus size estimation results. Figure 32 shows an example of three different pellet size fractions. Figure 33 shows the approximate pellet size distribution (volume based) of the ferrochrome pellet sample obtained from Bokamoso after conducting the sieve analysis. In order to determine the efficiency and accuracy of the size estimation algorithm, the pellets needed to be counted so that an accurate size distribution of the physical sample pellets would be known. This was not done for Round 1 of analyses, but was done for Round 2 (as defined in Section 6.4), specifically for the analyses of mixed size fractions/distributions.

Figure 32: Three different size fractions of the pellet sample obtained from Bokamoso.

[Figure 33 chart: Pellet Sample Size Distribution — volume % per size fraction (0-6, 6-8, 8-9, 9-13, 13-16, 16-19, 19-26 mm)]

Figure 33: Approximated pellet size distribution of the ferrochrome sample obtained from Bokamoso.

6.2.2 Actual process footage

In terms of image capturing devices and lighting, the same hardware used for the simulated pellet-in-process footage was used for the actual process footage captured during Site Visit 2. Due to logistical difficulties, the T5 tube lights were flown to Bokamoso Pelletizing Plant, where a similar square mounting frame was built on which to mount the lights. This is depicted in Figure 34.

Figure 34: The construction of a lighting mounting frame at Bokamoso Pelletizing Plant, during Site Visit 2.

6.3 EXPERIMENTAL PROCEDURE FOR ACQUIRING TEST FOOTAGE

6.3.1 Execution of test runs

6.3.1.1 Simulated pellet-in-process footage

To acquire video streams for analysis by the algorithm, the following procedure describes the operation of the lab-scale roller set-up and the capturing of simulated pellets-on-rollers footage:
1. The chosen pellet size fraction would be placed on the feeding tray and spread out evenly. The spreading of the pellets was done to represent the actual system at Bokamoso, which has equipment that spreads pellets evenly over the rollers in order to facilitate the screening process of the roller feeder, as well as the even spreading of pellets into the sintering ovens for effective and even sintering.
2. A catching or retrieving tray was placed below the roller feeder‘s collection chute to collect all the pellets after a successful test run.
3. The VSD output frequency should be set to 30 Hz.
4. The camera settings should be set to ensure the appropriate shutter speed, exposure and pixel density settings are applied for the specific test run.
5. The roller system would be started by pressing the Run button on the VSD.
6. When the rollers have reached full speed, the camera should be set to start recording. After starting the camera, the pellets are fed uniformly onto the rotating rollers. Caution should be taken not to load all the pellets at once, as this could lead to the rollers jamming due to pellets getting stuck between two rollers.
7. The rollers should be run until almost all the pellets have moved over the rollers and into the collecting tray. Note that some pellets may remain spinning in one place on the rollers, despite continuous rotation of the rollers.
8. The camera should be stopped at this point.
9. The roller system should be stopped by pressing the Stop button on the VSD.
10. All the remnant pellets should be manually cleared off the roller system and placed into the collecting tray.

6.3.1.2 Actual process footage

The lighting frame was mounted at the identified areas where suitable footage would be acquired. Two areas were identified: the Roller Feeder (as with Site Visit 1), and an incline conveyor carrying sintered pellets to a sintered pellet storage bin. From these two areas, two types of pellet-in-process footage would be gathered: pellets-on-rollers footage (similar to the simulated pellets-on-rollers footage gathered), and footage of pellets being transported on a conveyor belt. The first type of footage was important in order to continue with the plan set out in Chapter 1 to analyse footage of pellets on rollers. It was important to compare and validate results obtained using the simulated process footage against results using actual pellets-on-rollers footage. Figure 35 depicts the capturing of actual pellets-on-rollers footage.

Figure 35: Capturing of actual pellets-on-rollers footage at Bokamoso Pelletizing Plant

However, in terms of the second type of footage, footage of pellets being transported on a conveyor belt, the reasoning stemmed from initial tests conducted using simulated pellet-on-roller footage. The specifically applicable observations, made during the preliminary tests and described in Section 6.2.1.2, are repeated below:

 Limited contrast between the rollers and the pellets created difficulties in delineating the OOI from the background.
 Vertical movement of pellets affected the focus of the imaging equipment, and might also have biased the size estimates, since the distance between the pellets and the imaging equipment constantly changed.

In order to further depict the differences between the two types of actual process footage, the above disadvantages are included in a comparison between the two footage options. Table 18 shows this comparison.

Table 18: Comparison between Roller Feeder footage and pellets-on-conveyor footage.

Roller Feeder
Advantages:
 Evenly spread pellet feed
 Favourable size distribution of pellets
 Representation of final product
 Favourable for process control due to early detection
Disadvantages:
 Vertical movement of pellets perpendicular to image plane
 Glare due to moisture in green pellets
 Low contrast between background (rollers) and pellets, due to pellet material deposited on rollers

Sintered pellets on conveyor belt
Advantages:
 No reflection due to moisture in pellets
 No pellet movement relative to background (conveyor)
Disadvantages:
 Piled pellets may lead to inaccurate representation due to effects such as the Brazil nut effect
 Pellets viewed include bottom-layer pellets, which have a different colour than the newly sintered pellets
 Heat convection and radiation from sintered pellets can potentially damage equipment

Because the listed advantages could mitigate the observations repeated above, it was chosen to conduct tests on a second type of footage, namely the pellets-on-conveyor footage. Since pellets are transported on top of each other, OOI would only have to be distinguished from each other, and not also from the background. In addition, the vertical movement of pellets would not be present, since the OOI would not be moving relative to the conveyor.

6.3.2 Imaging hardware parameters

The following camera parameters were varied and paired in all possible combinations, for both the simulated process footage and the actual process footage:

Canon Legria HFM 36

 Shutter speed: 1/125 s; 1/250 s; 1/500 s; 1/1000 s; 1/2000 s
 Aperture: auto adjustment
 ISO: auto adjustment
 Recording resolution: 1440x1080 (1.5 MP); 1920x1080 (2 MP)

Canon EOS 500D

 Shutter speed: 1/400 s; 1/500 s; 1/640 s; 1/800 s; 1/1000 s; 1/1250 s; 1/1600 s; 1/2000 s
 Aperture: auto adjustment
 ISO: 1600; 3200
 Recording resolution: 2352x1568 (3.7 MP); 3456x2304 (8 MP); 4752x3168 (15 MP)

In the case of footage captured using the Canon Legria HFM 36, each test run video acquired using the aforementioned procedure was edited using standard editing software. This allowed only the footage showing pellets moving over the rollers to be imported into the algorithm for analysis. In the case of still images captured using the Canon EOS 500D, images were imported into the algorithm as is.

6.4 ALGORITHM DEVELOPMENT AND DATA ANALYSIS

In order to facilitate the reporting, the results of analyses conducted on simulated pellet-in-process footage will be divided into two sub-sections: Round 1 of analyses, and Round 2 of analyses. This format will aid in presenting the methodology and thought process behind the development of the pellet size estimation algorithm. It also coincides with the scheduled reporting that was required by the sponsors of the study during its execution.


6.4.1 Round 1 of analyses

6.4.1.1 Description of algorithm used

During Round 1 of analyses, the main objective was to evaluate and compare the effectiveness of two different size estimation procedures applied in the PSE algorithm: the polygon-fitting Blob Analysis method, and the circle-fitting Hough Transform. In terms of pre-processing procedures, the algorithm also utilised edge detection in the form of the Sobel edge detector to identify OOI, after it was established that this procedure was more effective than Otsu‘s global thresholding procedure in accurately identifying and isolating OOI.

6.4.1.2 Development and application of algorithm

Being a graphical programming environment for the modelling, simulation and analysis of dynamic systems (The MathWorks Inc. 2014), Simulink®‘s graphical block diagramming interface presented a user-friendly platform on which to simulate any desired DIP procedure to be tested. Due to the ease of implementation, the steps that made up the procedures within the developed SE algorithm were represented by specific blocks from the Simulink® library of functional blocks. These blocks were connected within the workspace to form the desired DIP algorithm. The Simulink® workspace allowed video or still images to be imported into the algorithm from the hard drive of the computer. The footage to be analysed would therefore be imported into the algorithm using one of the functional blocks from the Simulink® library, and then analysed by executing the remainder of the steps in the algorithm. The output was then written to an Excel document for further analysis at a later stage.

6.4.1.3 Size validation

As described in Section 6.2.1.3, pellets were sorted into various size fractions using the sieve analysis method. However, individual pellet sizes within a pellet fraction are not known, and vary considerably due to the irregular shape of the pellets. This entails that pellet sizes within fractions would vary in every image. Because pellet orientations change every instant, a method was needed to relate actual and estimated pellet size for instantaneously captured pellet footage, i.e. for every individual image. To relate actual pellet size with an estimated size, a pixel-to-mm ratio was established. This ratio relates pixels in an image to the actual millimetre distance they represent. Roller width was used as a reference measurement. Using a pixel measuring tool in MATLAB©‘s Image Editor function, the pixel length of the roller width was obtained. Figure 36 shows an enlarged image of this measurement being done. The ratio developed for the initial tests used a 260 pixel representation for the 114.3 mm width of the rollers. This resulted in a pixel-to-mm ratio of 260:114.3, indicated by Equation 47. Equivalently, this ratio can be written as 2.27 pixels/mm, or as 0.44 mm/pixel. Using this ratio, the actual sizes of pellets in sample images could be determined. These actual pellet sizes would serve as validation for the estimated pellet sizes in both the methods discussed below.

260 pixels : 114.3 mm ≈ 2.27 pixels/mm ≈ 0.44 mm/pixel

Equation 47: Pixel-to-mm ratio used in pellet size estimations.
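The ratio of Equation 47 and its use can be sketched as follows (Python for illustration only; the roller-width values are those quoted above):

```python
ROLLER_WIDTH_PX = 260      # roller width measured in the image, in pixels
ROLLER_WIDTH_MM = 114.3    # known roller width, in mm

PX_PER_MM = ROLLER_WIDTH_PX / ROLLER_WIDTH_MM   # ~2.27 pixels/mm
MM_PER_PX = ROLLER_WIDTH_MM / ROLLER_WIDTH_PX   # ~0.44 mm/pixel

def px_to_mm(length_px):
    """Convert a pixel measurement to millimetres via the Equation 47 ratio."""
    return length_px * MM_PER_PX
```

For example, a 34-pixel measurement corresponds to roughly 14.95 mm.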


Figure 36: Roller width measurement using MATLAB©’s Imdistline function.

Actual pellet sizes of pellets in sample images were obtained using the same procedure as described above. Pellet dimensions were measured in pixels using MATLAB©‘s Imdistline function, from which actual sizes were obtained using the ratio established above. Figure 37 shows an enlarged image of actual pellet size measurements being taken. Table 19 is a summary of the measurements shown in Figure 37. Column 1 indicates the sample number. Columns 2 and 3 contain the measurements in number of pixels; two random measurements were taken for every pellet. Using the ratio of Equation 47, the equivalent lengths in millimetres were obtained. These represent the actual lengths over the distances measured with the Imdistline tool.


Figure 37: Actual pellet size measurements using MATLAB©’s Imdistline function.

Table 19: Summary of actual dimensions of sample image pellets.

Sample | Measurement 1 (pixels) | Measurement 2 (pixels) | Measurement 1 (mm) | Measurement 2 (mm)
1 | 34.00 | 32.12 | 14.95 | 14.12
2 | 35.27 | 37.21 | 15.51 | 16.36
3 | 29.62 | 29.82 | 13.02 | 13.11

Averaging the two measurements obtained per pellet, and then the three resulting measurements, produces an average 2-dimensional pellet size of 14.51 mm for the pellets in Figure 37. Considering that the pellets in Figure 37 form part of the 13.2 mm < x ≤ 16 mm size fraction, this result seems to correspond with sieve analysis pellet fractions. This result can be used in later validations of the various size estimation techniques.
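This averaging can be reproduced directly (a Python sketch for illustration; pixel values from Table 19, ratio from Equation 47):

```python
# (measurement 1, measurement 2) in pixels for the three sample pellets (Table 19)
measurements_px = [(34.0, 32.12), (35.27, 37.21), (29.62, 29.82)]

MM_PER_PX = 114.3 / 260  # pixel-to-mm ratio of Equation 47

# Average the two measurements per pellet, convert to mm, then average the pellets
per_pellet_mm = [(m1 + m2) / 2 * MM_PER_PX for m1, m2 in measurements_px]
average_mm = sum(per_pellet_mm) / len(per_pellet_mm)   # ~14.51 mm
```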

6.4.1.4 Simulated pellet-in-process footage

Figure 38 is an example of an intensity image of the lab-scale roller feeder operation, intended to simulate actual roller feeder operation at Bokamoso. The footage used for analysis in this specific case was of pellets within the size fraction 13.2 mm < x ≤ 16 mm; all analyses reported on in this round were conducted on footage of this pellet size fraction. Running tests on the type of footage shown in Figure 38, using the initial algorithm developed in Phase 1 of the study, highlighted one major obstacle to its efficient operation: the contrast in image intensity levels between the upper parts of the rollers and the areas between rollers. This contrast is pointed out in Figure 39, with the arrows indicating the two extreme intensity ranges.


Figure 38: Cropped intensity image obtained from lab-scale roller feeder operation.

Figure 39: Contrast in image intensity highlighted.

The above contrast reduced the algorithm‘s ability to isolate pellets using contrast between background and pellets. Alternative methods needed to be investigated to isolate the objects of interest, i.e. the pellets. To mitigate this effect, it was decided to isolate a single roller from the image and analyse the pellets moving over the specific roller. The area of the roller was chosen in such a manner that the region of highest intensities was isolated, so as to maximise the contrast between the roller (background) and the pellets (objects of interest). The isolated region is indicated by the red shading in Figure 40.

Figure 40: High intensity region of single roller isolated within an image.

Figure 41 shows an example of a pre-processed image of a single roller with pellets.

Figure 41: Pre-processed image of a single isolated roller.

Isolating the above indicated region allowed the use of the two mentioned size estimation methods to estimate pellet size:

 The Blob Analysis method (as used in Phase 1 of the algorithm development)
 The Hough Transform

With regard to accurate pellet size estimation, the following section discusses some of the first successful results that were obtained using the abovementioned methods. Individual examples of successfully executed pellet size estimation procedures are provided and discussed. These examples serve as an indication of the basic capabilities of the size estimation algorithm at the time of execution of Round 1 of analyses of Phase 2 of the algorithm development.

Blob analysis

Figure 42 shows the flow diagram of the steps taken in the algorithm to determine pellet size using the Blob analysis function in MATLAB©.

Figure 42: Flow diagram of algorithm using the Blob analysis to determine pellet size

The following steps can be identified:

 Acquire image: Capture image using imaging hardware
 Crop image: Isolate a single roller as discussed in the previous section
 Smoothing filter: Apply a 3x3 median filter to average out local intensities, reducing contrast variation within regions in an image
 Contrast adjustment: Increase contrast between regions in an image
 Morphological closing: Additional noise removal, increasing contrast between object of interest and background
 Smoothing filter: 3x3 median filter application to average out local intensities to further reduce contrast variation within isolated regions in an image
 Sobel edge detection: Detect edges of regions to isolate pellets (OOI)
 Fill holes: In order to identify pellets, circular objects are highlighted by ‗filling‘ circular edge regions with white pixels
 Blob analysis: Apply the Blob analysis function to determine the area represented by each pellet in number of pixels, from which pellet size can be deduced with the methods discussed in Chapter 3

Steps two to eight of the abovementioned steps are indicated graphically in Figure 43. Starting at the second step (image cropping), the sequential bars in the figure each represent one of the sequential steps described above.
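The final blob-labelling step is performed by MATLAB©‘s Blob analysis function. Purely for illustration, a minimal pure-Python stand-in, operating on a binary (post-"fill holes") image, might look as follows. The function name and input format are assumptions, not the thesis implementation:

```python
from collections import deque

def blob_bounding_boxes(binary):
    """Label 4-connected blobs in a binary image (list of 0/1 rows) and
    return (area_px, width_px, height_px) per blob, in scan order."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                # Flood-fill one blob, tracking its area and extents
                queue, area = deque([(r, c)]), 0
                rmin = rmax = r
                cmin = cmax = c
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    area += 1
                    rmin, rmax = min(rmin, y), max(rmax, y)
                    cmin, cmax = min(cmin, x), max(cmax, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                blobs.append((area, cmax - cmin + 1, rmax - rmin + 1))
    return blobs
```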


Figure 43: Graphical representation of the first five steps of pellet size estimation using the Blob analysis

The sizes of the identified pellets are determined by calculating a blob area for each pellet. This area refers to the number of pixels each blob (pellet) is represented by in the image. From this area, pellet size can be deduced by relating pixel size to actual object size, as related by the pixel-to-mm ratio of Equation 47.
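One common way to deduce a size from a blob‘s pixel area is the equivalent-circle diameter; the sketch below illustrates this (an illustrative relation, not necessarily the exact one used in Chapter 3):

```python
import math

MM_PER_PX = 114.3 / 260   # pixel-to-mm ratio of Equation 47

def blob_area_to_diameter_mm(area_px):
    """Equivalent-circle diameter of a blob from its pixel area:
    A = pi * (d/2)^2  =>  d = 2 * sqrt(A / pi), then scaled to mm."""
    return 2.0 * math.sqrt(area_px / math.pi) * MM_PER_PX
```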

Size estimation with Blob Analysis

One of the outputs of the Blob Analysis is the dimensions of the bounding box of each of the objects identified by this method. A bounding box refers to the smallest possible 2-dimensional rectangle that encloses all the points of a specific object; it therefore has an x- and a y-dimension in an x-y axis system. The bounding box dimensions of the Blob Analysis are given in terms of the x-y coordinate system used for standard image processing in MATLAB©. The objects measured in Figure 37 were isolated using the Blob Analysis technique, producing the image shown in Figure 44. The bounding boxes of each object are visible in Figure 45.

Figure 44: Objects identified using Blob Analysis.

Figure 45: Objects with associated bounding boxes superimposed.


Table 20 shows the results of the size analysis conducted on the pellets in Figure 44 using the Blob Analysis method. The first two columns indicate the bounding box measurements of the three objects. Columns 3 and 4 indicate the equivalent estimated sizes in millimetres, after applying the pixel-to-mm ratio of Equation 47.

Table 20: Results of pellet size estimation using the Blob Analysis method.

Sample | Bounding box x = width (pixels) | Bounding box y = height (pixels) | Width (mm) | Height (mm)
1 | 34 | 34 | 14.95 | 14.95
2 | 34 | 37 | 14.95 | 16.27
3 | 30 | 32 | 13.19 | 14.07

Averaging the two measurements obtained per pellet, and then the three resulting measurements, produces an average 2-dimensional pellet size of 14.73 mm for the pellets in Figure 37. Considering that the pellets in Figure 37 form part of the 13.2 mm < x ≤ 16 mm size fraction (refer to Section 6.2.1.3), this result indicates that preliminary size estimation seems to correspond with sieve analysis pellet fractions. This result is also very similar to the 14.51 mm obtained using MATLAB©‘s measurement tool, Imdistline, with an error of 1.52%. Although these initial results seemed promising, further analysis and validation on larger samples and mixed particle size samples would have to be conducted in order to conclusively validate and describe the system‘s accuracy and effectiveness. This would be done during Round 2 of analyses, as discussed in subsequent sections.
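The bounding-box averaging of Table 20 can likewise be reproduced (a Python sketch for illustration; pixel values from Table 20, ratio from Equation 47):

```python
MM_PER_PX = 114.3 / 260   # pixel-to-mm ratio of Equation 47

# (width, height) of each bounding box in pixels (Table 20)
bounding_boxes_px = [(34, 34), (34, 37), (30, 32)]

# Average width and height per pellet, convert to mm, then average the pellets
per_pellet_mm = [(w + h) / 2 * MM_PER_PX for w, h in bounding_boxes_px]
average_mm = sum(per_pellet_mm) / len(per_pellet_mm)   # ~14.73 mm
```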

Hough Transform

Figure 46 shows the flow diagram of the steps taken in the algorithm to determine pellet size using the Hough Transform.

Figure 46: Flow diagram of algorithm using the Hough Transform to determine pellet size

The following steps can be identified:

 Acquire image: Capture image using imaging hardware
 Crop image: Isolate a single roller as discussed in the previous section
 Smoothing filter: Apply a 3x3 median filter to average out local intensities, reducing contrast variation within regions in an image
 Contrast adjustment: Increase contrast between regions in an image
 Morphological closing: Additional noise removal, increasing contrast between object of interest and background
 Smoothing filter: 3x3 median filter application to average out local intensities to further reduce contrast variation within isolated regions in an image
 Sobel edge detection: Detect edges of regions to isolate pellets (OOI)
 Fill holes: In order to identify pellets, circular objects are highlighted by ‗filling‘ circular edge regions with white pixels
 Hough transform: Apply the circular Hough Transform to determine the approximated equivalent-diameter circle that encompasses each identified pellet, measured in pixels
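For illustration, the voting idea behind the circular Hough Transform can be sketched in a few lines of Python (a toy version on a list of edge points; MATLAB©‘s implementation is far more refined):

```python
import math
from collections import defaultdict

def circular_hough(edge_points, radii, grid_shape):
    """Toy circular Hough Transform: every edge point votes for all centres
    lying at distance r from it; the accumulator cell (cy, cx, r) with the
    most votes is returned as the best circle."""
    rows, cols = grid_shape
    acc = defaultdict(int)
    for (y, x) in edge_points:
        for r in radii:
            for deg in range(360):
                t = math.radians(deg)
                cy = round(y - r * math.sin(t))
                cx = round(x - r * math.cos(t))
                if 0 <= cy < rows and 0 <= cx < cols:
                    acc[(cy, cx, r)] += 1
    return max(acc, key=acc.get)  # (centre_y, centre_x, radius)
```

On a synthetic circle of radius 8 centred at (20, 20), the true radius accumulates the most votes.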

The abovementioned steps are indicated graphically in Figure 47. Starting at the second step (image cropping), the sequential bars in the figure each represent one of the sequential steps described above.

Figure 47: Graphical representation of pellet size estimation using the Hough Transform


Figure 48 and Figure 49 indicate preliminary results obtained using the method described above. Figure 48 shows the resulting image after applying an edge detection algorithm (the Sobel edge detector) to Figure 41, followed by a filling operation to highlight the identified pellets. Figure 49 shows the result of applying the circular Hough transform to the image in Figure 48. The circles superimposed on the identified pellets represent the approximated equivalent-radius circles of these pellets. These radii are measured in pixels, and can be related to mm by applying the pixel-to-mm ratio of Equation 47.

Figure 48: Result of applying edge detection and a filling function to the image in Figure 41.

Figure 49: The result of applying the circular Hough transform to the image in Figure 48.

Size estimation with Hough Transform

One of the outputs of the Hough Transform method is the radius of the circle that encompasses each circular object in the image. Figure 50 shows these circles on the objects identified in Figure 37.

Figure 50: Encompassing circles indicated on the objects identified using the Hough Transform method.

Table 21 shows the results of the size analysis done using the circular Hough Transform, as represented in Figure 50. The first two columns indicate the radii of the encompassing circles and the associated diameters. Column 3 indicates the equivalent estimated diameter in millimetres, after applying the pixel-to-mm ratio of Equation 47.

Table 21: Results of pellet size estimation using the Hough Transform method.

Hough Transform (Sensitivity = 90)
Radius (pixels) | Diameter (pixels) | Diameter (mm)
16.99 | 33.98 | 14.94
16.84 | 33.69 | 14.81
15.64 | 31.27 | 13.75
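The pixel radii in Table 21 convert to millimetre diameters as follows (a Python sketch using the Equation 47 ratio):

```python
MM_PER_PX = 114.3 / 260            # pixel-to-mm ratio of Equation 47

radii_px = [16.99, 16.84, 15.64]   # Hough Transform radii from Table 21
diameters_mm = [2 * r * MM_PER_PX for r in radii_px]
average_mm = sum(diameters_mm) / len(diameters_mm)   # ~14.5 mm
```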

Averaging the measurements obtained per pellet produces an average 2-dimensional pellet size of 14.5 mm for the pellets in Figure 50. Considering that the pellets in Figure 50 form part of the 13.2 mm < x ≤ 16 mm size fraction, this result indicates that preliminary size estimation using the circular Hough Transform seems to correspond with the sieve analysis pellet fractions. This result is also very similar to the 14.51 mm obtained using MATLAB©‘s measurement tool, Imdistline, with an error of 0.07%, and to the 14.73 mm obtained using the Blob Analysis method. As in the case of the Blob Analysis method, these initial results seemed promising. However, further analysis and validation on larger and mixed particle size samples would have to be conducted in order to conclusively validate and describe the system‘s accuracy and effectiveness. Similar to the Blob Analysis, this would be done during Round 2 of analyses, discussed in subsequent sections.

6.4.1.5 Important considerations for subsequent project phase

In terms of replicating the actual pelletizing process at Bokamoso, the lab-scale roller feeder set-up proved to be adequate. Its parameters, with specific reference to the roller dimensions and colour, resembled those of the actual roller feeder at Bokamoso to a degree satisfactory for the planned tests and analyses. Furthermore, under operational conditions, its simulation of pellet movement over rollers, as observed at Bokamoso, proved adequate for the required tests and analyses. Pellets are propelled forward over the rollers by a combination of the effects of gravity, pellets pushing against each other, and the rollers driving the pellets forward. With regard to identifying and sizing individual pellets from simulated pelletizer footage, the algorithm developed as part of Round 1 of analyses was partially successful. It succeeded in the following areas:

• Successful incorporation and adaptation of the image pre-processing procedures developed during the earlier project phases.
• Identification and isolation of individual pellets during the analysis of simulated pellets-on-roller footage.
• Estimation of pellet size through the two sizing methods investigated.

However, it is important to note the following:

• Segmented images of individual rollers were analysed, due to the difficulties in analysis caused by the contrast in image intensity values of images of multiple rollers.
• Analysis was only conducted on one size fraction of pellets.

6.4.2 Round 2 of analyses

Round 2 of analyses was aimed at improving the algorithm developed during Round 1 by incorporating the considerations and conclusions drawn from Round 1 of analyses. Three types of footage were analysed, as described below and depicted in Figure 51:

• Lab-scale Roller Feeder set-up
• Actual Roller Feeder footage (from Site Visit 2)
• Actual Sintered Pellet-on-conveyor footage (from Site Visit 2)

Figure 51: Three different types of footage analysed during Round 2 of data analyses in Phase 2: Lab-scale Roller Feeder (left), Actual Roller Feeder (middle), Actual process Sintered Pellet-on-conveyor (right).

The first of the main alterations made to the algorithm was the application of a bilateral filter for improved filtering. This type of filter is especially popular due to its ability to
perform smoothing of image regions while preserving the sharp intensity changes usually associated with object edges. In addition, this filter proved to further reduce unwanted image noise and produced smoother and more uniform intensity regions across the entire image. The following set of procedures indicates the inclusion of the bilateral filter into the existing pre-processing algorithm:

• Image converted to a grey-scale image
• Median filter applied to the grey-scale image
• Morphological closing applied to the Median-filtered image
• Contrast adjustment over the entire intensity range of the Median-filtered image
• Bilateral filter applied to the resulting grey-scale image

Figure 52 displays the sequential output of applying the above procedure to simulated process footage.

Figure 52: Sample of the above procedure applied to simulated pellet-on-roller footage
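To illustrate two of the filtering steps above, the following is a minimal pure-Python sketch of a 3×3 median filter and a bilateral filter (the study itself used MATLAB©'s Image Processing Toolbox; the function names and parameter values below are hypothetical):

```python
import math

def median3x3(img):
    """3x3 median filter with edge replication: removes salt-and-pepper noise."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            win = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sorted(win)[4]   # median of 9 values
    return out

def bilateral(img, radius=2, sigma_s=1.5, sigma_r=25.0):
    """Bilateral filter: smooths flat regions while preserving sharp edges."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = norm = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    # spatial weight (closeness) and range weight (similarity)
                    ws = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                    wr = math.exp(-((img[yy][xx] - img[y][x]) ** 2)
                                  / (2 * sigma_r ** 2))
                    acc += ws * wr * img[yy][xx]
                    norm += ws * wr
            out[y][x] = acc / norm
    return out

# A step edge (dark roller next to bright pellet) with one speck of noise:
img = [[20] * 4 + [200] * 4 for _ in range(8)]
img[4][1] = 180  # salt noise in the dark region
smooth = bilateral(median3x3(img))
```

The range weight `wr` is what distinguishes the bilateral filter from a plain Gaussian blur: neighbours across a strong intensity edge receive a near-zero weight, so the edge survives the smoothing, which is exactly the property the text above highlights.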

The second alteration that was employed was the Watershed transform, used specifically to improve segmentation and thus the separation of touching pellets. Figure 53 presents an example of the segmentation results obtained by applying the Watershed transform to pellets-on-conveyor footage.

Figure 53: Example of the application of the Watershed transform to separate touching OOI (pellets).

6.4.2.1 Description of algorithm used

Captured footage would be imported into MATLAB© and/or Simulink®, after which it would be cropped to a specified size so that the resulting image would only include a region containing objects of interest. In this way, unnecessary items, such as background structures, were eliminated from the images that had to be processed for particle size estimation. By incorporating the previously mentioned alterations to the algorithm, i.e. the Bilateral filter and the Watershed transform, the following algorithm for particle size estimation was developed and would be applied to the cropped image:

1. Grey-scale conversion
2. Median filtering
3. Morphological closing
4. Contrast adjustment over entire intensity range
5. Bilateral filtering
6. Contrast enhancement
7. Edge detection
8. Morphological edge closing
9. Object/blob/region isolation/filling
10. Image cleaning
11. Splitting of touching regions (Watershed algorithm)
12. Repeat steps 8-11 (x2)
13. Combine region images

The image produced following the execution of step 11 in the above sequence is an image with delineated OOI, similar to Figure 53b, but probably containing fewer OOI. It was proven during the algorithm development that, if the already identified OOI were removed from the original edge image, a repetition of steps 8-11 would allow additional OOI to be identified and delineated. Thus the first delineated image would be stored and subtracted from the original edge image (i.e. a negative of this image would be superimposed on the original edge image), after which the second delineated image would be added to the first to produce the final delineated image. Figure 53b is an example of such an image. Following the successful application of the algorithm, specific region properties would be calculated for each of the regions (OOI) delineated in the combined delineated image obtained in step 13.
Each of the properties represents a characteristic of an individual OOI that would be used either to filter out incorrectly identified or incomplete OOI (such as OOI that have been split up incorrectly), or to establish an estimate of the size of each OOI. These properties would then be collectively used to establish an estimated particle size distribution for the entire population. The four properties are listed below, each with a short description and the key output parameters calculated when the property is extracted:

1. Eccentricity
   o Measure of the region's circularity (0 = circle, 1 = line segment)
2. Extent
   o Ratio of pixels in the region to pixels in the bounding box
3. Bounding box
   o Smallest rectangle containing the region
     - Centroid
     - Height & Width (pixels)
4. Fitted Ellipse
   o Ellipse that has the same second moments as the region
     - Major & Minor Axis lengths (pixels)

Filters

Upon extraction of the various region properties, some of these properties were used to distinguish between actual pellets and non-OOI artefacts. This was done by applying various thresholds, also termed filters, to two of the properties. The filters related to specific values of the properties, which were determined through observation of the results of this problem-specific application, as well as through case study observations during the Literature Review (Chapter 1). The following filters were developed during this study:

• Eccentricity: <0.9
   o A value <0.9 would correspond to a circular object, most likely an OOI (chrome pellet).
   o As depicted in Figure 54, a value >0.9 typically corresponded to illumination strips on rollers segmented by the algorithm.

Figure 54: Illumination strips incorrectly identified as OOI, but filtered using the Eccentricity filter.

• Extent: >0.6
   o A value >0.6 would typically indicate that the identified object was correctly identified as a chrome pellet.
   o Figure 55 depicts typical scenarios where the identified OOI is not a chrome pellet. As can be seen from the figure, a value <0.6 typically correlated to:
     - Spaces between OOI
     - Clustered pellets identified as single pellets (not separated by the Watershed algorithm)
     - Shadows or spaces between pellets added to a pellet profile
     - Oddly shaped pellets

Figure 55: Incorrectly identified objects, filtered by the Extent filter.

In order for an identified object to be accepted as a correctly identified OOI (chrome pellet), both of the above filters had to be satisfied. After passing the two criteria, the Fitted Ellipse and Bounding Box region properties were applied in order to establish an estimated particle size for the identified pellet.

Size estimation

After the successful application of the filters, an ellipse was fitted over the OOI in order to establish an estimated particle size in terms of number of pixels. The minor axis length was chosen to represent the estimated particle size, due to the consideration that it represented the closest relation to the Sieve Size that would otherwise be considered as the particle size in a manual sizing operation. The Sieve Size can also be described as the smallest pellet dimension that allows the pellet to pass through a 2D square aperture.
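The two filters and the minor-axis size measure described above can be sketched in pure Python. The ellipse formulas below follow the standard regionprops-style second-moment definitions, and the thresholds are those stated in the text; the helper names and the synthetic disk region are hypothetical:

```python
import math

def region_properties(pixels):
    """Compute Extent, Eccentricity, and fitted-ellipse axes for a binary region.

    `pixels` is a list of (row, col) coordinates belonging to one delineated OOI.
    """
    n = len(pixels)
    rows = [p[0] for p in pixels]
    cols = [p[1] for p in pixels]
    r0, c0 = sum(rows) / n, sum(cols) / n
    # Normalized central second moments, with the usual 1/12 per-pixel correction:
    m_rr = sum((r - r0) ** 2 for r in rows) / n + 1 / 12
    m_cc = sum((c - c0) ** 2 for c in cols) / n + 1 / 12
    m_rc = sum((r - r0) * (c - c0) for r, c in pixels) / n
    common = math.sqrt((m_rr - m_cc) ** 2 + 4 * m_rc ** 2)
    major = 2 * math.sqrt(2 * (m_rr + m_cc + common))   # ellipse with same moments
    minor = 2 * math.sqrt(2 * (m_rr + m_cc - common))
    eccentricity = math.sqrt(1 - (minor / major) ** 2)  # 0 = circle, 1 = line
    bbox_area = (max(rows) - min(rows) + 1) * (max(cols) - min(cols) + 1)
    extent = n / bbox_area                              # region pixels / bbox pixels
    return {"Extent": extent, "Eccentricity": eccentricity,
            "MajorAxis": major, "MinorAxis": minor}

def is_pellet(props):
    """Apply the two acceptance filters from the study."""
    return props["Eccentricity"] < 0.9 and props["Extent"] > 0.6

# Synthetic disk of radius 10 px standing in for one pellet region:
disk = [(r, c) for r in range(-10, 11) for c in range(-10, 11)
        if r * r + c * c <= 100]
props = region_properties(disk)
```

For this disk the minor axis evaluates to approximately the 20 px diameter, the eccentricity is near zero, and the extent is roughly π/4 ≈ 0.79, so the region passes both filters; the minor axis would then be converted to mm as described in Section 6.4.2.3.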

6.4.2.2 Development and application of algorithm

Despite the successful use of Simulink® for algorithm development during Round 1 of analyses, difficulties were faced when implementing more complicated and custom procedures in the Simulink® modelling space. In order to bypass these difficulties, the decision was taken to continue and complete the algorithm development in a MATLAB© script file. This change increased the ease with which these more complicated procedures could be implemented, altered, and integrated into the overall algorithm. The same functionality was available in terms of allowing video or still images to be imported into the algorithm from the hard drive of the computer, and then sequentially processed and analysed. The output was again written to an Excel document for any further analyses, as well as for the graphical and tabular representation of the results of the analyses.

6.4.2.3 Size validation

The above method estimated particle size in terms of pixels. In order to relate this pixel value to a particle size stated in mm, a pixel to mm ratio was developed. This ratio would help to relate the area represented by a single image pixel to the actual (real-life) dimensions (mm) of the part of the object captured in the image pixel. Different ratios were developed for the various types of footage analysed. Two different references were used for the various ratios that were developed:

• Roller width: this reference was used for estimating particle size during the analyses of the lab-scale footage. It is the same method that was utilised during Round 1 of analyses (Section 6.4.1).
• Measuring tape captured in footage: this reference was used for estimating particle size during the analyses of the footage captured during Site Visit 2. Footage mentioned here refers to both Roller Feeder & Sintered Pellets Conveyor footage.

Figure 56 shows how a measuring tape was captured in an image to obtain a reference mm size for each pixel contained in the image. This technique was applied to both the pellet-on-roller footage (left) and the pellet-on-conveyor footage (right).


Figure 56: Establishing a reference to estimate particle size from Site Visit 2 footage.
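The pixel to mm conversion described above amounts to a single scale factor derived from a known reference in the frame. A minimal sketch, with hypothetical reference and axis values:

```python
def pixel_to_mm_ratio(reference_mm, reference_px):
    """mm represented by one pixel, from a known reference (roller width or
    measuring tape) captured in the same frame as the pellets."""
    return reference_mm / reference_px

# Hypothetical example: a 100 mm stretch of measuring tape spans 400 pixels.
ratio = pixel_to_mm_ratio(100.0, 400.0)    # 0.25 mm per pixel
minor_axis_px = 58.0                       # hypothetical fitted-ellipse minor axis
estimated_size_mm = minor_axis_px * ratio  # estimated pellet size in mm
```

Because the ratio depends on camera distance and resolution, a separate ratio has to be established for each type of footage, as the text notes.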

The final two steps in the algorithm had two objectives: visual presentation of the size estimation results, and data generation for the final analyses. The first step was aimed at visually presenting the results of the particle identification procedure. A boundary was drawn around the objects that were successfully identified as OOI. This boundary coincided with the actual particle boundary, as delineated by the algorithm, which was used to estimate particle size. Each particle was also numbered for ease of identification and user observation. These two features were superimposed onto the specific image (or video frame) on which the size estimation was conducted. Each image was saved into a pre-defined storage folder on a specified mass storage device, along with the associated size estimation output data, as discussed in the following paragraph. This output image could then be accessed at will and used for possible user interaction and quality checks. The second step was aimed at providing the output of the size estimation to a Microsoft Excel spreadsheet for the final data analyses. The data was saved in table format in a new Excel spreadsheet with a custom name corresponding to the type of data evaluated. This name typically included the following:

• Type of footage (simulated pellet-in-process footage or actual process footage)
• The type of image capturing device used
• The parameters of the most important imaging hardware settings used to capture the footage (ISO, Resolution, Shutter speed and Aperture)

The table consisted of the various identified OOI and their corresponding region properties, as listed in Section 6.4.2.1.
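The tabular export step can be approximated in pure Python with the standard csv module (the study wrote to Excel from MATLAB©; the filename scheme, column names, and values below are hypothetical):

```python
import csv
import io

# Hypothetical per-OOI region properties, as listed in Section 6.4.2.1:
rows = [
    {"OOI": 1, "Eccentricity": 0.42, "Extent": 0.71, "MinorAxisPx": 58.0},
    {"OOI": 2, "Eccentricity": 0.35, "Extent": 0.78, "MinorAxisPx": 61.3},
]

# Custom name encoding footage type and camera settings (hypothetical scheme):
filename = "Simulated_CanonLegria_ISOauto_1440x1080_1-1000_ApAuto.csv"

buf = io.StringIO()  # stands in for a file opened under `filename`
writer = csv.DictWriter(buf, fieldnames=["OOI", "Eccentricity", "Extent",
                                         "MinorAxisPx"])
writer.writeheader()
writer.writerows(rows)
```

One row per identified OOI keeps the spreadsheet directly usable for the mean, RMSE, and distribution analyses described in the following section.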

6.4.2.4 Data capturing and processing

During Round 2 of analyses, the following data model was used to collect data for processing by the developed algorithm:

• 10 sample frames would be drawn from the footage of each test run
   o For video footage (Canon Legria), these frames would be drawn randomly from the video roll, and imported into the algorithm for analysis.
   o For still camera footage (Canon EOS 500D), 10 consecutive images were captured during each test run, and used in the same order during analyses.

From the data imported into the Excel file (refer to the preceding section), the sample Mean and root-mean-square error (RMSE) were calculated for all samples. In specific cases, a Chi-Square test was conducted in order to establish the goodness of fit of the estimated data to the sample data. The mean of the sample was calculated as follows:


• Lab-scale roller feeder set-up: the mean of the size fraction was taken to represent the mean of the sample. This will henceforth be referred to as the sieve mean.
• Actual (on-site) process footage: assumed to be 12mm. This corresponds to the desired pellet production specification, as stated in Section 5.2.4. This will henceforth be referred to as the assumed mean.

The results of the aforementioned analyses were then presented graphically in terms of sieve analysis and estimated pellet sizes and size fractions, with the Mean, RMSE and Chi-Square test results also indicated.
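The sample Mean and RMSE computations described above can be sketched as follows (the size values are hypothetical; the reference is a single sieve mean, as in the lab-scale case):

```python
import math

def mean(xs):
    """Arithmetic mean of the estimated pellet sizes in a sample."""
    return sum(xs) / len(xs)

def rmse(estimates, reference_mm):
    """Root-mean-square error of the estimates against one reference size
    (the sieve mean for lab-scale footage, the assumed mean on site)."""
    return math.sqrt(sum((x - reference_mm) ** 2 for x in estimates)
                     / len(estimates))

# Hypothetical estimated sizes (mm) from one frame of the 13.2-16mm fraction,
# evaluated against its sieve mean of 14.6 mm:
sizes = [13.1, 14.8, 15.2, 12.4, 16.0, 14.1]
sample_mean = mean(sizes)
sample_rmse = rmse(sizes, 14.6)
```

Note that the RMSE here measures spread around the sieve (or assumed) mean rather than around the sample's own mean, matching how a single reference size is used in the analyses that follow.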

6.4.2.5 Simulated pellet-in-process footage

Figure 57 provides an example of a visual representation of the output of the particle identification procedure. The objects that were identified, and which passed the filtering procedure, have been numbered and outlined, with the outline corresponding to the actual particle boundary as delineated by the algorithm.

Figure 57: Sample output of the analyses done on simulated pellet-in-process footage.

Firstly, the algorithm was applied to the three most prominent size fractions of the sample pellets available for tests at Stellenbosch University. These included the following:

• 9-13.2mm
• 13.2-16mm (as in the case of Round 1 of analyses, discussed in Section 6.4.1)
• 16-19mm

The following three sub-sections will provide typical representations of the results obtained when conducting size estimation on the three pellet fractions mentioned above. For all three examples, the shutter speed of the camera was set at 1/1000. Following these three sections, a section of consolidated results will compare the size estimation done on footage gathered using shutter speeds of 1/1000 and 1/2000.

Individual size fractions

The following results were obtained from analyses of simulated pellet-in-process footage, captured with the Canon Legria HFM 36 video camera.


9-13.2mm Size fraction

The following results are typical of the results that were obtained for the 9-13.2mm size fraction, with the following camera settings:

• Shutter speed: 1/1000
• ISO: Auto Adjustment
• Resolution: 1440x1080 (1.5MP)
• Aperture: Auto Adjustment

Figure 58 displays the estimated pellet size of individual pellets identified in a single sample frame (image) of the specified footage. The mean was taken to be the average of the size distribution, i.e. an 11.1mm sieve mean, while the sieve maximum and minimum values were taken as the boundary values of the size fraction of the pellet distribution under consideration.

[Chart: estimated pellet size (mm) per identified pellet, with Sieve Max, Sieve Mean, Estimated Mean and Sieve Min reference lines]

Figure 58: Estimated pellet size for the 9-13.2mm size fraction

Figure 59 displays the estimated pellet size distribution corresponding to the results contained in Figure 58.


[Histogram: frequency of pellets per estimated size fraction]

Figure 59: Estimated pellet size distribution for 9-13.2mm size fraction

For the above results, the estimated mean was 10.87mm, which was 0.23mm below the sieve mean of 11.1mm. The RMSE was calculated at 2.24mm.

13.2-16mm Size fraction

The following results are typical of the results that were obtained for the 13.2-16mm size fraction, with the following camera settings:

• Shutter speed: 1/1000
• ISO: Auto Adjustment
• Resolution: 1440x1080 (1.5MP)
• Aperture: Auto Adjustment

Figure 60 displays the estimated pellet size of individual pellets identified in a single sample frame (image) of the specified footage. The sieve mean was taken to be the average of the size distribution, i.e. 14.6mm, while the sieve maximum and minimum values were taken as the boundary values of the size fraction of the pellet distribution under consideration.


[Chart: estimated pellet size (mm) per identified pellet, with Actual Max, Actual Mean, Predicted Mean and Actual Min reference lines]

Figure 60: Estimated pellet size for the 13.2-16mm size fraction

Figure 61 displays the estimated pellet size distribution corresponding to the results contained in Figure 60.

[Histogram: frequency of pellets per estimated size fraction]

Figure 61: Estimated pellet size distribution for 13.2-16mm size fraction

For the above results, the estimated mean was 12.99mm, which was 1.61mm below the sieve mean of 14.6mm. The RMSE was calculated at 2.79mm.

16-19mm Size fraction

The following results are typical of the results that were obtained for the 16-19mm size fraction, with the following camera settings:

• Shutter speed: 1/1000
• ISO: Auto Adjustment
• Resolution: 1440x1080 (1.5MP)
• Aperture: Auto Adjustment


Figure 62 displays the estimated pellet size of individual pellets identified in a single sample frame (image) of the specified footage. The mean was taken to be the average of the size distribution, i.e. 17.5mm, while the sieve maximum and minimum values were taken as the boundary values of the size fraction of the pellet distribution under consideration.

[Chart: estimated pellet size (mm) per identified pellet, with Actual Max, Actual Mean, Predicted Mean and Actual Min reference lines]

Figure 62: Estimated pellet size for the 16-19mm size fraction

Figure 63 displays the estimated pellet size distribution corresponding to the results contained in Figure 62.

[Histogram: frequency of pellets per estimated size fraction]

Figure 63: Estimated pellet size distribution for 16-19mm size fraction

For the above results, the estimated mean was 14.79mm, which was 2.71mm below the sieve mean of 17.5mm. The RMSE was calculated at 4.41mm.

Consolidated Results

The results portrayed in Table 22 include a summary of the mean values of 10 randomly chosen samples in all the various categories listed in the table. The values in parentheses equal the difference between the estimated means and the sieve means, as stated below the size fractions in the headings of the various columns. The table also compares the two different shutter speed settings, namely 1/1000 and 1/2000.

Table 22: Consolidated results of the three pellet size fractions – Mean

Shutter Speed | 9-13.2mm (mean = 11.1) | 13.2-16mm (mean = 14.6) | 16-19mm (mean = 17.5)
1/1000        | 10.64 (-0.46)          | 12.88 (-1.72)           | 15.59 (-1.91)
1/2000        | 10.62 (-0.48)          | 12.84 (-1.76)           | 15.22 (-2.28)

Table 23 contains a summary of the RMSE values for 10 randomly chosen samples in all the various categories listed in the table. The table also compares the two different shutter speed settings, namely 1/1000 and 1/2000.

Table 23: Consolidated results of the three pellet size fractions – RMSE

Shutter Speed | 9-13.2mm | 13.2-16mm | 16-19mm
1/1000        | 2.41     | 3.22      | 4.17
1/2000        | 2.33     | 3.11      | 3.73

Figure 64 is a graphical representation of the values displayed in Table 23. This representation highlights the upward trend in the RMSE value, as the fraction changes to one containing larger pellets.

[Chart: average RMSE (mm) vs shutter speed (1/x of a second) for the 9-13.2mm, 13.2-16mm and 16-19mm fractions]

Figure 64: RMSE plotted for the values displayed in Table 23

Observations from individual size fraction tests

The most notable observation that can be made from the above results is that the algorithm, when applied to the individual size fractions, under-predicts pellet size. Specific reference is made to each estimated mean being below the sieve mean of the size fractions. However, it should also be noted that, due to the normal distribution of the sample pellet population with a mean of 12mm (refer to the established size distribution of the sample pellets as displayed in Section 6.2.1.3), each size fraction's sieve mean is not necessarily the median of the size fraction. This may be a reason for the increasing deviation of the estimated mean from the sieve mean, as displayed in Table 22. The application of the algorithm to a mixed size distribution pellet sample should indicate whether the above theory is correct, or whether the algorithm has a tendency to under-predict pellet size.

Mixed size distributions

Due to the observation made in the previous section regarding the possibly skewed size distribution within each size fraction, the effectiveness of the algorithm had to be tested on a more accurately described particle sample. Its effectiveness also had to be evaluated over a larger particle size distribution of which the distribution was known. In order to accomplish these two objectives, a mixed size distribution pellet sample was compiled. The mixed size distribution of pellets was created by randomly picking a specified number of pellets from each of the size fractions already available from previous tests. The three most prominent size fractions, 9<=x<13.2, 13.2<=x<16 and 16<=x<19, were chosen to form part of the mixed size distribution sample. 1000, 800, and 600 pellets were randomly picked from each of the mentioned size fractions respectively. This led to the size distribution tabled in Table 24:

Table 24: Size distribution of the mixed size distribution pellet sample.

Size Fraction | # Pellets in Size Fraction | Fraction of total pellets
9<=x<13.2     | 1000                       | 0.42
13.2<=x<16    | 800                        | 0.33
16<=x<19      | 600                        | 0.25

Since the number of pellets in each size fraction component of the mixed distribution was known, a weighted mean could be calculated. This was done to increase the accuracy of the sieve analysis data, as presented, and as used to evaluate the accuracy of the estimated data. The individual sieve means of the fractions, 11.1mm, 14.6mm and 17.5mm respectively, were multiplied by weight factors representative of their shares of the entire sample of pellets. The resulting weighted mean of the mixed size distribution was 13.87mm. The same procedure that was utilised during the test runs on the individual size fractions was utilised for the analyses of the footage of the mixed size distribution. In addition, the Chi2-test was performed on the estimated data in order to establish the goodness of fit to the sieve analysis data. Also, in keeping with the methodology of the previous tests, the results obtained with the shutter speeds set at 1/1000 and 1/2000 will be compared to one another in the rest of the section.
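The weighted mean calculation above can be reproduced directly from the counts in Table 24 and the per-fraction sieve means:

```python
# Sieve mean (mm) of each fraction mapped to its pellet count (Table 24):
counts = {11.1: 1000, 14.6: 800, 17.5: 600}

total = sum(counts.values())  # 2400 pellets in the mixed sample
weighted_mean = sum(m * n for m, n in counts.items()) / total  # ~13.87 mm
```

Each fraction's sieve mean is weighted by its share of the 2400-pellet sample (0.42, 0.33, and 0.25 respectively), which recovers the 13.87mm stated above.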

Shutter Speed: 1/1000

The results that follow are typical of the results that were obtained for the mixed size distribution. The camera settings below were applied during the capturing of the footage:

• Shutter speed: 1/1000
• ISO: Auto Adjustment
• Resolution: 1440x1080 (1.5MP)
• Aperture: Auto Adjustment


Figure 65 displays the estimated pellet size of individual pellets identified in ten frames randomly sampled from a video sequence of the specified footage. The sieve maximum and minimum values were taken as the boundary values of the size fraction of the pellet distribution under consideration, being 9mm and 19mm respectively.

[Chart: estimated pellet size (mm) per identified pellet across ten frames, with Actual Max, Actual Mean, Predicted Mean and Actual Min reference lines]

Figure 65: Estimated pellet size for the mixed size distribution with shutter speed at 1/1000

Figure 66 displays the estimated pellet size distribution corresponding to the results contained in Figure 65.

[Histogram: frequency of pellets per estimated size fraction]

Figure 66: Estimated pellet size distribution for mixed size distribution

Figure 67 displays the pellet size distribution obtained if the estimated pellet sizes of Figure 65 are sorted into the individual size fractions making up the mixed size distribution (refer to Table 24).


[Histogram: frequency per estimated size fraction: x<9: 26; 9<=x<13.2: 173; 13.2<=x<16: 199; 16<=x<19: 106; x>=19: 75]

Figure 67: Estimated pellet size distribution of the components of the mixed size distribution

Figure 68 overlays this plot on a plot of the sieve analysis (expected) particle size distribution of the mixed size distribution.

[Chart: fraction of total pellets per pellet size fraction (mm), sieve size fractions vs estimated size fractions]

Figure 68: Sieve (expected) size distribution vs estimated size distribution.

Using the information in Figure 68, the following Chi2-test was performed on the estimated data. Table 25 displays the data used for the test. The Observed values refer to the estimated particle sizes. The Expected values are calculated by multiplying the ratios of each of the size fraction components of the mixed size distribution (refer to Table 24) by the total number of particles identified by the algorithm. To conduct the Chi2-test, the estimated size fractions are to be evaluated against the sieve size fraction components. However, due to the algorithm under- and overestimating pellet sizes, any estimated pellets falling outside the sieve size fractions are grouped together in two additional size fraction components that are listed under the Observed category, namely x<9 and x>=19. These two fractions do not form part of the Expected categories that correspond to the make-up of the mixed size distribution, and due to the nature of the Chi2-test, calculating the Chi2 term cannot accommodate empty Expected categories. Therefore the decision was taken to conduct the Chi2-test by only taking into account the size estimation results that correspond to the three size fraction components that the mixed size distribution is made up of. By only taking these fractions into account, the total number of pellets identified is adjusted to the total displayed in the column titled "Total # pellets". The other columns correspond to the various components of the mixed size distribution. The Chi2 term for each of the size fraction components is calculated in the final row. It must be noted that, due to omitting the under- and overestimated categories from the Chi2-test, the results of the test are consequently more optimistic about the accuracy of the size estimation capabilities of the algorithm. Although the test provides some idea of the goodness of fit of the estimated data to the sieve analysis data, and thus the algorithm's estimation accuracy, the abovementioned fact needs to be taken into account, along with the actual under- and overestimated results, when critically evaluating the estimation capabilities of the algorithm. For the purposes of this study, the results of the Chi2-test will be used as an indicative measure and not an absolute measure.

Table 25: Data used for Chi2 test conducted on the mixed size distribution analysis.

             | x<9 | 9<=x<13.2 | 13.2<=x<16 | 16<=x<19 | x>=19 | Total # pellets
Observed     | 26  | 173       | 199        | 106      | 75    | 478
Expected     | 0   | 199.17    | 159.33     | 119.5    | 0     | 478
(fo-fe)^2/fe |     | 3.44      | 9.88       | 1.53     |       |

By adding the Chi2 terms together as the X2 (calc.) term, the following can be summarised as the outcome of the test. df refers to degrees of freedom, while X2 refers to the Chi2 critical value related to an α of 0.05 and a df of 2.

• H0 (null hypothesis): the data meet the expected distribution
• H1 (alternative hypothesis): the data do not meet the expected distribution
• α (alpha): 0.05
• df: 2
• X2: 5.99
• X2 (calc.): 14.84
• Result: Reject H0

The conclusion can thus be drawn that the estimated size distribution differs from the one expected based on the sieve size distribution from sieve analysis and the manual composition of the pellet sample.
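The Chi2 calculation described above can be sketched as follows, using the observed counts and fraction ratios from Tables 24 and 25:

```python
# Estimated counts within the three sieve fractions (out-of-range estimates
# omitted, as described in the text):
observed = [173, 199, 106]
ratios = [1000 / 2400, 800 / 2400, 600 / 2400]  # make-up of the mixed sample

total = sum(observed)                            # 478 pellets retained
expected = [r * total for r in ratios]           # expected counts per fraction
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

critical = 5.99    # Chi2 critical value for alpha = 0.05 and df = 2
reject_h0 = chi2 > critical
```

The computed statistic of about 14.84 exceeds the 5.99 critical value, reproducing the rejection of H0 reported above.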

Shutter Speed: 1/2000

In terms of the footage captured with the shutter speed set at 1/2000, the following results are typical of the results that were obtained for the mixed size distribution, with the following camera settings:

• Shutter speed: 1/2000
• ISO: Auto Adjustment
• Resolution: 1440x1080 (1.5MP)
• Aperture: Auto Adjustment

Figure 69 displays the estimated pellet size of individual pellets identified in ten frames randomly sampled from a video sequence of the specified footage. The sieve maximum and minimum values were taken as the boundary values of the size fraction of the pellet distribution under consideration, being 9mm and 19mm respectively.

[Chart: estimated pellet size (mm) per identified pellet across ten frames, with Actual Max, Actual Mean, Predicted Mean and Actual Min reference lines]

Figure 69: Estimated pellet size for the mixed size distribution with shutter speed at 1/2000

Figure 70 displays the estimated pellet size distribution corresponding to the results contained in Figure 69.

[Histogram: frequency of pellets per estimated size fraction]

Figure 70: Estimated pellet size distribution for mixed size distribution

Figure 71 displays the pellet size distribution obtained if the estimated pellet sizes of Figure 69 are sorted into the individual size fractions making up the mixed size distribution (refer to Table 24).


[Histogram: frequency per estimated size fraction: x<9: 16; 9<=x<13.2: 122; 13.2<=x<16: 100; 16<=x<19: 76; x>=19: 39]

Figure 71: Estimated pellet size distribution of the components of the mixed size distribution

Figure 72 overlays this plot on a plot of the sieve (expected) particle size distribution of the mixed size distribution.

[Chart: fraction of total pellets per pellet size fraction (mm), sieve size fractions vs estimated size fractions]

Figure 72: Sieve (expected) size distribution vs estimated size distribution

Using the information in Figure 72, the following Chi2-test was performed on the estimated data. Table 26 displays the data used for the test. The columns and rows are similar to those contained in Table 25, and due to the over- and under-prediction of estimated pellet size, the same procedure was followed as in the previous section.

Table 26: Data used for Chi2 test conducted on the mixed size distribution analysis

Fraction        Observed   Expected   (fo - fe)^2/fe
x<9             16         0          -
9<=x<13.2       122        124.17     0.038
13.2<=x<16      100        99.33      0.004
16<=x<19        76         74.5       0.030
x>=19           39         0          -
Total # pellets 298        298


The following can be summarised as the outcome of the test.

• H₀: the data meet the expected distribution; H₁: the data do not meet the expected distribution
• α (alpha) = 0.05
• df = 2
• X² (critical) = 5.99
• X² (calculated) = 0.07
• Result: accept H₀

Since the calculated statistic falls well below the critical value, the conclusion can be drawn that the estimated size distribution is similar to the one expected based on the sieve size distribution from sieve analysis and the manual make-up of the pellet distribution.
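The Chi2 statistic of Table 26 can be re-derived as follows. This is an illustrative Python sketch using the values reported in the thesis; only the three fractions with non-zero expected counts contribute to the statistic.

```python
# Observed and expected counts for the 9<=x<13.2, 13.2<=x<16 and 16<=x<19
# fractions (from Table 26); the two outer fractions have expected counts
# of zero and are excluded, giving df = 2.
observed = [122, 100, 76]
expected = [124.17, 99.33, 74.5]

# Chi-square statistic: sum of (fo - fe)^2 / fe over the fractions.
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Critical value for alpha = 0.05 and df = 2 (standard Chi2 table).
critical = 5.99

print(round(chi2, 2))                                  # ~0.07
print("Accept H0" if chi2 < critical else "Reject H0")
```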

Observations from mixed size distribution tests

The first conclusion when analysing the results of the mixed size distribution analyses is that the algorithm somewhat over-predicts particle size. This is mainly attributed to the fact that the average of the major and minor axes of the best-fitting ellipse is taken as the estimated size of a particle. Since the sieve size typically corresponds to the smallest diameter of the particle, and the estimated size is compared to this size as though it were the true size of the particle, this over-estimation can be accounted for. Secondly, it should be noted that the 1/2000 shutter speed footage correlated better with the sieve distribution, as determined by the results of the Chi2-tests conducted on the two sets of data (1/1000 and 1/2000). It is deduced that the increased shutter speed produces crisper footage, with less image blur due to the motion of the particles, and thus a truer, less elongated representation of each particle in the footage.
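The over-prediction mechanism can be illustrated numerically. This is a minimal Python sketch (the thesis algorithm was in MATLAB) with made-up axis lengths: the sieve aperture a pellet passes corresponds roughly to its smallest (minor-axis) diameter, while the estimate averages the major and minor axes.

```python
def estimated_size(major_axis_mm, minor_axis_mm):
    """Pellet size estimate as described in the thesis:
    the mean of the best-fitting ellipse's major and minor axes."""
    return (major_axis_mm + minor_axis_mm) / 2.0

# A slightly elongated pellet (illustrative values):
major, minor = 14.0, 10.0
sieve_size = minor  # roughly the sieve size the pellet would report

print(estimated_size(major, minor))  # 12.0, i.e. larger than the 10.0 mm
                                     # sieve size -> systematic over-prediction
```

For a perfectly spherical pellet the two axes coincide and the bias vanishes, which is why motion blur (which elongates the apparent particle) worsens the over-prediction.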

6.4.2.6 Actual process footage (Site Visit 2)

Roller feeder footage

Figure 73 provides an example of a visual representation of the output of the particle identification procedure. The results stem from analyses conducted on footage obtained using the Canon Legria HFM 36 video camera. The objects that were identified and that passed the filtering procedure have been numbered and outlined, with the outline corresponding to the actual particle boundary as delineated by the algorithm.


Figure 73: Sample output of the analyses done on actual pellet-on-roller footage of the Roller Feeder at Bokamoso

The following results are typical of those obtained for the actual process footage of pellets-on-rollers at the Roller Feeders at Bokamoso. The following camera settings were applicable:

• Shutter speed: 1/1000
• ISO: auto adjustment
• Aperture: auto adjustment
• Resolution: 1440x1080 (1.5 MP)

Figure 74 displays the estimated pellet size of individual pellets identified in a single sample frame (image) of the specified footage. The assumed mean was taken to be the average of the size distribution, i.e. 12 mm.


[Scatter chart: predicted pellet size (mm) for each of ~323 identified pellets, plotted against reference lines for the actual mean and predicted mean.]

Figure 74: Estimated pellet size for the actual pellet-on-roller footage of the Roller Feeder at Bokamoso

Figure 75 displays the estimated pellet size distribution corresponding to the results contained in Figure 74.

[Histogram: frequency of estimated pellet sizes per size fraction, with peak counts of 60 and 59 in the smaller fractions.]

Figure 75: Estimated pellet size distribution for the actual pellet-on-roller footage of the Roller Feeder at Bokamoso

For the above results, the estimated mean was 6.3 mm, which was 5.7 mm below the assumed mean of 12 mm. The RMSE was calculated as 6.15 mm.
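The two reported metrics can be computed as follows. This is an illustrative Python sketch with placeholder data (the thesis used MATLAB, and does not state the RMSE formula explicitly; here the RMSE is assumed to be taken against the assumed mean).

```python
import math

def mean_and_rmse(sizes_mm, assumed_mean_mm):
    """Return (estimated mean, RMSE against the assumed mean)."""
    est_mean = sum(sizes_mm) / len(sizes_mm)
    rmse = math.sqrt(sum((s - assumed_mean_mm) ** 2 for s in sizes_mm)
                     / len(sizes_mm))
    return est_mean, rmse

# Placeholder estimates, not thesis data:
sizes = [4.0, 6.0, 8.0, 10.0]
est_mean, rmse = mean_and_rmse(sizes, assumed_mean_mm=12.0)
print(est_mean - 12.0, rmse)  # deviation from assumed mean, and RMSE
```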

Observations from actual roller feeder footage

It was noted that, as in the case of the individual size fractions of the simulated process footage, there is a general under-prediction of pellet size. However, upon closer examination of the results, two tendencies were noticed that contribute to the under-prediction with regard to actual roller feeder footage:

• Inaccurate pellet identification prevails, especially where parts of the pellets are omitted from the identified pellet profile. This is clearly visible in Figure 76.


• There exists a tendency of the algorithm to identify smaller pellets and omit larger pellets, as depicted in Figure 77.

Figure 76: Parts of the pellets are frequently omitted from the estimated pellet profiles

Figure 77: The tendency of the algorithm to identify smaller pellets and omit (miss) larger pellets

Sintered pellet on conveyor footage

Figure 78 provides an example of a visual representation of the output of the particle identification procedure. The results stem from analyses conducted on footage obtained using the Canon EOS 500D digital camera. The objects that were identified and that passed the filtering procedure have been numbered and outlined, with the outline corresponding to the actual particle boundary as delineated by the algorithm.


Figure 78: Sample output of the analyses done on actual pellet-on-conveyor footage at Bokamoso

The following results are typical of those obtained for the actual process footage of sintered-pellets-on-conveyor at Bokamoso. The following camera settings were applicable:

• Shutter speed: 1/1000
• ISO: 3200
• Aperture: F4.0
• Resolution: 2352x1568 (3.7 MP)

Figure 79 displays the estimated pellet size of individual pellets identified in a single sample frame (image) of the specified footage. The assumed mean was taken to be the average of the size distribution, i.e. 12 mm.


[Scatter chart: estimated pellet size (mm) for each of ~176 identified pellets, plotted against reference lines for the assumed mean and estimated mean.]

Figure 79: Estimated pellet size for the actual pellet-on-conveyor footage at Bokamoso

Figure 80 displays the estimated pellet size distribution corresponding to the results contained in Figure 79.

[Histogram: frequency of estimated pellet sizes per size fraction, with a peak count of 24.]

Figure 80: Estimated pellet size distribution for the actual pellet-on-conveyor footage at Bokamoso

For the above results, the estimated mean was 10.74 mm, which was 1.26 mm below the assumed mean of 12 mm. The RMSE was calculated as 4.73 mm. What should be noticed, especially when considering Figure 81 (a repeat of Figure 80, with emphasis added), is the presence of fines in the sample footage. The presence of fines can negatively impact the estimated pellet size distribution.


[Histogram: as Figure 80, with the fines bins of the distribution emphasised.]

Figure 81: Estimated pellet size distribution for the actual pellet-on-conveyor footage at Bokamoso (emphasis added)

Figure 82 highlights the identified objects (IO) that make up the emphasised parts of the distribution in Figure 81.

Figure 82: Highlighting the presence of fines as part of the IO

When fines of <4 mm were omitted from the evaluation, the resulting estimated size distributions were those displayed in Figure 83 and Figure 84.
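The fines cut can be sketched as a simple filter. This is an illustrative Python sketch with placeholder data; whether the 4 mm boundary itself is kept or dropped is an assumption here.

```python
# Cut-off below which identified objects are treated as fines and omitted,
# as done for Figures 83 and 84.
FINES_CUTOFF_MM = 4.0

def omit_fines(sizes_mm, cutoff=FINES_CUTOFF_MM):
    """Drop all size estimates below the fines cut-off."""
    return [s for s in sizes_mm if s >= cutoff]

# Placeholder estimates, not thesis data:
sizes = [1.5, 3.0, 5.0, 11.0, 13.0]
kept = omit_fines(sizes)
print(sum(kept) / len(kept))  # mean shifts upward once fines are removed
```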


[Scatter chart: predicted pellet size (mm) for each of ~155 identified pellets (fines omitted), plotted against reference lines for the actual mean and predicted mean.]

Figure 83: Estimated pellet size for the actual pellet-on-conveyor footage at Bokamoso – fines of <4mm omitted

[Histogram: frequency of estimated pellet sizes per size fraction, fines of <4 mm omitted.]

Figure 84: Estimated pellet size distribution for the actual pellet-on-conveyor footage at Bokamoso – fines of <4mm omitted

For the above results, the estimated mean was 11.76 mm, which was 0.24 mm below the assumed mean of 12 mm. The RMSE was calculated as 3.61 mm.

Observations from actual sintered-pellet-on-conveyor footage

From the above results concerning the sintered pellet-on-conveyor footage, it can be concluded that the algorithm produces fairly accurate size estimation. This is evident from the estimated mean correlating well with the assumed mean, especially when the effect of fines, which has been shown to have a considerable effect on the estimated pellet size distribution, is omitted from the analyses. It is known from Gloy (2015) that fines are considered a detrimental problem for downstream processes, especially the reduction process in the SAF. Ideal operating conditions would not allow fines to be present in the process, and fines should therefore not form part of the desired size distribution of the produced pellets. It is thus considered valid to omit these fines from the analyses conducted in this study.


It should further be noted that false identification of spaces between pellets and of clusters of pellets is also present. The effects of these falsely identified OOI are, however, mitigated in that they are eliminated from the size estimation through the various filters discussed in Section 6.4.2.1. Here, extent and eccentricity are of specific importance for the effective mitigation of this adverse occurrence.
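The extent/eccentricity filtering idea can be sketched as follows. This is an illustrative Python sketch (the thesis used MATLAB region properties); the threshold values and the `IdentifiedObject` container are assumptions, not values from the thesis.

```python
from dataclasses import dataclass

@dataclass
class IdentifiedObject:
    extent: float        # object area / bounding-box area, in [0, 1]
    eccentricity: float  # 0 = circle, approaching 1 = line segment

def passes_shape_filter(obj, min_extent=0.6, max_eccentricity=0.85):
    """Near-circular pellets fill their bounding box well and have low
    eccentricity; gaps between pellets tend to be thin and elongated,
    so they fail one or both criteria. Thresholds are illustrative."""
    return obj.extent >= min_extent and obj.eccentricity <= max_eccentricity

pellet = IdentifiedObject(extent=0.78, eccentricity=0.40)
gap = IdentifiedObject(extent=0.30, eccentricity=0.95)
print(passes_shape_filter(pellet), passes_shape_filter(gap))  # True False
```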


7 PHASE 3 – CONCEPTUAL CONTROL FRAMEWORK DEVELOPMENT

This chapter deals with the development of the conceptual control framework for the pelletizing process at the Bokamoso Pelletizing Plant. Assuming accurate estimation of the size distribution of the pellets produced by the FeCr pelletizing plant, the chapter indicates how the output of the pellet size estimation system, developed in Chapters 4 to 6, could be incorporated into and used as part of an existing control system of a FeCr pelletizing plant. Its focus is on providing monitoring and diagnosis of the pelletizing circuit operation in order to improve automatic process control, and to maintain the produced pellet size distribution at a desired and optimal distribution by eliminating production excursions. Achieving this should not only improve the overall pelletizing process in terms of smooth production, process efficiency and process safety, but also provide a more suitable quality pellet feed for the subsequent reduction process in SAFs, and hence a more profitable FeCr reduction operation. The chapter has the following structure:

• Outline the roadmap for the design of a conceptual control framework.
• Conduct a process mapping of the pelletizing circuit. This is done to establish the current as-is state of the plant. During this process, the various variables and measured variables (MVs) of the system will be evaluated to establish which have a significant effect on the controlled variable. The variables that will be incorporated into the conceptual control framework, termed the critical MVs, will also be identified and evaluated for suitability for this purpose.
• Finally, develop the conceptual control framework. This will be done by evaluating the critical MVs for inclusion in the framework, as well as establishing decision criteria (actions) that will provide the required control to produce the desired outputs of the system under consideration.

7.1 ROADMAP FOR CONCEPTUAL CONTROL FRAMEWORK DESIGN

Taking into account the process of constructing a controller as stated by Bernard (1988) and the key considerations stated by Machunter & Pax (2007) (refer to Chapter 2.3.2), the following roadmap was used as a guideline in designing the conceptual control framework:

1. Define all the process variables during a process mapping procedure
   a. Define what to control (controlled variable)
      i. This refers to the pellet size distribution, with the mean of the size distribution of specific interest.
   b. Define the scope of control
      i. This refers to the region or section of the pelletizing plant in which the critical measured variables fall (as identified in the following step).
2. Analysis of critical measured variables
   a. Define measurements (measured variables) and associated sensors
      i. It is of specific interest to define and describe the process variables that critically affect the controlled variable.
      ii. The placement of the sensors is also of importance, since this will affect aspects such as the time delay for disturbances and alterations to propagate through the system.
3. Development of the conceptual control framework


   a. Establish and describe the variables chosen for inclusion in the framework
   b. Define the final control elements
      i. These refer to the equipment or instruments that would be mechanically adjusted to bring about a change in the process in order to alter the manipulated (measured) variables. These control elements therefore have direct causal relationships with measured variables.
   c. Define and discuss control actions with regard to the critical MVs and specific states of the system
   d. Discuss the control algorithms used in the proposed framework
      i. Different types of control algorithms
      ii. Different types of control algorithm enhancements

Since the objective of this study was to develop a conceptual control framework, the development of the conceptual control framework did not involve simulations of system operation for various operating scenarios, or at specified operational states or critical performance parameters. The further study or development of such a control framework, aimed at implementation, would have to involve such system simulation and circuit performance analysis.

7.2 PROCESS MAPPING

To establish a clear understanding of the pelletizing process and outline its critical MVs, a process mapping procedure was undertaken for the pelletizing plant. Another output of this procedure would be a Piping & Instrumentation Diagram (P&ID) of the focus area within the pelletizing process, around which the conceptual control framework would be developed. This is also referred to as the scope of control, as mentioned in Section 7.1. In terms of the critical MVs, information from Gloy (2015) and Outotec Oyj (2015b) was used to identify all measured variables in the focus area.

7.2.1 Critical measured variables

In determining the critical measured variables, all the input variables in the pelletizing process were evaluated in terms of their effect on pellet size. Those variables considered to have a significant effect on pellet size were further analysed and used during the development of the conceptual control framework. The effect these variables have on pellet size is summarised in Figure 85. In the figure, each variable is represented by a different colour. The intensity of the colours corresponds to the quantitative measure of the variables: increased colour saturation relates to an increase in a given variable's quantity, and reduced saturation (dilution) to a decrease. As the measure of the various variables changes, its effect on the pellet size produced in the pelletizing process, as indicated on the y-axis, can be deduced.


[Colour-intensity chart relating pellet size to the following process variables: grain size distribution (concentrate), grain size distribution (filter cake), bentonite, moisture content of pellets, level in mixer (mixing efficiency), screen gap size (size of circulation), drum rotation speed, and drum angle.]

Figure 85: Process variables that have a significant effect on pellet size produced by the pelletizing system.

From Figure 85, the effect most variables have on pellet size can be readily interpreted. In terms of the grain size distributions of the filter cake and the concentrate, it should be noted that both too fine and too coarse size distributions have a negative effect on pellet size. If too coarse, the raw material granules do not bind adequately and pellets break apart. If too fine, pellets are formed that are too small and not suitable for the sintering process, or for the subsequent reduction process in the SAFs.

7.2.2 Establishing the focus area

The focus area was chosen as the section of the pelletizing circuit from the proportioning stage to the sintering stage. This section automatically includes the mixing and pelletizing stages. In establishing the boundaries of the focus area, two criteria were used. The first criterion was that the focus area should include as many of the critical measured variables as possible. Firstly, these variables are deemed to have a significant effect on pellet size, and thus on pellet size distribution. Secondly, these variables were identified as being easy to manipulate within the context of a pelletizing process in order to induce a change in the process. The second criterion concerns the time delay for changes to propagate through the system and produce the desired change(s) to the controlled variable. The P&ID shown in Figure 86 is a representation of the focus area chosen for this study, in its current state as part of the pelletizing system at Bokamoso. It also indicates the various process handles currently utilised in the control process at Bokamoso.


[P&ID: disc feeder (filter cake), loss-in-weight feeder (bentonite) and water supply with flow transmitter/controller (FT/FC) loops feeding the mixer, followed by the pelletizer, the sintering furnace and the sintered pellet discharge; manual sampling points (AR) for moisture and grain size at each stage.]

Figure 86: Current state of the plant – proportioning, mixing, pelletizing and sintering.


7.2.3 Fixed variables

These refer to the process variables that cannot be changed, or that are changed very seldom, due to the design of the process and the various equipment utilised in the process. The following variables are considered fixed variables:

• Level in mixer (which affects mixing efficiency)
• Screen gap size (which affects recycle load size)
• Drum angle (which affects the pelletizing process and thus the size of pellets produced)
• Drum rotation speed (which also affects the pelletizing process and thus the size of pellets produced)

7.2.4 Disturbance variables

The following variables were considered disturbance variables, on account of the fact that they affect the controlled variable adversely and cannot be changed within the focus area of the pelletizing process. This is because these variables are influenced by external factors, such as the mining process of the raw material, or because they are altered in stages far upstream of the focus area, as is the case for the milling stage (raw material grinding). The two variables classified as disturbance variables are:

• Grain size distribution of raw material concentrate
• Grain size distribution of filter cake

7.3 ANALYSIS OF CRITICAL MEASURED VARIABLES

The measured process variables identified as critical MVs in Section 7.2.1 are further analysed to establish their effect on the controlled variable, the pellet size distribution. Table 27 summarises this analysis. The primary variables are so named due to the significant effect each has on the controlled variable, and due to the ease with which each can be manipulated to induce a desired effect. The secondary variables are so listed due to the delay in the feedback of the sampling results that establish the current value of these variables. However, if future work can produce real-time or near real-time feedback on the current state of these variables, they could prove valuable for the effective control of pellet size distribution.

Table 27: The effect of various critical measured variables on the controlled variable.

Measured process variable and its effect on the controlled variable (pellet size distribution):

Primary variables:
• Filter cake (concentrate) flow rate: no direct effect; indirect effect due to composition
• Bentonite flow rate: High = small pellets; Low = large pellets
• Water addition flow (mixer pan): High = large pellets; Low = small pellets

Secondary variables:
• Grain size of filter cake: High = small pellets; Low = small pellets
• Moisture content of filter cake: High = large pellets; Low = small pellets
• Moisture content of feed mixture after mixer: High = large pellets; Low = small pellets
• Moisture content of pellets after pelletizing: High = large pellets; Low = small pellets


Table 28 contains a summary of the sensors used to measure each of the variables analysed in Table 27.

Table 28: Sensors used for variable measurements.

Measured variable and sensor:

Primary variables:
• Filter cake (concentrate) mass flow: belt scale
• Bentonite mass flow (flow rate): loss-in-weight feeders
• Water addition volumetric flow (mixer pan): control valve

Secondary variables:
• Grain size of filter cake (μm): laboratory sample
• Moisture content of filter cake (%): laboratory sample
• Moisture content of feed mixture after mixer (%): infrared analyser
• Moisture content of pellets after pelletizing (%): infrared analyser

7.4 DEVELOPMENT OF CONCEPTUAL CONTROL FRAMEWORK

From the previous analyses, the variables listed in Table 29 and Table 30 were chosen for application in the conceptual control framework as the primary control variables. These two tables need to be interpreted in conjunction. Table 29 indicates the reason for the variables being chosen as primary control variables, while Table 30 states the effect these variables have on the controlled variable, as well as the final control element used to alter each variable.

Table 29: Process variables chosen for use in conceptual control framework.

Process variable and reason for choosing it:

• Bentonite flow rate: acts as binder; absorbs excess water in filter cake; easily added to filter cake slurry
• Water flow rate: acts as binder; easily added to filter cake slurry
• Ratio of bentonite and filter cake: important for effective operation and execution of downstream processes
• % water to concentrate: important for effective operation and execution of downstream processes

Table 30: Effect of chosen variables on controlled variable and the respective final control elements.

Process variable, effect on controlled variable (pellet size mean), and final control element:

• Bentonite flow rate: adding bentonite decreases pellet size and absorbs/decreases water %. Final control element: loss-in-weight feeders.
• Water flow rate: adding water increases pellet size and increases water %. Final control element: control valve in supply line.
• Ratio of bentonite and filter cake: binding of pellets; sintering of pellets; pellet strength. Final control elements: loss-in-weight feeders, disc feeder.
• % water to concentrate: binding of pellets; sintering of pellets; pellet strength. Final control elements: control valve in supply line, disc feeder.


It is important to note that the above tables translate to two variables and two ratios that are considered the primary control variables in the conceptual control framework. It should also be emphasised that the ratios noted in the above tables are especially important for the effective operation of downstream processes. This is true not only for the pelletizing plant, but also for the furnace plant. Therefore, whilst manipulating certain variables to improve the control of the controlled variable, these ratios need to be maintained as far as possible. The following two tables, Table 31 and Table 32, suggest control actions regarding the two primary manipulated variables at specified states of the controlled variable. These states of the controlled variable effectively translate to different possible process states: the first process state represented is one where a small pellet size distribution is being produced, and the second is one where a large pellet size distribution is being produced. The tables indicate the bentonite and water percentages as part of the total slurry fed to the pelletizer. The actions listed were determined with the following criteria in mind:

• Addition of bentonite decreases pellet size.
• Addition of water increases pellet size.
• Maintain desired/optimal levels (ratios) of variables.
• Addition of bentonite contributes more to material costs than the addition of water. Therefore, the addition of bentonite needs to be reduced where and if possible.

Table 31: Control actions regarding the two primary manipulated variables with a small pellet size process state.

                           Bentonite %
Small pellet size     Low       Desired   High
Water %  Low          Water+    Ben-      Ben-
         Desired      Water+    Ben-      Ben-
         High         N/A       Ben-      Ben-

Table 32: Control actions regarding the two primary manipulated variables with a large pellet size process state.

                           Bentonite %
Large pellet size     Low       Desired   High
Water %  Low          Ben+      Ben+      N/A
         Desired      Water-    Water-    Water-
         High         Water-    Water-    Water-
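The decision logic of Tables 31 and 32 can be expressed as a lookup. This is an illustrative Python sketch; the entries are transcribed from the two tables, with the states and action labels ("Water+", "Ben-", etc.) used as simple strings.

```python
# Control-action lookup keyed by (pellet-size state) then
# (water % level, bentonite % level). Entries follow Tables 31 and 32.
ACTIONS = {
    "small": {  # Table 31: small pellet size being produced
        ("low", "low"): "Water+",     ("low", "desired"): "Ben-",
        ("low", "high"): "Ben-",
        ("desired", "low"): "Water+", ("desired", "desired"): "Ben-",
        ("desired", "high"): "Ben-",
        ("high", "low"): "N/A",       ("high", "desired"): "Ben-",
        ("high", "high"): "Ben-",
    },
    "large": {  # Table 32: large pellet size being produced
        ("low", "low"): "Ben+",       ("low", "desired"): "Ben+",
        ("low", "high"): "N/A",
        ("desired", "low"): "Water-", ("desired", "desired"): "Water-",
        ("desired", "high"): "Water-",
        ("high", "low"): "Water-",    ("high", "desired"): "Water-",
        ("high", "high"): "Water-",
    },
}

def control_action(pellet_state, water_level, bentonite_level):
    """Suggested action for the given process state."""
    return ACTIONS[pellet_state][(water_level, bentonite_level)]

print(control_action("small", "low", "low"))    # Water+
print(control_action("large", "high", "high"))  # Water-
```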

Considering the actions indicated in the above two tables, it is important to remember the final control elements, listed in Table 30, that will be used to bring about these actions (alterations to the manipulated variables). From the above analyses, the preliminary control measures listed in Table 33 are proposed for incorporation into a control framework for the system and variables described in the various sections of this chapter. Along with the basic strategy of feedback control, cascade control and feedforward control are implemented as control enhancements in the control measures tabulated below. The latter are incorporated to mitigate the disadvantages associated with feedback control, as mentioned in Chapter 2.3.3. The two enhancements were chosen due to their relative ease and wide spectrum of implementation, and their effective and efficient control performance.

Table 33: Preliminary control measures for incorporation into the conceptual control framework for the pelletizing circuit at Bokamoso.

Manipulated variable and control algorithm:

• Bentonite flow rate: feedback control with cascade control on loss-in-weight feeder
• Water flow rate: feedback control with cascade control on control valve
• Ratio of bentonite and filter cake: feedforward ratio control
• % water to concentrate (filter cake mixture): feedforward ratio control
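The feedforward ratio control of Table 33 can be sketched as follows: the bentonite and water setpoints are slaved to the measured filter cake flow so the ratios are held while the cake flow varies. This is an illustrative Python sketch; the ratio values and flow units are assumptions, not plant figures.

```python
def ratio_setpoints(filter_cake_flow_tph, bentonite_ratio=0.01,
                    water_ratio=0.09):
    """Feedforward ratio control: return (bentonite t/h, water t/h)
    setpoints proportional to the measured filter cake flow.
    Ratio values are illustrative assumptions."""
    return (filter_cake_flow_tph * bentonite_ratio,
            filter_cake_flow_tph * water_ratio)

# A disturbance in cake flow immediately moves both setpoints, before any
# change in pellet size is observed (the feedforward advantage):
ben_sp, water_sp = ratio_setpoints(50.0)
print(ben_sp, water_sp)
```

In the proposed framework these setpoints would feed the cascade FT/FC loops on the loss-in-weight feeder and the water control valve.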

In order to aid the above control measures, specific alarms were put in place to indicate any deviation outside of the specified standard deviation of the controlled variable, i.e. the mean of the pellet size distribution. These alarms, along with the abovementioned control measures, are consequently incorporated into the current control system (refer to the P&ID in Figure 86), in order to establish the proposed conceptual control framework for the pelletizing plant at Bokamoso. The P&ID depicted in Figure 87 is a representation of the current pelletizing circuit control with the conceptual control incorporated. The conceptual control alterations are indicated in red.
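The deviation alarms can be sketched as a simple band check. This is an illustrative Python sketch; the band of one standard deviation around the setpoint and the numeric values are assumptions.

```python
def size_alarm(mean_size_mm, setpoint_mm, std_dev_mm):
    """Fire a high or low alarm when the current mean pellet size leaves
    a band of one standard deviation around the setpoint."""
    deviation = mean_size_mm - setpoint_mm
    if deviation > std_dev_mm:
        return "HIGH"   # high pellet-size alarm
    if deviation < -std_dev_mm:
        return "LOW"    # low pellet-size alarm
    return None         # within the acceptable band

# Illustrative values:
print(size_alarm(14.5, setpoint_mm=12.0, std_dev_mm=2.0))  # HIGH
print(size_alarm(12.5, setpoint_mm=12.0, std_dev_mm=2.0))  # None
```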


[P&ID: the Figure 86 flowsheet with the proposed conceptual control additions indicated in red: analysis transmitters (AT) after the pelletizer and sintering stages, an analysis controller (AC) with high and low analysis alarms (AA H, AA L) on the controlled variable, ratio control (R) on the feed streams, and flow transmitter/controller (FT/FC) cascade loops on the bentonite loss-in-weight feeder and the water control valve.]

Figure 87: The proposed conceptual control framework for the pelletizing circuit at Bokamoso. The proposed conceptual control measures are incorporated into the current control system and clearly shown.


8 CONCLUSIONS AND RECOMMENDATIONS

This chapter summarises the outcome of the study, as discussed in this thesis, and comments on its success in achieving the objectives set out in Chapter 1. It also discusses the major findings made during the execution of the study and, with a view to similar future studies, makes recommendations for improving on the results achieved.

8.1 DEVELOPMENT OF A PARTICLE SIZE ESTIMATION ALGORITHM

8.1.1 Literature Study

The objective was to develop a particle size estimation algorithm suitable for implementation as part of a conceptual particle size distribution estimation sensor. In the first part of this development, the critical literature review was instrumental in laying a solid foundation in terms of the fundamental concepts associated with DIP and PSE. The focus was on techniques typically used for size estimation of objects in digital images. It also aided in creating a clear picture of the current techniques and technologies used for PSE in industry, especially the mineral beneficiation industries.

8.1.2 Development of a PSE algorithm

8.1.2.1 Objectives with developing the PSE algorithm

In the second and third parts, an algorithm was developed, tested and validated that could successfully identify, delineate, separate, and uniquely describe similarly shaped objects that are irregular and spherical in form. In the first instance the algorithm incorporated procedures typically associated with standard DIP. It also incorporated methods suited specifically to the delineation and description of similarly shaped spherical pellets. Finally, algorithms that have proven successful in similar PSE applications in industry were incorporated to conduct PSE on the particles under consideration. The result of incorporating all of the above was the final version of the algorithm, which is discussed in Chapter 6.4.2.

During the development of the algorithm, it was imperative to compare various versions and variations of the algorithm in order to obtain the most suitable problem-specific solution. It was also imperative to compare different hardware and hardware settings in obtaining the images on which the algorithm would be applied. Alternatives with regard to the aforementioned were then applied to footage of different forms of the OOI, and to the different environments in which the OOI occurred. The different types of footage refer specifically to simulated process footage and actual process footage. This allowed for the testing, modification and validation of an industry-suitable size estimation algorithm solution. Finally, various performance indicators were employed to establish the algorithm's accuracy, reliability and suitability as a pellet size estimation algorithm.

The development of the above algorithm in MATLAB©, software that is widely used and available in industry, lends itself to compatibility with a variety of operating systems and hardware configurations. The developed algorithm is thus readily available for implementation and alteration on these configurations.


8.1.2.2 Systematic development of the PSE algorithm

Phase 1

In order to familiarise the author with the basics of PSE with DIP, Phase 1 of the algorithm development was aimed at developing a conceptual DIP PSE algorithm that had to be able to identify and estimate the size of moving particles. Even though the size estimation of the conceptual algorithm was not very accurate, the conceptual development proved invaluable in highlighting key factors that contribute to the success of any DIP PSE operation. Most notable was the impact of adequate lighting, and of hardware configuration and settings. In terms of procedures that form part of the algorithm and that are critical to its success, the following aspects were highlighted:

 Increased contrast enhancement between particles and background.
 Effective thresholding in order to separate areas of interest from the image background.
 Accurate segmentation, especially concerning pixels representing the edges of OOI.
 Accurate handling of partially obscured and overlapping particles.

It was furthermore established that size estimation by polygon fitting, specifically 2-dimensional square fitting as executed with the Blob Analysis function in MATLAB©, was a viable candidate for PSE in subsequent algorithm development. Finally, it was highlighted that performance criteria applied to various procedures within the algorithm would be required to enhance its efficiency and accuracy.

Also part of Phase 1 of the algorithm development was the collection of field data from the actual environment for which the algorithm would be developed, namely the Bokamoso FeCr pelletizing plant at Glencore's Wonderkop Smelter. The Site Visit served to familiarise the author with the environment, and provided sample footage to be used during further algorithm development. Even though the footage proved to be of too poor a quality to be used in analysis, it did provide insight into the image-capturing requirements that would prove invaluable in subsequent image acquisition.
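The square-fitting approach noted above — threshold the image, then take the bounding square of each blob as its size — can be sketched outside of MATLAB©. The following Python/NumPy sketch (the synthetic frame, intensity values and pixel-to-mm ratio are illustrative assumptions, not thesis data) implements Otsu's global threshold together with a 2-dimensional square fit:

```python
import numpy as np

def otsu_threshold(gray):
    """Global Otsu threshold: pick the level that maximises the
    between-class variance of background and foreground pixels."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                        # class-0 probability
    mu = np.cumsum(p * np.arange(256))          # cumulative intensity mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))

def square_fit_mm(mask, px_per_mm):
    """2-D square fitting: side of the bounding square of a blob, in mm."""
    ys, xs = np.nonzero(mask)
    side_px = max(ys.max() - ys.min() + 1, xs.max() - xs.min() + 1)
    return side_px / px_per_mm

# Synthetic frame: dark background with one bright pellet disc of radius 15 px
img = np.full((100, 100), 20, dtype=np.uint8)
yy, xx = np.ogrid[:100, :100]
img[(yy - 50) ** 2 + (xx - 50) ** 2 <= 15 ** 2] = 200

mask = img > otsu_threshold(img)
print(square_fit_mm(mask, px_per_mm=2.5))       # 31 px across -> 12.4 mm
```

Note that a blob's bounding square depends on the blob's orientation for non-circular particles — the rotational-variance drawback of polygon fitting that is addressed later in this chapter.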

Phase 2

Phase 2 was primarily aimed at completing the development of a problem-specific algorithm solution, based on adequate sample footage in the form of simulated process footage as well as actual process footage.

Simulated pelletizing circuit

To acquire simulated process footage, the actual pelletizer environment had to be replicated; for this purpose it was necessary to build a scale model of a section of the pelletizer circuit at Bokamoso. The decision to replicate the Roller Feeder was considered adequately justified, with the reasons, summarised in Chapter 6.2.1, as follows:

 The pellets that pass over the roller feeder are fairly uniform in size.
 In terms of process monitoring and control, the pellets at this stage of the process are deemed most suitable as input for a process control framework.
 Pellets are fed evenly onto the roller feeder by a running conveyor, and move over the roller feeder in a single layer. Conclusions drawn could therefore be considered an accurate representation of the process's characteristics.
 Imaging equipment can easily be placed perpendicularly over the pellets, ensuring minimal distortion of the true shape of the pellets being viewed.


With the above in mind, the replication of the roller feeder commenced. In terms of background colour, i.e. the rollers, the replica proved satisfactorily similar. Noticeable differences between the replica and the actual system included the polyurethane coating applied to the actual rollers at Bokamoso (which prevents green pellets from sticking to the rollers and prevents build-up of agglomerate material on the rollers), and a slight difference in roller diameter. These two differences were, however, not considered significant for the purposes of acquiring sample footage.

The only major difference between the actual and simulated set-ups that was considered significant was the use of sintered pellets on the simulated set-up instead of green pellets; due to constraints with regard to repeatability, green pellets were not a feasible option. Although at first deemed acceptable, this proved to be a key influencing factor in the effectiveness of the PSE algorithm when applied to actual process footage, which did entail the presence of green pellets. This is elaborated on in later discussions.

In terms of operation, the replica proved adequate in simulating the actual roller feeder. The speed of the rollers was adjusted to match that of the actual roller feeder, thereby simulating pellet movement, and the size of the feeder was sufficient to allow full-frame images consisting entirely of pellet-on-roller footage.

Simulated footage – Round 1

Having established adequate pre-processing procedures during Phase 1 of algorithm development, Round 1 of the Phase 2 tests focused on evaluating size estimation algorithms. The main objective was to evaluate and compare the effectiveness of two different size estimation procedures applied in the PSE algorithm: polygon fitting through the Blob Analysis method, and spherical fitting through the Hough Transform. Additionally, in order to establish the effectiveness of these algorithms, an efficient performance criterion was needed against which size estimation results could be validated; this resulted in size validation through the pixel-to-mm ratio.

The initial results obtained using the two size estimation methods were considered adequate in proving that the algorithm developed up to that stage of the project warranted further investigation, both in terms of identifying and delineating particles, and in conducting size estimation of the identified particles. With regard to the latter, size estimation with polygon fitting was accurate to an error of 1.52%, and size estimation with spherical fitting to an error of 0.07%. However, further analysis and validation on larger samples and mixed particle size samples, i.e. multiple size fractions, would have to be conducted in order to conclusively validate and describe the system's accuracy and effectiveness.

Another important factor noted during this round of analyses was the difficulty of analysing images in which multiple rollers were present. This was mainly due to the contrast in image intensity levels between the upper parts of the rollers and the areas between rollers. This contrast was a major obstacle to the efficient operation of the algorithm, reducing its ability to isolate pellets using the contrast between background and pellets.
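The pixel-to-mm validation used in this round reduces to a one-off calibration against an object of known physical length captured in the frame. A minimal Python sketch (all numbers illustrative assumptions, not thesis calibration data):

```python
# Calibration: an object of known physical length is captured in the frame
# and its length in pixels measured; every subsequent pixel measurement is
# then scaled by the resulting ratio.
ref_len_mm = 50.0        # known length of the reference object (assumed)
ref_len_px = 125.0       # its measured length in the image, in pixels

px_per_mm = ref_len_px / ref_len_mm      # 2.5 px per mm

def px_to_mm(size_px):
    """Convert an estimated size in pixels to millimetres."""
    return size_px / px_per_mm

print(px_to_mm(30))      # a 30 px pellet diameter -> 12.0 mm
```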
These difficulties led to three major conclusions being drawn:

1. An adaptation of the images used in analyses had to be investigated:


In order to overcome the mentioned difficulties, a solution was proposed in which the algorithm isolates (segments) a single roller from the image and analyses only the pellets moving over that roller. The roller area would be chosen such that the region of highest intensities was isolated, so as to maximise the contrast between the roller (background) and the lower intensities associated with the pellets (objects of interest). This solution proved successful in mitigating the effect and allowed particle size estimation to be conducted successfully and accurately. In terms of implementation as part of the final algorithm, however, such a solution would entail an additional set-up operation, in that a preferred image region for analysis would have to be selected and programmed into the algorithm.

2. An adaptation of the size estimation algorithm had to be investigated:

To improve on the foregoing solution, especially with algorithm application versatility in mind, alternative methods to segment and isolate OOI from the image background had to be investigated and implemented. Otsu's global thresholding procedure, utilised in Phase 1 of algorithm development, was adequate for the type of footage used during Phase 1 due to the uniformity in intensity levels of the image areas considered as background. This image characteristic changed, however, when simulated roller feeder footage was utilised. The change led to the abandonment of the global thresholding algorithm, with focus shifting to an algorithm capable of identifying local changes in intensity values. As such, an edge detection algorithm was utilised that produced spherical edge maps, each representing an OOI, i.e. a pellet. Following the successful detection of particle edges, a filling algorithm would fill these spherical edge areas, thus completing the identification of pellets in simulated process footage.
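The edge-detect-then-fill sequence of conclusion 2 can be illustrated with a small, library-free Python sketch. The gradient threshold, synthetic frame and flood-fill filling step below are illustrative stand-ins for the MATLAB© procedures actually used:

```python
import numpy as np

def fill_closed_regions(edges):
    """Fill areas enclosed by edge contours: flood-fill the background from the
    image border; anything not reachable from outside is edge or interior."""
    h, w = edges.shape
    outside = np.zeros((h, w), dtype=bool)
    stack = [(r, c) for r in range(h) for c in (0, w - 1)]
    stack += [(r, c) for r in (0, h - 1) for c in range(w)]
    while stack:
        r, c = stack.pop()
        if 0 <= r < h and 0 <= c < w and not outside[r, c] and not edges[r, c]:
            outside[r, c] = True
            stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return ~outside

def detect_pellets(gray, grad_thresh):
    """Edge map from local gradient magnitude, then fill the closed contours.
    Local intensity changes are used instead of one global threshold."""
    gy, gx = np.gradient(gray.astype(float))
    edges = np.hypot(gx, gy) > grad_thresh      # spherical edge map
    return fill_closed_regions(edges)           # filled pellet regions

# Synthetic roller frame: smoothly varying background plus one pellet disc
yy, xx = np.ogrid[:80, :80]
frame = (60.0 + 40.0 * (yy / 80.0)) + np.zeros((80, 80))  # roller gradient
frame[(yy - 40) ** 2 + (xx - 40) ** 2 <= 12 ** 2] += 90.0  # pellet step edge

filled = detect_pellets(frame, grad_thresh=20.0)
print(bool(filled[40, 40]), bool(filled[5, 5]))  # pellet True, background False
```

The smooth roller gradient (about 0.5 intensity units per pixel) stays below the gradient threshold, while the 90-unit step at the pellet boundary far exceeds it, so only the closed pellet contour is detected and filled.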
3. The suitability of the Roller Feeder as an area, within a FeCr pelletizing circuit, for the implementation and operation of an image analysis-based pellet size-distribution estimation system had to be re-evaluated:

However valid the motivations in Section 6.2.1.1, and even though the Roller Feeder had initially satisfied all the important criteria associated with a suitable area, the Round 1 analyses had shown that there were inherent difficulties, in terms of image analysis, associated with this specific area. Subject to further analyses in Round 2, it was preliminarily concluded that the roller feeder was probably not the optimal area for PSE in the actual FeCr pelletizing circuit. It was consequently decided that during the next site visit, conducted after the completion of Round 1, alternative areas would be investigated for image acquisition. This was deemed necessary especially in view of the eventual implementation of a PSE system at the Bokamoso pelletizing plant. Such an area would not only have to satisfy the aforementioned need, but also comply with the requirements of a control framework that would use the results of PSE conducted on the images obtained there.

Simulated footage – Round 2

Round 2 of analyses was aimed at improving the algorithm developed during Round 1, by incorporating the considerations and conclusions drawn from Round 1. Two significant alterations were made to the algorithm used in Round 1:

 Application of a bilateral filter for improved filtering, due to its ability to perform increased smoothing of more uniform intensity regions and further reduce undesirable image noise, while preserving the sharp intensity changes usually associated with object edges.


 Application of the watershed transform, used specifically to improve segmentation and thus the separation of pellets.

Three types of footage were analysed for algorithm development:

 Lab-scale Roller Feeder set-up
 Actual Roller Feeder footage (from Site Visit 2)
 Actual sintered pellet-on-conveyor footage (from Site Visit 2)

To aid accurate identification, delineation and analysis of OOI, two filters were developed to eliminate incorrectly identified OOI, ensuring that only the correct OOI, i.e. the pellets, were further analysed. The eccentricity and extent filters proved effective and valuable in this regard for the simulated process footage and the actual process footage respectively. For the simulated process footage, delineated objects typically included illumination strips on the rollers, delineated due to very similar localised image intensities; the eccentricity filter ensured that only circular objects were analysed for size estimation purposes, with rectangular objects disregarded. In the actual process footage, specifically the pellets-on-conveyor footage, delineated objects typically included oddly shaped "pellets", formed by pellets being grouped together as one, spaces between pellets being identified as a pellet, or spaces between pellets being grouped together with the actual pellets; the extent filter ensured that only circular objects were analysed for size estimation purposes, with oddly shaped objects disregarded.

The decision to continue the development of the algorithm in a MATLAB© script file is considered to have been of great value: it increased the ease with which the more complicated procedures could be implemented, altered, and integrated into the overall algorithm. Furthermore, the development and storage of a visual representation of the analysed images, with inferred data such as the delineated OOI highlighted and superimposed upon the original images, added value in terms of validation and representation of the output of the SE algorithm.
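The two filters can be approximated from a blob's pixel set alone: eccentricity from the central second moments (equivalently, from the best-fitting ellipse, and therefore rotation invariant), and extent as blob area over bounding-box area. A hedged Python sketch, in which the example shapes and pass/fail thresholds are illustrative rather than the thesis's tuned values:

```python
import numpy as np

def shape_filters(mask):
    """Eccentricity and extent of one delineated object, from its pixel set:
    a rough stand-in for the two filters used to reject non-pellet OOI."""
    ys, xs = np.nonzero(mask)
    # Central second moments give the best-fitting ellipse axes
    mu_yy, mu_xx = np.var(ys), np.var(xs)
    mu_xy = np.mean((xs - xs.mean()) * (ys - ys.mean()))
    common = np.sqrt(((mu_xx - mu_yy) / 2.0) ** 2 + mu_xy ** 2)
    lam_major = (mu_xx + mu_yy) / 2.0 + common   # major-axis variance
    lam_minor = (mu_xx + mu_yy) / 2.0 - common   # minor-axis variance
    eccentricity = np.sqrt(1.0 - lam_minor / lam_major)
    # Extent: object area relative to its bounding-box area
    bbox = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    return eccentricity, len(ys) / bbox

yy, xx = np.ogrid[:60, :60]
disc = (yy - 30) ** 2 + (xx - 30) ** 2 <= 14 ** 2   # a pellet-like blob
bar = np.zeros((60, 60), dtype=bool)
bar[28:32, 5:55] = True                             # an illumination strip

ecc_d, ext_d = shape_filters(disc)
ecc_b, ext_b = shape_filters(bar)
print(ecc_d < 0.3 and ext_d > 0.6)   # disc passes both filters: True
print(ecc_b > 0.9)                   # strip rejected by eccentricity: True
```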
The output of SE results to a Microsoft Excel file also allowed for the final analyses and graphical representation of results, aiding interpretation. These analyses included calculation of a sample mean and RMSE, as well as a Chi-Square test to establish the goodness of fit of the estimated data to the sieve analysis data.

In terms of size estimation, it was important to obtain a method that would not introduce error due to the orientation of the OOI in the 2D image plane, i.e. a rotationally invariant method. For this reason the rotationally invariant ellipse fitting method was used to determine particle size. It avoided the error associated with the rotationally varying polygon fitting (Blob Analysis) method, as well as the slight error associated with the single-measure spherical fitting Hough transform method, even though the latter is also rotationally invariant. Finally, Round 2 of the analysis continued to conduct size validation through a pixel-to-mm ratio. The ratio was developed by obtaining a reference mm size for each pixel in the image; this was done by capturing an object of known length in the image and relating its pixel size to its actual size.

In terms of size estimation on simulated process footage, the most notable observation was that the algorithm, when applied to the individual size fractions, under-predicts pellet size. Even though it only slightly underestimated pellet size for the 9-13.2mm size fraction, this under-prediction increased for the subsequent larger size fractions. However, it was noted that due to the normal distribution of the sample pellet


population with a mean of 12mm, each size fraction's sieve mean is not necessarily the median of that size fraction. This explained the increasing deviation of the estimated mean from the sieve mean as the size fraction increased.

When analysing the results of the mixed size distribution analyses, the first thing to note is that the under-prediction had given way to the algorithm somewhat over-estimating particle size. This is mainly attributed to the fact that the average of the major and minor axes of the best-fitting ellipse is taken as the estimated size of a particle. Since the sieve size typically corresponds to the smallest diameter of the particle, this over-estimation can be accounted for. It is thus recommended that future studies investigate methods by which this error, exaggerated by irregularly shaped particles, can be mitigated.

Secondly, it should be noted that the 1/2000 shutter speed footage correlated better with the sieve size distribution, as determined by the Chi-Square tests conducted on the two sets of data (1/1000 and 1/2000). As expected, the increased shutter speed produces crisper footage with less motion blur, and thus a truer, less elongated representation of each particle. Considering that the simulated process footage contained particles moving at a maximum of 0.865m/s, at a 1/1000 shutter speed a particle would be elongated by approximately 0.865mm. This elongation is equivalent to the distance travelled by the particle during the time that the camera's shutter is open, and it contributes to an inaccurate estimation of particle size. With a shutter speed of 1/2000, the elongation is reduced to a maximum of 0.433mm, halving the error associated with the movement of the particles.
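The elongation figures above follow directly from speed multiplied by exposure time, using the maximum particle speed reported for the simulated footage:

```python
# Apparent elongation of a moving particle = distance travelled while the
# shutter is open.  Maximum particle speed in the simulated footage: 0.865 m/s.
speed_mm_per_s = 865.0

def blur_elongation_mm(shutter_denominator):
    """Elongation (mm) at a shutter speed of 1/denominator seconds."""
    return speed_mm_per_s / shutter_denominator

print(blur_elongation_mm(1000))   # 0.865 mm at 1/1000 s
print(blur_elongation_mm(2000))   # 0.4325 mm at 1/2000 s: half the error
```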
The use of faster shutter speeds is thus considered imperative for increased particle size estimation accuracy, and is recommended for any future studies of a similar nature with similar process and OOI characteristics.

In terms of analyses on actual process roller feeder footage, a general under-prediction of pellet size is noted. This under-prediction is attributed to the algorithm not being effective in dealing with the contrasts in image intensity levels. These contrasts are present in two main areas: between the upper parts of the rollers and the areas between rollers, and in the intensity levels associated with the pellets themselves, the latter due to reflection caused by the presence of moisture in the pellets. As a result, two tendencies that led to under-prediction were noticed during closer examination of the results:

 Inaccurate pellet identification prevails, especially where parts of the pellets are omitted from the identified pellet profile.
 The algorithm tends to identify smaller pellets and omit larger pellets.

Due to the above, it is concluded and further underlined that the roller feeder is not a suitable area in the actual process for the implementation of a PSE system. If, for the sake of more effective process control, this area is still preferred above a downstream area, an algorithm should be developed that is able to deal with the contrasts in intensity levels mentioned above.

For the purposes of this study, i.e. FeCr pellet size estimation, the abovementioned difficulties with analysing roller feeder footage led to the conclusion that the least cumbersome methodology and most accurate results for actual process footage would be achieved with the analysis of sintered pellets-on-conveyor footage. Underlining this conclusion is the fact that, when the results for this type of footage are analysed, it can be


concluded that the algorithm produces particle size estimation results of acceptable accuracy. This is evident from the estimated mean correlating well with the assumed mean, even more so when the effect of fines, which has been proven to have a considerable effect on the estimated pellet size distribution, is omitted from the analyses. Since fines are considered a detrimental problem for downstream processes, especially the reduction process in the SAF, ideal operating conditions would not allow fines to be present in the process. These fines should therefore not form part of the desired size distribution of the produced pellets, and it is deemed valid to omit them from the analyses conducted in this study.

It should, however, be noted that false identification of spaces between pellets and clustering of pellets is also present when analysing this type of footage. It has been proven that these falsely identified OOI are eliminated from the size estimation results through the various filters incorporated in the algorithm, i.e. the extent and eccentricity filters.

Finally, even though it is recommended to analyse and measure sintered pellets-on-conveyor footage, it is noted that only surface pellets are analysed in this type of footage. Any inferences made regarding the entire population, based on the size estimation results from this type of analysis, may therefore be erroneous. It is recommended that methods be investigated and applied that would enable the algorithm to correctly interpret surface pellet data and subsequently infer correct information regarding the entire population.

8.2 DEVELOPMENT OF A CONCEPTUAL PELLETIZING-CIRCUIT CONTROL FRAMEWORK

The critical literature review had two main objectives associated with the development of a conceptual pelletizing-circuit control framework:

 Identifying and investigating various control structures and control strategies viable for use in the mineral beneficiation environment.
 Investigating the process dynamics and control of a typical FeCr ore pelletizing process.

Fundamentals were outlined regarding control structures and strategies deemed applicable to the specific problem of this study, with specific reference to feedback, feedforward, and cascade control. Furthermore, the two site visits at the Bokamoso pelletizing plant, as well as the interaction with the personnel at the plant (Gloy 2015), were critical for gaining insight into the process dynamics of a FeCr pelletizing plant in general, and of Bokamoso specifically. The information gathered and understanding gained formed the basis for the development of the conceptual control framework.

With the above knowledge base as foundation, the development of a conceptual control framework could commence. A roadmap for the development process was outlined, entailing the following two main actions:

 Conduct a process mapping of a pelletizing circuit, in particular the Bokamoso pelletizing circuit. Various process variables and measured variables (MVs) of the system were evaluated, and their individual effects on the controlled variable discussed. Critical MVs were also identified and evaluated for suitability for incorporation in the control framework.


 Development of the conceptual control framework. This was done by evaluating the critical MVs for inclusion in the framework, and by establishing decision criteria (actions) to provide the control required to produce the desired outputs of the system under consideration.

Following the evaluation of measured process variables, the variables considered to have a significant effect on pellet size were classified as critical measured process variables. This classification also allowed for the establishment of a focus area within the pelletizing circuit in which the control framework would be developed, based on two criteria: first, that the focus area should include as many of the critical measured variables as possible; second, the time delay for changes to propagate through the system and produce the desired change(s) in the controlled variable.

From the critical measured variables, primary and secondary variables were chosen based on the significance of each variable's effect on the controlled variable, and the ease with which it can be manipulated to induce a desired effect. Secondary variables were so named due to the delay in the feedback used to establish their current values. From the analysis of primary and secondary variables, four primary control variables were chosen for use in the control framework: two variables, bentonite flow rate and water flow rate, and two ratios, the ratio of bentonite to filter cake and the % water to concentrate. The two ratios, being essential for the effective operation of downstream processes, would have to be maintained (as far as possible) while certain variables were manipulated to improve the control of the controlled variable. Having chosen the primary control variables, the effects on the controlled variable (i.e. mean pellet size) of manipulating these variables and ratios were tabulated.
The associated final control elements for these variables were also identified. Having established the effects on the controlled variable, control actions were suggested for the two primary manipulated variables at specified states of the controlled variable. Two states were evaluated: the first being production of a small pellet size distribution, and the second being production of a large pellet size distribution. Finally, these actions led to the assignment of proposed control measures (structures and strategies) to each of the primary manipulated variables. These actions, along with associated alarms, were subsequently incorporated into the current pelletizing circuit control system in order to establish the proposed conceptual control framework for the pelletizing plant at Bokamoso.

Since the objective of this study was to develop a conceptual control framework, the development did not involve simulations of system operation for various operating scenarios, or at specified operational states or critical performance parameters. The study also did not investigate ways in which to make live or historic plant monitoring and diagnosis data available to plant operators; specific reference is made to the systems and equipment that would be required to gather and store sensor and controller data, as well as options for providing live graphical user interfaces on real-time plant operation to assist in effective system operation and maintenance. Further study or development of such a control framework, aimed at implementation, would have to involve such process simulation, circuit performance analysis, and systems and hardware investigation. The implementation of such a control framework would also have to ensure that all seven major categories of objectives and goals applicable to a control system are achieved (Marlin 2000).
Over and above the production-related objectives addressed by the conceptual control framework and the abovementioned optimisation simulations, such a


study would have to ensure safety of people such as operators, equipment protection, and mitigation of environmental impacts.
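As a closing illustration, the two controlled-variable states described in this section (production of small versus large pellet size distributions) map naturally onto a dead-band rule acting on the estimated mean pellet size. The set-point, band width and action directions in this Python sketch are illustrative assumptions only, not the tabulated control actions of the proposed framework:

```python
TARGET_MEAN_MM = 12.0    # desired mean pellet size (illustrative set-point)
BAND_MM = 1.0            # dead band before any control action (illustrative)

def control_action(estimated_mean_mm):
    """Suggest a direction for the water flow set-point; in the framework the
    bentonite flow would follow so that the bentonite:filter-cake and
    %-water-to-concentrate ratios are preserved as far as possible."""
    if estimated_mean_mm < TARGET_MEAN_MM - BAND_MM:
        return "increase water flow"     # small-pellet state: promote growth
    if estimated_mean_mm > TARGET_MEAN_MM + BAND_MM:
        return "decrease water flow"     # large-pellet state: limit growth
    return "hold"

print(control_action(10.5))   # small-pellet state
print(control_action(12.2))   # within the band: no action
```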


LIST OF REFERENCES

Advanced Explorations, I., 2008. Iron Ore Products. Available at: http://www.advanced-exploration.com/industry/io_products/.

Al-Thyabat, S. & Miles, N.J., 2006. An improved estimation of size distribution from particle profile measurements. Powder Technology, 166(3), pp.152–160.

Al-Thyabat, S., Miles, N.J. & Koh, T.S., 2007. Estimation of the size distribution of particles moving on a conveyor belt. Minerals Engineering, 20(1), pp.72–83.

Andersson, T. & Thurley, M.J., 2011. Minimizing profile error when estimating the sieve-size distribution of iron ore pellets using ordinal logistic regression. Powder Technology, 206(3), pp.218–226. Available at: http://dx.doi.org/10.1016/j.powtec.2010.09.021.

Andersson, T. & Thurley, M.J., 2007. Visibility Classification of Pellets in Piles for Sizing Without Overlapped Particle Error. In 9th Biennial Conference of the Australian Pattern Recognition Society on Digital Image Computing Techniques and Applications. pp.508–514.

Banta, L., Cheng, K. & Zaniewski, J., 2003. Estimation of limestone particle mass from 2D images. Powder Technology, 132(2–3), pp.184–189. Available at: http://linkinghub.elsevier.com/retrieve/pii/S0032591003000615.

Barati, M., 2008. Dynamic simulation of pellet induration process in straight-grate system. International Journal of Mineral Processing, 89(1–4), pp.30–39. Available at: http://linkinghub.elsevier.com/retrieve/pii/S0301751608001397.

Bernard, J.A., 1988. Use of a Rule-Based System for Process Control. IEEE Control Systems Magazine, 8(5), pp.3–13.

Beucher, S. & Lantuejoul, C., 1979. Use of watersheds in contour detection. In International Workshop on Image Processing: Real-time Edge and Motion Detection/Estimation. pp.2.1–2.12.

Beukes, J.P., Dawson, N.F. & Van Zyl, P.G., 2010. Theoretical and practical aspects of Cr(VI) in the South African ferrochrome industry. In The Twelfth International Ferroalloys Congress. pp.53–62.

Van der Bijl, L., 2014. Discussion with industry practitioner on DIP and PSE of agglomerate particles.

Canny, J., 1986. A Computational Approach to Edge Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-8(6).

Chen, J., Yu, J. & Zhang, Y., 2014. Multivariate video analysis and Gaussian process regression model based soft sensor for online estimation and prediction of nickel pellet size distributions. Computers & Chemical Engineering, 64, pp.13–23. Available at: http://linkinghub.elsevier.com/retrieve/pii/S009813541400012X.

Cowey, A., 1994. Mining and metallurgy in South Africa – A pictorial history, Mintek in association with Phase 4, Randburg.

van Dalen, G., 2004. Determination of the size distribution and percentage of broken kernels of rice using flatbed scanning and image analysis. Food Research International, 37(1), pp.51–58.


Development, T.P., 2009. Samarco 3 Pelletizing Plant Project.

Glastonbury, R.I. et al., 2015. Comparison of physical properties of oxidative sintered pellets produced with UG2 or metallurgical-grade South African chromite: a case study. The Journal of The Southern African Institute of Mining and Metallurgy, 115(August), pp.699–706.

Glastonbury, R.I. et al., 2010. Cr(VI) generation during sample preparation of solid samples – A chromite ore case study. Water SA, 36(1), pp.105–110.

Gloy, A., 2015. Roller Feeder Specifications.

Gonzalez, R.C. & Woods, R.E., 2008. Digital Image Processing 3rd ed., Upper Saddle River, New Jersey: Pearson Education, Inc.

Guoying, Z., Hong, Z. & Ning, X., 2011. Mining Science and Technology (China), 21(2), pp.239–242. Available at: http://dx.doi.org/10.1016/j.mstc.2011.02.013.

Haavisto, O. & Hyötyniemi, H., 2011. Reflectance spectroscopy in the analysis of mineral flotation slurries. Journal of Process Control, 21(2), pp.246–253. Available at: http://dx.doi.org/10.1016/j.jprocont.2010.10.015.

Hamzeloo, E., Massinaei, M. & Mehrshad, N., 2014. Estimation of particle size distribution on an industrial conveyor belt using image analysis and neural networks. Powder Technology, 261(July), pp.185–190. Available at: http://dx.doi.org/10.1016/j.powtec.2014.04.038.

Harayama, M. & Uesugi, M., 1992. On-line measurement of average pellet size with spatial frequency analysis. In 1992 International Conference on Industrial Electronics, Control, Instrumentation, and Automation. San Diego, CA: IEEE, pp.1613–1618. Available at: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=254358.

Harman, C.N. & Rama Rao, N.S.S., 2007. Use of sintered pellets in production of high carbon ferro chrome. In International Ferro-Alloys Congress (INFACON) XI. pp.67–74.

Heydari, M. et al., 2013. Iron ore green pellet diameter measurement by using of image processing techniques. In 2013 21st Iranian Conference on Electrical Engineering (ICEE). IEEE, pp.1–6. Available at: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6599563.

Itoh, H. et al., 2008. Aggregate size measurement by machine vision. Journal of Terramechanics, 45(4), pp.137–145. Available at: http://dx.doi.org/10.1016/j.jterra.2008.09.001.

Jones, R., 2015. Pyrometallurgy in Southern Africa. Available at: http://www.pyrometallurgy.co.za/PyroSA/.html.

Kinnunen, I. & Mäkynen, A., 2011. Image Based Size Distribution Measurement of Gravel Particles. In IEEE Instrumentation and Measurement Technology Conference. pp.275–280.

Koh, T.K. et al., 2009. Improving particle size measurement using multi-flash imaging. Minerals Engineering, 22(6), pp.537–543. Available at: http://dx.doi.org/10.1016/j.mineng.2008.12.005.

Kwan, A.K.H., Mora, C.F. & Chan, H.C., 1999. Particle shape analysis of coarse aggregate using digital image processing. Cement and Concrete Research, 29, pp.1403–1410.

Lamphouse, 2014. Discussion on Lighting Equipment.
Liao, C.W. & Tarng, Y.S., 2009. On-line automatic optical inspection system for coarse particle size distribution. Powder Technology, 189(3), pp.508–513. Available at: http://dx.doi.org/10.1016/j.powtec.2008.08.013.
Lin, C.L. & Miller, J.D., 1993. The development of a PC, image-based, on-line particle-size analyzer. Minerals and Metallurgical Processing, 10(1), pp.29–35.
Ljungqvist, M.G. et al., 2011. Image analysis of pellet size for a control system in industrial feed production. PLoS ONE, 6(10). Available at: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3198772&tool=pmcentrez&rendertype=abstract [Accessed April 23, 2014].
MacHunter, R.M.G. & Pax, R.A., 2007. Studies of the dynamic characteristics of plant circuits for the development of control strategies. In The 6th International Heavy Minerals Conference "Back to Basics." pp. 133–138. Available at: http://www.saimm.co.za/Conferences/HMC2007/133-138_MacHunter.pdf [Accessed April 23, 2014].
Maerz, N.H., 1998. Aggregate sizing and shape determination using digital image processing. In Center for Aggregates Research (ICAR) Sixth Annual Symposium Proceedings. pp. 195–203.
Makela, P. & Krogerus, P., 2015. Process for the manufacture of ferrochrome, pp.1–9.
Malek, A.A. et al., 2010. Region and boundary segmentation of microcalcifications using seed-based region growing and mathematical morphology. In Procedia - Social and Behavioral Sciences. pp. 634–639.
Marlin, T.E., 2000. Process Control: Designing Processes and Control Systems for Dynamic Performance 2nd ed., McGraw-Hill.
Mertens, G. & Elsen, J., 2006. Use of computer assisted image analysis for the determination of the grain-size distribution of sands used in mortars, 36, pp.1453–1459.
Mkwelo, S.G., Nicolls, F. & De Jager, G., 2005. Watershed-based segmentation of rock scenes and proximity-based classification of watershed regions under uncontrolled lighting conditions. Transactions of the SAIEE, 96(1), pp.28–34.
Montenegro Rios, A. et al., 2011. Machine Vision for Size Distribution Determination of Spherically Shaped Particles in Dense-Granular Beds, Oriented to Pelletizing Process Automation. Particulate Science and Technology, 29(4), pp.356–367. Available at: http://www.tandfonline.com/doi/abs/10.1080/02726351.2010.503262 [Accessed April 23, 2014].
Mora, C.F., Kwan, A.K.H. & Chan, H.C., 1998. Particle size distribution analysis of coarse aggregate using digital image processing. Cement and Concrete Research, 28(6), pp.921–932.
Mukherjee, D.P. et al., 2009. Ore image segmentation by learning image and shape features. Pattern Recognition Letters, 30(6), pp.615–622. Available at: http://dx.doi.org/10.1016/j.patrec.2008.12.015.
Murtagh, F. et al., 2005. Grading of construction aggregate through machine vision: Results and prospects. Computers in Industry, 56(8–9), pp.905–917.
Oikarinen, P. & Pelttari, J., 2007. Process operating manual for pelletizing and sintering plant.
Outal, S., Jeulin, D. & Schleifer, J., 2008. A new method for estimating the 3D size-distribution-curve of fragmented rocks out of 2D images. Image Analysis and Stereology, 27(2), pp.97–105.
Outal, S., Schleifer, J. & Pirard, E., 2004. Evaluating a calibration method for the estimation of fragmented rocks 3D-size-distribution out of 2D images. In FRAGBLAST 9 - 9th International Journal for Blasting and Fragmentation. pp. 221–228.
Outotec Oyj, 2015a. Ferrochrome. Available at: http://www.outotec.com/en/Products--services/Ferrous-metals-and-ferroalloys-processing/Ferrochrome/ [Accessed December 20, 2015].
Outotec Oyj, 2013. Outotec® Pellet Size measurement system.
Outotec Oyj, 2015b. Outotec steel belt sintering.
Pandey, P., Lobo, N.F. & Kumar, P., 2012. Optimization of Disc Parameters Producing More Suitable Size Range of Green Pellets. International Journal of Metallurgical Engineering, 1(4), pp.48–59.
Perez, C.A. et al., 2011. Ore grade estimation by feature selection and voting using boundary detection in digital image analysis. International Journal of Mineral Processing, 101(1–4), pp.28–36. Available at: http://dx.doi.org/10.1016/j.minpro.2011.07.008.
Petersen, P., 2015. Discussion on Lighting Specifications.
Podczeck, F. & Newton, J.M., 1995. The evaluation of a three-dimensional shape factor for the quantitative assessment of the sphericity and surface roughness of pellets. International Journal of Pharmaceutics, 124(2), pp.253–259.
Presles, B., Debayle, J. & Pinoli, J.-C., 2012. Size and shape estimation of 3-D convex objects from their 2-D projections: application to crystallization processes. Journal of Microscopy, 248(2), pp.140–155. Available at: http://www.ncbi.nlm.nih.gov/pubmed/23078115 [Accessed July 25, 2014].
Rao, P.V.T., 1994. Agglomeration and Prereduction of Ores. In 4th Refresher Course on Ferro Alloys. Jamshedpur, pp. 1–15. Available at: http://eprints.nmlindia.org/5783.
Ren, C. et al., 2011. Determination of particle size distribution by multi-scale analysis of acoustic emission signals in gas-solid fluidized bed. Journal of Zhejiang University-SCIENCE A (Applied Physics & Engineering), 12(4), pp.260–267.
Richard, P., 2015. Overview of the global chrome market. In 1st INDINOX Conference. International Chromium Development Association.
Riekkola-Vanhanen, M., 1999. Finnish expert report on best available techniques in ferrochromium production.
Rosato, A. et al., 1987. Why the Brazil nuts are on top: Size segregation of particulate matter by shaking. Physical Review Letters, 58(10), pp.1038–1040.
Salinas, R.A., Raff, U. & Farfan, C., 2005. Automated estimation of rock fragment distributions using computer vision and its application in mining. IEE Proceedings - Vision, Image and Signal Processing, 152(1), pp.1–8. Available at: http://kar.kent.ac.uk/12311/.
Shahin, M.A., Symons, S.J. & Poysa, V.W., 2006. Determining Soya Bean Seed Size Uniformity with Image Analysis. Biosystems Engineering, 94(2), pp.191–198.
Shapiro, L. & Stockman, G., 2001. Computer Vision, Prentice Hall.
Shen, L. et al., 2001. Velocity and size measurement of falling particles with fuzzy PTV. Flow Measurement and Instrumentation, 12(3), pp.191–199.
Sommer, G., 1992. A Contemplative Stance on the Automation of the Mining, Mineral, and Metal Processing Industry (MMM) - An IFAC Report. Automatica, 28(6), pp.1273–1278.
Suk, M. & Chung, S.-M., 1983. A new image segmentation technique based on partition mode test. Pattern Recognition, 16(5), pp.469–480.
The MathWorks Inc., 2014. Simulink. Available at: http://www.mathworks.com/products/simulink/ [Accessed November 13, 2014].
Thurley, M.J., 2013. Automated Image Segmentation and Analysis of Rock Piles in an Open-Pit Mine. In 2013 International Conference on Digital Image Computing: Techniques and Applications (DICTA 2013). pp. 1–8.
Thurley, M.J., 2006. On-line 3D surface measurement of iron ore green pellets. In International Conference on Computational Intelligence for Modelling, Control and Automation. pp. 229–235.
Thurley, M.J., 2002. Three Dimensional Data Analysis for the Separation and Sizing of Rock Piles in Mining.
Thurley, M.J. & Andersson, T., 2008. An industrial 3D vision system for size measurement of iron ore green pellets using morphological image segmentation. Minerals Engineering, 21(5), pp.405–415.
Thurley, M.J. & Ng, K.C., 2005. Identifying, visualizing, and comparing regions in irregularly spaced 3D surface data. Computer Vision and Image Understanding, 98, pp.239–270.
Treffer, D. et al., 2014. In-line implementation of an image-based particle size measurement tool to monitor hot-melt extruded pellets. International Journal of Pharmaceutics, 466(1–2), pp.181–189. Available at: http://dx.doi.org/10.1016/j.ijpharm.2014.03.022.
Wang, W., 2006a. Image analysis of particles by modified Ferret method - best-fit rectangle, 165, pp.1–10.
Wang, W., 2007. Image Analysis of Size And Shape of Mineral Particles. In Fourth International Conference on Fuzzy Systems and Knowledge Discovery. pp. 41–44.
Wang, W., 2006b. Online Aggregate Particle Size Measurement on A Conveyor Belt. In International Conference on Pattern Recognition. pp. 1032–1035.
Wang, W., 2008. Rock Particle Image Segmentation and Systems. In P.-Y. Yin, ed. Pattern Recognition Techniques, Technology and Applications. Shanghai: InTech, pp. 197–226. Available at: http://www.intechopen.com/books/pattern_recognition_techniques_technology_and_applications/rock_particle.
Wang, W.-X. & Bergholm, F., 2005. Online particle size estimation on one-pass edge detection and particle shape. In Fourth International Conference on Machine Learning and Cybernetics. pp. 18–21.
Wang, W., Luo, W. & Tang, J., 2006. Particle Measurement by Image Processing and Analysis. In International Conference on Computational Intelligence and Security, 2006. pp. 1861–1864.
Wang, W.X. & Li, L., 2006. Continuous Measurement of Aggregate Size and Shape by Image Analysis of a Falling Stream. In 6th World Congress on Intelligent Control and Automation. pp. 5093–5097.
Yamaguchi, S. et al., 2010. Kobelco Pelletizing Process. Kobelco Technology Review, (29), pp.58–59.
Yao, Q., Zhou, Y. & Wang, J., 2010. An Automatic Segmentation Algorithm for Touching Rice Grains Images, pp.802–805.
Zelin, Z. et al., 2012. Estimation of coal particle size distribution by image segmentation. International Journal of Mining Science and Technology, 22(5), pp.739–744. Available at: http://dx.doi.org/10.1016/j.ijmst.2012.08.026.
Zhang, Z. et al., 2013. Analysis of large particle sizes using machine vision system. Physicochemical Problems of Mineral Processing, 49(2), pp.397–405.