MORGAN NILSEN PhD Thesis Production Technology 2019 No. 27

Monitoring and control of laser beam butt joint welding

Laser beam welding is a growing area within production technology. It enables high production rates due to the ability to automate the process, and the high automation level facilitates on-line monitoring and control. Laser beam welding is currently used in various industries, ranging from small scale manual welding to fully automated welding in the automotive, aerospace and heavy manufacturing industries. This work addresses robotized laser beam welding, where an industrial robot is used for manipulation, and the focus is on finding sensor solutions robust enough to be used in real industrial situations for welding of complex parts with limited space for sensor equipment. The objective is to study different monitoring and control systems applied to laser beam welding of square-butt joints that enable production of high-quality seams. Two different cases are studied: one where the gap between the work pieces is close to zero and therefore hard to detect, and one where the gap between the work pieces is varying. Different monitoring and control solutions that show promising results for improving the welding quality in laser beam welding of square-butt joints are proposed.

Morgan Nilsen received his bachelor's degree in Electrical Engineering from University West, Sweden, in 1999, and earned his master's degree in Mechatronics at De Montfort University, UK, in 2000. After eleven years in the automotive industry he started to work at University West as a research engineer, and in 2014 he started his PhD studies. His research interests are within monitoring and control of laser-based processes.

ISBN 978-91-88847-23-2 (Print) ISBN 978-91-88847-22-5 (PDF)


Printed by BrandFactory, February 2019

PhD Thesis, Production Technology 2019, No. 27

Monitoring and control of laser beam butt joint welding

Morgan Nilsen
Department of Engineering Science
University West
SE-461 86 Trollhättan, Sweden
Telephone +46 (0)52 – 022 3000
www.hv.se

© Morgan Nilsen, 2019.
ISBN 978-91-88847-23-2 (Printed version)
ISBN 978-91-88847-22-5 (Electronic version)

Typeset by the author using LaTeX.

Trollhättan, Sweden 2019

To Nella

Acknowledgements

This work was carried out at the Department of Engineering Science, University West, Trollhättan, Sweden, where I am employed as an engineer and have the opportunity to pursue my PhD studies. The work presented in this thesis is financed by Vinnova and the participating companies in the projects "RobIn" and "VarGa".

First I would like to thank my supervisors Anna-Karin Christiansson and Fredrik Sikström. Anna-Karin, always supportive, positive, full of ideas and patiently improving my texts. Fredrik, full of ideas and always supporting in experiments and interesting discussions. I would also like to thank Antonio Ancona for his support, great ideas and interesting discussions during our joint work.

I would also like to thank all my engineering colleagues for their support: Mattias Ottosson for his support in measurement systems; Anders Appelgren, Anders Nilsson and Svante Augustsson for support in robot programming; Xiaoxiao Zhang for his support in computer vision; and Kjell Hurtig and Mats Högström for their support in welding related issues.

This work has been carried out in close collaboration with GKN Aerospace in Trollhättan. I would especially like to thank Jan Lundgren and Jimmy Johansson for sharing their deep knowledge of laser beam welding, and also for their enthusiasm and support during our joint projects.

Finally, thanks to my wife Petronella, who always supports me.

Morgan Nilsen
Trollhättan, February 2019

Populärvetenskaplig Sammanfattning (Popular Science Summary)

Keywords: Automation, optical sensors, image processing, laser welding

Laser welding is a growing and important area within production technology that enables production with a high degree of automation. Laser welding of metals means that different parts of a component are joined using high power laser light. The laser light is focused to a small spot that becomes so hot that the metal melts, and in this way the parts are joined. The region where the two components to be joined meet is called the joint. It is very important that the laser light hits the right position in relation to the joint; otherwise defects that make the weld less durable can appear. In this thesis, three different methods for ensuring that the laser light always hits the right position are evaluated; this is called joint tracking. The first method uses an optical sensor called a photodiode to measure the optical emissions from the welding process. Welding experiments have shown that the photodiode signal can be related to how the laser light is positioned in relation to the joint. The second method evaluated uses a camera that captures images of the area around the joint during ongoing welding. With the help of image processing it has been shown that it is possible to measure how the laser light is positioned in relation to the joint. The third method evaluated uses an optical sensor called a spectrometer. When the metal is melted by the laser light, a plasma is formed above the molten metal. The spectrometer can be used to measure the temperature of this plasma, and the welding experiments carried out have shown that this temperature can be related to how the laser light is positioned in relation to the joint.

Another problem that can arise in laser welding is that the size of the gap between the parts to be welded together varies. If the gap becomes too large, defects appear in the weld because there is not enough material to fill the gap. To remedy this, filler wire is added, but it is important to add the right amount of wire depending on the size of the gap. This thesis presents two different methods for measuring the gap size and controlling the amount of filler wire

needed to fill the gap. In the first method, a spectrometer is used to measure the optical emissions from the welding process. It has been shown that there is a correlation between the intensity of the emitted light and the gap size. In the second method, a camera, LED illumination and a laser line illumination are used to measure the gap size. With the help of image processing and model based filtering the gap size is estimated, and this information is then used to control the amount of filler wire added.

The results from this work show that the presented methods work well during the ongoing process and can be used both to measure how well the laser light is positioned in relation to the joint and to measure the gap size. This information can then be used to control the position of the laser light, so that it is always placed in the right position, and also to control the amount of filler wire added so that weld defects can be avoided.

Abstract

Title: Monitoring and control of laser beam butt joint welding
Language: English
Keywords: Laser beam welding, Optical sensors, Joint tracking, Varying gap
ISBN 978-91-88847-23-2 (Printed version)
ISBN 978-91-88847-22-5 (Electronic version)

Laser beam welding is an important technology in automated production. It has several advantages, such as the ability to produce deep and narrow welds giving limited heat induced deformations. The laser beam welding process is however sensitive to how the high power laser is positioned with regards to the joint position. Therefore, to achieve a seam without defects, the joint position needs to be measured and controlled. The laser beam welding process is also sensitive to variations in joint gap width. Costly joint preparations are required to achieve the tight fit-up tolerances needed to produce high quality welds. However, the demand on joint preparation can be somewhat relaxed by allowing the joint gap width to vary and controlling the process. One way of doing this is to control the filler wire feed rate based on joint gap width measurements.

This thesis presents experimental studies on how to track closed-square-butt joints and also how to handle square-butt joints with varying gap width in laser beam welding. Different optical sensor systems are evaluated for their performance in estimating the joint position and the joint gap width. The possibility of detecting beam offsets is studied using sensor systems based on a photodiode and on a spectrometer. Estimation of the joint position, to be used for closed loop position control, is studied using a camera and external LED illumination. Variations in joint gap width are evaluated using a spectrometer, a camera and a laser profile sensor. Experimental results show that both the photodiode system and the spectrometer system are able to detect beam offsets and that the beam position can be estimated with sufficient accuracy when welding closed-square-butt joints. It is also shown that the joint gap width can be estimated by the selected sensor systems and that the estimates can be used for controlling the wire feed rate in order to obtain a constant weld geometry and avoid defects related to the gap width.

Appended Publications

Paper A. Optical Methods for in-process monitoring of laser beam welding
Presented at the Swedish Production Symposium, SPS, in Gothenburg, Sweden, October 2014 - Authors: Fredrik Sikström, Morgan Nilsen, Ingemar Eriksson

Paper B. Detecting beam offsets in laser welding of closed-square-butt joints by wavelet analysis of an optical process signal
Published in Optics & Laser Technology 2018 - Authors: Arianna Elefante, Morgan Nilsen, Fredrik Sikström, Anna-Karin Christiansson, Tommaso Maggipinto, Antonio Ancona

Paper C. Vision and spectroscopic sensing for joint tracing in narrow gap laser butt welding
Published in Optics & Laser Technology 2017 - Authors: Morgan Nilsen, Fredrik Sikström, Anna-Karin Christiansson, Antonio Ancona

Paper D. Robust vision based joint tracking for laser welding of curved closed-square-butt joints
Published in The International Journal of Advanced Manufacturing Technology 2018 - Authors: Morgan Nilsen, Fredrik Sikström, Anna-Karin Christiansson, Antonio Ancona

Paper E. Monitoring of varying joint gap width during laser beam welding by a dual vision and spectroscopic sensing system
Published in Physics Procedia 2017 - Authors: Morgan Nilsen, Fredrik Sikström, Anna-Karin Christiansson, Antonio Ancona

Paper F. Adaptive control of the filler wire rate during laser beam welding of squared butt joints with varying gap width
Published in The International Journal of Advanced Manufacturing Technology 2018 - Authors: Morgan Nilsen, Fredrik Sikström, Anna-Karin Christiansson


Contents

Acknowledgements i

Populärvetenskaplig Sammanfattning iii

Abstract v

Appended Publications vii

Nomenclature xiii

I Introductory Chapters

1 Introduction 1
1.1 Problem description 1
1.2 Research question 3
1.3 Scope and limitations 3

2 Background 5
2.1 Fundamental concepts 5
2.2 Laser beam welding 6
2.2.1 The laser beam welding process 6
2.2.2 Process parameters 8
2.2.3 Laser beam welding system 10
2.3 Optical sensors in laser beam welding 10
2.3.1 Sensor survey 11
2.3.2 Photodiodes 11
2.3.3 Cameras 14
2.3.4 Factors influencing camera monitoring performance 16
2.3.5 Spectrometer 17
2.3.6 Laser profile sensor 18
2.3.7 Sensor fusion 21
2.4 State of the art 22


3 Experimental set-up 25
3.1 Welding equipment 25
3.1.1 Laser beam welding tool manipulation 26
3.1.2 Laser source and laser beam welding tool 26
3.1.3 Filler wire feeder 26
3.2 Monitoring system 27
3.2.1 Triggering system 28
3.2.2 Camera, LED and laser profile system 28
3.2.3 Spectrometer system 31
3.2.4 Photodiode system 31
3.3 Experiments and materials 33
3.3.1 Closed-square-butt joints 33
3.3.2 Varying joint gap widths 35

4 Estimating the joint position during LBW of closed-square-butt joints 39
4.1 Problem description 39
4.2 Proposed solutions 40
4.2.1 Using a camera to estimate the beam offset 40
4.2.2 Monitoring beam offsets using photodiodes and wavelet analysis 47
4.2.3 Correlations between beam offsets and the plasma electron temperature 51

5 Laser beam welding of square-butt joints with varying gap width 55
5.1 Problem description 55
5.2 Proposed solutions 56
5.2.1 Correlation between spectrometer signals and joint gap width 57
5.2.2 Using a laser profile sensor and/or a camera to estimate the joint gap width 58
5.2.3 Filler wire feed rate control 62

6 Conclusion and contributions 65

7 Future work 69

References 71

Summary of Appended Papers 81


II Appended Papers
Paper A. Optical Methods for in-process monitoring of laser beam welding
Paper B. Detecting beam offsets in laser welding of closed-square-butt joints by wavelet analysis of an optical process signal
Paper C. Vision and spectroscopic sensing for joint tracing in narrow gap laser butt welding
Paper D. Robust vision based joint tracking for laser welding of curved closed-square-butt joints
Paper E. Monitoring of varying joint gap width during laser beam welding by a dual vision and spectroscopic sensing system
Paper F. Adaptive control of the filler wire rate during laser beam welding of squared butt joints with varying gap width

Nomenclature

Abbreviations

Alloy 718 Nickel based precipitation hardening super-alloy
Al Aluminium
ANN Artificial Neural Network
BP Band pass
CW Continuous Wave
CCD Charge Coupled Device
CMOS Complementary Metal Oxide Semiconductor
CO2 Carbon dioxide
CWT Continuous Wavelet Transform
DFT Discrete Fourier Transform
DWT Discrete Wavelet Transform
FNN Feed-forward Neural Network
FoV Field of View
FPS Frames Per Second
FWHM Full Width at Half Maximum
FZ Fusion Zone
GaN Gallium Nitride
HAZ Heat Affected Zone
HDR High Dynamic Range
ICI Inline Coherent Imaging
InGaAs Indium Gallium Arsenide
InGaP Indium Gallium Phosphide
IR Infrared
LBW Laser Beam Welding
LED Light Emitting Diode
LP Long pass
MAG Metal Active Gas
MIG Metal Inert Gas
Nd:YAG Neodymium-doped Yttrium Aluminium Garnet
NIR Near Infrared
NWPS Normalized Wavelet Power Spectrum


ROI Region of Interest
SHT Standard Hough Transform
SS Stainless Steel
SVM Support Vector Machine
TIG Tungsten Inert Gas
UV Ultraviolet
VIS Visible
WFT Windowed Fourier Transform
WPD-PCA Wavelet Packet Decomposition Principal Components Analysis

Variables

Amn Probability of a particular spectroscopic transition to take place
c Speed of light in vacuum
d Distance between keyhole and camera ROI
dc Core diameter of the fibre delivering the process laser light
dF Focused laser beam spot diameter
dr Distance from the origin to the line to detect in the SHT
dw Filler wire diameter
Em Energy of the upper level of the radiative decay
En Energy of the lower level of the radiative decay
fc Focal length of the collimating lens in the LBW tool
ff Focal length of the focusing lens in the LBW tool
gm Statistical weight of the energy level used in the Boltzmann method
G(s) Transfer function describing the dynamics of the filler wire rate control system
G1(s) The dynamics of the sensing system in the filler wire rate control system
G2(s) The dynamics of the filler wire feeder
HI Heat input
h Planck constant
Imn Intensity of a generic plasma optical emission line
J Largest scaling factor for the wavelet function
kB Boltzmann constant
kr Reinforcement factor
n Translation factor for the wavelet function
Nm Population of the excited state used in the Boltzmann method
Pi Laser power absorbed by the weld material
v Welding travel speed
vwire Filler wire feed rate
vn Nominal filler wire feed rate
Δv Change in wire feed rate based on gap width estimations
Wn(s) Discretized CWT
|Wn(s)|² Wavelet power spectrum
s Scaling factor for the wavelet function
s0 Smallest scaling factor for the wavelet function
thk Work piece plate thickness
Te Welding plasma electron temperature
xk Sampled photodiode signal at sample index k
x̂k The DFT of xk
Z Partition function used in the Boltzmann method
α Parameter in the second order polynomial used in the Hough transform for curves
αy Mean intensity of each row in the ROI used for gap width measurements with the camera
α'y First derivative of αy
β Parameter in the second order polynomial used in the Hough transform for curves
γ Parameter in the second order polynomial used in the Hough transform for curves
δt Sampling time
η Non-dimensional time parameter for the wavelet function
λmn Generic plasma optical emission line
θ Angle between the horizontal axis and the normal from the origin to the line in the SHT
ξ̂k Estimated joint gap width
Ψ0(η) Wavelet mother function
τd1 Time constant for G1(s)
τd2 Dead time for the filler wire feeder used in G2(s)
τ Time constant for the filler wire feeder used in G2(s)
Ψ̂(sω) The DFT of Ψ(t/s)
Ψ̂* The complex conjugate of Ψ̂
ω0 Non-dimensional frequency parameter for the wavelet function
ωk Angular frequency used for the CWT in Fourier space

Part I

Introductory Chapters

Chapter 1

Introduction

This thesis considers problems and solutions for increased quality in laser beam welding (LBW) production cells. LBW is an important part of production technology, and the focus of this work is on industrial solutions to be used in-process, during welding. This introductory chapter gives a brief introduction to the addressed problems. More details, including references to work by other researchers, are presented in Chapter 2.

1.1 Problem description

LBW is a growing area within production technology, with an expected compound annual growth rate of 5 % during the period 2018-2022 [1]. LBW enables high production rates due to the ability to automate the process, and the high automation level also facilitates on-line monitoring and control. LBW is currently used in various industries, ranging from small scale manual welding to fully automated welding in the automotive, aerospace and heavy manufacturing industry. In 2017, among the different laser technologies, the fibre laser accounted for nearly 63 % of the market, and it is also expected to dominate the market in the years to come.

Monitoring and control of the robotised LBW process is necessary when disturbances are present during the welding process, in order to assure high and constant weld quality. Disturbances may occur due to inaccuracies in fixturing, changes in joint geometry, inaccuracies in robot motions and in geometrical tolerances, heat induced distortions, instabilities in shielding gas flow, etc. The issue of monitoring and controlling the LBW process has been addressed by several researchers, but even though promising results have been shown, the industrial implementations are so far limited. One reason might be the harsh environment in the welding area, requiring robust sensors that are insensitive to emissions (heat, light, spatter, smoke, sound, etc.) from the LBW process. Also,

it might be necessary to integrate the sensors into the LBW tool in order not to add restrictions on the usability of the tool for industrial applications due to e.g. lack of physical space. In this work, these issues are addressed by investigating how optical sensors, that can be integrated into the LBW tool, can be used for monitoring and control of the LBW process.

There are mainly two issues addressed in this work. The first issue is how the focused processing laser beam spot is positioned (beam position) with regards to the joint position (this difference is here called beam offset) in closed-square-butt joint configurations, commonly denoted joint (or seam) tracking. In situations when the fit between the parts to be welded together is very tight and there is no misalignment between the parts, commercially available systems might fail to find the joint position. Even a small beam offset can cause lack of sidewall fusion in the resulting weld, see Figure 1.1. This is a serious issue and hard to detect using non-destructive testing. In this work, three different types of optical sensors, namely photodiodes, a camera and a spectrometer, are investigated for their ability to detect welding behaviour with a beam offset in closed-square-butt joint configurations.


Figure 1.1: Cross section image of a closed-square-butt seam. Welding was conducted with 1 mm beam offset. Lack of sidewall fusion is clearly visible on the left side of the weld waist.

The second issue addressed is how to obtain acceptable weld quality during LBW of square-butt joints with varying joint gap width. By adding filler wire to the LBW process, it becomes less sensitive to variations in joint gap width, and larger gaps can be bridged. Allowing some variation in the gap width enables the fit-up tolerances of the parts to be welded to be somewhat relaxed,

which can save time and money due to less joint preparation. However, to obtain a consistent seam geometry that fulfils the weld specification when the fit-up tolerances are relaxed, the joint gap width needs to be measured and the process needs to be controlled. Two studies have been performed in this work to address this issue. In the first study a spectrometer is used and correlations between the intensity of a selected spectral line and the joint gap width are investigated. In the second study a camera, LED illumination and a laser line module are used to obtain images of the area just in front of the keyhole, where the joint is visible. The joint gap width is estimated from these images and this information is used to control the wire feed rate in a feed-forward controller.
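To illustrate the idea of feed-forward wire feed rate control, the sketch below computes a wire feed rate from an estimated gap width using a simple volume balance: the wire must supply the gap volume swept per unit time, plus a small reinforcement margin. This is a minimal illustration only and not the control law of the appended papers; the variable names (gap width, plate thickness thk, travel speed v, wire diameter dw, reinforcement factor kr, nominal rate vn) follow the nomenclature, but the exact relation and the numbers are assumptions made here for clarity.

import math

def wire_feed_rate(gap_width, thk, v, dw, kr=0.1, vn=0.0):
    """Feed-forward wire feed rate from an estimated joint gap width.

    Illustrative volume balance (an assumption, not the thesis control law):
    the deposited wire volume per second equals the gap volume swept per
    second, scaled by (1 + kr) to allow some reinforcement.

    gap_width : estimated joint gap width [m]
    thk       : work piece plate thickness [m]
    v         : welding travel speed [m/s]
    dw        : filler wire diameter [m]
    kr        : reinforcement factor [-]
    vn        : nominal wire feed rate for a nominal (closed) gap [m/s]
    """
    wire_area = math.pi * dw ** 2 / 4.0        # wire cross-section area [m^2]
    gap_volume_rate = gap_width * thk * v      # gap volume to fill per second [m^3/s]
    delta_v = (1.0 + kr) * gap_volume_rate / wire_area
    return vn + delta_v

# Example: 0.3 mm gap, 2 mm plate, 20 mm/s travel speed, 1.0 mm wire
print(wire_feed_rate(0.3e-3, 2e-3, 20e-3, 1.0e-3))  # ~0.017 m/s, i.e. about 1.0 m/min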

1.2 Research question

The overall research question is:

How to increase robustness in laser beam welding by in-process monitoring and control?

The overall question is refined into the following industrially relevant research questions that constitute the subject of this thesis:

Q1: How to robustly track the joint position during laser beam welding of closed-square-butt joints?

Q2: How to robustly estimate the joint gap width and control the filler wire feed rate in laser beam welding of square-butt joints?

1.3 Scope and limitations

The studies in this work are based on experimental work only; no physical modelling has been applied. All sensor data has been captured in-process and has been evaluated either in real-time or off-line. The issues addressed have been investigated using four different optical sensor systems: photodiodes, camera, laser profile sensor and spectrometer; no other sensors are covered. These sensors were selected on the basis that they are robust, industrially applicable and possible to seamlessly integrate into the LBW tool. It is assumed, and also confirmed in [2], that commercial systems using laser line triangulation are not able to track the joint position during closed-gap, zero misalignment butt joint welding. Lately, commercial systems, such as the Scansonic TH6i, have been released

claiming to be able to track closed (or zero) gap joints. This system, and other commercial systems claiming to handle joint tracking of zero-gap joints, have not been evaluated due to lack of availability.

Only one set-up of welding equipment has been used during the experiments, comprising a fibre laser with 1070 nm wavelength, a LBW tool giving a focused beam spot diameter of 1.12 mm and an industrial robot for tool manipulation. This set-up was located in the lab at University West (PTC), in an industrial-like set-up. Only two materials have been used during the LBW experiments, stainless steel and Alloy 718. The material properties, such as micro structure, fatigue life, etc., of the welds produced during the experimental work have not been evaluated in this work.

Only control of the filler wire feed rate has been considered regarding welding of parts with varying joint gap width. Other control methods, such as controlling the laser power, welding travel speed and weaving, have not been evaluated.

The issue of measuring the beam offset is in this thesis referred to as joint tracking, even if the position of the laser beam spot is not yet controlled. Paper C uses joint tracing instead of joint tracking to indicate that the position is only measured, not controlled. Technical zero gap and closed gap are used in this thesis to define a gap width less than 0.1 mm. In some literature, by other researchers, this is also called narrow gap or micro gap.

Chapter 2

Background

This chapter covers the fundamental concepts used in this thesis, an introduction to the LBW process, the optical sensor systems considered in this work and the state of the art.

2.1 Fundamental concepts

The joint configuration used in this work is defined as a square-butt joint, see Figure 2.1. In the suggested methods for joint tracking and gap width measurements it is assumed that the work piece has a 90 degree square shape at the joint and that no radius is present on the edges. This is a simplification, since when a work piece is produced by laser or water cutting, there will be a small angle of the cut surface relative to the joint, due to beam divergence, and the edge will always have a radius.

Square-butt joint with a gap Closed-square-butt joint


Figure 2.1: Illustration of square-butt joint configurations.

A closed-square-butt joint is achieved by placing the work pieces without any gap, as illustrated in Figure 2.1. With machined work pieces the gap will be close to zero. However, with laser or water jet cutting the gap width and shape will deviate from a perfect square.

Sensor usage in relation to LBW is in this thesis referred to as observation, monitoring or control. Observation implies manual observations of the signal from the sensor. In monitoring there is also some kind of manual or automatic

measure taken based on the signal value, and in control the signal is used as input to a controller that controls the behaviour of the process.

The type of camera used in this thesis is sensitive to wavelengths in the visual spectrum. It is sometimes called a visual camera; however, in this work it is referred to as a camera.

2.2 Laser beam welding

The advantages of LBW compared to the arc welding technologies TIG/plasma welding are that narrow welds with less heat induced deformation can be achieved [3]. Also, it is possible to achieve higher welding travel speeds in thin material. The disadvantages are that LBW requires more accurate fixturing and tighter gap tolerances, and that the investment cost is higher. Compared to the widespread arc welding technologies MIG/MAG, LBW offers higher welding speed, less heat input (HI) giving less deformation, and higher precision, and welding can be conducted both with filler material and without (autogenous LBW). The main disadvantage is also here the high investment cost.

2.2.1 The laser beam welding process

Electromagnetic radiation (e.g. laser light) that hits a surface will be reflected, absorbed and transmitted [3]. For opaque materials, such as metals, there is no transmission, hence reflectivity = 1 - absorptivity. Absorption of the laser light in metals is dependent on the wavelength of the laser light, the temperature of the metal, the presence of surface films, the angle of incidence, and also the material and its surface roughness. Absorptivity at the surface is increased at shorter wavelengths and also when the temperature of the metal increases. Surface films, such as surface oxides, can increase the absorptivity by acting as an anti-reflective coating on the surface. Absorption depends on the angle of incidence of the laser beam, and at a certain angle of incidence the absorptivity peaks; this angle is called the Brewster angle [3]. The surface roughness of the material has a large effect on the absorption; higher roughness increases the absorption. Table 2.1 shows the absorptivity of several pure metals at two laser wavelengths, 1 µm (the fibre laser used in this work) and 10 µm (CO2 laser) [4]. As can be seen in the table, more energy is absorbed at shorter wavelengths, and it is also noted that a large portion is reflected.

Energy transfer from a laser source is in the form of a beam of electromagnetic radiation [3]. This beam is shaped by optical elements, delivering fibre, lenses or mirrors, to project a small spot on the work piece. The size (diameter) of the spot in a fibre optic system (dF) is dependent on the core diameter of the fibre delivering the process laser (dc), the focal length of the collimating lens (fc) and that of the focusing lens (ff), such that dF = dc ff / fc.


Table 2.1: Absorptivity of selected pure metals at 1 µm and 10 µm at room temperature. The values give the portion of incident energy that is absorbed.

Metal            Absorptivity at 1 µm    Absorptivity at 10 µm
Al               0.06                    0.02
Cu               0.05                    0.015
Fe               0.1                     0.03
Ni               0.15                    0.05
Ti               0.26                    0.08
Zn               0.16                    0.03
Carbon steel     0.09                    0.03
Stainless steel  0.31                    0.09

The optical path of such a processing laser is shown in Figure 2.2. The dichroic mirror reflects light as a function of wavelength. This enables reflection of the processing laser light, while light from the work piece area can be transmitted through the mirror onto optical sensors integrated in the LBW tool.

Two different modes of LBW exist, governed by the heat input level: conduction welding and keyhole welding [3]. During conduction welding the focused laser beam spot heats up the material to its melting temperature creating a melt pool, but the power density is not large enough to cause boiling of the metal. When the power density is increased the material starts evaporating, forming a vapour plume, and the laser beam then drills a hole into the material; this hole is called the keyhole (shown in Figure 2.2). The keyhole is stabilized by the pressure from the generated vapour. In a stable keyhole, almost all the energy in the beam will be absorbed, due to the beam entering into the hole and reflecting inside it before it is able to escape. The amount of energy supplied to the work piece (heat input) is in LBW defined by HI = Pi / v, where Pi is the power absorbed by the material and v is the welding travel speed. A plasma plume, comprising excited atomic and ionic species, is also formed inside and above the keyhole, as shown in Figure 2.2.

LBW can be conducted both with additional filler wire and without filler wire, the latter called autogenous LBW. Adding filler wire gives a number of advantages, such as the ability to bridge small gaps and the possibility to influence the metallurgical composition of the fusion zone (FZ) [5]. Also, since a wider molten pool is obtained, it can reduce the demands on edge preparation, edge misalignment and beam misalignment. However, by adding filler wire an additional wire feed system is needed, hence the welding process becomes more complex, with more process parameters to control [6].
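As a quick numerical illustration of the two relations above, the snippet below evaluates the focused spot diameter dF = dc ff / fc and the heat input HI = Pi / v. The numbers are placeholders chosen only to show the arithmetic; they are not the parameters of the equipment described in Chapter 3.

def spot_diameter(dc, fc, ff):
    """Focused spot diameter dF = dc * ff / fc for fibre-delivered optics."""
    return dc * ff / fc

def heat_input(Pi, v):
    """Heat input HI = Pi / v, absorbed power per unit travel length [J/m]."""
    return Pi / v

# Placeholder values, for illustration only
dc = 0.4e-3   # fibre core diameter [m]
fc = 0.15     # collimating lens focal length [m]
ff = 0.30     # focusing lens focal length [m]
print(spot_diameter(dc, fc, ff))   # 0.8e-3 m, i.e. a 0.8 mm spot

Pi = 3000.0   # absorbed laser power [W]
v = 20e-3     # welding travel speed [m/s]
print(heat_input(Pi, v))           # 150000 J/m = 150 J/mm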



Figure 2.2: Optical path of the processing laser.

2.2.2 Process parameters

The main process parameters for LBW are related to the characteristics of the laser beam, the LBW tool motion, the shielding gas and the material properties [3]. Table 2.2 gives an overview of the process parameters related to LBW and also information regarding whether the parameter has been controllable or not in the set-up used in the current study. Beam properties include laser power, whether the laser is pulsed or continuous (CW), the size and mode (characteristics of the power distribution) of the laser spot and the wavelength of the laser. The motion properties consist of welding travel speed, laser beam focal point, joint geometry and gap tolerances. Regarding shielding gas, the gas composition, the design of the gas supply system and the gas pressure and velocity affect the welding result. Also important are the material composition and the surface condition of the material to weld. Disturbances in the LBW process can cause weld imperfections or weld defects, where an imperfection is any deviation from the ideal weld and a defect is an unacceptable imperfection [7].

The laser power density is defined by the laser power (and also the temporal pulse shape during pulsed LBW), the laser spot size and the beam quality. Together with the welding travel speed it controls the HI and hence the weld penetration and width. The issues in the resulting weld seam that may be caused by the laser power density are lack of penetration or the opposite, excessive penetration


Table 2.2: Controllable and non-controllable process parameters in this study.

Process parameter                      Controllable    Non-controllable
Laser power                            X
Operating mode (CW or pulsed)          X
Laser pulse rate                       X
Focused laser spot size and shape      X
Laser beam quality                     X
Focal point                            X
Depth of focus                         X
Laser beam position on work piece      X
Welding travel speed                   X
Shielding gas flow                     X
Laser wavelength                       X
Material properties                    X
Joint geometry                         X
Filler wire feed rate                  X
Filler wire pose                       X

(root dropout) [3]. The position of the focal point and also the shielding gas influence the weld penetration depth. The main issues depending on material properties are crack sensitivity, porosity, heat affected zone (HAZ) embrittlement and poor absorption [3]. Porosity is common in materials subjected to volatilisation and can be controlled by the gas shielding system or by controlling the pulse rate or spot size. In butt joints, the gap between the parts must be small enough to ensure that the laser beam does not pass through the joint. On the other hand, when it is very narrow the problem of finding the joint occurs.

When adding filler wire to the LBW process, several process parameters are added, making it even more complex [6]. The filler wire feed rate must be appropriate to fill the gap in order to avoid undercut or underfill. The angle between the work piece and the filler wire and the straightness of the filler wire are also important, as is the position at which the filler wire interacts with the laser beam. If the filler wire is fed to a position where it is not interacting with the laser beam, it can only be melted by the melt pool. The laser beam should melt both the filler wire and the base material to form the weld. The mechanism of laser power absorption also changes, since parts of the laser beam will be reflected by the filler wire, making the interaction between the laser beam and the work piece even more complex. With increasing gaps and filler wire feed rates, the welds will be more v-shaped. A larger part of the laser power

will be used to melt the filler wire when the filler wire feed rate is increased, and therefore the melting at the root side will be limited.

2.2.3 Laser beam welding system

An industrial LBW system basically consists of the following parts:

• Laser source
• Main control system
• Electrical system
• Cooling system
• Beam delivery system
• Gas system
• Work station
• LBW tool
• Manipulator for LBW tool
• Ventilation
• Safety system

There are three main principles for motion in a laser system: relative movement of laser source and optics, movement of the work piece, and movement of the laser beam. In the welding experiments conducted in this work a six-axis industrial robot is used for LBW tool manipulation. The laser tool is attached to the robot giving a movable LBW tool, and the work piece is fixed. Figure 2.3 shows an image from the laser welding cell used in this work.

2.3 Optical sensors in laser beam welding

This section presents a summary of a literature survey of optical sensor systems for monitoring of LBW. Details regarding the sensors selected for this work, photodiodes, camera, spectrometer and laser profile sensor, are presented, as well as an introduction to sensor fusion. The section ends with the state of the art of the selected issues covered in this thesis.



Figure 2.3: The LBW cell used during the experiments conducted in this work.

2.3.1 Sensor survey

The issue of finding relevant sensors for different weld defects or features is described in this section. The details of the survey are presented in Paper A. The aim of the survey was to evaluate optical sensor systems to be integrated in a LBW system. Relevant literature was evaluated to find sensors detecting features or weld defects applicable to the process. The features considered here were: gap and misalignment between the parts in the work piece, thickness of the work piece, joint position with regards to the high power laser beam spot position, focal point of the high power laser, weld penetration, and geometry of the resulting seam. The defects considered are: pore formation, spatter and surface defects. A summary of the survey is presented in Table 2.3, where references are made either to this work, by stating the appended paper (A-F), or to papers presented by other researchers. Based on this literature survey and also discussions with industrial partners, the issue of finding the joint position (joint tracking) and the issue of managing varying joint gap width were selected as the main focus for this work.

2.3.2 Photodiodes

A photodiode has a transparent window allowing light to strike the PN junction. It works in reverse bias and the reverse current is proportional to the incident light intensity [24].


Table 2.3: Different sensors and what features or defects they are able to detect.

Features and defects covered (columns): gap width, misalignment, thickness, joint position, focal point, weld penetration, weld geometry, pores, spatter and surface defects.
Sensors (rows) and where they are addressed: Camera (Papers C, D, E, F; [8][9][10]); Laser profile sensor (Paper F; [11]); Laser ultrasound ([12][13][14]); IR camera ([15][16][17]); Photodiode (Paper B; [18][19][20]); ICI ([21]); Spectrometer (Papers C, E; [22][23]).

Hence, it is possible to use a photodiode to monitor the amount of emitted light from the LBW process by measuring its reverse current. Several different types of photodiodes exist, each sensitive in a certain wavelength range [25]. By placing optical filters, such as bandpass filters, in front of the sensor it is possible to further limit the spectral range reaching the photodiode. A common approach is to monitor the heat radiation from the melt pool by a sensor for the infrared (IR) spectrum, one sensor for the reflected laser beam light, and one for the visible (VIS) spectrum, where the majority of the radiation from the vapour plume is emitted [26]. However, it has been shown that the infrared radiation from the vapour plume is also picked up by the sensor looking at the melt pool, and it has therefore been recommended to subtract the vapour plume signal from the melt pool signal [27].

There are several commercial monitoring systems available using photodiodes, e.g. [28][29]. These systems use a template, acquired during conditions when a high quality weld is produced, to set threshold values for features in the signals (e.g. mean value) and then use a pass or fail decision based on this threshold. This approach requires several high quality welds to be produced and used as a template for each specific welding case, which might be a problem in low volume production.

Many researchers have addressed the issue of finding correlations between the LBW process and the spectral emissions from the process using photodiodes.


Eriksson et al. [26][27] and Olsson et al. [30] evaluated monitoring by photodiodes using a commercial system from Precitec. In [26] and [27] it is shown that high speed imaging is a powerful tool to judge the possibility to detect a specific defect by the photodiode system. It is also shown that the high frequency of the plasma plume dynamics masks other process dynamics and reduces the effectiveness of photodiode monitoring. It is further shown that monitoring the reflected laser light can be useful to judge the quality of the weld, especially in pulsed LBW. Monitoring of LBW of zinc-coated steel using photodiodes is presented in [30], where the difficulty of analysing the raw data from the photodiodes is addressed and a method of analysing data in a 3D cloud is suggested. It is shown that the variance of the reflected laser light is larger during unstable welding conditions. Kawahito et al. [31] developed a feedback control system for full-penetration welding using two photodiodes. Experimental results of welding 0.1 mm thick samples of titanium showed that by controlling the laser power based on the heat radiation intensity measured by one of the photodiodes it was possible to obtain stable weld penetration. Molino et al. [32] developed an FPGA implementation of time-frequency analysis algorithms for photodiode monitoring. The system has been evaluated for detection of lack of fusion and porosity, and experimental results show that these could be successfully detected. Baik et al. [33] developed a technique for monitoring focus and power variations by chromatic filtering of the thermal radiation from the melt pool captured by two photodiodes at different wavelengths. Experimental results show that the system can detect and distinguish between the effects of focus shifts and power variations. Postma et al. [34] developed a feedback controller to maintain full penetration during welding of mild steel sheets. A photodiode integrated into the laser source measures the light intensity emitted from the melt pool. It is shown that the system can maintain full penetration during welding travel speed changes by controlling the laser power. Rodil et al. [35] presented an approach for defect detection based on both time and frequency analysis of two photodiode signals. Results show that 97 % of the defects could be detected when welding zinc coated steel tailored blanks. Sibillano et al. [36] monitored the plasma plume during CO2 LBW using a single silicon photodiode and analysed the fluctuations of the plasma plume under the assumption that they are directly correlated to melt pool and keyhole instabilities. It is shown that by applying a discrete wavelet transform (DWT) to the photodiode signal it is possible to obtain information about changes in the LBW process caused by variations in laser power and shielding gas flow. Colombo and Previtali [37] present an approach where the photodiodes are integrated into the fibre laser source. Experimental results of welding titanium show that changes in laser power and shielding gas flow could be successfully detected. Bardin et al. [8] implemented a closed loop control system for full penetration welding using

photodiodes and a camera. The focus position is calculated based on chromatic aberration of the focus optics measured by two photodiodes at different wavelengths. It is shown that full penetration welding could be obtained during welding of plates with different thickness by controlling the laser power. Stritt et al. [38] present a photodiode system where reflected laser light is monitored to investigate the threshold between conduction and keyhole welding. Variations of the back-reflected light are used to distinguish between the two states, and large variations in back-reflections were observed during conduction welding due to reflections from the surface of the melt pool.

Several researchers have applied various methods to interpret the photodiode signals. You et al. [39] showed that using a low-cost sensor with simple structure, as well as integrating a feed-forward neural network (FNN) and a support vector machine (SVM) based on wavelet packet decomposition principal components analysis (WPD-PCA), effective estimation and classification of welding status and defects could be realized. Park et al. [40] developed a fuzzy rule base and fuzzy membership functions in a LBW quality evaluation system, using a fuzzy pattern recognition algorithm. Spectral emissions from the LBW process are monitored using one photodiode for the spectral range between 260-400 nm (UV) and one for the spectral range between 700-1700 nm (IR). Results show that it is possible to classify the HI used for LBW quality evaluation. Olsson et al. [41] present two new techniques for monitoring reflected laser light during LBW. In one case the variance of the peak values of the signal is used as a measure of stability, and in a second case the temporal shape of the power distribution of individual reflected pulses is used to evaluate the quality of the weld. It is shown that reflected pulse shape comparisons involving polynomial best fits is a very promising tool for on-line process monitoring for pulsed laser welding.

In this work, presented in Paper B, emissions from the plasma plume are captured by a photodiode and the signal is analysed using the continuous wavelet transform (CWT). The analysed signal is then correlated to beam offsets in LBW.
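As a concrete illustration of this kind of analysis, the sketch below computes a continuous wavelet transform of a photodiode signal and its wavelet power spectrum using the PyWavelets library. It is a minimal example with a synthetic signal, not the processing chain of Paper B; the sampling rate, wavelet choice (Morlet) and scale range are assumptions made here for illustration.

import numpy as np
import pywt  # PyWavelets

# Synthetic stand-in for a sampled photodiode signal x_k (assumed 20 kHz sampling)
fs = 20_000.0
t = np.arange(0, 0.5, 1.0 / fs)
x = 0.4 * np.sin(2 * np.pi * 300 * t) + 0.05 * np.random.randn(t.size)
x[t > 0.25] += 0.3 * np.sin(2 * np.pi * 1200 * t[t > 0.25])  # change in process dynamics

# Continuous wavelet transform with a Morlet wavelet over a range of scales
scales = np.arange(1, 128)
coefs, freqs = pywt.cwt(x, scales, "morl", sampling_period=1.0 / fs)

# Wavelet power spectrum |W_n(s)|^2; a band-averaged power can then be tracked
# over time and compared against a reference weld with no beam offset
power = np.abs(coefs) ** 2
band = (freqs > 1000) & (freqs < 1500)
band_power = power[band].mean(axis=0)
print(band_power[: t.size // 2].mean(), band_power[t.size // 2:].mean())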

2.3.3 Cameras

The main components of a digital camera are the image sensor, a matrix of photo detectors, and the optics used to focus the incoming light onto the image sensor. The image sensor of a visual camera is sensitive to the visible spectrum, that is, the spectrum detectable by the human eye [42]. However, while the spectral sensitivity of the human eye ranges from 380 to 780 nm, the image sensor of a visual camera is responsive in the range from approximately 300 to 1000 nm. By placing optical filters in front of the image sensor, it is possible to limit the spectral range of the light reaching the sensor.


There are mainly two kinds of image sensors available for the visual spectrum, the charge-coupled device (CCD) and the complementary metal-oxide-semiconductor (CMOS) [42]. Both types consist of a matrix of photo detectors, usually photodiodes. In a CCD sensor, the charge of each photo detector is transported sequentially to a readout register and is then charge converted and amplified in a common unit. In a CMOS sensor, on the other hand, each row can be selected directly for readout allowing random access, and each photo detector has its own amplifier. The random access possibility of CMOS sensors provides a means to define a region of interest (ROI), allowing higher frame rates (measured in frames per second (FPS)) when the area is decreased. The response of the image sensor is normally linear with regards to the intensity of the incoming light. However, in applications such as welding process monitoring it is desirable to obtain images with very high dynamic range (HDR). To obtain the geometrical features surrounding the keyhole and melt pool in the presence of the high intensity process light, a non-linear grey value response is required. This can be achieved by a logarithmic response of the image sensor [43].

Monitoring the LBW process using a camera has been conducted by several researchers. Lee and Na [44] present a study where a CCD camera without auxiliary illumination is used for joint tracking by monitoring the melt pool in pulsed Nd:YAG LBW. By applying an image processing technique it is possible to find the joint position from the images of the melt pool. Kim and Ahn [45] suggest coaxial monitoring of the LBW process using a CCD camera and external laser illumination during remote LBW of steel and Al sheets. Experimental results show that the shape of the melt pool and keyhole as well as full penetration could be obtained during LBW of steel sheets, and for Al sheets, the keyhole and full penetration could be obtained. Cai et al. [46] use a high-speed camera to monitor the vapour plume in CO2 LBW of carbon steel parts in a T-joint configuration. Experimental results show that the system can be used to track the welding position in LBW of T-joints. Säntti et al. [47] use a smart camera based on a CMOS sensor to perform joint tracking in LBW of steel sheets. Results show a joint position extraction rate of 400 FPS, making it suitable for real time joint tracking applications. Jäger et al. [48] present a system using a CMOS camera to monitor the laser induced plasma in an industrial welding application. Principal component analysis (PCA) is used to extract features, and results show that the defects could be successfully classified. Hugger et al. [10] use two cameras to perform three-dimensional tracking of spatters in LBW. It is shown that it is possible to calculate the 3D trajectories of spatters. Zhao and Qi [9] present a system comprising a camera to monitor full penetration welding in keyhole LBW of steel plates. Full penetration is detected by applying an image processing algorithm, and results show that the method could potentially be used for full penetration control in LBW. Luo and


Shin [49] propose a system comprising a CMOS camera together with green laser illumination to obtain images of the melt pool during LBW of stainless steel. An edge detection algorithm is proposed to extract the boundary of the melt pool, and experimental results show that the melt pool geometry could be extracted.

Cameras can be used in combination with one or several projected laser lines to obtain geometrical features; examples are presented in [50], [51] and [52]. High speed cameras are often used to observe the LBW process in order to obtain a better understanding of the process; examples can be found in [23], [53], [54], [55] and [56]. They are however not suitable for real-time control applications due to the large amount of data that needs to be processed.

A CMOS camera, LED illumination, a laser line module and matching optical filters are used in this work to obtain images of the area in front of the melt pool. Analysis of those images is then conducted in order to track the joint position and to measure the joint gap width. The work related to camera monitoring is presented in Papers C, D, E and F.
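To give an idea of what such image analysis can look like, the sketch below extracts a joint-line estimate from a region of interest using edge detection and the standard Hough transform (the SHT listed in the nomenclature). It is a generic, simplified illustration and not the algorithm of Papers C and D; the ROI handling, thresholds and line selection criteria are assumptions made here for readability.

import cv2
import numpy as np

def estimate_joint_position(frame, roi):
    """Estimate the lateral joint position [pixels] inside a region of interest.

    Generic illustration: the joint appears as a dark, roughly vertical line in
    the LED-illuminated area ahead of the keyhole, so edges are extracted and a
    standard Hough transform votes for the dominant near-vertical line.
    Not the algorithm of Papers C and D, only a simplified sketch.
    """
    x, y, w, h = roi
    gray = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)           # suppress surface texture noise
    edges = cv2.Canny(gray, 40, 120)                   # edge map of the joint region
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=60)
    if lines is None:
        return None
    for rho, theta in lines[:, 0]:
        # keep near-vertical candidates only (theta close to 0 or pi)
        if theta < np.radians(15) or theta > np.pi - np.radians(15):
            return x + rho / np.cos(theta)             # lateral position at the top ROI row
    return None

The beam offset would then be the difference between such an estimate and the known beam position in the image; in practice some temporal filtering over consecutive frames is needed to reject outliers caused by scratches or spatter.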

2.3.4 Factors influencing camera monitoring performance

The main factors influencing camera monitoring of joint characteristics in fibre LBW are the spectral emissions from the process and variations in the surface characteristics of the work piece [9].

There are four kinds of light emitted from the LBW process: reflected processing laser light, thermal radiation from the melt pool, light from the plasma plume and light from the vapour plume [9]. The reflected laser light is of the same wavelength as the processing laser [26]; this might be an issue if the camera detector is responsive to this wavelength. With coaxial monitoring, the dichroic mirror in the optical path (see Fig. 2.2) will attenuate this wavelength to a large extent. The thermal radiation from the melt pool is emitted in the NIR range. The temperatures in the melt pool range from the liquidus temperature to the boiling temperature. For the materials investigated in this thesis this range is approximately between 1700 - 3300 K. Wien's displacement law gives that this range corresponds to a peak black body radiation in the wavelength range between 800 - 1700 nm. The intensity is however relatively low in the spectral area where the camera detector is sensitive, hence it is not a big issue for LBW monitoring of the joint characteristics. The vapour plume consists of vaporised metal and has a temperature above 3000 K. It will emit light that will interfere with the camera image, since the detector is responsive in this spectral range. Besides emitting light, the vapour plume also contains condensed metal and smoke. The plasma plume is located inside and just above the keyhole and will emit very high intensity light from this area. However, for monitoring the area in front

of the keyhole, where the joint is visible, this is not a big issue since the plasma is localised around the keyhole. The plasma plume will just show up as a saturated spot in the image.

Surface feature variations can also disturb the measurements when monitoring the joint characteristics using a camera. Scratches on the work piece near the joint can cause false detections, since the system might interpret a scratch as the joint, and variations in surface texture can influence the image processing algorithms since the light will be reflected differently. If the parts to be welded are tack welded to avoid distortions during welding, the tack welds will interfere with the joint detection since they obscure the joint.
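The wavelength range quoted above follows directly from Wien's displacement law, lambda_peak = b / T with b ≈ 2.898e-3 m·K. The short check below evaluates it at the two temperature extremes; it is included only to make the 800 - 1700 nm figure easy to verify.

# Wien's displacement law: peak black body wavelength as a function of temperature
B_WIEN = 2.898e-3  # Wien's displacement constant [m*K]

for T in (3300.0, 1700.0):            # melt pool temperature extremes [K]
    peak_nm = B_WIEN / T * 1e9        # peak wavelength [nm]
    print(f"T = {T:.0f} K  ->  peak ~ {peak_nm:.0f} nm")
# T = 3300 K -> ~878 nm; T = 1700 K -> ~1705 nm, i.e. roughly the 800-1700 nm range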

2.3.5 Spectrometer

A spectrometer is a device capable of measuring the intensity of one or more spectral ranges of the incoming light [57]. The output from the spectrometer is a discrete signal, showing the intensity for a range of evenly distributed wavelengths. The main components of a spectrometer are normally an entrance slit, a diffraction grating and a photo detector, see Figure 2.4. The entrance slit acts as an aperture, and sometimes there is also an optical filter that limits the bandwidth of the light entering the spectrometer. The diffraction grating splits the light by wavelength and this light is then directed onto the detector. Each pixel of the detector represents a portion of the spectrum. Depending on the configuration of these components, different spectral ranges and optical resolutions can be achieved.


Figure 2.4: The working principle of a spectrometer.

The use of spectrometers for monitoring of LBW has been addressed by several researchers. Ancona et al. [58] study the correlation between weld quality

and the LBW plasma temperature during LBW. The plasma electron temperature is calculated by measuring spectral lines emitted from the LBW plasma using a spectrometer. Correlations between the plasma electron temperature and weld defects such as crater formation, lack of penetration and weld disruptions are found. In [59], the welding plasma plume emission is monitored by a spectrometer and it is shown, by correlation analysis, that the dynamics of the plasma plume is related to the stability of the process. Sibillano et al. [60] analyse the optical spectrum with regards to weld defects, and it is shown that defects, such as oxidation and lack of penetration, can be related to the behaviour of the laser plasma. Also, in [61] it is shown that the plasma electron temperature is correlated to the penetration depth in LBW keyhole welding, and in [22] a closed loop control system based on this result is presented, where the laser power is controlled in real-time depending on measurements of the plasma electron temperature in order to control the penetration depth. Kong et al. [56] study the effect of zinc coating by monitoring the LBW process using a spectrometer. The correlation between the plasma electron temperature and defects within the weld seam is analysed. Results show correlations between the plasma electron temperature and weld defects such as spatter. Chen et al. [62] present a study where a CO2 LBW process is monitored using a spectrometer. The plasma electron temperature is calculated and the temperature field of the plasma plume is presented. Harooni et al. [23] study the correlation between the spectral signals, acquired by a spectrometer, and pore formation during LBW of an AZ31B magnesium alloy. Results show that there is a strong correlation between the plasma electron temperature and the weld defects.

In this work, presented in Paper C, the plasma electron temperature is estimated from spectral lines emitted from the LBW process and it is then correlated to beam offsets. Also, in Paper E, the intensity of a spectral line, obtained by a spectrometer, is correlated to variations in joint gap width.
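A common way to estimate the plasma electron temperature from two emission lines of the same species is the Boltzmann relation, using the line intensities Imn, wavelengths λmn, transition probabilities Amn, upper level energies Em and statistical weights gm listed in the nomenclature. The sketch below implements this two-line estimate under the usual assumption of local thermodynamic equilibrium; it is a textbook illustration, not the exact multi-line Boltzmann plot procedure used in Paper C, and the line data in the example are placeholders.

import math

K_B_EV = 8.617333e-5  # Boltzmann constant [eV/K]

def electron_temperature(I1, lam1, A1, g1, E1, I2, lam2, A2, g2, E2):
    """Two-line Boltzmann estimate of the plasma electron temperature [K].

    I   : measured line intensity (arbitrary but consistent units)
    lam : line wavelength [nm]
    A   : transition probability [1/s]
    g   : statistical weight of the upper level
    E   : upper level energy [eV]
    Assumes local thermodynamic equilibrium and two lines of the same species.
    """
    ratio = (I1 * lam1 * A2 * g2) / (I2 * lam2 * A1 * g1)
    return (E2 - E1) / (K_B_EV * math.log(ratio))

# Placeholder line data (illustrative only, not values from Paper C)
Te = electron_temperature(
    I1=1.00, lam1=520.0, A1=4.0e7, g1=5, E1=5.1,
    I2=0.35, lam2=540.0, A2=2.0e7, g2=7, E2=6.0,
)
print(f"Estimated electron temperature: {Te:.0f} K")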

2.3.6 Laser profile sensor

Distance measurements using lasers can be conducted in several ways, e.g. interference, time of flight, occlusion times and triangulation [3]. Laser profile sensors use triangulation; the working principle is illustrated in Figure 2.5. A laser diode projects a laser beam onto an object and the beam is then reflected onto an image sensor [63]. The position on the image sensor where the reflected beam is projected is related to the distance between the laser diode and the measurement object through fundamental trigonometric rules. It is then possible, by calibration, to calculate the distance between the sensor and the measurement object. The accuracy of the sensor is limited by the measurement range and the number of pixels in the image sensor.
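To make the trigonometric relation concrete, the sketch below computes the range for a simplified triangulation geometry in which the laser beam is parallel to the camera's optical axis at a baseline b: a spot imaged at offset x from the principal point then lies at range z = f * b / x. Real profile sensors use calibrated and more general geometries, so this is only an illustrative model with placeholder numbers.

def triangulation_range(x_pixels, pixel_size, f, b):
    """Range [m] for a simplified laser triangulation geometry.

    Assumes the laser beam is parallel to the camera optical axis at a
    baseline b, so the imaged spot offset x gives z = f * b / x.

    x_pixels   : spot offset from the principal point [pixels]
    pixel_size : physical pixel pitch [m]
    f          : camera focal length [m]
    b          : baseline between laser and camera [m]
    """
    x = x_pixels * pixel_size
    return f * b / x

# Placeholder numbers: 16 mm lens, 50 mm baseline, 5 um pixels
print(triangulation_range(x_pixels=80, pixel_size=5e-6, f=16e-3, b=50e-3))  # 2.0 m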



Figure 2.5: Working principle of a distance sensor based on laser triangulation.

The use of laser light for distance measurements is preferred over other light sources since beams with high intensity can be obtained using lightweight sources. The laser light can also be focused to a narrow beam, and it has a very narrow optical bandwidth, hence it can be isolated by an optical band pass filter in order to reject other potentially interfering light sources [63]. For laser distance sensors, laser diodes (semiconductor lasers) with an optical power of a few milliwatts are commonly used [64]. The beam quality is important in order to get a small spot, and the optical power needs to be high enough for targets with diffuse reflection. The laser source also needs to be robust and able to withstand vibrations in order to be used in an industrial environment. Commonly used laser diodes for laser distance sensors are Indium Gallium Phosphide (InGaP), which generates red light, and Gallium Nitride (GaN), which generates blue or violet light [65]. For a laser profile sensor, the emitted laser beam is reshaped into a laser line either by using a cylindrical lens or by using a more complex lens called a Powell lens [66]. The functionality of the two different lenses is shown in Figure 2.6.


A cylindrical lens has one major disadvantage: it produces a laser line with a Gaussian intensity profile. A Powell lens has a rounded roof aspheric curve that generates a very uniformly illuminated line suitable for machine vision. The length of the generated laser line is defined by the Powell lens fan angle. The fan angle is a function of the refractive index of the glass and the roof angle (see Figure 2.6); a steeper roof angle and a higher refractive index give a wider fan angle and therefore a longer line. The thickness of the laser line is defined by the incident laser beam: a narrow incident beam gives a thin laser line with a small depth of focus, while a wide incident beam gives a thick laser line with a large depth of focus. To get a high quality laser line suitable for machine vision, the Powell lens should be matched to the beam characteristics of the laser diode.

Figure 2.6: Laser line generation using a cylindrical lens and a Powell lens.

Several researchers have addressed the use of laser profile sensors for LBW monitoring. Zhang et al. [11] present a study where the issue of disturbing light, from the process and also from e.g. sunlight when measuring outdoors, is discussed with regard to using laser line sensors for monitoring weld seams with a climbing robot. A cross-structure light sensor is proposed that can detect horizontal and vertical weld joints simultaneously. With this set-up they were able to obtain robust measurements with errors within 0.3 mm. Huang and Kovacevic [50] present a laser based vision system to inspect weld quality and detect the presence of weld defects. The focus of that paper is to develop a low cost sensor that can satisfy the requirements of weld quality inspection in an industrial environment. With the presented set-up it is shown that a resolution of 0.06 mm/pixel laterally and 0.24 mm/pixel vertically could be achieved. Shao et al. [67] suggest a measurement method based on a camera and three laser lines to detect closed-butt joints for LBW applications. Two laser lines using red lasers are designed to measure a three dimensional profile of the welding area using triangulation. The third laser line uses a green laser and is used for illumination in order to extract the joint position by a vision algorithm. With this set-up it was possible to extract the joint width, joint centre position and the normal vector simultaneously from the same image. Zeng et al. [68] present a 3D path teaching technique to be used before welding of narrow butt joints. Two types of visual information are obtained: a 2D grayscale image of the welding area and 3D point cloud data of the work piece surface. The joint position is then calculated by fusing these two sources of information. With this set-up it was possible to achieve an image resolution of 12.5 µm, and experimental results show that it could be used for automatic trajectory teaching for complex 3D components. Xu et al. [69] suggest a vision sensor based on a CCD camera and a projected laser circle in order to find the joint position. Results show that it is possible to obtain a 3D view of the welding area and locate the position of the joint. A laser profile sensor is used in this work, presented in Paper F, to measure the joint gap width of square-butt joints during LBW. The measurements are used in a feed-forward control system that controls the filler wire feed rate in order to compensate for variations in the joint gap width.

2.3.7 Sensor fusion

Sensor fusion [70] implies that combining data from multiple sensors gives a better result than using any of the sensors individually. There exist several levels of fusion: information fusion, sensor fusion and data fusion. It is not always clear how to distinguish between those types of fusion. However, data fusion is considered to operate on raw sensor data, while sensor fusion is the next level of fusion. Information fusion is the highest level of fusion, and cannot always be represented by numbers. Some researchers have studied the use of multiple sensing and sensor fusion in relation to welding monitoring. Zhang and Chen [71] suggest a multi-sensor system in order to classify the weld penetration status in arc welding. The sensor system consists of a microphone for sound, a Hall sensor for arc voltage and a spectrometer for the spectral emissions. Features extracted from the sensor data are fused using SVM, and it is shown that the fused features from the multiple sensors gave better classification than the individual sensors. Gao et al. [72] present a sensor system using two photodiodes, one for visible light and one for reflected processing laser light, and two cameras, one for ultraviolet and visible light (320-759 nm) emitted from the process and one for external laser illumination light (976 nm). Features extracted from the different sensors are correlated to weld quality, which in that study is represented by the weld seam width. The extracted features are fed into a back propagation neural network, and it is shown that by using multiple sensors a higher accuracy could be achieved than by using the sensors individually. You et al. [73] suggest a multi-sensor system, comprising a photodiode and two cameras, for defect detection in LBW. A number of features are extracted from the sensor data and the welding status is classified using SVM. Results show that the system is able to identify four kinds of welding statuses, one without defects and three with different defects. Sun et al. [74] present a sensor fusion system, comprising photodiodes for infrared and ultraviolet monitoring, a microphone for audible sound and a piezoelectric transducer for acoustic emissions, to detect full penetration in LBW. Different classification methods are used to distinguish between full penetration and partial penetration, and it is shown that by sensor fusion the classification accuracy could reach 100 %. Sensor fusion is used in this work, presented in Paper F, to study whether measurements from two sensors, a camera and a laser profile sensor, can improve the accuracy of the joint gap width estimation.
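As a minimal illustration of measurement-level fusion, and not the exact scheme evaluated in Paper F, two gap width readings can be combined by inverse-variance weighting when their noise is assumed uncorrelated and of known variance:

```python
import numpy as np

def fuse(measurements, variances):
    """Inverse-variance weighted fusion of independent measurements of the
    same quantity. Returns the fused estimate and its variance."""
    m = np.asarray(measurements, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v
    return np.sum(w * m) / np.sum(w), 1.0 / np.sum(w)

# Hypothetical gap width readings [mm] from the camera and the laser
# profile sensor, with assumed measurement noise variances
gap, var = fuse([0.23, 0.19], [0.03**2, 0.02**2])
print(gap, var)   # the fused estimate lies closer to the more precise sensor
```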

2.4 State of the art

Rout et al. [75] present a general review of joint tracking techniques in robotic welding. It shows that most of the available systems and research within this field concern laser profile sensors used for arc welding. The issue of joint tracking for closed-square-gap butt welding has been addressed by several researchers using different sensor solutions. Fan et al. [76] present a closed loop joint tracking system for both horizontal and vertical positioning based on an off-axis camera with optical filters, LED illumination and a laser line module. An image processing method is presented to extract the joint position and a fuzzy PID controller is used for position control. Experimental results show that most tracking errors are within ±0.25 mm in the horizontal direction and ±0.30 mm in the vertical direction. The set-up, with an off-axis camera placed 120 mm in front of the welding position, is not applicable for welding of complex parts with limited space, and it is not suitable for curved joints due to the distance between the sensor and the welding position. Shao et al. [67] suggest a joint tracking system for LBW of narrow butt joints, comprising an off-axis camera and three laser lines. Two red laser lines are used to get a 3D view of the work piece and a green laser is used as illumination in order to extract the joint position. Experiments show that the joint position could be measured with an average error of less than 0.005 mm. The camera is also here placed in an off-axis configuration, which is not applicable to welding of complex parts. Zeng et al. [68] present a path teaching method for narrow butt joints using a camera, LED illumination and a cross-line laser module. The system alternates between acquiring images using the LED illumination and the laser line module, and the information from both images is used to obtain the 3D pose of the joint. Results show that the deviation between the system output and a theoretical CAD model is not more than 0.24 mm and 0.54 degrees. The presented method is however intended for path teaching, not for on-line control where process disturbances must be considered. Gao et al. [77] suggest a measurement principle based on magneto-optical imaging to obtain the joint position for closed-square-butt joints. A method based on optical flow and particle filtering is used, and it is shown that the mean tracking error is below 0.01 mm and the maximum error is below 0.03 mm. The set-up, using a magnet on the root side of the work piece, might however not be applicable in an industrial setting, especially when welding complex parts with limited space. Reegard et al. [78] present the basic concept for joint tracking, and a sensor system is presented for measuring the joint position and also the relative displacement between the LBW tool and the work piece. Results indicate that the joint can be tracked up to a welding travel speed of 5 m/min. Krämer et al. [79] suggest a texture based algorithm that finds the joint position in images captured by a CMOS camera. Although showing promising results, it might not be suitable for real time applications since the algorithm is very time consuming. There is also a need to teach the different textures for every test case, which might be a problem due to variations in the texture of the material (e.g. due to oxidation) and variations in light conditions caused by disturbances from the process. Gao et al. [15] present a method based on an IR camera placed in an off-axis configuration to capture images of the melt pool. The joint position is obtained by analysing the image gradients of those images. Experimental results, conducted at laser powers of 6 kW and 10 kW, show a maximum error of approximately 0.1 mm. However, it might be difficult to integrate this type of camera in the LBW tool for a real industrial application due to space limitations, since it is placed in an off-axis configuration. None of the proposed systems presents a robust solution for joint tracking of closed-square-butt joints for complex parts where the space for integrating sensor solutions is limited. Furthermore, the issue of disturbances, such as scratches that might mislead the tracking system, is not considered.

The issue of varying joint gap widths in LBW of butt joints has been addressed by other researchers. Coste et al. [80] use an optical sensor from SERVO-ROBOT, placed in an off-axis configuration, to measure the gap width and control the filler wire feed rate during CO2 laser welding of 8 mm thick castings. It was shown that gap widths up to 1.5 mm could be bridged using the adaptive wire feed control. Zhang et al. [81] suggest a model, using a back propagation neural network and a genetic algorithm, to describe the relationship of laser power and wire feed rate to joint gap width and misalignment. Experimental results, where the joint gap width was measured during welding using a high-resolution 2D laser-displacement sensor, show that the geometric properties could be preserved by using the model together with the gap measurements to adjust the filler wire rate. Huang et al. [82] propose an adaptive wire feed rate model describing the relationship between welding travel speed, gap width, reinforcement, filler wire rate and laser power. A laser sensor measures the gap width during welding, and the laser power and filler wire rate are controlled according to the proposed model. Results show that the weld reinforcement can be preserved during the welding experiments. Chen et al. [83] suggest a method for adaptive filler wire control based on a scanning laser-photoelectric cell sensor and a filler wire feed controller. Welding experiments, where the welding travel speed and wire feed rate were controlled, showed that controlling the welding travel speed was best for wider gaps, while controlling the filler wire rate was best for narrow gaps. Although showing promising results, all of the proposed systems for filler wire rate control use an off-axis configuration for the sensors measuring the gap width, which is not applicable for welding of complex parts with limited space for sensor integration.

Chapter 3

Experimental set-up

Figure 3.1 illustrates the LBW tool and the sensor systems used during the welding experiments. The LBW tool was always mounted on an industrial robot, and a filler wire feeder was used in some of the experiments. The details of the welding and monitoring systems are described in the following.


Figure 3.1: Set-up used in the welding experiments.

3.1 Welding equipment

This section describes the tool manipulator, laser source, LBW tool and filler wire feeder used in the experiments.


3.1.1 Laser beam welding tool manipulation

An industrial robot, ABB IRB 4400, was used for manipulation of the LBW tool and the monitoring equipment shown in Figure 3.1. The main specification of this robot is presented in Table 3.1. Robot programming was mainly conducted off-line using RobotStudio. Fine tuning of the robot trajectory was conducted manually, using the camera integrated in the LBW tool for visualisation of the joint position.

Table 3.1: Specification of ABB IRB 4400.

ABB IRB 4400
  Number of axes                  6
  Payload                         60 kg
  Position repeatability          0.19 mm
  Path repeatability at 1.6 m/s   0.56 mm

3.1.2 Laser source and laser beam welding tool

All welding experiments were conducted using a 1070 nm IPG Ytterbium Fiber Laser, YLR-6000-S 6 kW, together with an LBW tool from Permanova Laser System AB. The optical fibre used for delivery of the processing laser beam was 0.6 mm in diameter. The processing laser light was collimated by a lens with 160 mm focal length before being focused by a lens with 300 mm focal length. This set-up gave a laser beam spot diameter of 1.12 mm and a Rayleigh length [3] of 13.7 mm, measured with a Primes laser beam profiler. A laser beam spot diameter of 1.12 mm is relatively large and results in a rather wide weld, which makes the process less sensitive to inaccuracies in LBW tool manipulation, i.e. the TCP position, and therefore suitable for robotised LBW. The large Rayleigh length obtained with this set-up makes the process relatively insensitive to changes in focal position. The z-position of the robot is therefore not as critical as it would be with a set-up resulting in a short Rayleigh length [3].
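As a quick consistency check (a standard fibre-imaging relation, not a measurement from the thesis), the spot diameter follows from the fibre diameter and the ratio of the focal lengths:

\[
d_{spot} \approx d_{fibre}\,\frac{f_{focus}}{f_{collimator}} = 0.6\ \mathrm{mm}\cdot\frac{300\ \mathrm{mm}}{160\ \mathrm{mm}} \approx 1.13\ \mathrm{mm},
\]

which agrees well with the 1.12 mm measured by the beam profiler.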

3.1.3 Filler wire feeder

A filler wire feeder from Fronius, VR 1500 4R/W/E Roboter, was added to the system to facilitate the addition of material when the joint gap width was varying (Paper F). The wire was supplied in a leading off-axis configuration, with a 30 degree angle between the work piece and the wire, see Figure 3.2.

The wire feed rate for gap widths > 0.1 mm was calculated as suggested by Siva Prasad et al. [84]:

\[
v_{wire} = \frac{4\, v\, \hat{\xi}_k\, t_{hk}\, k_r}{\pi d_w^2} \tag{3.1}
\]

where v_wire is the calculated filler wire feed rate, ξ̂_k is the estimated gap width, t_hk is the work piece plate thickness, v is the welding travel speed, d_w is the filler wire diameter and k_r is the reinforcement factor (set to k_r = 1.15 to get 15 % reinforcement).
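A direct implementation of Equation (3.1) is straightforward; the sketch below uses the wire diameter and reinforcement factor given in this chapter together with a hypothetical gap width estimate:

```python
import math

def wire_feed_rate(gap_width, plate_thickness, travel_speed, wire_diameter, k_r=1.15):
    """Filler wire feed rate according to Eq. (3.1): the volume needed to fill
    the gap cross section (gap width x plate thickness) per unit time, scaled
    by the reinforcement factor and divided by the wire cross-section area.
    All lengths in mm, speeds in mm/s."""
    if gap_width <= 0.1:          # below 0.1 mm no wire is added
        return 0.0
    return 4.0 * travel_speed * gap_width * plate_thickness * k_r / (math.pi * wire_diameter**2)

# Estimated gap 0.3 mm, 2 mm plate, 15 mm/s travel speed, 1 mm wire
print(wire_feed_rate(0.3, 2.0, 15.0, 1.0))   # roughly 13.2 mm/s wire feed rate
```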


Figure 3.2: Filler wire entrance to the interaction zone in a 30 degree off-axis configuration. The LBW tool and the wire feeder were moving in the welding direction while the work piece was in a fixed position.

3.2 Monitoring system

The LBW tool was equipped with three sensor inlets with an optical path for coaxial monitoring. One was used for a camera, and two were used for fibre coupled photodiodes. A spectrometer monitored the process via a fibre coupled collimator placed in an off-axis configuration. Two LED lamps and a laser line module were placed in an off-axis configuration, directed just in front of the keyhole on the work piece. The LED illumination was used to illuminate the scene in front of the keyhole in order to get a good view of the area where the joint is located. The laser line module was used to project a laser line that, together with the camera, formed the laser profile sensor (using triangulation). Figure 3.3 shows a schematic overview, and Figure 3.4 shows a photo of the LBW tool and the monitoring devices.


Figure 3.3: Schematic overview of the sensors integrated into the LBW tool.

3.2.1 Triggering system

A triggering system was developed in order to synchronously trigger the camera together with the illumination. In Papers C, D and E the LED illumination is triggered together with the camera using a digital output module, NI-9472, and software written in LabVIEW. For Paper F, a trigger system was developed to enable triggering of the camera together with either the LED illumination or the laser line projection. It was a programmable triggering system based on a microcontroller board, an Arduino Uno (based on the Microchip ATmega328P microcontroller), where two separately defined pulse trains controlled the LED and the laser line projection.

3.2.2 Camera, LED and laser profile system

The camera, PhotonFocus DR1-D1312(IE)-200 [85], used in Papers C, D, E and F, was integrated into the LBW tool as shown in Figure 3.3. It has an HDR sensor (120 dB), which makes it suitable for weld process monitoring. The HDR is in this camera achieved by a logarithmic compression of high light intensities inside each pixel, using a technology called LinLog [85]. By analysing the spectral emissions from the process (modelled as a Planck curve at 3000 K, the vaporisation temperature), the transmittance of the LBW tool optics and the responsivity of the CMOS sensor of the camera, a suitable spectral range was selected for the camera. The spectral range centred around 450 nm, shown in Figure 3.5, was selected due to minimal spectral disturbances from the process while still obtaining high responsivity of the CMOS sensor and high transmittance of the LBW tool optics. To achieve this, an optical band pass filter with centre wavelength at 450 nm and full width at half maximum (FWHM) of 10 nm was placed in front of the camera.

Figure 3.4: Sensors integrated into the LBW tool. In this case after welding an ellipse as described in Paper D.

The laser line module was from Permanova Lasersystem [86]. It has been modified with a laser diode, L450P1600MM from Thorlabs, which gives royal blue light centred at 450 nm and an optical power of 1600 mW when used in continuous mode. The LED lamps comprise a housing, an LED, a collimating lens and a near-IR hot mirror for protection against reflections of the processing laser. Table 3.2 shows the components used in the LED lamps. The joint is made visible in the camera field of view (FoV) by illuminating the area just in front of the keyhole using the LED illumination. A closed-square-butt joint with tight fit-up tolerance will still have a small gap between the parts to be welded [68]. The LED illumination will not be reflected in the gap, hence it will show up as a dark line in the image.

Figure 3.5: The spectral range for the camera, shown in yellow, together with the vapour plume spectral emissions, the transmittance in the sensor optical path and the camera responsivity.

In order to obtain the necessary image information during welding, it is important to add external light in the FoV of the camera within the spectral range selected for monitoring. By triggering the LEDs and only activating them during the exposure time of the camera, it was possible to source the LEDs with up to 10 times the nominal current (overdriving). In this manner it was possible to obtain a higher light intensity while the average power (defined by the duty cycle of the trigger signal) remains the same as for continuous illumination. By selecting LED and laser line illumination at a wavelength of 450 nm together with a matching optical band pass filter, and by triggering the LED illumination, the necessary image information can be obtained even during harsh welding conditions.
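The available headroom for overdriving follows directly from the duty cycle of the trigger signal. As an illustrative example (the duty cycle value is assumed here, not reported in the thesis):

\[
I_{avg} = D\, I_{pulse}, \qquad D = 0.1,\; I_{pulse} = 10\, I_{nom} \;\Rightarrow\; I_{avg} = I_{nom},
\]

i.e. with a 10 % duty cycle the LEDs can be pulsed at ten times their nominal current while the average current, and thereby the thermal load, stays at the level of continuous operation.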

Table 3.2: LED lamp components.

Component   Supplier        Model
Housing     Thorlabs        M455L3
Lens        Edmund Optics   33-257
LED         Thorlabs        M450D3
Filter      Thorlabs        FM01


3.2.3 Spectrometer system

Spectral emissions from the welding zone were acquired through a collimator and passed to a spectrometer via an optical delivery fibre. The collimator was placed in an off-axis configuration, at a distance of approximately 30 cm from the work piece, as shown in Figure 3.3. The collimator was directed to maximise the signal to noise ratio when collecting the light from the process interaction zone. The spectrometer used was an HR2000+ from Ocean Optics [87]. It has a 2048 pixel CCD detector array and a 10 µm entrance slit, giving an optical resolution of 0.07 nm. The spectrometer was configured to monitor the spectral range between 400 and 530 nm, since most of the emission lines belonging to ionised steel elements like Fe, Cr and Mn fall in this spectral range [88].

3.2.4 Photodiode system

A monitoring system comprising three photodiodes was developed to monitor the optical emissions from the LBW process. The photodiodes were integrated coaxially into the LBW tool, as shown in Figure 3.3. Three fibre coupled photodiode detectors were used, here called the VIS sensor, the Reflection sensor and the IR sensor. Each of them covered a different spectral range, limited by optical filters. A gallium phosphide (GaP) detector with a spectral range from 150 to 550 nm was used for the visual spectrum, i.e. the VIS sensor. The Reflection sensor used an amplified InGaAs detector, with a band pass filter of 1075 ± 50 nm placed in front of it. The IR sensor, which monitored the spectral range between 1200 and 1700 nm, also used an amplified InGaAs detector; this detector had a spectral range from 800 to 1700 nm, and a long pass (LP) filter with cut-off wavelength at 1100 nm together with a hot mirror blocking wavelengths between 750 and 1200 nm was placed in front of it. Table 3.3 gives an overview of the photodiode sensors used.

Table 3.3: Specification of the photodiode detectors.

Sensor              Supplier   Model      Spectral range   Filter
VIS sensor          Thorlabs   PDA25K2    150-550 nm       -
Reflection sensor   Thorlabs   PDA20CS2   800-1700 nm      BP 1075 ± 50 nm
IR sensor           Thorlabs   PDA20CS    800-1700 nm      LP 1100 nm and hot mirror

The IR sensor was connected to one of the inlets of the LBW tool via a 400 µm fibre cable, while the Reflection sensor and the VIS sensor were connected to another inlet using a 400 µm bifurcated fibre bundle with two fibres.


Figure 3.6 illustrates the spectral ranges (in yellow) monitored by the different photodiodes. Also shown in the figure is the transmittance of the sensor optical path in the LBW tool (black dotted curve).


Figure 3.6: Ranges of responsiveness of the three photodiode sensors, VIS, Reflection and IR.

Figure 3.7 shows the set-up of the photodiode monitoring system. The photodiodes used were transimpedance amplified photodiode detectors with adjustable gain [25]. The gain of the detector influences the bandwidth; a higher amplification gives a lower bandwidth, but keeping the amplification level at 40 dB or less assures a detector bandwidth of > 200 kHz. The photo detectors were connected via coaxial cables to a unit containing a 4th order active low pass filter with a cut-off frequency at 120 kHz (-3 dB) and a voltage amplifier. The filter was used to prevent aliasing, and the amplifier was used since the output signals from the photo detectors were relatively low. The amplification factor was adjusted to get a signal level of 0-10 V, which matched the input range of the A/D converter. The output signal from the filter/amplifier unit was connected to an analog input module, NI 9223, with a 10 V input range and 16-bit resolution. To capture the dynamic behaviour of the LBW process using photodiodes, it is important to sample their signals fast enough. Previous research, described in [53], indicates that commercial systems using photodiodes may not sample fast enough to capture the process dynamics. Therefore it was interesting to investigate the bandwidth of the signals acquired by the system. The photodiode monitoring system in this work had a sample rate of 1 MHz, synchronously for each channel, and a bandwidth of 120 kHz. By applying a discrete Fourier transform (DFT), it was possible to study the frequency content of the signals.

Figure 3.7: Set-up of the photodiode monitoring system.

The signal from the VIS sensor proved to contain higher frequency components than the other sensors, hence it was used for evaluating the frequency content of the monitored signals. An analysis of the spectral content of the signal indicated that a bandwidth of 30 kHz would be sufficient to capture the peaks found in the signal from the VIS sensor, hence this bandwidth should be sufficient to capture all interesting information from the monitored signals.
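A minimal sketch of such a frequency-content check, assuming the sampled VIS sensor signal is available as a NumPy array (the 1 MHz sample rate is taken from this section, while the file name and loading step are hypothetical):

```python
import numpy as np

fs = 1.0e6                                 # sample rate of the photodiode system [Hz]
x = np.load("vis_sensor.npy")              # hypothetical file with the sampled VIS signal
x = x - np.mean(x)                         # remove the DC component

# One-sided amplitude spectrum via the DFT (FFT)
X = np.fft.rfft(x)
f = np.fft.rfftfreq(len(x), d=1.0 / fs)
amp = np.abs(X) / len(x)

# Fraction of the signal energy below a candidate bandwidth, e.g. 30 kHz
band = f <= 30e3
energy_fraction = np.sum(amp[band] ** 2) / np.sum(amp ** 2)
print(f"{energy_fraction:.1%} of the spectral energy lies below 30 kHz")
```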

3.3 Experiments and materials

This section presents the details regarding the LBW experiments, both for the joint tracking experiments conducted on closed-square-butt joints and the gap width measurements performed on square-butt joints with varying joint gap width.

3.3.1 Closed-square-butt joints

Papers B, C and D present experimental work conducted on closed-square-butt joints; a summary of the set-ups for those experiments is shown in Table 3.4. Two different set-ups were used for the welding experiments conducted on closed-square-butt joints. The first was for linear welds (Papers B and C), where welding was conducted on 4 mm thick stainless steel sheet metal parts, 300 x 40 mm in size. The parts were clamped and tack welded before the experiments, as shown in Figure 3.8, in order to get a tight fit between the parts and minimise distortion during welding. The nominal path was a 280 mm line. In the second set-up, for curved joints (Paper D), welding was conducted on 2 mm thick plates of Alloy 718. The nominal joint path had an elliptic shape, and the parts were water cut to give a tight fit, see Figure 3.9. This experiment was conducted to evaluate the ability of the proposed joint tracking system to measure the joint position when the nominal joint path is curved, the work piece has a lot of scratches and the fit between the parts to be welded is very tight.


Table 3.4: Overview of closed-square-butt joint experiments.

                              Paper B               Paper C                 Paper D
Sensor                        Photodiode            Camera & Spectrometer   Camera
Laser power [W]               CW: 2750              2150                    2300
                              Pulsed: 3000/1300
Welding travel speed [mm/s]   9.6                   9.6                     15
Material                      4 mm SS               4 mm SS                 2 mm Alloy 718
Shielding gas                 Argon 32 l/min        Argon 32 l/min          Argon 100 l/min

Figure 3.8: Set-up for the straight welds. Parts are clamped and tack welded before conducting the welding experiments.

The plates were prepared with scratches before welding. The joint was first cleaned using a scouring pad, which introduces a lot of small scratches near the joint and also makes the work piece surface very shiny. Several scratches were also added using a knife; these deeper scratches can be mistaken for the joint in the camera image, see Figure 3.10. In order to achieve a very tight fit between the parts to be welded, they were water cut from two different plates and the inner ellipse was made somewhat larger to compensate for the material removed during cutting. This made the joint gap very narrow. Figure 3.11 shows measurements of the joint gap width made with an optical microscope, Olympus SZX9, on a representative work piece. Twelve measurements were made spread around the ellipse; the mean joint gap width was 62 µm with a standard deviation of 8 µm.


Figure 3.9: Set-up for the curved welds. Parts are clamped and tack welded before conducting the welding experiments.


Figure 3.10: Image from the camera. The joint is visible in front of the keyhole (upper line), and also a scratch (lower line).

3.3.2 Varying joint gap widths

Papers E and F present experimental work conducted on joints with varying gap width; a summary of the set-ups for those experiments is shown in Table 3.5. Two different set-ups were used during the welding experiments on plates with varying gap widths. In the first set-up (Paper E), the material was 4 mm thick plates of stainless steel. The plates were machined to have four sections with different gap widths (0.2, 0.4, 0.6 and 0.8 mm), and five sections with technical zero gap, see Figure 3.12.



Figure 3.11: Joint gap width measurements for ellipse.

Table 3.5: Overview of varying gap width experiments.

                              Paper E          Paper F
Sensor                        Camera           Camera
Laser power [W]               2750             2300
Welding travel speed [mm/s]   9.6              15
Material                      4 mm SS          2 mm SS
Shielding gas                 Argon 70 l/min   Argon 32 l/min


Figure 3.12: Plate used for welding experiments in Paper E.

In the second set-up (Paper F), the material used was 2 mm thick sheets of stainless steel, 40 x 300 mm, laser cut to have three sections with varying gaps.


The maximum gap widths of the three sections were approximately 0.1, 0.2 and 0.3 mm, and the sections between the gaps had technically zero gap, see Figure 3.13.


Figure 3.13: Plate used for welding experiments in Paper F.

In both set-ups, the plates were tack welded at four or five positions in order to avoid distortions during welding. Welding was conducted without filler wire in the first set-up (Paper E); in the second (Paper F), three test cases were conducted: without filler wire, with a constant filler wire feed rate and with a controlled filler wire feed rate. The filler wire material was stainless steel, and the wire diameter was 1 mm.

Chapter 4

Estimating the joint position during LBW of closed-square-butt joints

This chapter presents the proposed solutions for the problem of estimating the joint position during LBW of closed-square-butt joints.

4.1 Problem description

LBW enables narrow and deep welds with a limited HAZ, which in turn minimises the thermal distortion of the welded components [3]. One disadvantage of LBW is its sensitivity to beam offsets. Welding with a beam offset could result in lack of sidewall fusion. Lack of sidewall fusion is a serious defect giving a weak weld, and it might not be visible on either the top side or the root side of the work piece. This defect is also hard to detect using non-destructive testing, such as ultrasonic testing, since when the joint fit-up tolerance is very tight, only a very flat vertical void appears in the lack of sidewall fusion area. Inaccuracies in fixturing, tolerances in the LBW tool manipulator (e.g. the robot), heat induced distortion of the welded components etc. create deviations from the nominal weld path. Therefore there is a need to control the laser beam spot position during welding to make sure it is always within a specified distance from the actual joint position. Joint tracking is traditionally conducted using a camera and projected structured light, where a laser triangulation method is used to get a profile over the joint between the parts to be welded. Several commercial systems are available that use this method; examples can be found in [28][86][89]. Even though those methods work well for many joint configurations, they might not be robust enough for closed-square-butt joint welding of machined parts, where tolerances for gap and misalignment between the parts are close to zero. These systems might also be sensitive to surface scratches that could be interpreted as the actual joint and in this way mislead the tracking system. Figure 4.1 shows a camera image where it is difficult to distinguish between the joint and surface scratches. The systems may also be disturbed by tack welds hiding the view of the joint.


Figure 4.1: Image from the camera showing the joint and several scratches.

4.2 Proposed solutions

Three different optical sensors (a camera, photodiodes and a spectrometer) are evaluated for their ability to detect and/or estimate beam offsets that could potentially cause lack of sidewall fusion.

4.2.1 Using a camera to estimate the beam offset

This section presents the proposed solution using a camera to estimate the beam offset, both during welding of straight (Paper C) and curved (Paper D) joint paths. The proposed method for estimating the beam offset is divided into four steps, shown in Figure 4.2, and each of these steps is described in the following.

Image → Vision algorithm → Gating → Data association → Model based filter → Beam offset

Figure 4.2: The different steps in the joint tracking algorithm.

Vision algorithm

By integrating a camera in the LBW tool (see Section 3.2.2) observing the area around the laser beam interaction zone, it is possible to obtain a view of the joint in front of the keyhole, and also the position of the keyhole, see Figure 4.3. Hence it is possible to extract the joint position from the image and to calculate the beam offset.


Figure 4.3: Keyhole, joint and beam offset for a straight weld.

In Figure 4.4, the joint is clearly visible in front of the keyhole, but a nearby scratch is also visible that could be misinterpreted as the joint. Searching for the strongest line in this image, using edge detection and the Hough transform [42], selects the scratch instead of the joint. Hence, this situation must be handled by the joint tracking algorithm. The keyhole will always be in the same position in the image, since the camera uses the same optical path as the processing laser beam; therefore this position can be determined beforehand.


Figure 4.4: Joint and scratch in front of the keyhole, the scratch is selected by the algorithm (green line).

The image from the camera needs to be scaled and calibrated in order to relate the image pixels to the surface of the work piece. Two different coordinate systems are defined, one for the work piece (the work object coordinate system) and one for the camera image (the image coordinate system). Figure 4.5 shows how these two coordinate systems are related. This kind of calibration is conducted using a camera model, and several models of different complexity exist [42]. The simplest model, called the pinhole camera model, does not take any distortions of the optical lenses into account. In this work it is however considered good enough, since only a small area of the complete image is used for the calculations and distortions usually occur in the outer parts of the lenses.

Figure 4.5: The relation between the work object and image coordinate systems.
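Under the pinhole assumption, and with the camera viewing the surface at a fixed working distance, the mapping from image to work object coordinates reduces to a scale factor and an offset per axis. A minimal sketch, where the scale and reference pixel are placeholder values that would come from a calibration, is shown below:

```python
def pixel_to_workpiece(u, v, scale_mm_per_px, u0, v0):
    """Map image coordinates (u, v) [pixels] to work object coordinates [mm],
    assuming a pinhole camera at a fixed distance so that lens distortion can
    be neglected. (u0, v0) is the pixel chosen as the work object origin."""
    x = (u - u0) * scale_mm_per_px
    y = (v - v0) * scale_mm_per_px
    return x, y

# Placeholder calibration: 0.025 mm/pixel, origin at pixel (320, 130)
print(pixel_to_workpiece(400, 210, 0.025, 320, 130))   # (2.0, 2.0) mm
```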

As can be seen in the image from the camera (Figure 4.3), the joint appears as a dark line. A misalignment between the work pieces could cause a shadowing that enhances this feature. It has been shown in the experiments presented in [2] that this dark line can be detected by the camera, assuming it has a good enough pixel resolution, even when the fit between the work pieces is very tight and the misalignment is close to zero. The aim of the vision algorithm is to robustly detect the dark line representing the joint and to measure the beam offset, which in the image is in the y-direction, from the keyhole. The approach chosen to identify the line representing the joint is to use a Standard Hough Transform (SHT) [90], which is a relatively fast way to find straight lines in a binary image. The input to the SHT is a binary image, preferably consisting only of the edge pixels of the line representing the joint. Hence a prior binarisation of the image is very important in order to get good enough results from the SHT. Several methods exist, including simple thresholding and different types of edge detectors [42]. Thresholding works well when the illumination is constant over time, but when it is changing, as in this case due to disturbances from the welding process, it is not robust enough, because the adaptation of the threshold value to the changing illumination has to be very accurate. An edge detection method is more robust, since it does not only depend on a simple threshold value but also uses a gradient vector to find edges [42]. Among the available edge detector methods, the Canny edge detector [91] is considered the most robust due to its low error rate, its ability to locate the true edges and its single edge point response to each edge [92]. The Canny method is selected in this work; it finds the edges by the gradient magnitude of a Gaussian smoothed image. It uses two different thresholds to detect both strong and weak edges, which makes the method suitable here since the strength of the edges varies a lot between different images due to interference from the process and varying joint geometries. The surface structure of the work piece and scratches near the joint will result in the SHT detecting false lines that are not the joint. To reduce false detections due to the surface structure, a median filter is applied to the image before applying the SHT [93]. A median filter is suitable in this situation since it reduces noise while still preserving edges. Lines are in the SHT represented by the distance from the origin in the image coordinate system, d_r, and the angle between the horizontal axis and the normal from the origin to the line, θ, according to

\[
d_r = x\cos\theta + y\sin\theta \tag{4.1}
\]

This is illustrated in Figure 4.6, where the line is shown in red and the distance d_r and angle θ are indicated.

Figure 4.6: Line representation in SHT in the image coordinate system.
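A sketch of this pre-processing and line extraction chain, using the OpenCV implementations of the median filter, the Canny detector and the SHT (the threshold values and kernel size are assumptions, not the values tuned for Paper C):

```python
import cv2
import numpy as np

def detect_joint_lines(gray, n_lines=3):
    """Median filtering, Canny edge detection and the Standard Hough Transform,
    returning up to n_lines (d_r, theta) candidates for near-horizontal lines."""
    smoothed = cv2.medianBlur(gray, 5)                 # suppress surface texture
    edges = cv2.Canny(smoothed, 50, 150)               # dual-threshold edge detection
    # Restrict theta to roughly horizontal lines, since the joint always
    # appears horizontal with this tool orientation
    lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=80,
                           min_theta=np.pi / 2 - 0.2, max_theta=np.pi / 2 + 0.2)
    if lines is None:
        return []
    return [tuple(l[0]) for l in lines[:n_lines]]      # strongest candidates first

# img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical camera frame
# print(detect_joint_lines(img))
```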

The SHT is capable of finding lines of any direction in the image; however, searching all possible directions is computationally expensive, hence limiting the possible angles to search for is beneficial. Since the orientation of the LBW tool, and therefore also of the camera, is fixed with respect to the welding direction, the joint will always appear as a straight horizontal line in the image. Due to this it is possible to limit the possible angles of the lines in the SHT and thereby also reduce the processing time of the algorithm. In the SHT it is assumed that every edge pixel in the binary image could be part of a line represented by the definition in Equation (4.1). A two-dimensional accumulator matrix is then used, representing the unknown parameters d_r and θ. Finding the maximum values in this matrix gives the most significant lines in the image, represented by the parameters d_r and θ, and from those it is possible to find the lines in the image. When welding curved joint paths, as shown in Figure 4.7, the SHT can no longer be applied since it is only valid for straight lines. The proposed solution, presented in Paper D, is a modified Hough transform where the joint is modelled as a second order polynomial that represents the joint curvature in front of the keyhole:

\[
y = \alpha x^2 + \beta x + \gamma \tag{4.2}
\]

Figure 4.7: The keyhole, joint and beam offset for a curved joint.

A set of α and β parameters is selected based on the maximum radius of the curve describing the nominal welding path, since this decides the maximum inclination and increase of inclination of the curve. The parameter γ is then calculated, by Equation (4.3), for all edge pixels in the image and all combinations of α = [α_1,...,α_n] and β = [β_1,...,β_n], where n is the selected number of combinations for the α and β parameters. The number of times a certain combination of α, β and γ occurs is stored in an accumulator matrix, and by finding the maximum value in this matrix the combination of parameters with the best fit to all pixels can be found.

\[
\gamma = y - \left(\alpha x^2 + \beta x\right) \tag{4.3}
\]

The measured joint position can then be found, for each new image, by applying the second order polynomial, with the α, β and γ parameters from the accumulator matrix, to the image. When a scratch is present near the joint, it is hard to distinguish from the joint, see Figure 4.4. A model based joint prediction approach, presented in Paper D, is proposed to address this issue. Since the nominal welding path is known in advance, from the CAD file or the programmed robot trajectory, it is possible to predict an approximate parametrisation of the joint curve for each image. A model of the predicted curve can be obtained by translating the joint path from the work object coordinate system, at the current TCP, to the image coordinate system. The parameters from the model are then compared with those obtained by the image processing algorithm, and in this fashion the measurement that has the best fit to the model can be selected. The best fit to the model is calculated as the minimum absolute difference of the curve parameters between the model and the curve extracted by the image processing algorithm. Searching only for the most significant line or curve would result in a relatively large number of images where a line or curve not representing the joint is found. The approach selected to avoid this problem is to extract the three most significant lines or curves from the Hough transform, and then select the most probable one based on previous knowledge of the joint position. Figure 4.8 shows an example where three lines are found in the image, and one of the lines represents the actual joint.
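A compact sketch of the modified Hough transform: every edge pixel votes for a γ value for each (α, β) candidate according to Equations (4.2)-(4.3), and the parameter combination with the most votes is returned (the grid ranges and bin sizes below are arbitrary examples):

```python
import numpy as np

def polynomial_hough(edge_pixels, alphas, betas, gamma_bins):
    """Accumulate votes for the curve y = alpha*x**2 + beta*x + gamma.
    edge_pixels : array of (x, y) edge coordinates
    alphas/betas: candidate parameter values (1D arrays)
    gamma_bins  : bin edges for the gamma axis of the accumulator
    Returns the (alpha, beta, gamma) combination with the most votes."""
    acc = np.zeros((len(alphas), len(betas), len(gamma_bins) - 1), dtype=int)
    x, y = edge_pixels[:, 0].astype(float), edge_pixels[:, 1].astype(float)
    for i, a in enumerate(alphas):
        for j, b in enumerate(betas):
            gamma = y - (a * x**2 + b * x)            # Eq. (4.3) for all pixels at once
            hist, _ = np.histogram(gamma, bins=gamma_bins)
            acc[i, j] += hist
    i, j, k = np.unravel_index(np.argmax(acc), acc.shape)
    return alphas[i], betas[j], 0.5 * (gamma_bins[k] + gamma_bins[k + 1])

# edges = np.argwhere(edge_image > 0)[:, ::-1]   # (x, y) from a binary edge image
# print(polynomial_hough(edges, np.linspace(-1e-3, 1e-3, 21),
#                        np.linspace(-0.3, 0.3, 31), np.arange(0, 320, 2)))
```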


Figure 4.8: Detected lines (in green) and their projections on the x-axis of the keyhole (red circles). The blue star is the estimated laser beam spot centre.

By projecting the lines found by the Hough transform towards the x-position of the centre point of the keyhole, measurements of the joint position are obtained on the same vertical axis as the keyhole, or at a selected distance in front of the keyhole to compensate for the dynamics of the system. These three measurements are shown as red circles in Figure 4.8. There are two main advantages of doing this: first, the measurement will not be in front of the keyhole but at the same position as the keyhole in the welding direction (or at a selected distance in front of the keyhole); second, the measured position can only vary in one direction (the y-direction), which makes the problem one dimensional and therefore fast and efficient in the following steps of the algorithm. This results in three possible measurements, represented by three different y-positions.

Gating of measurements

The next step is gating, meaning removal of unlikely measurements. From the previous step, three different possible measurements are given. By evaluating the maximum possible motion of the laser beam spot between two consecutive images from the camera (depending on the FPS of the camera), a threshold can be defined that limits how far away from the previously estimated joint position a measurement can be. Measurements further away than this threshold are considered unlikely to be measurements of the joint and are therefore removed. It may be the case that none of the measurements are within the threshold, e.g. due to noisy images, resulting in an empty set of measurements. The output from the gating step is thus zero to three possible measurements.

Data association

The next step is to select the most probable measurement. Here the task is to associate the most likely measurement left from the gating step to the estimated joint position based on previous measurements. If no measurement is left from the gating step, this step is not executed. The most probable measurement is selected as the measurement with the smallest distance from the previously estimated joint position. The output from this step is a single measurement representing the most likely measurement of the joint position from the vision algorithm.

Model based filter

Due to noisy images, caused by process disturbances, scratches on the work piece surface etc., the measured position from the previous steps benefits from filtering. Tack welds may also cover the joint, preventing the acquisition of information about the joint position in a sequence of images. Many options are available when it comes to filtering, from simple smoothing filters such as moving average filters to more complex model based filters [94]. The Kalman filter [70] is chosen for this work, since it provides signal filtering and also a means to estimate the joint position even when no measurement is available. It can also handle several measurements, which makes it suitable for sensor fusion [70]. A Kalman filter bases its estimates on information from one or several measurements and on a model describing the expected behaviour. Different motion models can be used for tracking: constant position, constant velocity or constant acceleration [70]. The position in the image to be tracked is assumed to be the same as in the previous image as long as the laser beam spot is not moving away from the joint. Since this is the normal case, a constant position model can be used, which gives a simple and fast implementation. Inaccuracies in fixturing, robot motion and process induced distortions are modelled by so called state noise (the state is here the current position). The variance of the state noise is difficult to derive from data; however, an estimate can be obtained by calculating the maximum movement of the robot based on the welding travel speed and the frame rate of the camera. This gives a starting value for the state noise variance when tuning the algorithm. The output from this final step of the joint tracking algorithm is the distance between the estimated joint position and the current position of the keyhole. This value should be used in a future control algorithm to correct the path of the laser beam spot by moving the manipulator towards the joint position. A reasonable assumption is that the maximum beam offset should not be more than the radius of the laser beam spot in order to avoid lack of sidewall fusion, in this case 0.56 mm (since the beam diameter is 1.12 mm). Experiments applying the proposed algorithm, presented in Papers C and D, showed that this can be achieved, even when process spectral emissions, smoke, tack welds and scratches are present that could disturb the tracking system.
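The gating, data association and constant position Kalman filter steps can be condensed into a few lines of code. The sketch below is a minimal one-dimensional version with made-up noise variances and gate size, not the tuning used in Papers C and D:

```python
class JointPositionFilter:
    """1D constant-position Kalman filter for the joint y-position, with a
    simple gate and nearest-neighbour data association for the (up to three)
    candidate measurements delivered by the vision algorithm."""

    def __init__(self, y0, q=0.01, r=0.05, gate=0.5):
        self.y, self.p = y0, 1.0      # state estimate [mm] and its variance
        self.q, self.r = q, r         # state and measurement noise variances
        self.gate = gate              # max plausible motion between frames [mm]

    def update(self, candidates):
        self.p += self.q                                   # predict (position unchanged)
        valid = [c for c in candidates if abs(c - self.y) < self.gate]   # gating
        if valid:                                          # data association: nearest
            z = min(valid, key=lambda c: abs(c - self.y))
            k = self.p / (self.p + self.r)                 # Kalman gain
            self.y += k * (z - self.y)
            self.p *= (1.0 - k)
        return self.y                                      # estimate even without measurement

# Candidate joint positions [mm] from three consecutive images; the far-off
# candidate in the second frame (e.g. a scratch) is rejected by the gate
f = JointPositionFilter(y0=0.0)
for cands in ([0.05, 1.9], [2.1], [0.12, 0.08]):
    print(f.update(cands))
```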

4.2.2 Monitoring beam offsets using photodiodes and wavelet analysis

Photodiodes are fast, inexpensive and easy to integrate in the LBW tool. Therefore, it is interesting to study how they can be used for LBW monitoring. A photodiode directed toward the keyhole, where the laser induced plasma plume is formed, can give information about the process [27]. The absolute light intensity is often not as important as the ability to detect sudden changes that can be related to disturbances in the process. The main problem with this method is how to interpret the photodiode signal in a robust way. The information from the photodiode is not straightforward to analyse, hence more complex methods must be considered than just monitoring the signal level obtained from the photodiode [41]. The most common way of analysing signals from photodiodes in LBW monitoring is analysis in the time and/or frequency domain. The difficulty in the time domain is that it is hard to correlate the level of the signal, at a specific time instance, to the state of the process. Time consuming calibrations are required, and if any condition in the process changes, the system needs to be re-calibrated. In the frequency domain, the time information is lost, and it is therefore difficult to detect at what time instance a process change occurs that might be related to a defect formation. Using wavelets, and obtaining a time-frequency resolved representation of the signal, gives a means to analyse changes in frequency content, which could be related to process disturbances, and also to get information about the time at which they occurred. Wavelet analysis of sensor signals obtained during LBW has been presented in a few previous studies, for acoustic signals [95][96], spectral emission signals [36][39][97][98] and electrode pressure signals [99]. It has proven to be a valuable tool for analysing sensor signals and their correlation to process deviations. However, wavelet analysis of photodiode signals and their correlation to beam offsets in LBW has, to the author's knowledge, not previously been studied.

Time, frequency and time-frequency analysis

There are several ways to analyse a signal [100]. The signal can be analysed with high time resolution in the time domain; however, this reveals limited information about the frequency content. The frequency content of the signal can be obtained in the frequency domain, but here the time resolution is lost. One way to overcome those issues, and obtain both high time and frequency resolution, is to analyse the signal using wavelet analysis [101]. The Fourier transform [100] can be estimated from a finite number of sampled data points by applying the discrete Fourier transform (DFT). However, both the Fourier transform and the DFT are based on periodic signals (sums of sines and cosines) and can therefore not correctly represent a non-periodic signal. One way to get a better representation of non-periodic signals is to use the windowed Fourier transform (WFT), which divides the signal into smaller sections that are analysed separately. In this fashion, it is possible to retrieve information about the signal in both the time and frequency domains. However, the WFT does not provide an accurate time-frequency localisation; inaccurate results occur due to aliasing of frequency components that do not fall within the frequency range of the window [102]. Both the Fourier transform and the wavelet transform are conversions in function space to a different domain [101]. While the Fourier transform uses basis functions that are sines and cosines, the wavelet transform uses more complex basis functions called wavelets. In contrast to the Fourier transform, which is only localised in frequency, the wavelet transform is also localised in time. Since a constant window size is used in the WFT, the resolution in time-frequency is the same for all locations, whereas in the wavelet transform the window varies, making it possible to obtain both high time and high frequency resolution. Also, since it enables a large number of basis functions, information can be obtained that is hidden to Fourier analysis. There exist two types of wavelet transforms [102], the continuous wavelet transform (CWT) and the discrete wavelet transform (DWT). In this work, the CWT is used to study the correlation between the signal from the photodiode and beam offsets.


Wavelet analysis of the photodiode signal

A monitoring system based on a photodiode integrated in the LBW tool, see Section 3.2.4, is presented in Paper B. The photodiode captures the spectral emissions from the plasma plume in a spectral range between 150 and 550 nm, and correlations between the sampled photodiode signal, x_k (where k is the sample index), analysed by the CWT, and the beam offsets are studied.

The discretized CWT, W_n(s), of the signal x_k is defined as its convolution with a scaled and translated version of a wavelet mother function, Ψ(η) [102]:

\[
W_n(s) = \sum_{k=0}^{N-1} x_k\, \overline{\Psi}\!\left(\frac{(k-n)\,\delta t}{s}\right) \tag{4.4}
\]

where the overline indicates the complex conjugate of Ψ. The wavelet mother function, Ψ, depends on the parameter n for translation and s for scaling, and δt represents the sampling time. The wavelet is shifted by n steps so that local information around time k = n is contained in the transformed function, and the scaling factor s decides the window size in which the signal analysis is conducted. The result of the CWT, W_n(s), shows both the amplitude of the signal in relation to the scale and how this amplitude varies in time. Several wavelet mother functions exist, such as the Morlet and Paul wavelets, and a number of features should be considered when selecting one of them [102]. However, when the main interest is the wavelet power spectrum, as in Paper B, the selection of wavelet function is not critical since they will in this case give similar results. For the study in Paper B, the Morlet function was selected. It is composed of a complex exponential multiplied by a Gaussian function:

\[
\Psi_0(\eta) = \pi^{-1/4}\, e^{i\omega_0 \eta}\, e^{-\eta^2/2} \tag{4.5}
\]

where η is a non-dimensional time parameter and ω_0 is a non-dimensional frequency parameter. The relationship between the equivalent Fourier period and the wavelet scale can be derived analytically; doing this for the Morlet function gives ω_0 = 6 [102]. To make sure that the wavelet transforms are comparable at different scales, the wavelet function should be normalized to have unit energy [102]. Normalizing (4.5), and adding the wavelet scale factor s and the localized time index n, gives the scaled and translated versions of the mother wavelet function, called daughter wavelets:

\Psi_{n,s}(\eta) = \left( \frac{\delta t}{s} \right)^{1/2} \Psi_0\!\left( \frac{(k-n)\,\delta t}{s} \right)    (4.6)

The sampling time, δt, is for the photodiodes in Paper B equal to 10^{-6} s, and the set of scales, s, is defined as:


s_j = s_0\, 2^{\,j\,\delta j}, \qquad \{\, j \in \mathbb{N} \mid 0 \le j \le J \,\}    (4.7)

J = (\delta j)^{-1} \log_2\!\left( N\,\delta t / s_0 \right)    (4.8)

where N is the number of data points of the photodiode signal, s_0 is the smallest resolvable scale, set to 2δt, and δj is the separation between subsequent scales. The largest scale is defined by the parameter J. It represents the case when the length of the wavelet and the data to analyse are the same, and it determines the smallest frequency analysed, which in Paper B is equal to 1.5 Hz.

A computationally faster way to calculate the CWT is obtained in Fourier space [102], and this is therefore used for the analysis in Paper B. The wavelet transform then becomes the inverse Fourier transform of the product:

W_n(s) = \sum_{k_f=0}^{N-1} \hat{x}_{k_f}\, \hat{\Psi}^*(s\,\omega_{k_f})\, e^{\,i\,\omega_{k_f} n\,\delta t}    (4.9)

where k_f = 0, ..., N-1 is the frequency index, \hat{x}_{k_f} is the DFT of x_k, \hat{\Psi}(s\omega) is the DFT of \Psi(t/s), \hat{\Psi}^* is the complex conjugate of \hat{\Psi}, and the angular frequency \omega_{k_f} is defined as:

\omega_{k_f} = \frac{2\pi k_f}{N\,\delta t} \;\; \text{for } k_f \le \frac{N}{2}, \qquad \omega_{k_f} = -\frac{2\pi k_f}{N\,\delta t} \;\; \text{for } k_f > \frac{N}{2}    (4.10)

Since the wavelet function, Ψ, is complex, the wavelet transform, Wn(s), is also complex and can be divided into a real part and an imaginary part. The wavelet power spectrum is computed as the absolute square of the CWT, |Wn(s)|². The Normalized Wavelet Power Spectrum (NWPS) is in Paper B estimated by integrating the wavelet power spectrum over a sliding window of 100 ms and dividing it by the number of points in that window.
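As a concrete illustration of this procedure, the following is a minimal Python sketch (not the implementation used in Paper B) of the Morlet CWT evaluated in Fourier space according to Equations (4.5)-(4.10), followed by a sliding-window average of the wavelet power. The synthetic test signal and the scale spacing δj = 0.25 are illustrative assumptions.

```python
import numpy as np

def morlet_cwt(x, dt, dj=0.25, omega0=6.0):
    """FFT-based continuous wavelet transform with a Morlet mother wavelet."""
    N = len(x)
    s0 = 2 * dt                                        # smallest resolvable scale
    J = int(np.log2(N * dt / s0) / dj)                 # largest scale index, Eq. (4.8)
    scales = s0 * 2.0 ** (dj * np.arange(J + 1))       # scale set, Eq. (4.7)

    omega = 2 * np.pi * np.fft.fftfreq(N, d=dt)        # angular frequency grid, cf. Eq. (4.10)
    x_hat = np.fft.fft(x)                              # DFT of the photodiode signal

    W = np.empty((len(scales), N), dtype=complex)
    for i, s in enumerate(scales):
        # DFT of the unit-energy Morlet daughter wavelet (nonzero for positive frequencies)
        psi_hat = (np.pi ** -0.25) * np.sqrt(2 * np.pi * s / dt) \
                  * np.exp(-0.5 * (s * omega - omega0) ** 2) * (omega > 0)
        W[i] = np.fft.ifft(x_hat * np.conj(psi_hat))   # inverse transform of the product, Eq. (4.9)
    return W, scales

def sliding_nwps(wavelet_power, dt, window=0.1):
    """Average the wavelet power over a sliding time window (100 ms by default)."""
    n = max(1, int(round(window / dt)))
    kernel = np.ones(n) / n
    return np.array([np.convolve(p, kernel, mode="same") for p in wavelet_power])

# Example on a synthetic 1 MHz signal (the real input is the sampled photodiode signal)
dt = 1e-6
t = np.arange(0.0, 0.05, dt)
x = np.sin(2 * np.pi * 1500 * t) + 0.1 * np.random.randn(t.size)
W, scales = morlet_cwt(x, dt)
nwps = sliding_nwps(np.abs(W) ** 2, dt)
```

The band of interest (here 410 Hz to 7.1 kHz) can then be selected by keeping only the rows of the power matrix whose scales correspond to those frequencies.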

In Paper B, a frequency band from 1.5 Hz to 48.5 kHz was first examined, but it was later divided into sub-bands in order to identify the frequencies whose time variation had the largest correlation to the beam offsets. The result of this investigation showed that the frequency range between 410 Hz and 7.1 kHz contained most of the spectral signal component frequencies. This corresponds well to earlier investigations, described in Section 3.2.4, where it is shown that a bandwidth of 30 kHz would be sufficient to capture the dynamic behaviour of the plasma plume, and to [36], where it is concluded that the plasma spectral emission, under stable conditions, lies in the frequency band between 200 Hz and 15 kHz.

The NWPS is in Paper B studied in relation to beam offsets in order to find correlations between the two. The results show that the spectral emissions, in the selected frequency range between 410 Hz and 7.1 kHz, are more intense when welding with a beam offset. The change is clearer during sudden beam offsets, and it is also clearer in pulsed than in continuous wave LBW. By monitoring changes in the NWPS it is possible to detect beam offsets and in that way predict possible defects in the weld. Also, analysing the changes of the NWPS signal related to beam offsets, rather than its absolute value, reduces the need for calibration. Since the CWT is an off-line analysis, it is not suited for in-process control. However, it can give valuable information about possible defects and their position, to be used for guidance in non-destructive testing or as an indication that the part needs repair or should be scrapped.

4.2.3 Correlations between beam offsets and the plasma electron temperature

It has been shown that the plasma electron temperature can be used to characterize weld quality in LBW applications [58]. The study presented in this section, from Paper C, investigates if there exists any correlation between the plasma electron temperature and beam offsets.

Calculating the plasma electron temperature

Figure 4.9 shows a spectrum obtained by the spectrometer, see Section 3.2.3, during fibre LBW of stainless steel (see Table 4.1 for the chemical composition). Apart from the continuous background contribution, describing the thermal emission from the melt pool and hot vapours from the keyhole, several discrete emission lines are detected. These spectral lines originate from emission of excited atomic and ionic species from the laser induced plasma plume.

The spectral lines shown in Figure 4.9 are analysed by comparing wavelength and relative intensity to the NIST atomic database [88]. The majority of the lines belong to the excited atomic species Fe(I), Cr(I) and Mn(I), since these are the fundamental chemical elements in the material used. For each line, the transition probability, Amn, and the energies of the upper, Em, and lower, En, levels of the radiative decay are retrieved from the NIST database. Using those parameters, it is possible to calculate the plasma electron temperature, Te. This is an indirect estimation based on the assumption that the laser induced welding plasma is optically thin and in local thermal equilibrium [103].

Table 4.1: Chemical composition, in wt%, of stainless steel 316.

C          Si         Mn         P           S           Ni       Cr       Mo     Fe
Max 0.08   Max 1.00   Max 2.00   Max 0.045   Max 0.030   10-14    16-18    2-3    Balance


Figure 4.9: Spectrum acquired during fibre LBW of stainless steel (intensity [a.u.] versus wavelength [nm]).

The intensity, Imn, of a generic plasma optical emission line at wavelength λmn, associated with the transition from electronic level m to n, is related to the energy of the emitted photons, hc/λmn (where h is the Planck constant and c is the speed of light in vacuum), the transition probability, Amn, and the population of the excited state, Nm, according to [58]:

I_{mn} = N_m A_{mn} \frac{hc}{\lambda_{mn}}    (4.11)

Nm is given by the Boltzmann distribution (assuming local thermal equilibrium):

N_m = \frac{N}{Z}\, g_m\, e^{-E_m/(k_B T_e)}    (4.12)

where N is the total density of states, g_m is the statistical weight of the energy level, Z is the partition function and k_B is the Boltzmann constant. Combining Equations (4.11) and (4.12) and applying the natural logarithm gives the following relation:

\ln\!\left( \frac{I_{mn}\,\lambda_{mn}}{A_{mn}\, g_m} \right) = \ln\!\left( \frac{N\,hc}{Z} \right) - \frac{E_m}{k_B T_e}    (4.13)

By selecting several emission lines belonging to the same chemical element from the spectrum and plotting the left hand side of Equation (4.13) against the corresponding upper energy level, Em, it is possible to estimate Te from the slope of a linear fit. This method is called the Boltzmann-plot method and gives an estimation of the plasma electron temperature as long as the selected lines are unambiguously identified, free from self-absorption and not convoluted with other adjacent lines. The plasma electron temperature is calculated using the Boltzmann-plot method by selecting a number of Fe(I) emission lines from the spectrum. The lines are selected based on their intensity relative to the background level, choosing the ones with the highest intensity, and also by only selecting lines with relatively high transition probability (Amn). Figure 4.10 shows an example where seven Fe(I) emission lines are used to create a Boltzmann plot.

Figure 4.10: Boltzmann plot obtained by a set of seven Fe(I) lines measured from the spectrum shown in Figure 4.9.
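As an illustration of the Boltzmann-plot method, the sketch below (a minimal Python example, not the code used in Paper C) estimates Te from the slope of a least-squares fit of Equation (4.13). The line parameters are assumed to be given with the upper level energy converted to eV so that the units match the Boltzmann constant used.

```python
import numpy as np

K_B_EV = 8.617333262e-5  # Boltzmann constant [eV/K]

def boltzmann_plot_temperature(intensity, wavelength, A, g, E_upper_eV):
    """Plasma electron temperature from the slope of a Boltzmann plot.

    Each argument is an array with one entry per selected Fe(I) line:
    measured intensity, wavelength, transition probability A_mn,
    statistical weight g_m and upper level energy E_m (in eV, so values
    from the NIST database given in cm^-1 must be converted first).
    """
    y = np.log(intensity * wavelength / (A * g))       # left-hand side of Eq. (4.13)
    slope, _intercept = np.polyfit(E_upper_eV, y, 1)   # linear fit; slope = -1/(k_B * T_e)
    return -1.0 / (K_B_EV * slope)                     # electron temperature [K]

# Usage with the selected lines, e.g.:
# Te = boltzmann_plot_temperature(I_meas, lam_nm, A_mn, g_m, E_m_eV)
```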

The Boltzmann-plot method might not be possible to use for on-line monitoring due to the calculations requiring too much computing time. To overcome this, a simpler and faster method to calculate Te can be used where only two emission lines are considered:

T_e = \frac{E_m(2) - E_m(1)}{k_B \ln\!\left( \frac{I(1)\,A(2)\,g_m(2)\,\lambda(1)}{I(2)\,A(1)\,g_m(1)\,\lambda(2)} \right)}    (4.14)

where (1) and (2) represent the two emission lines. Even if this method might not be as accurate as the Boltzmann-plot method, it can be used on-line, and the exact temperature is not required in this work; rather, it is the change in temperature related to changes in the LBW process that is of interest. For the plasma electron temperature estimations presented in this work the lines Fe(I) 473.28 nm and Fe(I) 461.24 nm are used. Table 4.2 shows the data related to these emission lines.


Table 4.2: Spectroscopic parameters of the selected Fe(I) emission lines used for the electron temperature calculation (NIST database).

λ (nm)    Amn (s^-1)    Em (cm^-1)    gm
473.36    3.41E+04      33096         9
461.32    2.50E+06      48221         3
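A minimal sketch of the faster two-line method of Equation (4.14), using the spectroscopic parameters of Table 4.2; the intensity values in the usage example are placeholders in arbitrary units, not measurements from Paper C.

```python
import math

K_B_EV = 8.617333262e-5      # Boltzmann constant [eV/K]
CM1_TO_EV = 1.0 / 8065.544   # conversion factor from cm^-1 to eV

def two_line_temperature(I1, I2, line1, line2):
    """Plasma electron temperature from two emission lines, Eq. (4.14).

    line1 and line2 are dicts with wavelength 'lam' [nm], transition probability
    'A' [1/s], statistical weight 'g' and upper level energy 'Em' [cm^-1].
    """
    dE = (line2["Em"] - line1["Em"]) * CM1_TO_EV
    ratio = (I1 * line2["A"] * line2["g"] * line1["lam"]) / \
            (I2 * line1["A"] * line1["g"] * line2["lam"])
    return dE / (K_B_EV * math.log(ratio))   # [K]

# Fe(I) lines from Table 4.2
fe_473 = {"lam": 473.36, "A": 3.41e4, "g": 9, "Em": 33096}
fe_461 = {"lam": 461.32, "A": 2.50e6, "g": 3, "Em": 48221}

# I1 and I2 are the measured line intensities in each spectrum (placeholder values here)
Te = two_line_temperature(I1=6.8, I2=1.1, line1=fe_473, line2=fe_461)
```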

Experimental results, presented in Paper C, from monitoring the LBW process using a spectrometer show that it is possible to relate the plasma electron temperature to the beam position. They also show that this method is insensitive to tack welds covering the joint, which is a problem when using a vision system for joint tracking. These results indicate that a sensor fusion strategy including data from the vision system and the spectrometer system could increase the robustness of a joint tracking system.

Chapter 5

Laser beam welding of square-butt joints with varying gap width

This Chapter presents the proposed solutions for the problem of estimating the joint gap width and controlling the filler wire feed rate during LBW of square-butt joints with varying gap width.

5.1 Problem description

Adding filler wire to the LBW process is a way to overcome the very strict demands on fit-up tolerances in autogenous LBW [5]. This is possible since the filler wire fills up the gap, making the process less sensitive to variations in joint geometry. A rule of thumb in autogenous LBW is that the joint gap width tolerance is approximately 10 % of the joint thickness [6]. By adding filler wire, however, gaps up to 1 mm wide can be bridged when welding 2 mm thick sheets of stainless steel. Even if the joint gap variations can be reduced by joint preparation, achieving strict fit-up tolerances is very costly. By adding filler wire to the process these strict fit-up tolerances can be somewhat relaxed, which leads to a lower cost for joint preparation. However, reducing the demand on the joint geometry and allowing the gap to vary requires in-process measurements of the joint gap width, and that the process is controlled in order to avoid defects in the weld. This can be achieved by controlling the laser power, the welding travel speed, the filler wire rate, or by weaving the laser beam, etc. Control of the filler wire rate is studied in Paper F as a means to bridge the gap during LBW of square-butt joints with varying joint gap width.

The LBW process becomes more complex when filler wire is added, and more parameters to control are introduced. Especially important and difficult is the wire position with regards to the laser beam and the work piece. If this is not set up correctly the result might be worse than without wire. The wire feed rate influences the geometry of the weld. Yang et al. [104] showed that the wire feed rate influences the upper width of the weld bead and also the FZ. The filler wire can be added either in a trailing or a leading direction, see Figure 5.1. However, a leading direction is generally preferred since the stability and melting efficiency of the wire are higher [84]. Using a trailing feed direction makes the wire interact with the solidifying edge of the melt pool, which affects the appearance of the seam. With the filler wire in a leading direction the wire hides the view of the joint in front of the melt pool, making it impossible for optical sensors to view the joint, hence measurement of the joint gap width cannot be conducted.


Figure 5.1: Filler wire welding travel configurations, (a) trailing wire, (b) leading wire.

To overcome this issue, the filler wire is in this work fed in an off-axis configuration, applied in a leading direction but with an angle of 30 degrees in the work piece plane between the wire and the joint, see Figure 5.2. The stability of this set-up has been studied by Siva Prasad et al. [84]. They showed that this configuration provided acceptable results at welding travel speeds up to 8 m/min. However, undercuts were more pronounced on the side from which the filler wire was fed.

5.2 Proposed solutions

The issue of estimating the joint gap width has been addressed by experimental work using a camera and spectrometer in Paper E, and a camera and a laser profile sensor in Paper F. Combining information from the camera and the laser profile sensor, by sensor fusion, in order to increase the accuracy and robustness of the estimates has also been studied in Paper F.



Figure 5.2: Filler wire in a 30 degree off-axis configuration, (a) illustration, (b) image from camera.

5.2.1 Correlation between spectrometer signals and joint gap width

The process light emitted during LBW of plates with varying joint gap width is in Paper E captured by a spectrometer, see Section 3.2.3, and the correlation between the spectrometer signal and the joint gap width is studied. Experiments were conducted on plates with varying joint gap width, see Section 3.3.2, and the light emitted from the plasma plume was recorded during welding by the spectrometer. The data was analysed off-line by studying the intensity change of four different spectral lines while the joint gap width was varying. In this study, since the work piece material was stainless steel, most identified lines originated from iron or chromium. One of the spectral lines, at 506.29 nm (Fe(II)), was selected for further analysis, since the analysis showed a good correlation to the gap width.

The LBW experiment was repeated on five different plates, with the same joint preparation comprising four sections of varying joint gap width (0.2, 0.4, 0.6 and 0.8 mm). When the joint gap width was 0.2 mm, the change in intensity was not clear. However, in this experiment 4 mm thick plates were welded, and since a gap of approximately 10 % of the plate thickness is acceptable, this gap should not cause any defects. The intensity change is clearer when the gap increases to 0.4 mm, which is the theoretical limit for the gap width. At a 0.6 mm gap the intensity change is very clear. This gap is too wide to be bridged; a burn-through occurs, which is clearly indicated by a dip in intensity. At the 0.8 mm gap the burn-through is even more obvious, and the change in intensity also clearly indicates this. Even if this method does not explicitly give the size of the joint gap width, it gives an indication when the gap width increases. In an industrial application it is probably more suitable as an indication that the gap cannot be bridged than as an exact measurement method for filler wire feed rate control.


5.2.2 Using a laser profile sensor and/or a camera to estimate the joint gap width

A sensor system, comprising one camera, two LED lamps and a laser line module, see Section 3.2.2, is suggested in Paper F for measuring the joint gap width during LBW of plates with varying joint gap width. The camera is either used together with the laser line module to obtain a distance profile just in front of the keyhole, or together with the LED illumination in order to obtain grey scale images of the area just in front of the keyhole. In this fashion, two different sensors, a camera and a laser profile sensor, are obtained using a single camera. The ability of both sensors to measure the joint gap width during LBW is evaluated, both individually and by combining data from the sensors using sensor fusion.

Laser profile sensor

The laser profile sensor is designed by projecting a laser line onto the work piece at a fixed angle in relation to the camera. In this fashion a distance profile can be obtained by triangulation, making height measurements possible. Also, since the projected laser line will not be reflected where there is a joint gap, it is possible to measure the gap width by localising the position on both sides of the joint gap where the laser line is no longer reflected. Since only a small portion of the image contains information regarding the gap width, a region of interest (ROI) is selected around the laser line where the joint is expected to be. Keeping this ROI as small as possible is important, especially in a real-time implementation, since reducing the number of pixels also reduces the processing time needed in the following steps of the gap width calculations.

The ROI image, shown in Figure 5.3 (a), is binarized using thresholding, giving a new image containing only pixel intensity values of one (white) or zero (black), shown in Figure 5.3 (b). This makes the laser line solid and the edges around it more distinct. Next, the laser line is thinned, making it one pixel wide in order to obtain a distance profile. This is conducted by, for each row, localising the positions of the first and the last white pixel and then selecting the pixel in the middle between the two. In this fashion, a distance profile can be obtained, as shown by the red line in Figure 5.3 (c). When a gap is present within the ROI, the laser light will not be reflected, hence a section of the identified red line will be interrupted. This section corresponds to the gap width, which is calculated from the number of pixels it represents. Figure 5.4 shows the extracted distance profile, where the x-axis represents the horizontal distance on the work piece and the z-axis represents the vertical distance (height). The gap width is shown as the interrupted section of the curve. The height difference between the left and the right plate (misalignment) can also be obtained from the difference along the z-axis. It is not used in this work, but is considered interesting information for future work.


Figure 5.3: The ROI image. (a) Grey scale image of the ROI, (b) binarized ROI, (c) binarized ROI showing the thinned line (in red).


Figure 5.4: The horizontal dots show the distance profile obtained from the thinned line (red line in Figure 5.3 (c)). The vertical dotted lines mark the measurement of the gap width.
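The profile-based gap measurement described above can be sketched as follows in Python. The threshold and the pixel-to-millimetre scale are illustrative assumptions, and the ROI is assumed to be a grey scale image with the laser line crossing its rows; this is a simplified sketch, not the implementation used in Paper F.

```python
import numpy as np

def profile_gap_width(roi, threshold=128, mm_per_pixel=0.05):
    """Joint gap width from a grey scale ROI containing the projected laser line.

    The ROI is binarized, the laser line is thinned to one pixel per row
    (the midpoint between the first and last white pixel), and the gap is
    taken as the longest run of rows where no laser line is reflected.
    """
    binary = roi >= threshold                    # thresholding -> white (True) / black (False)
    line_pos = np.full(roi.shape[0], -1)         # thinned line column per row, -1 = no line
    for r in range(roi.shape[0]):
        cols = np.flatnonzero(binary[r])
        if cols.size:                            # laser line reflected on this row
            line_pos[r] = (cols[0] + cols[-1]) // 2

    best = run = 0                               # longest run of rows without reflection
    for p in line_pos:
        run = run + 1 if p < 0 else 0
        best = max(best, run)
    return best * mm_per_pixel                   # gap width in millimetres

# Usage: gap_mm = profile_gap_width(frame[y0:y1, x0:x1])
```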


Camera

Good image data of the area in front of the melt pool, where the joint is visible, can be obtained by using high power LEDs at an appropriate wavelength and matching optical filters in front of the camera. This enables calculation of the joint gap width by applying suitable image processing algorithms. A ROI is selected a certain distance in front of the keyhole to assure that the joint is not hidden by the filler wire, and is used in the following to calculate the joint gap width. Limiting the ROI to 50 × 100 pixels lowers the computational time needed for the gap width calculations, making it suitable for a real-time application. Figure 5.5 (a) shows the ROI, rotated 90 degrees, and the gap marked by the dotted red lines. The joint gap width is calculated by first finding the mean intensity of each row in the ROI, αy. Doing this for all rows gives a mean intensity vector with the same length as the number of rows in the ROI (100 in this implementation), see Figure 5.5 (b). A clear drop in the intensity curve occurs in the gap area since almost no light is reflected where there is a gap.


Figure 5.5: (a) ROI rotated 90 degrees, the red dotted lines mark the joint gap. (b) The mean intensity of each row in the ROI.

The gap width can then be calculated from the minimum and maximum values of the first derivative of the mean intensity vector, α′y, as shown in Figure 5.6.



Figure 5.6: The first derivative of the mean intensity vector (5.5 (b)). The horizontal red dotted lines indicate the gap width.
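A minimal sketch of this intensity-profile approach, assuming the ROI is oriented so that the joint crosses the rows and using an illustrative pixel-to-millimetre scale (not a value from Paper F):

```python
import numpy as np

def camera_gap_width(roi, mm_per_pixel=0.05):
    """Estimate the joint gap width from a grey scale ROI in front of the keyhole.

    The mean intensity of each row is computed; the joint gap appears as a dip
    in this profile, and its extent is taken between the minimum and maximum of
    the first derivative of the mean intensity vector.
    """
    alpha = roi.mean(axis=1)           # mean intensity per row
    d_alpha = np.diff(alpha)           # first derivative of the intensity profile
    start = int(np.argmin(d_alpha))    # steepest drop -> first edge of the gap
    stop = int(np.argmax(d_alpha))     # steepest rise -> second edge of the gap
    return abs(stop - start) * mm_per_pixel

# Usage: gap_mm = camera_gap_width(frame[y0:y1, x0:x1])
```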

Fusing data from the laser profile sensor and the camera

In Paper F, data from the camera and the laser profile sensor are fused to investigate if this could further improve the gap width estimates. Fusing the data from the two sensors is conducted by an implementation of a Kalman filter, since it has built-in support for sensor fusion [70]. The gap measurements from the camera and from the laser profile sensor are used as measurement inputs to the filter. A Kalman filter bases its estimation on both the measurements and a model that describes the expected behaviour. The model used here assumes that the gap width does not change between two consecutive image frames, but a noise component is added to allow for small changes. This should be a fair assumption, since images are obtained every 10 ms (the camera runs at 100 FPS for each sensor) and the welding travel speed is 15 mm/s. This means that the tool travels 0.15 mm between each image frame, and it is not likely that the gap width would change very much during this movement.
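A minimal sketch of such a fusion, modelling the gap width as a scalar random walk and using illustrative noise variances loosely based on the reported sensor errors (the actual tuning used in Paper F is not reproduced here):

```python
import numpy as np

class GapFusionKF:
    """Scalar Kalman filter fusing two gap width measurements per image frame.

    The state is the joint gap width [mm], assumed constant between frames
    apart from a small process noise; the camera and the laser profile sensor
    each provide one measurement of this state per frame.
    """
    def __init__(self, q=1e-4, r_camera=0.12**2, r_profile=0.05**2, x0=0.0, p0=1.0):
        self.x, self.P = x0, p0                    # state estimate and its variance
        self.q = q                                 # process noise (allows slow gap changes)
        self.R = np.diag([r_camera, r_profile])    # measurement noise variances

    def update(self, z_camera, z_profile):
        self.P += self.q                           # predict: gap unchanged since last frame
        z = np.array([z_camera, z_profile])
        H = np.ones(2)                             # both sensors observe the same state
        S = self.P * np.outer(H, H) + self.R       # innovation covariance
        K = (self.P * H) @ np.linalg.inv(S)        # Kalman gain (row vector)
        self.x += K @ (z - H * self.x)             # fuse both measurements
        self.P *= 1.0 - K @ H
        return self.x

# Usage per image frame:
# kf = GapFusionKF()
# fused_gap = kf.update(gap_from_camera, gap_from_laser_profile)
```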

Results

The results from LBW experiments show that the gap width can be estimated with high accuracy for both sensors. The laser profile sensor gives a maximum error of approximately 0.05 mm. The error is slightly larger for the camera than for the laser profile sensor, giving a maximum error of approximately 0.12 mm. This occurs, however, only during a short peak; the majority of the estimates have an error below 0.05 mm. The results from fusing the measurements from the laser profile sensor and the camera are shown in Figure 5.7. The fit between the estimated fused data and the reference measurements is here better than when using only the camera or the laser profile sensor separately, and the error is below 0.05 mm for all estimations. Calculations of the mean error, standard deviation and 1-norm of the error show better results for the fused data in all experiments.



Figure 5.7: Result from gap width estimations of the fused data. The upper graph shows the estimated gap and the manually measured gap as a reference. The lower graph shows the error between the estimate and the reference.

5.2.3 Filler wire feed rate control

The filler wire feed rate control system, presented in Paper F, comprises a measurement PC for calculation of the joint gap width based on sensor inputs as described in the former section, an industrial robot, see Section 3.1.1, and a wire feeder, see Section 3.1.3. The calculation of the filler wire feed rate is based on the estimated joint gap width. The nominal wire feed rate, vn, programmed in the robot controller, is applied for gap widths ≤ 0.1 mm. For wider gaps, it is adjusted based on the estimated joint gap width, calculated by Equation (3.1). The adjusted filler wire feed rate, v_wire, is then sent to the filler wire feeder. Figure 5.8 illustrates the filler wire rate control system. The following also needs to be considered when calculating the filler wire feed rate: (i) the sensor measures the gap width a certain distance, d, in front of the keyhole; (ii) the wire feeder has a dynamic behaviour, hence a certain time will elapse from the requested wire feed rate until it is achieved.



Figure 5.8: Filler wire feed rate control system.

The transfer function, G(s) = G1(s)G2(s), describes this dynamic behaviour. The dynamics of the sensing system is modelled as a time shift

G_1(s) = e^{\tau_{d1} s}    (5.1)

where τd1 is a function of the welding travel speed, v, and the distance, d, such that τd1 = d/v. The dynamics of the filler wire feeder is modelled as a first order system with a dead time

G_2(s) = \frac{e^{-\tau_{d2} s}}{1 + \tau s}    (5.2)

where τd2 represents the dead time and τ represents the time constant of the filler wire feeder. This model has been obtained by an experimental system identification [105] where Δv was changed in steps and the wire feed rate was measured by a wire feed rate measurement device (WireTrak A3A0218). The results show that τd2 = 0.01 s and τ = 0.144 s, hence the total time delay from the request for a change in wire feed rate until it is reached will be 154 ms. Since v is known, it is possible to choose d such that τd1 matches this total delay in order to compensate for it in the control system. As an example, if v is 15 mm/s, the distance d becomes d = 0.154 · 15 ≈ 2.3 mm, hence measuring the joint gap width 2.3 mm in front of the keyhole would compensate for the dynamics of the wire feeder. This distance should determine the position of the ROI in the camera image.
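A minimal sketch of the resulting feed-forward scheme, with a hypothetical linear gap-to-feed-rate mapping standing in for Equation (3.1), which is not reproduced here, and the delay figures quoted above:

```python
def lookahead_distance(travel_speed, dead_time=0.010, time_constant=0.144):
    """Distance [mm] ahead of the keyhole at which to measure the gap so that the
    measurement preview (tau_d1 = d / v) compensates the wire feeder delay."""
    return (dead_time + time_constant) * travel_speed   # e.g. 0.154 s * 15 mm/s = 2.31 mm

def wire_feed_rate(gap_width, v_nominal, gain=1.0, gap_threshold=0.1):
    """Feed-forward wire feed rate: nominal up to the threshold gap width, then
    increased with the estimated gap (the linear gain is a placeholder for the
    relation given by Equation (3.1))."""
    if gap_width <= gap_threshold:
        return v_nominal
    return v_nominal * (1.0 + gain * (gap_width - gap_threshold))

# Example: at 15 mm/s the ROI is placed ~2.3 mm ahead of the keyhole
d = lookahead_distance(15.0)
v_wire = wire_feed_rate(gap_width=0.6, v_nominal=2.0)   # placeholder nominal rate [m/min]
```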

The filler wire feed rate control system has been evaluated by welding experiments as described in Section 3.3.2. With a constant wire feed rate, the reinforcement decreases with wider joint gaps. The adaptive wire feed rate, controlled by the filler wire rate control system, gives a slightly increased reinforcement as the gap widens. It can be concluded that the system effectively adapts the filler wire rate and compensates for the varying gap; however, the system should be tuned in order to produce a constant reinforcement independent of the joint gap width.

Chapter 6

Conclusion and contributions

This Chapter summarises the contributions that answer the research questions:

• Q1: How to robustly track the joint position during laser beam welding of closed-square-butt joints?

• Q2: How to robustly estimate the joint gap width and control the filler wire feed rate in laser beam welding of square-butt joints?

Due to disturbances in the robotised LBW process there is a need to monitor the process using sensor systems. The main question addressed in this work is how data can be captured in-process in order to control LBW, and the aim is to produce welds with better quality.

Research question 1 (Q1) is addressed in three papers, Paper B, C and D. Paper B proposes a method for detecting beam offsets from the joint by monitoring the spectral emissions from the plasma plume using a photodiode. A monitoring system comprising three photodiodes, each of them monitoring different spectral ranges, is developed in order to capture the spectral emissions from the LBW process. The signal from one of the photodiodes, which captures the spectral emission range related to the plasma plume, is analysed by a CWA. Welding experiments, where the laser beam was moved away from the joint in different manners, show that there is a correlation between the beam offset and the CWA signal. It is suggested that this method is useful as an indication that welding with a beam offset has occurred, and that the welded part needs to be further analysed.

Paper C proposes a dual sensor system, consisting of both a camera and a spectrometer, that synchronously monitors the LBW process during welding of closed-square-butt joints. A monitoring system comprising a camera and LED illumination is proposed that captures images of the area in front of the keyhole. It is shown that good image information can be obtained by selecting an appropriate wavelength for the LED illumination and the band pass filter placed in front of the camera. By applying an algorithm based on the Hough transform and a Kalman filter, it is shown that the joint position can be robustly tracked even if scratches that could be mistaken for the joint are present near the joint, and also when tack welds obscure the view for the camera. A spectrometer is also used for monitoring the plasma electron temperature during LBW. It is found that the plasma electron temperature can be related to how the laser beam spot is positioned with regards to the joint and can therefore be used in a joint tracking application. It is also found that this method is not sensitive to tack welds covering the joint. Therefore, a combination of the vision and spectrometer systems is suggested in order to improve the robustness of a joint tracking system. The algorithm for finding the joint position is further developed in Paper D. The algorithm proposed in Paper C only handles straight joint paths. A modified version of the Hough transform is presented in Paper D that also handles curved joint paths. A model based prediction of the joint path is also presented in order to make the tracking more robust. It can be concluded that the proposed solutions can achieve robust joint tracking of closed-square-butt joints.

Research question 2 (Q2) is addressed in two papers, Paper E and F. Paper E proposes a dual-sensing system, comprising a camera with LED illumination and a spectrometer, to estimate the joint gap width during LBW of square-butt joints with varying gap. The intensity of a spectral line obtained by the spectrometer is correlated to the joint gap width, and it is found that changes in intensity can be correlated to the joint gap width. The camera system captures images of the area in front of the keyhole, and an algorithm is proposed to estimate the joint gap width. Experimental results show that the gap width can be successfully estimated by the camera system. A dual sensing mode approach is proposed in Paper F to estimate the joint gap width. The sensor system comprises a camera, LED illumination and a laser line module. An algorithm is proposed that fuses the measurements from the two sensing modes in a Kalman filter in order to get robust estimates of the joint gap width. A feed-forward controller is proposed that adaptively adjusts the filler wire feed rate based on the joint gap width estimates. Experimental results show that the gap width can be estimated with good accuracy and that the system can be used to obtain a good seam geometry even when the gap width varies. It can be concluded that the proposed solutions can achieve robust estimates of the joint gap width, and that control of the filler wire feed rate is feasible in order to avoid defects in the welded seam.

The presented methods for joint tracking and wire feed rate control based on a camera and a laser profile sensor are considered to be industrially applicable for on-line control of the LBW process. Since both the joint tracking system and the wire feed rate control system use a similar sensor set-up, a combination of the two would be beneficial. The camera would be able to track the closed-square-butt joints and measure the gap width, and the laser profile sensor would be able to measure the gap width and also the misalignment between the work pieces, which could be a valuable complement to the gap width measurements. The camera and the laser profile sensor will in this way complement each other and constitute the basis for an on-line control system for squared-butt joint LBW.

The presented methods using photodiodes or a spectrometer are considered to be a proof of concept. They are a complement to the camera and laser profile sensor system for situations where a deviation in the LBW process should be detected but without controlling the process. They could in this case be a valuable tool used to indicate the occurrence of a process disturbance that might yield a defect in the produced seam, and be used for guidance in non-destructive testing or as an indication that the produced part should be scrapped.

The proposed sensor systems could also be used for logging data for off-line analysis, especially if something needs further investigation. They could also be used as operator information during welding in order to increase the operators' understanding of the status of the process and for traceability. Further, the proposed systems can play an important role in the future digitalisation of the manufacturing industry.

Chapter 7

Future work

Analysing the photodiode signal and its correlation to beam offsets using wavelets, in Paper B, showed that the signal contains useful information in the frequency domain. Other signal processing methods should be evaluated in order to find an on-line method that can detect beam offsets in real-time.

The joint tracking algorithm, presented in Paper D, has been implemented in a real-time system and experiments have been conducted on a demonstrator based on a curved joint path. The tracking performance of this implementation should be evaluated in order to further assess the robustness of the system.

The sensor systems, presented in Papers B-F, should be combined in order to get a more complete view of the LBW process. Different sensor fusion strategies should be evaluated and more features should be considered, such as misalignment.

The filler wire feed rate control system, presented in Paper F, should be tuned in order to always produce an even seam geometry. Larger gap widths should also be evaluated in order to find the limitations of the system with regards to bridging the gap. Other methods to bridge the gap should also be considered, such as controlling the laser power and the welding travel speed, and also weaving of the laser beam.

References

[1] “Global Laser Welding Machine Market 2018-2022 | Additive Manufacturing to Propel Growth | Technavio.” https://www.businesswire.com/news/home/20181231005088/en/Global-Laser-Welding-Machine-Market-2018-2022-Additive. Accessed February 10, 2019.

[2] F. Sikström, A. Runnemalm, P. Broberg, M. Nilsen, and E. Svenman, “Evaluation of non-contact methods for joint tracking in a laser beam welding application,” in 7th Swedish Production Symposium, pp. 1–6, 2016.

[3] W. M. Steen and J. Mazumder, Laser Material Processing. London: Springer London, 2010.

[4] Duley, Walter W, Laser Welding. Wiley-Interscience, 1999.

[5] U. Dilthey, D. Fuest, and W. Scheller, “Laser welding with filler wire,” Optical and Quantum Electronics, vol. 27, pp. 1181–1191, Dec. 1995.

[6] Z. Sun and M. Kuo, “Bridging the joint gap with wire feed laser weld- ing,” Journal of Materials Processing Technology, vol. 87, pp. 213–222, Mar. 1999.

[7] SIS, Svetsstandard - Kvalitet, konstruktion och svetsbeteckningar. SIS För- lag AB, 2 ed., 2002.

[8] F. Bardin, A. Cobo, J. M. Lopez-Higuera, O. Collin, P. Aubry, T. Dubois, M. Högström, P. Nylén, P. Jonsson, J. D. C. Jones, and D. P. Hand, “Closed-loop power and focus control of laser welding for full- penetration monitoring,” Applied Optics, vol. 44, pp. 13–21, Jan. 2005.

[9] H. Zhao and H. Qi, “Vision-based keyhole detection in laser full pene- tration welding process,” Journal of Laser Applications, vol. 28, May 2016.


[10] F. Hugger, K. Hofmann, S. Kohl, M. Dobler, and M. Schmidt, “Spatter formation in laser beam welding using laser beam oscillation,” Welding in the World, vol. 59, pp. 165–172, July 2014.

[11] L. Zhang, J. Sun, G. Yin, J. Zhao, and Q. Han, “A Cross Structured Light Sensor and Stripe Segmentation Method for Visual Tracking of a Wall Climbing Robot,” Sensors, vol. 15, pp. 13725–13751, June 2015.

[12] D. Lévesque, S. E. Kruger, G. Lamouche, R. Kolarik II, G. Jeskey, M. Choquet, and J. P. Monchalin, “Thickness and grain size monitor- ing in seamless tube-making process using laser ultrasonics,” NDT & E International, vol. 39, pp. 622–626, Dec. 2006.

[13] G. M. Graham and I. C. Ume, “Automated system for laser ultrasonic sensing of weld penetration,” Mechatronics, vol. 7, pp. 711–721, Dec. 1997.

[14] H. Cho, S. Ogawa, and M. Takemoto, “Non-contact laser ultrasonics for detecting subsurface lateral defects,” NDT & E International, vol. 29, pp. 301–306, Oct. 1996.

[15] X. Gao, D. You, and S. Katayama, “Infrared image recognition for seam tracking monitoring during fiber laser welding,” Mechatronics, vol. 22, pp. 370–380, June 2012.

[16] D. You, X. Gao, and S. Katayama, “Visual-based spatter detection during high-power disk laser welding,” Optics and Lasers in Engineering, vol. 54, pp. 1–7, Mar. 2014.

[17] P. Broberg, “Surface crack detection in welds using thermography,” NDT & E International, vol. 57, pp. 69–73, July 2013.

[18] D. P. Hand, M. D. T. Fox, F. M. Haran, C. Peters, S. A. Morgan, M. A. McLean, W. M. Steen, and J. D. C. Jones, “Optical focus control system for laser welding and direct casting,” Optics and Lasers in Engineering, vol. 34, pp. 415–427, Oct. 2000.

[19] F. Bardin, A. Cobo, J. M. Lopez-Higuera, O. Collin, P. Aubry, T. Dubois, M. Högström, P. Nylén, P. Jonsson, J. D. C. Jones, and D. P. Hand, “Optical techniques for real-time penetration monitoring for laser welding,” Applied Optics, vol. 44, pp. 3869–3876, July 2005.

[20] K. Kamimuki, T. Inoue, K. Yasuda, M. Muro, T. Nakabayashi, and A. Matsunawa, “Prevention of welding defect by side gas flow and its monitoring method in continuous wave Nd:YAG laser welding,” Jour- nal of Laser Applications, vol. 14, pp. 136–145, Aug. 2002.


[21] P. J. Webster, L. G. Wright, K. D. Mortimer, B. Y. Leung, X. Z. Joe, and J. M. Fraser, “Automatic real-time guidance of laser machining with inline coherent imaging,” Journal of Laser Applications, vol. 23, no. 2, 2011.

[22] T. Sibillano, D. Rizzi, F. P. Mezzapesa, P. M. Lugarà, A. R. Konuk, R. Aarts, B. H. i. t. Veld, and A. Ancona, “Closed Loop Control of Penetration Depth during CO2 Laser Lap Welding Processes,” Sensors, vol. 12, pp. 11077–11090, Aug. 2012.

[23] M. Harooni, B. Carlson, and R. Kovacevic, “Detection of defects in laser welding of AZ31b magnesium alloy in zero-gap configuration by a real-time spectroscopic analysis,” Optics and Lasers in Engineering, vol. 56, pp. 54–66, May 2014.

[24] T. L. Floyd, Electronic Devices. Pearson, 4 edition ed., 1996.

[25] “Thorlabs, Inc..” https://www.thorlabs.com, Oct. 2016. Accessed February 10, 2019.

[26] I. Eriksson and A. F. Kaplan, “Evaluation of laser weld monitoring A case study,” in Proceedings of ICALEO, pp. 1419–1425, 2009.

[27] I. Eriksson, J. Powell, and A. F. H. Kaplan, “Signal overlap in the mon- itoring of laser welding,” Measurement Science and Technology, vol. 21, Oct. 2010.

[28] “Precitec.” http://www.precitec.de/en/. Accessed February 10, 2019.

[29] “Prometec.” https://www.sandvik.coromant.com. Accessed Febru- ary 10, 2019.

[30] R. Olsson, I. Eriksson, J. Powell, A. V. Langtry, and A. F. H. Kaplan, “Challenges to the interpretation of the electromagnetic feedback from laser welding,” Optics and Lasers in Engineering, vol. 49, pp. 188–194, Feb. 2011.

[31] Y. Kawahito, T. Ohnishi, and S. Katayama, “In-process monitoring and feedback control for stable production of full-penetration weld in con- tinuous wave fibre laser welding,” Journal of Physics D: Applied Physics, vol. 42, Apr. 2009.

[32] A. Molino, M. Martina, F. Vacca, G. Masera, A. Terreno, G. Pasquettaz, and G. D’Angelo, “FPGA implementation of time - frequency analysis


algorithms for laser welding monitoring,” Microprocessors and Microsys- tems, vol. 33, pp. 179–190, May 2009.

[33] S.-H. Baik, M.-S. Kim, S.-K. Park, C.-M. Chung, C.-J. Kim, and K.-J. Kim, “Process monitoring of laser welding using chromatic filtering of thermal radiation,” Measurement Science and Technology, vol. 11, Dec. 2000.

[34] S. Postma, R. G. K. M. Aarts, J. Meijer, and J. B. Jonker, “Penetration control in laser welding of sheet metal,” Journal of Laser Applications, vol. 14, pp. 210–214, Nov. 2002.

[35] S. S. Rodil, R. A. Gómez, J. M. Bernárdez, F. Rodríguez, L. J. Miguel, and J. R. Perán, “Laser welding defects detection in automotive industry based on radiation and spectroscopical measurements,” The International Journal of Advanced Manufacturing Technology, vol. 49, pp. 133–145, July 2010.

[36] T. Sibillano, A. Ancona, D. Rizzi, V. Lupo, L. Tricarico, and P. M. Lugarà, “Plasma Plume Oscillations Monitoring during Laser Welding of Stainless Steel by Discrete Wavelet Transform Application,” Sensors, vol. 10, pp. 3549–3561, Apr. 2010.

[37] D. Colombo and B. Previtali, “Through Optical Combiner Monitor- ing of Fiber Laser Processes,” International Journal of Material Forming, vol. 3, pp. 1123–1126, Apr. 2010.

[38] P. Stritt, R. Weber, T. Graf, S. Müller, and C. Ebert, “Utilizing Laser Power Modulation to Investigate the Transition from Heat-Conduction to Deep-Penetration Welding,” Physics Procedia, vol. 12, Part A, pp. 224– 231, 2011.

[39] D. You, X. Gao, and S. Katayama, “WPD-PCA based Laser Welding Pro- cess Monitoring and Defects Diagnosis by using FNN and SVM,” IEEE Transactions on Industrial Electronics, vol. 62, no. 99, pp. 628 – 636, 2014.

[40] Y. W. Park, H. Park, S. Rhee, and M. Kang, “Real time estimation of CO2 laser weld quality for automotive industry,” Optics & Laser Technology, vol. 34, pp. 135–142, Mar. 2002.

[41] R. Olsson, I. Eriksson, J. Powell, and A. F. H. Kaplan, “Advances in pulsed laser weld monitoring by the statistical analysis of reflected light,” Optics and Lasers in Engineering, vol. 49, pp. 1352–1359, Nov. 2011.


[42] C. Steger, M. Ulrich, and C. Wiedemann, Machine Vision Algorithms and Applications. Wiley, Dec. 2007.

[43] Y. Bandoh, G. Qiu, M. Okuda, S. Daly, T. Aach, and O. Au, “Recent advances in high dynamic range imaging technology,” in 2010 17th IEEE International Conference on Image Processing (ICIP), pp. 3125–3128, Sept. 2010.

[44] S. K. Lee and S. J. Na, “A study on automatic seam tracking in pulsed laser edge welding by using a vision sensor without an auxiliary light source,” Journal of Manufacturing Systems, vol. 21, no. 4, pp. 302–315, 2002.

[45] C.-H. Kim and D.-C. Ahn, “Coaxial monitoring of keyhole during Yb:YAG laser welding,” Optics & Laser Technology, vol. 44, pp. 1874– 1880, Sept. 2012.

[46] Y. Cai, Q. Yang, D. Sun, J. Zhu, and Y. Wu, “Monitoring of deviation sta- tus of incident laser beam during CO2 laser welding processes for I-core sandwich construction,” The International Journal of Advanced Manufac- turing Technology, vol. 77, pp. 305–320, Oct. 2014.

[47] T. Santti, J. Poikonen, O. Lahdenoja, M. Laiho, and A. Paasio, “Online seam tracking for laser welding with a vision chip and FPGA enabled camera system,” in 2015 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1985–1988, May 2015.

[48] M. Jager and F. Hamprecht, “Principal Component Imagery for the Quality Monitoring of Dynamic Laser Welding Processes,” IEEE Trans- actions on Industrial Electronics, vol. 56, pp. 1307–1313, Apr. 2009.

[49] M. Luo and Y. C. Shin, “Vision-based weld pool boundary extraction and width measurement during keyhole fiber laser welding,” Optics and Lasers in Engineering, vol. 64, pp. 59–70, Jan. 2015.

[50] W. Huang and R. Kovacevic, “A Laser-Based Vision System for Weld Quality Inspection,” Sensors, vol. 11, pp. 506–521, Jan. 2011.

[51] J.-W. Kim and H.-S. Bae, “A study on a vision sensor system for track- ing the I-Butt weld joints,” Journal of Mechanical Science and Technology, vol. 19, pp. 1856–1863, Oct. 2005.

[52] M.-G. Kang, J.-H. Kim, Y.-J. Park, and G.-J. Woo, “Laser vision system for automatic seam tracking of stainless steel pipe welding machine (IC- CAS 2007),” in International Conference on Control, Automation and Sys- tems, 2007. ICCAS ’07, pp. 1046–1051, Oct. 2007.


[53] A. F. Kaplan, P. Norman, and I. Eriksson, “Analysis of the keyhole and weld pool dynamics by imaging evaluation and photodiode monitoring,” in Proceedings of LAMP2009 - the 5th International Congress on Laser Ad- vanced Materials Processing, pp. 1–6, 2009.

[54] I. Eriksson, J. Powell, and A. F. H. Kaplan, “Melt behavior on the key- hole front during high speed laser welding,” Optics and Lasers in Engi- neering, vol. 51, pp. 735–740, June 2013.

[55] F. Tenner, F. Klämpfl, K. Y. Nagulin, and M. Schmidt, “Evaluation of process observation features for laser metal welding,” Optics & Laser Tech- nology, vol. 80, pp. 77–83, June 2016.

[56] F. Kong, J. Ma, B. Carlson, and R. Kovacevic, “Real-time monitoring of laser welding of galvanized high strength steel in lap joint configuration,” Optics & Laser Technology, vol. 44, pp. 2186–2196, Oct. 2012.

[57] L. R. P. Butler and K. Laqua, “Nomenclature, symbols, units and their usage in spectrochemical analysis-IX. Instrumentation for the spectral dispersion and isolation of optical radiation (IUPAC Recommendations 1995),” Pure and Applied Chemistry, vol. 67, no. 10, pp. 1725–1744, 1995.

[58] A. Ancona, V. Spagnolo, P. M. Lugarà, and M. Ferrara, “Optical sensor for real-time monitoring of co2 laser welding process,” Applied Optics, vol. 40, p. 6019, Nov. 2001.

[59] T. Sibillano, A. Ancona, V. Berardi, and P. M. Lugarà, “Correlation anal- ysis in laser welding plasma,” Optics Communications, vol. 251, pp. 139– 148, July 2005.

[60] T. Sibillano, A. Ancona, V. Berardi, and P. M. Lugarà, “Real-time mon- itoring of laser welding by correlation analysis: The case of AA5083,” Optics and Lasers in Engineering, vol. 45, pp. 1005–1009, Oct. 2007.

[61] T. Sibillano, D. Rizzi, A. Ancona, S. Saludes-Rodil, J. Rodríguez Nieto, H. Chmelí˘cková, and H. Šebestová, “Spectroscopic monitoring of pene- tration depth in CO2 Nd:YAG and fiber laser welding processes,” Jour- nal of Materials Processing Technology, vol. 212, pp. 910–916, Apr. 2012.

[62] G. Chen, M. Zhang, Z. Zhao, Y. Zhang, and S. Li, “Measurements of laser-induced plasma temperature field in deep penetration laser weld- ing,” Optics & Laser Technology, vol. 45, pp. 551–557, Feb. 2013.

[63] B. Siciliano, L. Sciavicco, L. Villani, and G. Oriolo, Robotics: Modelling, Planning and Control. Advanced Textbooks in Control and Signal Pro- cessing, London: Springer-Verlag, 2009.


[64] “Encyclopedia of Laser Physics and Technology - time-of-flight measure- ments, range finder, pulses.” https://www.rp-photonics.com/time_ of_flight_measurements.html. Accessed February 10, 2019.

[65] “Encyclopedia of Laser Physics and Technology - semiconductor lasers, laser diodes.” https://www.rp-photonics.com/semiconductor_ lasers.html. Accessed February 10, 2019.

[66] “Laserline Optics Canada.” http://www.laserlineoptics.com/ powell_primer.html. Accessed February 10, 2019.

[67] W. J. Shao, Y. Huang, and Y. Zhang, “A novel weld seam detection method for space weld seam of narrow butt joint in laser welding,” Optics & Laser Technology, vol. 99, pp. 39–51, Feb. 2018.

[68] J. Zeng, B. Chang, D. Du, G. Peng, S. Chang, Y. Hong, L. Wang, and J. Shan, “A Vision-Aided 3d Path Teaching Method before Narrow Butt Joint Welding,” Sensors, vol. 17, May 2017.

[69] P. Xu, X. Tang, and S. Yao, “Application of circular laser vision sensor (CLVS) on welded seam tracking,” Journal of Materials Processing Tech- nology, vol. 205, pp. 404–410, Aug. 2008.

[70] Fredrik Gustafsson, Statistical Sensor Fusion, vol. 1. Studentlitteratur, 1 ed., 2015.

[71] Z. Zhang and S. Chen, “Real-time seam penetration identification in arc welding based on fusion of sound, voltage and spectrum signals,” Journal of Intelligent Manufacturing, vol. 28, pp. 207–218, Jan. 2017.

[72] X. Gao, Y. Sun, D. You, Z. Xiao, and X. Chen, “Multi-sensor informa- tion fusion for monitoring disk laser welding,” The International Journal of Advanced Manufacturing Technology, vol. 85, pp. 1167–1175, July 2016.

[73] D. You, X. Gao, and S. Katayama, “Multisensor Fusion System for Moni- toring High-Power Disk Laser Welding Using Support Vector Machine,” IEEE Transactions on Industrial Informatics, vol. 10, pp. 1285–1295, May 2014.

[74] A. Sun, E. Kannatey-Asibu, and M. Gartner, “Monitoring of laser weld penetration using sensor fusion,” Journal of Laser Applications, vol. 14, pp. 114–121, Apr. 2002.

[75] A. Rout, B. B. V. L. Deepak, and B. B. Biswal, “Advances in weld seam tracking techniques for robotic welding: A review,” Robotics and Computer-Integrated Manufacturing, vol. 56, pp. 12–37, Apr. 2019.


[76] J. Fan, F. Jing, L. Yang, T. Long, and M. Tan, “A precise seam tracking method for narrow butt seams based on structured light vision sensor,” Optics & Laser Technology, vol. 109, pp. 616–626, Jan. 2019.

[77] X. Gao, L. Mo, D. You, and Z. Li, “Tight butt joint weld detection based on optical flow and particle filtering of magneto-optical imaging,” Me- chanical Systems and Signal Processing, vol. 96, pp. 16–30, Nov. 2017.

[78] B. Regaard, S. Kaierle, and R. Poprawe, “Seam-tracking for high preci- sion laser welding applications-Methods, restrictions and enhanced con- cepts,” Journal of Laser Applications, vol. 21, no. 4, pp. 183–195, 2010.

[79] S. Krämer, W. Fiedler, A. Drenker, and P. Abels, “Seam tracking with texture based image processing for laser materials processing,” Proc.SPIE, vol. 8963, pp. 8963 – 8963, 2014.

[80] F. Coste, R. Fabbro, and L. Sabatier, “Adaptive control of high-thickness laser welding,” Welding International, vol. 13, pp. 465–469, Jan. 1999.

[81] K. Zhang, Y. Chen, J. Zheng, J. Huang, and X. Tang, “Adaptive filling modeling of butt joints using genetic algorithm and neural network for laser welding with filler wire,” Journal of Manufacturing Processes, vol. 30, pp. 553–561, Dec. 2017.

[82] J. Huang, K. Zhang, X. Zhu, and X. Tang, “Robot-based adaptive laser wire welding of ship steel plates,” in 2016 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO), pp. 170–173, July 2016.

[83] W. Chen, C. Liu, W. Yang, and X. Zhang, “Gap-detecting sensor and wire feed control in butt laser welding with filler wire,” in Lasers in Ma- terial Processing and Manufacturing, vol. 4915, pp. 208–218, International Society for Optics and Photonics, Sept. 2002.

[84] H. Siva Prasad, J. Frostevarg, and A. F. H. Kaplan, “The stability of laser welding with an off-axis wire feed,” Journal of Materials Processing Tech- nology, vol. 264, pp. 84–90, Feb. 2019.

[85] “Photonfocus AG.” http://www.photonfocus.com. Accessed February 10, 2019.

[86] “Permanova.” http://permanova.se/. Accessed February 10, 2019.

[87] “Ocean Optics.” https://oceanoptics.com. Accessed February 10, 2019.


[88] “Atomic Spectra Database.” https://www.nist.gov/pml/ atomic-spectra-database. Accessed February 10, 2019.

[89] “Scansonic.” http://www.scansonic.de/en. Accessed February 10, 2019.

[90] R. O. Duda and P. E. Hart, “Use of the Hough Transformation to Detect Lines and Curves in Pictures,” Commun. ACM, vol. 15, pp. 11–15, Jan. 1972.

[91] J. Canny, “A Computational Approach to Edge Detection,” IEEE Trans- actions on Pattern Analysis and Machine Intelligence, vol. 8, pp. 679–698, Nov. 1986.

[92] R. C. Gonzalez and R. E. Woods, Digital Image Processing. Upper Saddle River, N.J: Pearson, 3 edition ed., Aug. 2007.

[93] E. Arias-Castro and D. L. Donoho, “Does median filtering truly pre- serve edges better than linear filtering?,” The Annals of Statistics, vol. 37, pp. 1172–1206, June 2009.

[94] Fredrik Gustafsson, Adaptive Filtering and Change Detection. Wiley, July 2000.

[95] H. Luo, H. Zeng, L. Hu, X. Hu, and Z. Zhou, “Application of artificial neural network in laser welding defect diagnosis,” Journal of Materials Processing Technology, vol. 170, pp. 403–411, Dec. 2005.

[96] H. Zeng, Z. Zhou, Y. Chen, H. Luo, and L. Hu, “Wavelet analysis of acoustic emission signals and quality control in laser welding,” Journal of Laser Applications, vol. 13, pp. 167–173, July 2001.

[97] C. Steiger, T. Gruenberger, A. Braunsteiner, M. Bohrer, and M. Bammer, “Application of wavelets for online laser process observer,” in Wavelet Applications VIII, vol. 4391, pp. 219–228, International Society for Op- tics and Photonics, Mar. 2001.

[98] F. Vakili-Farahani, J. Lungershausen, and K. Wasmer, “Wavelet analysis of light emission signals in laser beam welding,” Journal of Laser Applica- tions, vol. 29, p. 022424, May 2017.

[99] N. Wu, J. Xiao, and S. Chen, “Wavelet Analysis based Splashing Identi- fication in Electrode,” IIW-Doc III-1808-17.

[100] S. Orfanidis, Introduction to Signal Processing. Englewood Cliffs, N.J: Prentice Hall, us ed edition ed., Aug. 1995.


[101] A. Graps, “An Introduction to Wavelets,” IEEE Computational Science and Engineering, vol. 2, pp. 50–61, 1995.

[102] C. Torrence and G. P. Compo, “A practical guide to wavelet analysis,” Bulletin of the American Meteorological society, vol. 79, no. 1, pp. 61–78, 1998.

[103] H. R. Griem, Plasma spectroscopy. New York: McGraw-Hill, 1964.

[104] D. Yang, X. Li, D. He, Z. Nie, and H. Huang, “Optimization of weld bead geometry in laser welding with filler wire process using Taguchi’s approach,” Optics & Laser Technology, vol. 44, pp. 2020–2025, Oct. 2012.

[105] L. Ljung, System Identification: Theory for the User. Upper Saddle River, NJ: Prentice Hall, 2 edition ed., Jan. 1999.

Summary of Appended Papers

Paper A
Optical Methods for in-process monitoring of laser beam welding

This paper addresses the issue of evaluating non-intrusive sensors to be used for monitoring the LBW process. This is achieved by conducting a literature survey of suitable sensors to be used for detecting features or disturbances in the process. The focus of this study is on non-contact sensors that can be seamlessly integrated into the LBW tool. The systems investigated are camera, infrared camera, laser profile sensor, photodiodes, spectroscopy, confocal sensors, white light interferometry and laser ultrasonics. The result of this work is a matrix showing how the selected sensors can be used for monitoring of the LBW process.

Paper B
Detecting beam offsets in laser welding of closed-square-butt joints by wavelet analysis of an optical process signal

This paper proposes a method using photodiodes to monitor the LBW process during welding with a beam offset. The aim is to study if any correlation exists between the signal from the photodiode and the beam position. The photodiode monitors the spectral emission related to the plasma plume and the signal from the photodiode is analysed using the continuous wavelet transform. Welding experiments, where beam offsets are created by programming the robot that manipulates the LBW tool, show that there exists a correlation between the signal from the continuous wavelet transform and the beam offset. It is suggested that the system could be used to detect beam offsets and in an industrial situation give an indication that the welded part should be further investigated or scrapped.

Paper C
Vision and spectroscopic sensing for joint tracing in narrow gap laser butt welding

A dual sensing system, comprising a camera and a spectrometer, is proposed in this paper. The aim is to investigate if a dual sensor approach could increase the robustness of the monitoring system. By using the data from the spectrometer, the plasma electron temperature is calculated. It is investigated whether changes in the plasma electron temperature can be related to beam offsets. Welding experiments on work pieces in a closed-square-butt joint configuration show that this relationship exists. Also, it is shown that this method is insensitive to tack welds covering the joint, a situation where the camera system fails to find the joint. The result from this work indicates that a dual sensor system, using both camera and spectrometer, could increase the robustness of a joint tracking system.

Paper D
Robust vision based joint tracking for laser welding of curved closed-square-butt joints

The camera system presented in Paper C is in this paper further developed in order to also handle curved closed-square-butt joints. A modified Hough transform is proposed, where the joint is modelled as a second order polynomial. A joint path model is also proposed, based on the nominal programmed joint path. The calculated joint path is compared to the model in order to evaluate its correctness. In this way, a distinction between scratches and the joint path can be made. Experimental results, from welding a demonstrator with an ellipse shaped joint path, show promising results for joint tracking of curved closed-square-butt joints.

Paper E
Monitoring of varying joint gap width during laser beam welding by a dual vision and spectroscopic sensing system

This paper proposes a dual sensing system, comprising a camera with external illumination and a spectrometer, to monitor the joint gap width during autogenous LBW of square-butt joints with varying gap width. The aim of this study is to evaluate the two sensors' ability to estimate the gap width individually. Results from LBW experiments show that the camera is able to provide good estimations of the joint gap width and that there is a good correlation between the spectrometer signal and the joint gap width.

Paper F
Adaptive control of the filler wire rate during laser beam welding of squared-butt joints with varying gap width

A dual sensing system, comprising a camera with two sensing modes, is proposed to estimate the joint gap width during LBW with filler wire. The two sensing modes are achieved by alternating the illumination between LED and laser line illumination, and two different methods for estimating the joint gap width are obtained in this manner. The estimates from the two sensing modes are compared with each other and with the result of fusing data from the two modes in a Kalman filter. The estimated joint gap width is then used as input to a feed forward control system for the wire feed rate. Experimental results show that both sensing modes gave good estimates of the joint gap width, and that an even higher accuracy could be achieved by fusing data from the two modes. Compensation for the variations in gap width could also be achieved by controlling the wire feed rate. However, the system needs calibration, since it slightly overcompensated, giving an increased reinforcement for increasing joint gap widths.
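The sensor fusion and feed forward steps can be illustrated with the minimal Python sketch below, where a scalar Kalman filter fuses two gap width measurements and a simple wire feed rate law fills the estimated gap cross-section. The noise levels, plate thickness, welding speed and wire diameter are assumptions for the example and are not values from the paper.

# Minimal sketch: fuse two gap width measurements in a scalar Kalman filter
# and compute a feed forward wire feed rate; noise levels, plate thickness,
# welding speed and wire diameter are assumptions.
import numpy as np

def kalman_fuse(z_led, z_laser, x_prev, p_prev,
                q=1e-4, r_led=4e-3, r_laser=2e-3):
    """One filter step for the gap width [mm] using both sensing modes."""
    x, p = x_prev, p_prev + q               # predict: slowly varying gap width
    for z, r in ((z_led, r_led), (z_laser, r_laser)):
        k = p / (p + r)                     # Kalman gain
        x = x + k * (z - x)                 # update estimate
        p = (1.0 - k) * p                   # update variance
    return x, p

def wire_feed_rate(gap_mm, plate_thickness_mm=2.0, weld_speed_mm_s=20.0,
                   wire_diameter_mm=1.0):
    """Feed forward wire feed rate [mm/s] that fills the gap cross-section."""
    gap_area = gap_mm * plate_thickness_mm              # [mm^2]
    wire_area = np.pi * (wire_diameter_mm / 2.0) ** 2   # [mm^2]
    return gap_area * weld_speed_mm_s / wire_area

x, p = 0.2, 1.0                             # initial gap estimate [mm] and variance
for z1, z2 in [(0.21, 0.19), (0.25, 0.24), (0.30, 0.31)]:
    x, p = kalman_fuse(z1, z2, x, p)
    print(f"gap ~ {x:.3f} mm -> wire feed {wire_feed_rate(x):.1f} mm/s")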

Previous theses – Production Technology

PEIGANG LI Cold Lap Formation in of Steel An Experimental Study of Micro-lack of Fusion Defects, 2013:2.
NICHOLAS CURRY Design of Thermal Barrier Coatings, 2014:3.
JEROEN DE BACKER Feedback Control of Robotic Friction Stir Welding, 2014:4.
MOHIT KUMAR GUPTA Design of Thermal Barrier Coatings A modelling approach, 2014:5.
PER LINDSTRÖM Improved CWM Platform for Modelling Welding Procedures and their Effects on Structural Behavior, 2015:6.
ERIK ÅSTRAND A Framework for Optimised Welding of Fatigue Loaded Structures Applied to Gas Metal Arc Welding of Welds, 2016:7.
EMILE GLORIEUX Multi-Robot Motion Planning Optimisation for Handling Sheet Metal Parts, 2017:10.
EBRAHIM HARATI Improving fatigue properties of welded high strength steels, 2017:11.
ANDREAS SEGERSTARK Laser Metal Deposition using Alloy 718 Powder Influence of Process Parameters on Material Characteristics, 2017:12.
ANA ESTHER BONILLA HERNÁNDES On Cutting Tool Resource Management, 2018:16.
SATYAPAL MAHADE Functional Performance of Gadolinium Zirconate/YSZ Multi-layered Thermal Barrier Coatings, 2018:18.
ASHISH GANVIR Design of suspension plasma sprayed thermal barrier coatings, 2018:20.
AMIR PARSIAN Regenerative Chatter Vibrations in Indexable Drills: Modeling and Simulation, 2018:21.
ESMAEIL SADEGHIMERESHT High Temperature Corrosion of Ni-based Coatings, 2018:23.
VAHID HOSSEINI Super Duplex Stainless Steels. Microstructure and Properties of Physically Simulated Base and Weld Metal, 2018:24.
