Hierarchical Fuzzy Support Vector Machine (SVM) for Rail Data Classification

Preprints of the 19th World Congress of the International Federation of Automatic Control, Cape Town, South Africa, August 24-29, 2014.

R. Muscat*, M. Mahfouf*, A. Zughrat*, Y.Y. Yang*, S. Thornton**, A.V. Khondabi* and S. Sortanos*

*Department of Automatic Control and Systems Engineering, The University of Sheffield, Mappin St., Sheffield, S1 3JD, UK (Tel: +44 114 2225607) ([email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected])

**Teesside Technology Centre, Tata Steel Europe, Eston Rd., Middlesbrough, TS6 6US, UK ([email protected])

Abstract: This study aims at designing a modelling architecture to deal with the imbalanced data relating to the production of rails. The modelling techniques are based on Support Vector Machines (SVMs), which are sensitive to class imbalance. An internal (Biased Fuzzy SVM) and an external (data under-sampling) class imbalance learning method were applied to the data. The performance of the techniques when implemented on the latter was better, while in both cases the inclusion of a fuzzy membership improved the performance of the SVM. Fuzzy C-Means (FCM) Clustering was analysed for reducing the number of support vectors of the Fuzzy SVM model, concluding that it is effective in reducing model complexity without any significant performance deterioration.

Keywords: Classification; railway steel manufacture; class imbalance; support vector machines; fuzzy support vector machines; fuzzy c-means clustering.

1 INTRODUCTION

Cost management has become a very important factor in every industry. Investments in plant, technology, research and development allow companies to reduce manufacturing costs, while quality control procedures improve process output. Modelling techniques are increasingly being employed to understand the interaction and influence of input variables on the process. Tata Steel Europe is at the forefront of rail production and strives to produce the high-performance products required by the rail industry.

Advances in computer processing power, together with the vast amounts of available data, have encouraged the application of machine learning techniques to different real-world problems in an attempt to extract useful knowledge from the available information. Pattern classification is a supervised machine learning method in which a labelled set of data points is used to train a model which is then used to classify new test examples.

Classifier performance is commonly evaluated by its accuracy. However, this metric does not correctly value the minority class in an imbalanced data set and, as a result, the trained model tends to be biased towards the majority class (Weiss, 2004). Many data sets from real-world problems are inherently imbalanced, and therefore appropriate measures need to be taken to ensure that the important information carried by the minority class is correctly represented by the classifier.

This study deals with the design of a modelling architecture for data related to the quality of rails produced by Tata Steel Europe. The modelling problem is not trivial, as the data set is highly imbalanced, with the number of good rails being much higher than the number of rejected rails.

The paper is organised as follows. Section 2 provides an overview of the modelling data, obtained from the rail manufacturing process, and of input variable selection. Section 3 outlines the theory related to Support Vector Machines (SVMs), Fuzzy Support Vector Machines (FSVMs) and Fuzzy C-Means (FCM) Clustering, and how these techniques were implemented on the data. SVMs are not affected by local minima as they are mathematically based on the solution of a convex optimisation problem. Also, in most cases, SVM generalisation performance has been shown to be better than that of other classification methods (Burges, 1998). However, quadratic optimisation scales poorly with the number of data samples, and therefore FCM was proposed for reducing training time and model complexity. The modelling results and their analysis are presented in Section 4. Concluding remarks and suggestions for future work are given in Section 5.
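To make the point about accuracy and class imbalance concrete, the short sketch below (not taken from the paper; the synthetic data, the 95/5 class ratio and the use of scikit-learn are assumptions of this illustration only) trains a plain SVM on a heavily imbalanced two-class set and compares overall accuracy with recall on the minority class.

```python
# Illustrative sketch: why raw accuracy hides minority-class errors on imbalanced data.
# Synthetic data and scikit-learn are assumed; this is not the paper's data or model.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, recall_score

# Imbalanced two-class problem: ~95% "good", ~5% "rejected" (hypothetical ratio).
X, y = make_classification(n_samples=5000, n_features=39, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("overall accuracy:", accuracy_score(y_te, pred))
print("minority recall :", recall_score(y_te, pred, pos_label=1))
# A high overall accuracy can coexist with very low minority recall, which is why
# class-imbalance learning methods (biased/weighted SVMs, under-sampling) are needed.
```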
2 MODELLING DATA

2.1 Rail Manufacturing Data

At Tata Steel Europe, a precisely controlled rail production line, whose sub-processes are indicated in Fig. 1, produces high-quality rails.

During steelmaking, the desired steel chemical composition is achieved while maintaining the steel integrity and avoiding imperfections. Consistent steel blooms are produced through continuous casting. The blooms are rolled into rail sections, and Non-Destructive Testing (NDT) systems ensure strict dimensional accuracy and detect any surface and metallurgical defects so that the delivered rail sections meet the high standards required for rail applications.

Fig. 1. Railway Production Route (Steelmaking, Continuous Casting, Rolling, Finishing and Testing).

Every stage of the rail manufacturing process is closely monitored and controlled to ensure the highest rail quality. Instrumentation systems and regular sampling provide data both for precise online process control and for offline production management, planning and decision making.

The data set from the Tata Steel Europe rail production route covered a two-year production period. In a previous study (Yang et al., 2011), a process expert from Tata Steel provided expert knowledge to pre-process the data, which had around 200 variables. This was reduced to 70 useful process variables which could be used for modelling.

2.2 Input Selection

Although developments in processing power motivated data-driven modelling, data dimensionality still remains an important issue. Reducing the dimensionality improves the performance of the predictor by alleviating the curse of dimensionality and reduces the training time, resulting in faster predictors (Guyon & Elisseeff, 2003). Dimensionality reduction aims at finding a lower-dimensional space which retains the input-output relationship. It can be divided into feature extraction and feature selection. On the one hand, feature extraction transforms the original space, either linearly or nonlinearly, and the most representative features are used for modelling. On the other hand, in feature (or variable) selection, a number of variables are selected according to a ranking which determines their relevance with respect to the output.

Since the developed model needs to be inverted to obtain the best input values depending on specific design objectives, the appropriate dimensionality reduction technique in this case is variable selection. This ensures that the model inputs relate directly to the original variables.

A data dimension of 70 was still too high for classification, as this implies that the number of training samples must also be high, leading to prohibitively long training times. Therefore, variable selection was carried out in the work by Yang et al. (2011), where the most relevant input variables were selected to reduce the dimensionality of the modelling data. Linear correlation between the input variables and the output is low, implying a highly nonlinear input-output relationship. For the work being presented, the first 39 input variables were used, as obtained from the original 70 variables using this input selection procedure.
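As a generic illustration of ranking-based variable selection (this is not the procedure of Yang et al. (2011), which relied on process expertise; the scoring function and parameter names below are assumptions of this sketch), variables can be scored by their relevance to the class label and the top-ranked subset retained:

```python
# Generic ranking-based variable selection sketch (illustrative only; not the
# expert-driven selection of Yang et al., 2011).
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def rank_variables(X, y, n_keep):
    """Score each input variable by mutual information with the class label
    and return the indices of the n_keep most relevant variables."""
    scores = mutual_info_classif(X, y, random_state=0)
    order = np.argsort(scores)[::-1]   # highest relevance first
    return order[:n_keep], scores

# Example: keep 39 of 70 variables (the counts used in the paper).
# selected_idx, scores = rank_variables(X, y, n_keep=39)
# X_reduced = X[:, selected_idx]
```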
3 THEORY

3.1 Support Vector Machines (SVMs)

The SVM is a supervised machine learning technique initially proposed by Vapnik (1979) in his pioneering work and further developed by Boser, Guyon and Vapnik (1992). Since then, SVM application to pattern classification has increased, mainly due to its attractive properties and better performance than other classifiers (Burges, 1998). This subsection reviews the theory of SVMs (Burges, 1998; Cristianini & Shawe-Taylor, 2000; Haykin, 2009; Fletcher, 2009).

Given a training data set with points in two linearly separable classes, the SVM finds the optimal hyperplane which maximises the margin of separation between the two classes. Let the linearly separable training data be $\{\boldsymbol{x}_i, y_i\}$, $y_i \in \{-1, +1\}$, where $y_i$ is the output class of input pattern $\boldsymbol{x}_i$. The decision surface hyperplane that separates the classes can be written as:

$$\boldsymbol{w}^T \boldsymbol{x} + b = 0 \qquad (1)$$

where $\boldsymbol{w}$ is an adjustable weight vector representing the normal to the hyperplane, $\boldsymbol{x}$ represents the input dimensional space and $b$ is a bias.

Suppose that the data points satisfy the following constraints:

$$\boldsymbol{w}^T \boldsymbol{x}_i + b \geq +1 \quad \text{for } y_i = +1 \qquad (2)$$

$$\boldsymbol{w}^T \boldsymbol{x}_i + b \leq -1 \quad \text{for } y_i = -1 \qquad (3)$$

Support vectors are the points closest to the separating hyperplane and are located on the hyperplanes defined by (2) and (3) with the equality sign. The margin, the distance between the support vectors of the two classes, can be defined as the difference between the perpendicular distances of the two hyperplanes to the origin:

$$\text{margin} = \frac{|(1-b) - (-1-b)|}{\|\boldsymbol{w}\|} = \frac{2}{\|\boldsymbol{w}\|} \qquad (4)$$

It is clear that maximising the margin is equivalent to minimising the Euclidean norm of the weight vector $\boldsymbol{w}$. The problem can be formulated as follows to benefit from the advantages of convex optimisation:

$$\min_{\boldsymbol{w},\,b} \; \frac{1}{2}\|\boldsymbol{w}\|^2 \quad \text{s.t.} \quad y_i(\boldsymbol{w}^T \boldsymbol{x}_i + b) - 1 \geq 0 \;\; \forall i \qquad (5)$$

To deal with nonseparable data, it is necessary to include slack variables, $\xi_i \geq 0$, in the constraints of (2) and (3) (Cortes & Vapnik, 1995). As $\xi_i$ is proportional to the misclassification distance from the margin boundary, a positive penalty term, $C$, is included in the objective function.
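For reference, the standard soft-margin primal that this construction leads to (Cortes & Vapnik, 1995), with the penalty constant conventionally denoted $C$, can be written as:

$$\min_{\boldsymbol{w},\,b,\,\boldsymbol{\xi}} \;\; \frac{1}{2}\|\boldsymbol{w}\|^{2} + C\sum_{i}\xi_{i} \quad \text{s.t.} \quad y_{i}\left(\boldsymbol{w}^{T}\boldsymbol{x}_{i} + b\right) \geq 1 - \xi_{i}, \;\; \xi_{i} \geq 0 \;\; \forall i$$

Larger values of $C$ penalise margin violations more heavily; scaling this term per class, or per sample, is typically the starting point for the biased and fuzzy SVM variants mentioned in the abstract.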

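As a rough illustration of how the penalty term can be adapted for class imbalance (the "internal" route mentioned in the abstract), the sketch below uses scikit-learn's per-class and per-sample weights to scale $C$. It only approximates the weighting idea behind a Biased Fuzzy SVM and is not the paper's implementation; the function, parameter names and label convention are assumptions of this sketch.

```python
# Illustrative sketch only: scaling the SVM penalty term for imbalanced data.
# Approximates biased (per-class) and fuzzy (per-sample) weighting; it is NOT the
# Biased Fuzzy SVM implementation described in the paper.
from sklearn.svm import SVC

def fit_weighted_svm(X, y, C=1.0, minority_label=1, minority_boost=10.0,
                     memberships=None):
    """Fit an RBF SVM where C is scaled per class and, optionally, per sample.
    Labels are assumed to be 0/1; memberships in (0, 1] act like fuzzy
    membership values that down-weight less reliable samples."""
    clf = SVC(kernel="rbf", C=C,
              class_weight={1 - minority_label: 1.0,
                            minority_label: minority_boost})
    clf.fit(X, y, sample_weight=memberships)  # None means equal sample weights
    return clf

# Usage (hypothetical arrays):
# clf = fit_weighted_svm(X_train, y_train, memberships=my_memberships)
```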