Convolutional Recurrent Neural Network and LightGBM Ensemble Model for 12-lead ECG Classification

Charilaos Zisou, Andreas Sochopoulos, Konstantinos Kitsios

Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, Greece

Abstract

Automatic abnormality detection of ECG signals is a challenging topic of great research and commercial interest. It can provide a cost-effective and accessible tool for early and accurate diagnosis, which increases the chances of successful treatment.

In this study, an ensemble classifier that identifies 24 types of cardiac abnormalities is proposed, as part of the PhysioNet/Computing in Cardiology Challenge 2020. The ensemble model consists of a convolutional recurrent neural network that is able to automatically learn deep features, and LightGBM, a gradient boosting machine that relies on hand-engineered expert features. The individual models are combined using class-specific weights and thresholds, which are tuned by a genetic algorithm.

Results from 5-fold cross validation on the full training set report a Challenge metric of 0.593, which outperforms both individual models. On the full hidden test set, the proposed architecture by "AUTh Team" achieves a score of 0.281 with an official ranking of 13/41.

1. Introduction

Cardiovascular diseases are the leading cause of death globally [1]. In order to provide effective treatment, early and accurate diagnosis is of utmost importance; however, it relies on manual ECG inspection by trained professionals, which is a time-consuming and expensive process. Attempts at automating this process are a significant step towards a cost-effective and accessible tool.

Over the years, numerous approaches have been proposed, including feature-based [2] and deep learning [3] classifiers; nevertheless, most of them have been tested on relatively homogeneous datasets with few target classes. The PhysioNet/Computing in Cardiology Challenge 2020 promotes research on automatic cardiac abnormality detection by making 12-lead ECG databases from a wide set of sources publicly available [4].

The authors propose an ensemble model that efficiently combines a convolutional recurrent neural network (CRNN) and a gradient boosting machine (GBM), termed LightGBM [5], with interpretable results as to which model has higher predictability for each class.

The rest of the paper is organized as follows: Section 2 describes the methods used in the proposed analysis, as well as the ensemble model architecture. Section 3 presents the results, while Section 4 discusses model performance. Finally, Section 5 concludes the paper.

2. Methods

The overall pipeline consists of: data relabeling and pre-processing to deal with database format discrepancies, feature extraction for the feature-based LightGBM classifier, training of both individual models (CRNN and LightGBM), and finally the creation of the ensemble. The pre-processing steps and the ensemble model architecture are depicted in the form of a block diagram in Figure 1.

Figure 1. Pre-processing and ensemble architecture.

2.1. Data Relabeling and Pre-processing

The provided datasets come from multiple sources with different sampling rates, lengths and lead gains, and were recorded under imperfect, noisy conditions. The data also contain abnormalities that the Challenge organizers decided not to score; thus, the first step of the analysis involves data relabeling and pre-processing, in order to create a single unified dataset.

Out of a total of more than 100 classes, 27 are scored, with 6 of them being pair-wise identical. All recordings with unscored diagnoses are removed, while the remaining ones are relabeled, resulting in 24 target classes.

The signals are then filtered using a 3rd order low-pass Butterworth filter with a cutoff frequency of 20 Hz for the high-frequency noise, and a notch filter with a cutoff frequency of 0.01 Hz for the baseline wander. Since 99.6% of the relabeled recordings have a sampling rate of 500 Hz and 89.2% of the data have a length of 10 seconds, the remaining recordings are also resampled at the same target frequency and truncated or padded to 10 seconds.
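As a rough illustration of this pre-processing stage, the following sketch applies the two filters, resamples to 500 Hz, and fixes the length at 10 seconds using SciPy. The zero-phase filtering (filtfilt), the notch quality factor Q, and zero-padding are assumptions; the paper specifies only the filter order and the two frequencies.

```python
import numpy as np
from scipy import signal

TARGET_FS = 500                 # Hz, target sampling rate from the paper
TARGET_LEN = 10 * TARGET_FS     # 10-second recordings

def preprocess_lead(ecg, fs):
    """Filter, resample and fix the length of a single lead (1-D array)."""
    # 3rd order low-pass Butterworth at 20 Hz for high-frequency noise
    b_lp, a_lp = signal.butter(3, 20, btype="low", fs=fs)
    ecg = signal.filtfilt(b_lp, a_lp, ecg)  # zero-phase: an assumption

    # Notch at 0.01 Hz for baseline wander; Q is an assumed value,
    # the paper only states the 0.01 Hz frequency
    b_n, a_n = signal.iirnotch(w0=0.01, Q=0.5, fs=fs)
    ecg = signal.filtfilt(b_n, a_n, ecg)

    # Resample to the 500 Hz target frequency if needed
    if fs != TARGET_FS:
        ecg = signal.resample(ecg, int(round(len(ecg) * TARGET_FS / fs)))

    # Truncate or zero-pad to exactly 10 seconds
    if len(ecg) >= TARGET_LEN:
        return ecg[:TARGET_LEN]
    return np.pad(ecg, (0, TARGET_LEN - len(ecg)))
```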
2.2. Feature Extraction

After pre-processing, feature extraction is performed so as to produce the input for LightGBM, the feature-based classifier. ECG signals contain oscillatory modes, thus wavelet multiresolution analysis (WMRA) plays an important role. The signal is decomposed into multiple scales/resolutions using the discrete wavelet transform (DWT), and information about the presence of modes associated with specific abnormalities is extracted through statistical measures, i.e. standard deviation, skewness and kurtosis. The selected mother wavelet is 'sym5' and the number of scales is empirically determined to be 8. Scales 1, 2 and 8 contain noise and baseline wander residue and are, therefore, excluded from the analysis.

Under the existence of white noise, traditional signal processing techniques often resort to assumptions of linearity and Gaussianity. However, ECG signals, like most real-life data, are inherently non-linear and non-Gaussian. For that purpose, the proposed analysis involves higher order statistics (HOS). Specifically, the wavelet bispectrum (WBS) is computed using the Gaussian complex mother wavelet, due to its time localization property [6]. From the WBS, the highest peak's amplitude, its 1st and 2nd frequency, and the standard deviation along the axis of one frequency keeping the other fixed, are extracted. Also, signal processing features such as signal energy, autocorrelation function values, signal-to-noise ratio (SNR) and R peak amplitude statistics are calculated. All of the aforementioned features are extracted from every lead.

Following [7], standard time domain, frequency domain and non-linear heart rate variability (HRV) features are computed. These include: RR interval and successive differences statistics, power spectral density features, Poincaré plot features, the HRV triangular index and sample entropy. Finally, patient sex and age are also added.
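A minimal sketch of the per-scale WMRA statistics described above, assuming the PyWavelets library; the function name and feature layout are illustrative, not the authors' code:

```python
import numpy as np
import pywt
from scipy.stats import skew, kurtosis

def wmra_features(lead, wavelet="sym5", levels=8, excluded=(1, 2, 8)):
    """Standard deviation, skewness and kurtosis of the DWT detail
    coefficients per scale; scales 1, 2 and 8 are excluded as noise
    and baseline wander residue."""
    # wavedec returns [cA_8, cD_8, cD_7, ..., cD_1]
    coeffs = pywt.wavedec(lead, wavelet, level=levels)
    feats = []
    for i, detail in enumerate(coeffs[1:]):
        scale = levels - i          # list position -> scale number
        if scale in excluded:
            continue
        feats.extend([np.std(detail), skew(detail), kurtosis(detail)])
    return np.array(feats)          # 5 kept scales x 3 statistics
```

Under this layout, applying the function to all 12 leads contributes 180 wavelet features per recording; the WBS, signal processing and HRV features would be appended to the same vector.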
2.3. LightGBM

The extracted features are used to train a LightGBM model. LightGBM is a gradient boosting decision tree (GBDT) [8] algorithm implementation proposed by Microsoft [5]. It introduces algorithmic optimizations such as the histogram method, gradient-based one-side sampling and exclusive feature bundling. These algorithms, designed to reduce time complexity, offer a well-balanced trade-off between accuracy and efficiency, especially on large-scale, high-dimensional data.

Since boosting algorithms are not affected by feature scaling/normalization, no such processing is performed; however, leads corrupted with significant motion artefacts are imputed with zeros. Boosting algorithms do not directly support multi-label classification, therefore 24 individual binary classifiers are built using one-vs-rest logic. To address binary class imbalance, the synthetic minority over-sampling technique (SMOTE) [9] is employed, and the minority class is over-sampled at 10% of the majority class. The authors in [9] recommend combining SMOTE with random under-sampling of the majority class, therefore random under-sampling by 50% is performed. This way, a more balanced class distribution and, ultimately, a more robust model are achieved. The training process uses the binary logloss function as the objective, a learning rate of 0.075, and early stopping based on the validation set area under the receiver operating characteristic curve (AUROC).

Performance is further improved by hyper-parameter tuning. Due to its ability to tackle expensive-to-evaluate functions, Bayesian optimization (BO) [10] is used. Some abnormalities may be harder to identify than others, thus parameters are tuned independently, in order to allow each binary classifier to select the appropriate model complexity for optimal performance. The tuned parameters are: feature_fraction, lambda_l1, lambda_l2, max_depth, min_child_weight, min_split_gain and num_leaves. Scores before and after applying SMOTE and BO are compared in Table 1.

    Description      AUROC
    Basic training   0.920 ± 0.001
    SMOTE            0.930 ± 0.002
    SMOTE + BO       0.935 ± 0.002

Table 1. 5-fold cross validation scores before applying anything (Basic training), after applying the synthetic minority over-sampling technique (SMOTE) and after using the Bayesian optimizer (BO).

2.4. Convolutional Recurrent Neural Network

In deep learning theory, the convolutional neural network (CNN) is one of the most developed areas, widely used in image recognition for its spatial feature extraction properties. Frequently employed in time series data, the recurrent neural network (RNN) is a different type of neural network.

2.5. Ensemble

As described in the previous sections, during both LightGBM and CRNN training, the Challenge metric is not monitored. The goal is to create classifiers that are as accurate as possible, because the Challenge metric is optimized during the final merging process. The ensemble is a class-dependent weighted average, i.e. 24 different weights and thresholds are assigned, one pair per class, so that:

\[
p_{\mathrm{ens}}[c] = p_{\mathrm{crnn}}[c] \cdot w_c + p_{\mathrm{lgbm}}[c] \cdot (1 - w_c) \tag{1}
\]

\[
l[c] =
\begin{cases}
1, & \text{if } p_{\mathrm{ens}}[c] \geq thr_c \\
0, & \text{if } p_{\mathrm{ens}}[c] < thr_c
\end{cases} \tag{2}
\]

where $w_c$ is the weight, $thr_c$ is the threshold, $p_{\mathrm{ens}}[c]$, $p_{\mathrm{crnn}}[c]$ and $p_{\mathrm{lgbm}}[c]$ are the probabilities of the ensemble, the CRNN and LightGBM, respectively, and $l[c]$ is the label of class $c$. These weights and thresholds are tuned by a genetic algorithm (GA) that uses the negative value of a 5-fold Challenge metric average as the fitness function.

Figure 2. Genetic algorithm optimal weights.

Finally, in order to reuse all training data, but also to improve model robustness, 10-fold bagging is performed, i.e. 10 different ensemble models are trained, and their predicted probabilities are averaged for the final submission.

    Metric            CRNN    LightGBM   Ensemble
    AUROC             0.908   0.935      0.946
    Challenge metric  0.511   0.549      0.593

Table 2. Average scores from 5-fold cross validation on the training set. All models use GA optimized thresholds in order to have comparable Challenge metrics.
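A minimal sketch of the class-dependent merging of Equations (1) and (2), together with the bagging average; the array shapes and the function names are illustrative assumptions:

```python
import numpy as np

def ensemble_predict(p_crnn, p_lgbm, w, thr):
    """Eq. (1)-(2): class-dependent weighted average and thresholding.

    p_crnn, p_lgbm: (n_recordings, 24) per-class probabilities
    w, thr:         (24,) GA-tuned weights and thresholds
    """
    p_ens = p_crnn * w + p_lgbm * (1.0 - w)  # Eq. (1), broadcast per class
    labels = (p_ens >= thr).astype(int)      # Eq. (2), binary labels
    return p_ens, labels

# 10-fold bagging: average the predicted probabilities of the 10
# ensemble models before the final thresholding
def bag_probabilities(prob_per_model):
    return np.mean(prob_per_model, axis=0)   # (10, n, 24) -> (n, 24)
```

The GA's fitness function would call ensemble_predict inside a 5-fold loop and return the negative Challenge metric average, matching the description above.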
