Parallel Feature Selection for Distributed-Memory Clusters

Jorge González-Domínguez (a,∗), Verónica Bolón-Canedo (b), Borja Freire (a), Juan Touriño (a)

(a) Computer Architecture Group, Universidade da Coruña, Campus de Elviña, 15071 A Coruña, Spain
(b) Department of Computer Science, Universidade da Coruña, Campus de Elviña, 15071 A Coruña, Spain

(∗) Corresponding author

Preprint submitted to Information Sciences, October 29, 2018

Abstract

Feature selection is nowadays an extremely important data mining stage in the field of machine learning due to the appearance of problems of high dimensionality. There are numerous feature selection methods in the literature, mRMR (minimum-Redundancy-Maximum-Relevance) being one of the most widely used. However, although it achieves good results in selecting relevant features, it is impractical for datasets with thousands of features. A possible solution to this limitation is the use of the fast-mRMR method, a greedy optimization of the mRMR algorithm that improves both scalability and efficiency. In this work we present fast-mRMR-MPI, a novel hybrid parallel implementation that uses MPI and OpenMP to accelerate feature selection on distributed-memory clusters. Our performance evaluation on two different systems using five representative input datasets shows that fast-mRMR-MPI is significantly faster than fast-mRMR while providing the same results. As an example, our tool needs less than one minute to select 200 features of a dataset with more than four million features and 16,000 samples on a cluster with 32 nodes (768 cores in total), while the sequential fast-mRMR required more than eight hours. Moreover, fast-mRMR-MPI distributes data so that it is able to exploit the memory available on different nodes of a cluster and thus complete analyses that fail on a single node due to memory constraints. Our tool is publicly available at https://github.com/borjaf696/Fast-mRMR.

Keywords: Machine Learning, Feature Selection, High Performance Computing, Parallel Computing

1. Introduction

Over the last few years we have witnessed an explosion in the size of datasets related to different fields such as biology, physics, mathematics, or engineering, to name just a few. This growth has occurred not only in the number of patterns or samples, but also in the number of features. If a feature can be understood as an individual property of a phenomenon that can be measured and observed, it is reasonable to think that having more features could give us better results. However, this is not the case because of the "curse of dimensionality", a term coined by Richard Bellman back in 1957 to describe the difficulty of optimization by exhaustive enumeration on product spaces [2]. This colorful term refers to the different phenomena that appear when analyzing high-dimensional datasets (with thousands of features) that do not occur in low-dimensional settings.

If we assume that datasets are usually represented by matrices in which the rows are the patterns (or samples) and the columns are the features, a possible solution to the aforementioned problem is to find "narrower" matrices which do not incur a significant loss of information through data mining. This process is known as dimensionality reduction, and one of its most popular representative methods is Feature Selection (FS).
FS can be defined as the process of selecting the relevant features and discarding the irrelevant ones, without incurring a degradation of performance [4, 11]. Moreover, FS also has the advantage of explainability, since it maintains the meaning of the original features, in contrast to feature extraction methods, which transform the original features to reduce the dimensionality of the problem. However, with the advent of Big Data and the appearance of datasets with thousands or millions of features, existing state-of-the-art FS methods have become inapplicable, and it has been necessary to accelerate them.

Acceleration of FS algorithms so that they can explore large datasets within a reasonable time has been addressed in some previous works. On the one hand, the acceleration can come from reducing the complexity of the algorithm itself. One example is the acceleration of the popular minimum-Redundancy-Maximum-Relevance (mRMR) method [20] which, although widely used, suffers from high complexity when dealing with a large number of features. A greedy variation, called fast-mRMR [22], was recently proposed to reduce runtime by assessing the redundancy of each candidate feature only against the features already selected, instead of against the whole dataset. Another greedy optimization consists in simplifying the score metric calculated for each feature [23]. Approaches such as memetic frameworks [29] or particle swarm optimization [8] have also been applied to reduce the number of features to explore during FS.
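The following minimal sketch illustrates this kind of greedy loop: the relevance of each feature is computed once, and in every iteration only the redundancy against the most recently selected feature is accumulated, instead of re-evaluating redundancy against the whole dataset. The function mutual_information and the discretized data layout are assumptions made for illustration; this is not the authors' published code.

// Hypothetical sketch of a greedy mRMR selection loop with a
// fast-mRMR-style incremental redundancy update.
#include <cstddef>
#include <limits>
#include <vector>

using Feature = std::vector<int>;  // one discretized feature column

// Assumed to be provided elsewhere: mutual information between two columns.
double mutual_information(const Feature& a, const Feature& b);

std::vector<std::size_t> greedy_mrmr(const std::vector<Feature>& features,
                                     const Feature& classLabels,
                                     std::size_t numToSelect) {
    const std::size_t n = features.size();
    std::vector<double> relevance(n);        // I(feature; class), computed once
    std::vector<double> redundancy(n, 0.0);  // accumulated I(feature; selected)
    std::vector<bool> used(n, false);
    std::vector<std::size_t> selected;

    for (std::size_t f = 0; f < n; ++f)
        relevance[f] = mutual_information(features[f], classLabels);

    while (selected.size() < numToSelect && selected.size() < n) {
        double bestScore = -std::numeric_limits<double>::infinity();
        std::size_t best = 0;
        for (std::size_t f = 0; f < n; ++f) {
            if (used[f]) continue;
            double avgRed = selected.empty() ? 0.0 : redundancy[f] / selected.size();
            double score = relevance[f] - avgRed;  // mRMR criterion
            if (score > bestScore) { bestScore = score; best = f; }
        }
        used[best] = true;
        selected.push_back(best);
        // Only redundancy against the newly selected feature is added;
        // nothing is recomputed against the whole dataset.
        for (std::size_t f = 0; f < n; ++f)
            if (!used[f])
                redundancy[f] += mutual_information(features[f], features[best]);
    }
    return selected;
}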
On the other hand, efficient exploitation of High Performance Computing (HPC) hardware resources can also be employed to accelerate applications without reducing the overall computational requirements of the algorithms. For instance, the popular WEKA toolkit [13] already includes multithreaded support for some FS algorithms in order to exploit the several cores available in current processors. The adequacy of parallel computing to accelerate some WEKA FS algorithms has been addressed in [6], analyzing the use of multiple threads on a multicore system as well as Spark instances on the cloud. FS on cloud-based systems using MapReduce has been extensively studied, including implementations of ReliefF [28], evolutionary algorithms [21] or positive approximation [14]. Another popular platform is the GPU, with a relatively low price and good energy efficiency. For instance, GPU implementations of FS algorithms exist using a non-parametric evaluation criterion [1] or the delta test [10].

In this work, we present fast-mRMR-MPI, a novel parallelization of fast-mRMR that exploits the hardware resources of modern distributed-memory systems to accelerate FS. It is implemented with a hybrid approach that uses MPI [26] to work on different nodes connected through a network, with multiple OpenMP [19] threads to exploit several cores within the same node. Our goal is to gather in the same implementation the performance advantages of greedy optimizations and parallel computing. This idea is already present in [22], with parallel versions of fast-mRMR for GPUs and cloud systems (based on Spark). However, with the popularization of HPC and Big Data, many scientists have access to powerful computational resources such as distributed-memory clusters. These systems are installed in institutes or supercomputing centers and are available to researchers through remote connection. Thus, scientists can have easy access to several multicore nodes connected through a low-latency, high-bandwidth network, and they require applications that scale with a large number of nodes. Although the Spark implementation of fast-mRMR can be used in such clusters, message-passing codes usually obtain better performance on them. Therefore, our proposed fast-mRMR-MPI consists of a novel implementation of fast-mRMR using MPI and OpenMP that significantly increases its speedup on clusters.

The idea of using message-passing to accelerate FS on multicore clusters has already been addressed in previous works. Some examples are DWFS [24], which uses a parallel genetic algorithm, and an MPI version of online FS [27]. Nevertheless, fast-mRMR-MPI provides the following advantages over them:

• The FS filter used as the basis for our work (mRMR) is extremely popular and has been proven highly accurate.

• The MPI implementations of the two previous works are based on a master-slave paradigm, which usually limits the scalability to only tens of processes. Instead, fast-mRMR-MPI employs a static and completely decentralized parallelization that, as will be shown in Section 4.2, scales up to several hundreds of cores (a simplified sketch is given below).

• fast-mRMR-MPI integrates multiple threads and processes with a hybrid MPI/OpenMP implementation to efficiently exploit the computational capabilities of modern multicore clusters. This hybrid approach has shown good scalability in other fields such as bioinformatics [9] or molecular dynamics [5].

• The code is publicly available at https://github.com/borjaf696/Fast-mRMR.

The rest of the paper is organized as follows. Section 2 explains the background on the FS algorithm and the HPC technologies necessary to understand the implementation. Our parallel implementation is described in Section 3. Section 4 provides the experimental evaluation in terms of runtime and scalability. Finally, concluding remarks and future work are presented in Section 5.

2. Background

In this section we introduce some basic concepts associated with the FS method chosen to be parallelized (fast-mRMR), as well as with the parallelization technologies employed in our implementation (OpenMP and MPI).

2.1. Feature selection

As mentioned in the Introduction, feature selection is a popular preprocessing technique that allows us to reduce the dimensionality of the problem. A correct selection of the features can lead to an improvement of the inductive learner, either in terms of learning speed, generalization capacity, or simplicity of the induced model. FS methods are typically divided into three major approaches according to their relationship with the learning algorithm: filters, wrappers, and embedded methods.
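To complement this background, the following hypothetical sketch illustrates the kind of decentralized MPI/OpenMP step outlined in the contribution list of Section 1: each process scores only the candidate features it stores locally, using OpenMP threads within the node, and the globally best feature is agreed upon with a collective reduction rather than through a master process. The function score_feature and the block distribution of features among processes are assumptions made for illustration; this is not the code published in the Fast-mRMR repository.

// Hypothetical sketch of one decentralized MPI/OpenMP selection step.
#include <mpi.h>
#include <omp.h>
#include <cstddef>

// Assumed to be provided elsewhere: mRMR score of one locally stored feature.
double score_feature(std::size_t globalIndex);

// Each rank owns the contiguous block of candidate features [begin, end).
// Assumes MPI has already been initialized (e.g., with MPI_Init_thread).
std::size_t select_best_feature(std::size_t begin, std::size_t end) {
    struct ScoreIdx { double score; int index; };  // layout matches MPI_DOUBLE_INT
    ScoreIdx local{-1.0e300, -1};
    ScoreIdx global{-1.0e300, -1};

    // Every rank scans only its own features, using all cores of the node.
    #pragma omp parallel
    {
        ScoreIdx mine{-1.0e300, -1};
        #pragma omp for nowait
        for (long long f = (long long)begin; f < (long long)end; ++f) {
            double s = score_feature((std::size_t)f);
            if (s > mine.score) { mine.score = s; mine.index = (int)f; }
        }
        #pragma omp critical
        {
            if (mine.score > local.score) local = mine;
        }
    }

    // All ranks agree on the global winner; no master process is involved.
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE_INT, MPI_MAXLOC, MPI_COMM_WORLD);
    return (std::size_t)global.index;
}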
