TUMK-ELM: a Fast Unsupervised Heterogeneous Data Learning Approach

Received May 22, 2018, accepted June 10, 2018, date of publication June 13, 2018, date of current version July 19, 2018.
Digital Object Identifier 10.1109/ACCESS.2018.2847037

LINGYUN XIANG1,2, GUOHAN ZHAO2, QIAN LI3, WEI HAO4, AND FENG LI1,2
1 Hunan Provincial Key Laboratory of Intelligent Processing of Big Data on Transportation, Changsha University of Science and Technology, Changsha 410114, China
2 School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha 410114, China
3 Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, NSW 2007, Australia
4 School of Traffic and Transportation Engineering, Changsha University of Science and Technology, Changsha 410114, China

Corresponding author: Wei Hao ([email protected])

This work was supported in part by the National Natural Science Foundation of China under Grant 61202439, in part by the Scientific Research Foundation of the Hunan Provincial Education Department of China under Grant 16A008, and in part by the Hunan Key Laboratory of Smart Roadway and Cooperative Vehicle-Infrastructure Systems under Grant 2017TP1016.

ABSTRACT Advanced unsupervised learning techniques are an emerging challenge in the big data era because of the increasing need to extract knowledge from large amounts of unlabeled heterogeneous data. Recently, many unsupervised learning efforts have been made to capture information from heterogeneous data effectively. However, most of them incur huge time consumption, which obstructs their further application in big data analytics scenarios, where enormous amounts of heterogeneous data are provided but real-time learning is strongly demanded. In this paper, we address this problem by proposing a fast unsupervised heterogeneous data learning algorithm, namely the two-stage unsupervised multiple kernel extreme learning machine (TUMK-ELM). TUMK-ELM alternately extracts information from multiple sources and learns the heterogeneous data representation with closed-form solutions, which enables its extremely fast speed. As justified by theoretical evidence, TUMK-ELM has low computational complexity at each stage, and the iteration of its two stages converges within finite steps. As experimentally demonstrated on 13 real-life data sets, TUMK-ELM gains a large efficiency improvement over three state-of-the-art unsupervised heterogeneous data learning methods (up to 140,000 times) while achieving comparable effectiveness.

INDEX TERMS Unsupervised learning, heterogeneous data, clustering, extreme learning machine, multiple kernel learning.

I. INTRODUCTION
In most real-world data analytics problems, a huge amount of data is collected from multiple sources without label information, and such data often differ in type, structure, and distribution; they are known as heterogeneous data [1], [2]. For example, in a sentiment analysis task, the data may contain texts, images, and videos from Twitter, Facebook, and YouTube. To extract knowledge from such big unlabeled heterogeneous data, advanced unsupervised learning techniques are required to (1) have a large model capacity/complexity, (2) be able to integrate information from multiple sources, and (3) have a high learning speed.

Recently, many researchers have enhanced model capacity by combining unsupervised learning with deep learning to propose deep unsupervised learning models [3], [4]. These models inherit from deep neural networks the powerful capacity to reveal highly complex patterns and extremely nonlinear relations. However, most of them fail to learn from multiple sources: they are challenged by the types, relations, and distributions of heterogeneous data because of the deep neural networks they use. Without strong supervised information, the deep neural networks may arbitrarily fit complex heterogeneous data, which leads to meaningless solutions.

One promising way to reveal information from multiple sources is multiple kernel learning (MKL, for short) [5], [6]. MKL first adopts multiple kernels to capture heterogeneous data characteristics from different sources. It then learns optimal combination coefficients for these kernels, guided by a specific learning task. In this way, MKL can effectively capture different complex distributions with different kernels and reveal the relations between these distributions through the kernel combination coefficients [7]–[9]. Despite these advantages, MKL requires supervised label information to learn the optimal kernel combination coefficients. However, label information is often unavailable or very costly in real big data analytics tasks, which limits the application of MKL.
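To make the kernel-combination idea concrete, the following minimal Python sketch (not taken from the paper; the function names and the RBF kernel choice are illustrative assumptions) builds one base kernel per source and forms their convex combination, which is the quantity whose coefficients MKL would normally fit with label supervision.

    import numpy as np

    def rbf_kernel(X, gamma=1.0):
        # Base kernel for one source; any positive semi-definite kernel works.
        sq = np.sum(X ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
        return np.exp(-gamma * d2)

    def combine_kernels(kernels, mu):
        # Convex combination K = sum_s mu_s * K_s of the per-source kernel matrices.
        mu = np.asarray(mu, dtype=float)
        assert np.all(mu >= 0) and np.isclose(mu.sum(), 1.0)
        return sum(w * K for w, K in zip(mu, kernels))

    # Toy usage: two "sources" describing the same 5 objects with different features.
    X_text, X_image = np.random.rand(5, 20), np.random.rand(5, 8)
    K = combine_kernels([rbf_kernel(X_text), rbf_kernel(X_image, gamma=0.5)],
                        mu=[0.6, 0.4])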
More recently, unsupervised MKL [10]–[12] has been studied to tackle heterogeneous data learning without supervised labels. Similar to MKL, unsupervised MKL uses multiple kernels to distill information from various sources. To enable learning without supervised labels, it introduces a kernel-based unsupervised learning objective, e.g., kernel k-means [13], to learn the optimal kernel combination coefficients. Although unsupervised MKL achieves remarkable performance in unsupervised heterogeneous data learning, most current unsupervised MKL methods suffer from a slow learning speed, which is mainly caused by the iterative numerical solutions these methods adopt to optimize the kernel combination coefficients. This does not satisfy the requirements of (1) handling a large amount of data and (2) real-time learning.
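Since kernel k-means [13] is the unsupervised objective referred to above, a short sketch of its assignment step may help: distances to cluster centres are computed purely from a (possibly combined) kernel matrix, so no labels or explicit feature maps are needed. This is the generic textbook formulation, not the paper's implementation; the names are illustrative.

    import numpy as np

    def kernel_kmeans_assign(K, labels, n_clusters):
        # One assignment pass: return, for every object i, the cluster c that
        # minimises the feature-space distance ||phi(x_i) - m_c||^2, computed as
        # K_ii - (2/|C|) * sum_{j in C} K_ij + (1/|C|^2) * sum_{j,l in C} K_jl.
        n = K.shape[0]
        dist = np.zeros((n, n_clusters))
        for c in range(n_clusters):
            idx = np.where(labels == c)[0]
            if len(idx) == 0:
                dist[:, c] = np.inf          # empty cluster: never chosen
                continue
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, idx].mean(axis=1)
                          + K[np.ix_(idx, idx)].mean())
        return dist.argmin(axis=1)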
To address the above issues, we propose a fast unsupervised heterogeneous data learning approach, namely the Two-stage Unsupervised Multiple Kernel Extreme Learning Machine (TUMK-ELM, for short). TUMK-ELM iteratively extracts information from multiple sources and learns the heterogeneous data representation with closed-form solutions. It adopts multiple kernels to capture information in heterogeneous data and learns an optimal kernel for heterogeneous data representation. Different from current unsupervised multiple kernel learning methods, it seamlessly integrates a much more efficient optimization method for the kernel combination coefficients with an effective unsupervised learning objective, which simultaneously guarantees a fast learning speed and a high learning quality. Specifically, TUMK-ELM uses the kernel k-means [13] objective function to guide the unsupervised learning process and adopts the distance-based multiple kernel extreme learning machine (DBMK-ELM, for short) [9] to learn the kernel combination coefficients. TUMK-ELM can be split into two iterative stages (a high-level sketch of this alternation is given at the end of this section). At the first stage, TUMK-ELM assigns a cluster to each object in a given dataset via the kernel k-means algorithm based on multiple kernels with a set of [...]

• [...] unsupervised fashion. It removes the obstacle of low learning speed for high-performance big data analytics.
• The first fast unsupervised multiple kernel learning method. As far as we know, the proposed method is the first fast unsupervised multiple kernel learning method. It shows a promising paradigm for the multiple kernel learning community to efficiently handle large-scale unlabeled data.
• We prove that the proposed method has low time complexity and converges within finite steps. This theoretical evidence guarantees the high learning speed of the proposed method in real large-scale multiple-source learning applications.

We present comprehensive experiments on 13 real-life data sets (Haberman, Biodeg, Seeds, Wine, Iris, Glass, Image-Segment, Libras-Movement, Frogs, Wine-Quality, Statlog, Isolet, and Shuttle) to evaluate the proposed TUMK-ELM method. We show that: (1) TUMK-ELM can efficiently learn from multiple sources and is up to 140,000 times faster than the state-of-the-art methods; (2) TUMK-ELM captures local and global relations of objects well (reflected by a retrieval task), producing results substantially better than previous unsupervised multiple kernel learning methods (up to 9.71% in terms of accuracy, 12.6% in terms of NMI, and 15% in terms of Purity); (3) TUMK-ELM converges very fast (within 2 or 3 iterations); and (4) TUMK-ELM is quite stable with respect to its key parameters. This strong evidence shows that the proposed TUMK-ELM is fit for fast unsupervised learning from multiple sources, and we expect that it can be adopted in other unsupervised big data analytics scenarios to enable better performance.

The rest of the paper is organized as follows. Section II briefly introduces the work related to this paper. Section III explains heterogeneous data learning. Section IV gives the details of the proposed TUMK-ELM. Section V presents the theoretical analysis of the properties of TUMK-ELM. Section VI demonstrates the performance of TUMK-ELM by comparing it with existing unsupervised multiple kernel learning algorithms. Lastly, Section VII concludes the paper and discusses future prospects.
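Putting the pieces together, the skeleton below is a hypothetical, high-level reading of the two-stage alternation described in this section, reusing the helpers from the two sketches above. The closed-form DBMK-ELM update of the kernel combination coefficients is only stubbed out (Section IV gives the actual solution); the stub and all names here are placeholders rather than the paper's code.

    import numpy as np

    def update_coefficients_dbmk_elm(base_kernels, labels):
        # Placeholder only: the real stage-2 update is the closed-form DBMK-ELM
        # solution derived in Section IV; uniform weights are returned here
        # just to keep the skeleton runnable.
        return np.full(len(base_kernels), 1.0 / len(base_kernels))

    def tumk_elm_sketch(base_kernels, n_clusters, max_iter=10, seed=0):
        rng = np.random.default_rng(seed)
        n = base_kernels[0].shape[0]
        mu = np.full(len(base_kernels), 1.0 / len(base_kernels))   # uniform start
        labels = rng.integers(n_clusters, size=n)                  # random initial clusters
        for _ in range(max_iter):
            K = combine_kernels(base_kernels, mu)                  # current combined kernel
            new_labels = kernel_kmeans_assign(K, labels, n_clusters)      # stage 1
            mu = update_coefficients_dbmk_elm(base_kernels, new_labels)   # stage 2
            if np.array_equal(new_labels, labels):   # the paper reports convergence in 2-3 iterations
                return new_labels, mu
            labels = new_labels
        return labels, mu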
