Single Pass Fuzzy C Means

Single Pass Fuzzy C Means

Prodip Hore, Lawrence O. Hall, and Dmitry B. Goldgof

Abstract— Recently several algorithms for clustering large data sets or streaming data sets have been proposed. Most of them address the crisp case of clustering, which cannot be easily generalized to the fuzzy case. In this paper, we propose a simple single pass (through the data) fuzzy c means algorithm that neither uses any complicated data structure nor any complicated data compression technique, yet produces data partitions comparable to fuzzy c means. We also show that our simple single pass fuzzy c means clustering algorithm, when compared to fuzzy c means, produces excellent speed-ups in clustering and thus can be used even if the data can be fully loaded in memory. Experimental results using five real data sets are provided.

(Prodip Hore, Lawrence O. Hall, and Dmitry B. Goldgof are with the Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA. Prodip Hore phone: 813-472-3634, email: [email protected]; Lawrence O. Hall phone: 813-974-4195, email: [email protected]; Dmitry B. Goldgof phone: 813-974-4055, email: [email protected].)

I. INTRODUCTION

Recently various algorithms for clustering large data sets and streaming data sets have been proposed [2], [4], [5], [6], [7], [8], [12], [13]. The focus has been primarily either on sampling [2], [7], [8], [10], [22] or on incrementally loading partial data, as much as can fit into memory at one time. The incremental approach [5], [6], [12], [13] generally keeps sufficient statistics or past knowledge of clusters from a previous run of a clustering algorithm in some data structures and uses them to improve the model in the future. Various algorithms [1], [3], [9], [15], [19] for speeding up clustering have also been proposed. While many algorithms have been proposed for large and very large data sets in the crisp case, not as much work has been done for the fuzzy case. As pointed out in [10], the crisp case may not be easily generalized to fuzzy clustering, because in fuzzy methods an example does not belong to a cluster completely but has partial membership values in most clusters. More about clustering algorithms can be found in [23].

Clustering large amounts of data takes a long time. Further, new unlabeled data sets which will not fit in memory are becoming available. To cluster them, either sub-sampling is required to fit the data in memory, or the time will be greatly affected by disk accesses, making clustering an unattractive choice for data analysis. Another source of large data sets is streaming data, where you do not store all the data but process it and delete it. There are also some very large data sets for which a little labeled data is available and the rest of the data is unlabeled, for example, computer intrusion detection. Semi-supervised clustering might be applied to this type of data [11]. We do not address this specifically in this paper, but the approach here could be adapted. In general, clustering algorithms which can process very large data sets are becoming increasingly important.

In this paper we propose a modified fuzzy c means algorithm for large or very large data sets, which will produce a final clustering in a single pass through the data with limited memory allocated. We neither keep any complicated data structures nor use any complicated compression techniques. We will show that our simple single pass fuzzy c means algorithm (SP) provides almost the same clustering quality (by loading as little as 1% of the data) as clustering all the data at once using fuzzy c means (FCM). Moreover, we will also show that our single pass fuzzy c means algorithm provides a significant speed up when compared with clustering a complete data set that is smaller than the size of memory.

II. RELATED WORK

In [1], a multistage random sampling method was proposed to speed up fuzzy c means. There were two phases in the method. In the first phase, random sampling was used to obtain an estimate of the centroids, and then fuzzy c means (FCM) was run on the full data initialized with these centroids. A speed-up of 2-3 times was reported. In [9], speed up is obtained by taking a random sample of the data and clustering it. The centroids obtained were then used to initialize clustering of the entire data set. This method is similar to that in [1]; the difference is that they used one random sample, where in [1] multiple random samples may be used. In [2], another method based on sampling for clustering large image data was proposed, where the samples were chosen by the chi-square or divergence hypothesis test. It was shown that they achieved an average speed-up of 4.2 times on image data while providing a good final partition using 24% of the total data.

Another speed up technique for image data was proposed in [3]. In this method FCM convergence is obtained by using a data reduction method. Data reduction is done by quantization, and speed-up by aggregating similar examples, which are then represented by a single weighted exemplar. The objective function of the FCM algorithm was modified to take into account the weights of the exemplars. However, the presence of similar examples might not be common in all data sets. They showed that it performs well on image data. In summary, the above algorithms attempt to speed up fuzzy c means either by reducing the number of examples through sampling, by data reduction techniques, or by providing good initialization points to reduce the number of iterations. However, the above algorithms do not seem to address the issue of clustering large or very large data sets under the constraint of limited memory. Moreover, some of them address the speed up issues for image data only, where the range of features may be limited. Some work on parallel/distributed approaches has been done, where multiple processors can be used in parallel to speed up fuzzy c means [17], [20]. In [21] a parallel version of the adaptive fuzzy Leader clustering algorithm has been discussed, whereas in [18] an efficient variant of the conventional Leader algorithm known as ARFL (Adaptive Rough Fuzzy Leader) clustering was proposed. Speed up has also been obtained by compressing training examples with exemplars; this was done by estimating the probability density functions for a probabilistic neural network (PNN) with Gaussian models.

Recently in [10], a sampling based method has been proposed for extending fuzzy and probabilistic clustering to large or very large data sets. The approach is based on progressive sampling and is an extension of eFFCM [2] to geFFCM, which can handle non-image data. However, the termination criteria for progressive sampling can be complicated, as they depend upon the features of the data set. They used four acceptance strategies to control progressive sampling based on the features of the data. The first strategy (SS1) is to accept the sampled data when a particular feature signals termination. The second (SS2) is to accept when any one of the features signals termination. The third (SS3) is to accept when all the features sequentially signal termination, and the last (SS4) accepts when all the features simultaneously signal termination. However, the method can become complicated for a large number of features, and the sample size may also grow large.

There has been research on clustering large or very large data sets [4], [5], [6], [7], [8]. Birch [4] is a data clustering method for large data sets. It loads the entire data into memory by building a CF (Clustering Feature) tree, which is a compact representation of the data using the available memory. Then the leaf entries of the CF tree can be clustered to produce a partition. A hierarchical clustering algorithm was used in their paper. It provides an optional cluster refining step in which quality can be further improved by additional passes over the data set. However, the accuracy of the data summarization depends on the available memory. It was pointed out in [5] that, depending on the size of the data, memory usage can increase significantly, as the implementation of Birch has no notion of an allocated memory buffer. In [5], a single pass hard c means clustering algorithm is proposed under the assumption of a limited memory buffer. They used various data compression techniques to obtain a compact representation of the data. In [6], another single pass scalable hard c means algorithm was proposed. This is a simpler implementation of Bradley's single pass algorithm [5], in which no data compression techniques are used. They showed that complicated data compression techniques do not improve cluster quality much, while the overhead and book-keeping of data compression techniques slow the algorithm down. Bradley's algorithm compresses data using a primary compression and a secondary compression algorithm.

In [24], two methods of scaling EM [23] to large data sets have been proposed by reducing the time spent in the E step. The first method, incremental EM, is similar to [28]: data is partitioned into blocks and the log-likelihoods are incrementally updated. In the second method, lazy EM, at scheduled iterations the algorithm performs partial E and M steps on a subset of the data. In [25], EM was scaled in a way similar to how k-means was scaled in [5]. In [27], data is incrementally loaded and modelled using Gaussians. At a first stage the Gaussian models are allowed to overfit the data, and they are then reduced at a second stage to output the final models. The methods used to scale EM may not generalize to FCM as they …
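The excerpt above only names the ingredients of the single pass approach: a weighted form of the FCM objective (as in the exemplar-based method of [3]), a limited memory buffer, and one pass over the data loaded in chunks. The sketch below is an illustration under those assumptions, not the authors' exact algorithm. It assumes a weighted objective of the form J_m(U, V) = sum_i sum_j w_i * u_ij^m * ||x_i - v_j||^2, and a simple chunked driver in which each loaded chunk is clustered together with weighted centroids condensed from the data already seen. The function names, the choice of centroid weights, and the chunking scheme are illustrative assumptions.

```python
import numpy as np

def weighted_fcm(X, w, c, m=2.0, eps=1e-5, max_iter=100, seed=0):
    """Weighted fuzzy c-means sketch: example X[i] carries weight w[i].

    With all weights equal to 1 this is the standard FCM loop: alternate
    the centroid update and the membership update until the memberships
    stop changing.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)            # rows form a fuzzy partition
    V = None
    for _ in range(max_iter):
        Um = (U ** m) * w[:, None]               # weighted, fuzzified memberships
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]              # centroid update
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        U_new = d ** (-2.0 / (m - 1.0))
        U_new /= U_new.sum(axis=1, keepdims=True)             # membership update
        if np.abs(U_new - U).max() < eps:
            return V, U_new
        U = U_new
    return V, U

def single_pass_fcm(chunks, c, m=2.0):
    """Single-pass sketch: cluster each chunk together with weighted
    centroids carried over from earlier chunks, then condense everything
    seen so far back into c weighted centroids."""
    carry_X, carry_w, V = None, None, None
    for X in chunks:
        if carry_X is None:
            data, w = X, np.ones(len(X))
        else:
            data = np.vstack([X, carry_X])
            w = np.concatenate([np.ones(len(X)), carry_w])
        V, U = weighted_fcm(data, w, c, m)
        carry_X = V
        # One plausible weight choice: the (weighted) membership mass captured
        # by each centroid, so the total mass of the data seen so far is conserved.
        carry_w = (U * w[:, None]).sum(axis=0)
    return V

# Hypothetical usage: stream roughly 9,000 synthetic 2-D points in 1%-sized chunks.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=mu, size=(3000, 2)) for mu in (-4.0, 0.0, 4.0)])
rng.shuffle(X)
print(single_pass_fcm(np.array_split(X, 100), c=3))
```

With a single chunk and unit weights the driver reduces to ordinary FCM on the full data, which is one way to sanity-check the sketch against a reference implementation before comparing single-pass and full-data partitions.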
Recommended publications
  • Deep Learning Models for Spatio-Temporal Forecasting and Analysis
    UNIVERSITY OF CALIFORNIA, IRVINE. Deep Learning Models for Spatio-Temporal Forecasting and Analysis. Dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science by Reza Asadi, 2020. Dissertation Committee: Professor Amelia Regan (Chair), Professor Michael Dillencourt, Professor R. Jayakrishnan. Contents include: spatio-temporal traffic data; machine learning problems in traffic data (traffic flow prediction, imputing incomplete traffic data, clustering of traffic flow data); the distributed network flow problem (graph theory preliminaries, eliminating affine equality constraints, a cycle-basis distributed ADMM solution, constructing a cyber layer to solve the optimal power flow in a distributed manner, numerical example); and spatio-temporal clustering of traffic data (autoencoders, deep embedded clustering, experimental results).
  • Hierarchical Fuzzy Support Vector Machine (SVM) for Rail Data Classification
    Preprints of the 19th World Congress, The International Federation of Automatic Control, Cape Town, South Africa, August 24-29, 2014. Hierarchical Fuzzy Support Vector Machine (SVM) for Rail Data Classification. R. Muscat*, M. Mahfouf*, A. Zughrat*, Y.Y. Yang*, S. Thornton**, A.V. Khondabi* and S. Sortanos*. *Department of Automatic Control and Systems Engineering, The University of Sheffield, Mappin St., Sheffield, S1 3JD, UK (Tel: +44 114 2225607) ([email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]). **Teesside Technology Centre, Tata Steel Europe, Eston Rd., Middlesbrough, TS6 6US, UK ([email protected]). Abstract: This study aims at designing a modelling architecture to deal with the imbalanced data relating to the production of rails. The modelling techniques are based on Support Vector Machines (SVMs), which are sensitive to class imbalance. An internal (Biased Fuzzy SVM) and an external (data under-sampling) class imbalance learning method were applied to the data. The performance of the techniques when implemented on the latter was better, while in both cases the inclusion of a fuzzy membership improved the performance of the SVM. Fuzzy C-Means (FCM) clustering was analysed for reducing the number of support vectors of the Fuzzy SVM model, concluding that it is effective in reducing model complexity without any significant performance deterioration. Keywords: Classification; railway steel manufacture; class imbalance; support vector machines; fuzzy support vector machines; fuzzy c-means clustering. 1 INTRODUCTION: Cost management has become a very important factor in … The paper is organised as follows. Section 2 provides an overview of the modelling data, obtained from the rail manufacturing process, and input variable selection.
  • Multi-View Fuzzy Clustering with the Alternative Learning Between Shared Hidden Space and Partition
    Multi-View Fuzzy Clustering with The Alternative Learning between Shared Hidden Space and Partition. Zhaohong Deng, Senior Member, IEEE, Chen Cui, Peng Xu, Ling Liang, Haoran Chen, Te Zhang, Shitong Wang. Abstract—As multi-view data grows in the real world, multi-view clustering has become a prominent technique in data mining, pattern recognition, and machine learning. How to exploit the relationship between different views effectively using the characteristics of multi-view data has become a crucial challenge. Aiming at this, a hidden space sharing multi-view fuzzy clustering (HSS-MVFC) method is proposed in the present study. This method is based on the classical fuzzy c-means clustering model and obtains associated information between different views by introducing a shared hidden space. Especially, the shared hidden space and the fuzzy partition can be learned alternatively and contribute to each other. Meanwhile, the proposed method uses a maximum entropy strategy to control the weights of the different views. Many multi-view clustering methods have been proposed in the relevant literature [1-6] and most of them are classified into two categories: i. Multi-view clustering methods based on single-view clustering methods, such as K-means [7, 8], Fuzzy C-means (FCM) [9, 10], Maximum Entropy Clustering (MEC) [11, 12] and Possibilistic C-means (PCM) [13, 14]. Such approaches usually consider each view independently and treat each of them as an independent clustering task, followed by ensemble methods [15, 16] to achieve the final clustering result. However, such a strategy is likely to cause poor results or unsteadiness of the algorithm because of the potentially high deviation of a certain view or significant discrepancies between the results of each view.
  • Dynamic Topology Model of Q-Learning LEACH Using Disposable Sensors in Autonomous Things Environment
    Applied Sciences, Article: Dynamic Topology Model of Q-Learning LEACH Using Disposable Sensors in Autonomous Things Environment. Jae Hyuk Cho * and Hayoun Lee. School of Electronic Engineering, Soongsil University, Seoul 06978, Korea; [email protected]. * Correspondence: [email protected]; Tel.: +82-2-811-0969. Received: 10 November 2020; Accepted: 15 December 2020; Published: 17 December 2020. Abstract: Low-Energy Adaptive Clustering Hierarchy (LEACH) is a typical routing protocol that effectively reduces transmission energy consumption by forming a hierarchical structure between nodes. LEACH on Wireless Sensor Networks (WSN) has been widely studied in the recent decade as one key technique for the Internet of Things (IoT). The main aim of autonomous things, one of the advanced forms of the IoT, is to create a flexible environment that enables movement and communication between objects anytime, anywhere, by saving computing power and utilizing efficient wireless communication capability. However, the existing LEACH method is based only on a model with a static topology, whereas an autonomous things environment also includes disposable sensors. With the increase of interest in disposable sensors, which constantly change their locations during operation, dynamic topology changes should be considered in LEACH. This study suggests a probing model for randomly moving nodes, implementing a change in the position of a node depending on the environment, such as strong winds. In addition, as a method to quickly adapt to changes in node location and construct a new topology, we propose Q-learning LEACH based on Q-table reinforcement learning and Fuzzy-LEACH based on the Fuzzifier method.
  • Parameter Specification for Fuzzy Clustering by Q-Learning
    Parameter Specification for Fuzzy Clustering by Q-Learning. Chi-Hyon Oh, Eriko Ikeda, Katsuhiro Honda and Hidetomo Ichihashi. Department of Industrial Engineering, College of Engineering, Osaka Prefecture University, 1-1, Gakuencho, Sakai, Osaka, 599-8531, JAPAN. E-mail: [email protected]. Abstract: In this paper, we propose a new method to specify the sequence of parameter values for a fuzzy clustering algorithm by using Q-learning. In the clustering algorithm, we employ similarities between two data points and distances from data to cluster centers as the fuzzy clustering criteria. The fuzzy clustering is achieved by optimizing an objective function which is solved by the Picard iteration. The fuzzy clustering algorithm might be useful but its result depends on the parameter specifications. To conquer the dependency on the parameter values, we use Q-learning to learn the sequential update for the parameters during the iterative optimization procedure of the fuzzy clustering. In the numerical example, we show how the clustering validity improves with the obtained parameter update sequences. Keywords: Parameter Specification, Fuzzy Clustering, Reinforcement Learning. I. Introduction: Many fuzzy clustering algorithms have been proposed since Ruspini developed the first one [1]. Fuzzy ISODATA [2] and its extension, Fuzzy c-means [3], are popular fuzzy clustering algorithms using distance-based objective function methods. Other approaches are driven by several fuzzy clustering criteria. First, we propose a new fuzzy clustering algorithm, which adopts the similarities between two data points and the distances from data to cluster centers as the fuzzy clustering criteria to be optimized. We can expect that our algorithm might ably partition the data set into some clusters.
  • Fuzzy Clustering Based Image Denoising and Improved Support Vector Machine (ISVM) Based Nearest Target for Retina Images
    International Journal of Recent Technology and Engineering (IJRTE), ISSN: 2277-3878, Volume-8 Issue-5, January 2020. Fuzzy Clustering Based Image Denoising and Improved Support Vector Machine (ISVM) Based Nearest Target for Retina Images. B. Sivaranjani, C. Kalaiselvi. Abstract: A developing automated retinal disease diagnostic system based on image analysis has now demonstrated its ability in clinical research. Though, the accuracy of these systems has been negotiated repeatedly, generally due to the basic effort in perceiving the abnormal structures as well as due to deficits in gaining that affect image quality. Using fuzzy clustering, the noises contained in the samples are omitted. Unless the noises are taken away from the samples, dimension reduction initializes the optimization of Mutual Information (MI) as just a coarse localization process that narrows the domain of optimization and tries to avoid local optimization. Furthermore, the suggested work closer to the retina picture is done using the Improved Support Vector Machine (ISVM) system used in the area-based registration, offering a reliable approach. This is particularly relevant in areas in which retinal specialists may not be easily available for diabetic retinopathy and premature retinopathy (ROP) screening. In this section, the difficult issue of matching and recording retinal images was discussed to allow for new applications for teleophthalmology. A new technique for locating optic discs in retinal images was suggested in [1]. The first phase of certain vessel segmentation, disease diagnosis, and retinal recognition algorithms would be to locate the optic disc and its core. In [2] the latest research on the essential characteristics and features of DR eye telehealth services was evaluated in the categories listed:
  • An Energy-Efficient Spectrum-Aware Reinforcement Learning-Based Clustering Algorithm for Cognitive Radio Sensor Networks
    Sensors 2015, 15, 19783-19818; doi:10.3390/s150819783 OPEN ACCESS sensors ISSN 1424-8220 www.mdpi.com/journal/sensors Article An Energy-Efficient Spectrum-Aware Reinforcement Learning-Based Clustering Algorithm for Cognitive Radio Sensor Networks Ibrahim Mustapha 1,2,†, Borhanuddin Mohd Ali 1,*, Mohd Fadlee A. Rasid 1,†, Aduwati Sali 1,† and Hafizal Mohamad 3,† 1 Department of Computer and Communications Systems Engineering and Wireless and Photonics Research Centre, Faculty of Engineering, Universiti Putra Malaysia, 43400 Serdang Selangor, Malaysia; E-Mails: [email protected] (I.M.); [email protected] (M.F.A.R.); [email protected] (A.S.) 2 Department of Electrical and Electronics Engineering, Faculty of Engineering, University of Maiduguri, P. M. B. 1069, Maiduguri, Nigeria 3 Wireless Networks and Protocol Research Lab, MIMOS Berhad, Technology Park Malaysia, 57000 Kuala Lumpur, Malaysia; E-Mail: [email protected] † These authors contributed equally to this work. * Author to whom correspondence should be addressed; E-Mail: [email protected]; Tel.: +60-3-8946-6443; Fax: +60-3-8656-7127. Academic Editor: Davide Brunelli Received: 25 May 2015 / Accepted: 31 July 2015 / Published: 13 August 2015 Abstract: It is well-known that clustering partitions network into logical groups of nodes in order to achieve energy efficiency and to enhance dynamic channel access in cognitive radio through cooperative sensing. While the topic of energy efficiency has been well investigated in conventional wireless sensor networks, the latter has not been extensively explored. In this paper, we propose a reinforcement learning-based spectrum-aware clustering algorithm that allows a member node to learn the energy and cooperative sensing costs for neighboring clusters to achieve an optimal solution.
  • A Hybrid Technique Based on Fuzzy-C-Means and SVM for Detection of Brain Tumor in MRI Images Dipali B.Birnale¹, Prof .S
    ISSN XXXX XXXX © 2016 IJESC. Research Article, Volume 6 Issue No. 8. A Hybrid Technique based on Fuzzy-C-Means and SVM for Detection of Brain Tumor in MRI Images. Dipali B. Birnale¹, Prof. S. N. Patil² (Associate Professor²), Department of Electronics Engineering, PVPIT Budhgaon, Maharashtra, India. Abstract: MRI is the most important technique in detecting brain tumors. In this paper data mining methods are used for classification of MRI images. A new hybrid technique based on the support vector machine (SVM) and fuzzy c-means for brain tumor classification is proposed. The proposed algorithm is a combination of support vector machine (SVM) and fuzzy c-means, a hybrid technique for prediction of brain tumors. In this algorithm the image is de-noised first and then morphological operations are used for skull stripping. Fuzzy c-means (FCM) clustering is used for segmentation of the image to detect the suspicious region in the brain MRI image, and features are extracted from the brain image, after which the SVM technique is applied to classify the brain MRI images, which provides accurate and more effective results for classification of brain MRI images. Keywords: Data Mining, MRI, De-noising, Fuzzy C-means clustering, Support Vector Machine (SVM). I. INTRODUCTION: In medical image segmentation, brain tumor detection is one of the most challenging tasks, since brain images are complicated and tumors can be analyzed only by expert physicians. A brain tumor is a group of abnormal cells that grows inside of the brain or around the brain. The tumor may be primary or secondary.
  • Deep Adaptive Fuzzy Clustering for Evolutionary Unsupervised Representation Learning Dayu Tan, Zheng Huang, Xin Peng, Member, IEEE, Weimin Zhong, and Vladimir Mahalec
    Deep Adaptive Fuzzy Clustering for Evolutionary Unsupervised Representation Learning. Dayu Tan, Zheng Huang, Xin Peng, Member, IEEE, Weimin Zhong, and Vladimir Mahalec. Abstract—Cluster assignment of large and complex images is a crucial but challenging task in pattern recognition and computer vision. In this study, we explore the possibility of employing fuzzy clustering in a deep neural network framework. Thus, we present a novel evolutionary unsupervised learning representation model with iterative optimization. It implements the deep adaptive fuzzy clustering (DAFC) strategy that learns a convolutional neural network classifier from only unlabeled data samples. DAFC consists of a deep feature quality-verifying model and a fuzzy clustering model, where a deep feature representation learning loss function and embedded fuzzy clustering with weighted adaptive entropy are implemented. We join fuzzy clustering to the deep reconstruction model, in which fuzzy membership is utilized to represent a clear structure of deep cluster assignments and to jointly optimize the deep representation learning and clustering. Clustering has been extensively studied and applied to different fields [1]. This type of method iteratively groups the features with a standard division method, for example, k-means [2] and fuzzy c-means [3], which use the subsequent assignments as supervision to update the clusters. Numerous different clustering methods are calculated by distance functions, and their combinations with embedding algorithms have been explored in the literature [4, 5]. The conventional method always employs Euclidean distance to express this similarity, while stochastic neighbor embedding (SNE) converts this distance relationship into a conditional probability to express similarity. In this way, the distance between similar sample points in high-dimensional space is also similar to that in low-dimensional space.
  • Intrusion Detection Using Fuzzy Clustering and Artificial Neural
    Advances in Neural Networks, Fuzzy Systems and Artificial Intelligence. Intrusion Detection using Fuzzy Clustering and Artificial Neural Network. Shraddha Surana, Research Scholar, Department of Computer Engineering, Vishwakarma Institute of Technology, Pune, India. [email protected]. Abstract: This paper presents the outline of a hybrid Artificial Neural Network (ANN) based on fuzzy clustering and neural networks for an Intrusion Detection System (IDS). While neural networks are effective in capturing the non-linearity in the data provided, they also have certain limitations, including the requirement of high computational resources. By clustering the data, each ANN is trained on a particular subset of the data, reducing the time required and achieving a higher detection rate. The outline of the implemented algorithm is as follows: first the data is divided into smaller homogeneous groups/subsets using a fuzzy clustering technique; subsequently, a separate ANN is trained on each subset. A host based IDS makes use of log files from individual computers, whereas a network based IDS captures packets from the network and analyses their contents. An online IDS is able to flag an intrusion while it is happening, whereas an offline IDS analyses records after the event has occurred and raises a flag indicating that a security breach has occurred since the last intrusion detection check was performed. An anomaly based IDS detects deviations from normal behaviour, while a misuse based IDS compares activities on the system with known behaviours of attackers. This paper outlines a hybrid approach using Artificial Neural Networks and fuzzy clustering to detect intrusions in a network. The method outlined is network based and extracts features …
  • Developing Support Vector Machine with New Fuzzy Selection for the Infringement of a Patent Rights Problem
    mathematics Article Developing Support Vector Machine with New Fuzzy Selection for the Infringement of a Patent Rights Problem Chih-Yao Chang 1,2 and Kuo-Ping Lin 3,4,* 1 Graduate Institute of Technology, Innovation & Intellectual Property Management, National Cheng Chi University, Taipei 116302, Taiwan; [email protected] 2 Taiwan Development & Research Academia of Economic & Technology, Taipei 104, Taiwan 3 Department of Industrial Engineering and Enterprise Information, Tunghai University, Taichung 40704, Taiwan 4 Faculty of Finance and Banking, Ton Duc Thang University, Ho Chi Minh City 758307, Vietnam * Correspondence: [email protected]; Tel.: +886-4-2359-4319 (ext. 111) Received: 16 June 2020; Accepted: 30 July 2020; Published: 1 August 2020 Abstract: Classification problems are very important issues in real enterprises. In the patent infringement issue, accurate classification could help enterprises to understand court decisions to avoid patent infringement. However, the general classification method does not perform well in the patent infringement problem because there are too many complex variables. Therefore, this study attempts to develop a classification method, the support vector machine with new fuzzy selection (SVMFS), to judge the infringement of patent rights. The raw data are divided into training and testing sets. However, the data quality of the training set is not easy to evaluate. Effective data quality management requires a structural core that can support data operations. This study adopts new fuzzy selection based on membership values, which are generated from fuzzy c-means clustering, to select appropriate data to enhance the classification performance of the support vector machine (SVM). An empirical example based on the SVMFS shows that the proposed SVMFS can obtain a superior accuracy rate.
  • A Unified Deep Learning Framework for Text Data Mining Using Deep Adaptive Fuzzy Clustering
    European Journal of Molecular & Clinical Medicine, ISSN 2515-8260, Volume 07, Issue 09, 2020. A Unified Deep Learning Framework for Text Data Mining Using Deep Adaptive Fuzzy Clustering. S. Praveen¹, Dr. R. Priya, MCA, M.Phil, Ph.D². Research Scholar¹, Associate Professor and Head, Department of Computer Science², Sree Narayana Guru College, Coimbatore. Abstract: Text clustering is an important method for effectively organising, summarising, and navigating text information. The purpose of the clustering is to distinguish and classify the similarity among text instances as labels. However, in the absence of labels, the text data to be clustered cannot be used to train a deep learning based text representation model, as it contains high dimensional data with complex latent distributions. To address this problem, a new unified deep learning framework for text clustering based on deep representation learning is proposed in this paper, using deep adaptive fuzzy clustering to provide a soft partition of the data. Initially, reconstruction of the original data into a feature space is carried out using the word embedding process of deep learning. The word embedding process is a learnt representation of the text or sentence towards clustering into a vector containing words, characters and N-grams of words. Further clustering of the feature vector is carried out with a max pooling layer to determine the inter-cluster separability and intra-cluster compactness. Learning of the feature space is processed with gradient descent, and the feature vector is fine tuned on the basis of discriminant information using hyperparameter optimization with fewer epochs. Finally, representation learning and soft clustering are achieved using deep adaptive fuzzy clustering, and quantum annealing based optimization is employed. The results demonstrate that the clustering approach is more stable and accurate than the traditional FCM clustering algorithm when employing k-fold validation for evaluation.