Deep Clustering for Improved Inter-Cluster Separability and Intra-Cluster Homogeneity with Cohesive Loss


Deep Clustering for Improved Inter-Cluster Separability and Intra-Cluster Homogeneity with Cohesive Loss

IEICE TRANS. INF. & SYST., VOL.E104–D, NO.5, MAY 2021 (LETTER)

Byeonghak KIM†, Murray LOEW††a), David K. HAN†††b), Nonmembers, and Hanseok KO††††c), Member

† The author is with the Dept. of Visual Information Processing, Korea University, Seoul, 02841, Korea.
†† The author is with the Dept. of Biomedical Engineering, George Washington University, Washington DC, USA.
††† The author is with the Dept. of Electrical and Computer Engineering, Drexel University, Philadelphia, PA, USA.
†††† The author is with the School of Electrical Engineering, Korea University, Seoul, 02841, Korea.
c) Corresponding author: Hanseok KO.
Manuscript received October 26, 2020; revised December 31, 2020; publicized January 28, 2021. DOI: 10.1587/transinf.2020EDL8138

SUMMARY  To date, many studies have employed clustering for the classification of unlabeled data. Deep separate clustering applies several deep learning models to conventional clustering algorithms to more clearly separate the distributions of the clusters. In this paper, we employ a convolutional autoencoder to learn the features of input images. Following this, k-means clustering is conducted using the encoded-layer features learned by the convolutional autoencoder. A center loss function is then added to aggregate the data points into clusters and increase the intra-cluster homogeneity. Finally, we calculate and increase the inter-cluster separability. We combine all loss functions into a single global objective function. Our new deep clustering method surpasses the performance of existing clustering approaches when compared in experiments under the same conditions.

key words: separate clustering, convolutional autoencoder, intra-cluster homogeneity, inter-cluster separability

1. Introduction

In supervised learning, data classification is a particularly important task. However, individually labeling data points requires significant time and effort, and it is often impossible to fully label datasets for research applications. To overcome this problem, clustering via unsupervised learning has been proposed and is widely utilized. Clustering effectively groups unlabeled data based on specific criteria of similarity and can automatically extract semantic information that humans cannot abstract.

Many data-mining researchers have investigated various types of clustering. Both hard and soft clustering are possible, depending on whether an observation point belongs to one or to multiple clusters. Hard clustering includes k-means clustering [1], k-medoid clustering [2], density-based spatial clustering of applications with noise (DBSCAN) [3], hierarchical clustering [4], and random binary pattern of patch clustering (RBPPC) [5]. Soft clustering includes Gaussian mixture model-based clustering [6] and fuzzy clustering [7]. Recently, a deep clustering method that extracts features to be used for data representation based on deep learning has been proposed.

An autoencoder is an unsupervised feature extraction tool used in deep clustering. Representative autoencoder-based deep clustering algorithms include autoencoder-based data clustering (ABDC) [8], deep embedded clustering (DEC) [9], improved deep embedded clustering (IDEC) [10], discriminatively boosted clustering (DBC) [11], and deep embedded regularized clustering (DEPICT) [12]. They are all based on the idea that the neural network learns the features that are suitable for clustering. In addition, deep embedded clustering with data augmentation (DEC-DA) [13] employs random rotation, cropping, shearing, and shifting of the data to generalize the model. Yet, these techniques exhibit neither strong inter-cluster separability nor robust intra-cluster homogeneity, leaving room for improvement.

In this paper, we propose a new clustering algorithm that separates clusters more effectively than existing deep clustering algorithms by increasing both the inter-cluster separability and the intra-cluster homogeneity. Our deep separate clustering algorithm focuses on making scattered samples characterized by a Gaussian distribution more cohesive when the data are very similar and, when the similarity is low, on separating the data points into their corresponding clusters. This process can also be effective when processing cluster-overlapping data. Our proposed method greatly improves on the performance of existing deep clustering algorithms when tested on public datasets.

In Sect. 2, the proposed method is described. The process and results of the experiment are presented in Sect. 3. Finally, Sect. 4 provides the conclusions.

2. Proposed Deep Clustering with Cohesive Loss

In this section, we propose a deep clustering algorithm based on the separation between clusters that uses four loss functions for global optimization. First, deep features are learned while reconstructing unlabeled data with a convolutional autoencoder (CAE). Upon completion of the pre-training of the autoencoder, the second stage focuses on clustering the deep features, in an end-to-end manner, using the distances between the features assigned to the same cluster and the distances between data points assigned to different clusters.

2.1 Feature Extraction with a Convolutional Autoencoder

An autoencoder is a deep learning approach that learns the features of unlabeled data and is widely employed as a feature extractor. The structure is shown in Fig. 1.

Fig. 1  The structure of the feature extraction process with a convolutional autoencoder for the Fashion-MNIST dataset. The size of the embedded features is smaller than that of input X. The learned features can be used for clustering.

In the training set X = \{x_i \in \mathbb{R}^D\}_{i=1}^{m}, x_i denotes the i-th training data point from m data points, and D is the dimension of x_i. The autoencoder loss function is

L_{ae} = \|X - \hat{X}\|_2^2    (1)

In addition, because the autoencoder learns features in order to reconstruct the input X as accurately as possible as the output \hat{X} of the encoder and decoder, the autoencoder loss is also referred to as the reconstruction loss. Equation (1) can thus be expressed as follows:

L_r = \frac{1}{m}\sum_{i=1}^{m}\|x_i - \hat{x}_i\|_2^2    (2)

The embedded features learned by Eq. (2) become the input for the subsequent clustering algorithm described in Sects. 2.2 and 2.3.
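To make the feature-extraction stage concrete, the following is a minimal PyTorch sketch of a convolutional autoencoder trained with the reconstruction loss of Eq. (2). The layer sizes, the embedded dimension of 10, and the helper names (CAE, reconstruction_loss) are illustrative assumptions; the letter's actual architecture in Fig. 1 is not reproduced here. In the first training stage only this reconstruction objective is minimized; the clustering losses of Sects. 2.2 and 2.3 are added afterwards.

```python
# Minimal convolutional autoencoder sketch for Sect. 2.1 (illustrative only).
# Layer sizes and the embedded dimension d = 10 are assumptions, not the
# letter's exact CAE configuration.
import torch
import torch.nn as nn

class CAE(nn.Module):
    def __init__(self, embed_dim: int = 10):
        super().__init__()
        # Encoder: 1 x 28 x 28 image -> d-dimensional embedded feature z_i
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1),   # 28 -> 14
            nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1),  # 14 -> 7
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, embed_dim),
        )
        # Decoder: z_i -> reconstruction x_hat_i
        self.decoder = nn.Sequential(
            nn.Linear(embed_dim, 64 * 7 * 7),
            nn.ReLU(),
            nn.Unflatten(1, (64, 7, 7)),
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1),  # 7 -> 14
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 3, stride=2, padding=1, output_padding=1),   # 14 -> 28
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        x_hat = self.decoder(z)
        return z, x_hat

def reconstruction_loss(x, x_hat):
    # Eq. (2): L_r = (1/m) * sum_i ||x_i - x_hat_i||_2^2
    return ((x - x_hat) ** 2).flatten(1).sum(dim=1).mean()

# Pre-training step (stage one): learn features by reconstructing unlabeled data.
model = CAE()
x = torch.rand(8, 1, 28, 28)          # dummy mini-batch standing in for Fashion-MNIST
z, x_hat = model(x)
loss = reconstruction_loss(x, x_hat)  # minimized with any standard optimizer, e.g. Adam
```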
2.2 Loss Function for Deep Embedded Clustering

Autoencoder-based clustering is also known as deep embedded clustering. The main idea of deep embedded clustering algorithms is to perform clustering on the features obtained from autoencoders. As mentioned earlier, DEC-DA [13] also improves model generalization using data augmentation techniques. For this reason, we employ data augmentation as well.

The features in deep embedded clustering generally have smaller dimensions than the input and output data. In this paper, we perform clustering on the embedded features Z = \{z_i \in \mathbb{R}^d\}_{i=1}^{m}, where d is the embedded feature size and m is the number of data points. The clustering loss function is calculated by applying the Kullback-Leibler divergence (KLD) [15] to the Student's t-distribution [14] results obtained by performing soft deep clustering with the network in Sect. 2.1.

In more detail, k-means clustering is performed on Z to obtain M = \{\mu_j \in \mathbb{R}^d\}_{j=1}^{n}, where \mu_j is the j-th of the n centroids. The soft assignment q_{ij} between the embedded feature z_i and the centroid \mu_j, measured with the Student's t-distribution, is

q_{ij} = \frac{(1 + \|z_i - \mu_j\|^2)^{-1}}{\sum_{j}(1 + \|z_i - \mu_j\|^2)^{-1}}    (3)

The target distribution p_{ij} can be expressed based on q_{ij}:

p_{ij} = \frac{Q_{ij}^{T}\, q_{ij} / \sum_{i} q_{ij}}{\sum_{j}\left(Q_{ij}^{T}\, q_{ij} / \sum_{i} q_{ij}\right)}    (4)

where Q_{ij} = (e^{q_{ij}} - 1)/(e - 1) and T > 0. We set this T value to 3. The range of Q is 0 to 1 because q_{ij} has a value from 0 to 1. In Eq. (4), the exponential function constructs p by adding nonlinearity to q. Therefore, the target distribution p enhances the prediction by giving more emphasis to the cluster assignments with high probability in q. In addition, the loss is regularized to prevent distortion of the entire feature space by the different contributions to the loss that depend on the density of the cluster, \sum_{i} q_{ij}. Finally, the clustering loss function is

L_c = D_{KL}(P \| Q) = \sum_{i}\sum_{j} p_{ij} \log \frac{p_{ij}}{q_{ij}}    (5)
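The soft assignment, target distribution, and KLD loss above can be sketched as follows. Because Eq. (4) is partly garbled in this copy, the target_distribution below follows the reconstruction given above (weighting by Q^T and normalizing by the soft cluster frequency, in the style of DEC) and should be read as an assumption rather than the letter's exact formula; the temperature T = 3 and the helper names are likewise illustrative.

```python
# Sketch of the clustering loss in Sect. 2.2 (Eqs. (3)-(5) as reconstructed above).
import math
import torch

def soft_assignment(z, mu):
    # Eq. (3): Student's t similarity between embedded features z (m x d)
    # and cluster centroids mu (n x d), normalized over the clusters.
    q = 1.0 / (1.0 + torch.cdist(z, mu) ** 2)          # m x n
    return q / q.sum(dim=1, keepdim=True)

def target_distribution(q, T: float = 3.0):
    # Eq. (4) (reconstructed): weight each assignment by Q^T, with
    # Q = (exp(q) - 1) / (e - 1), divide by the soft cluster frequency
    # sum_i q_ij, and renormalize over the clusters. T = 3 as in the text.
    Q = (torch.exp(q) - 1.0) / (math.e - 1.0)
    weight = Q.pow(T) * q / q.sum(dim=0, keepdim=True)
    return weight / weight.sum(dim=1, keepdim=True)

def clustering_loss(p, q):
    # Eq. (5): L_c = KL(P || Q) = sum_ij p_ij * log(p_ij / q_ij)
    return (p * (p / q).log()).sum()

# Example: 8 embedded points, 3 centroids (e.g. initialized by k-means on Z).
z = torch.randn(8, 10)
mu = torch.randn(3, 10)
q = soft_assignment(z, mu)
p = target_distribution(q).detach()   # the target is held fixed while q is refined
loss = clustering_loss(p, q)
```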
2.3 The Proposed Discriminative Cluster Loss Function

The final goal of clustering is to assign data points with strong similarity to the same cluster under a given similarity measure. Therefore, to achieve robust clustering, both the intra-cluster homogeneity and the inter-cluster separability should be increased. In this study, L_W is used to minimize the intra-cluster distance and L_B reduces the inter-cluster proximity.

L_W is a function that represents the variation between the cluster centroids and the inner cluster points, as in [16]. L_W increases the homogeneity of the inner cluster points by making the data points in the same cluster gather around their centroid. L_W is represented by (6):

L_W = \frac{1}{2}\sum_{i=1}^{m}\|z_i - \mu_{y_i}\|_2^2    (6)

where y_i is the predicted cluster label for the i-th sample and \mu_{y_i} \in \mathbb{R}^d is the centroid of the y_i-th cluster. \mu_{y_i} can be updated during the iterative training process. L_B is a function calculated from the inter-cluster cosine distance. Thus, a smaller L_B means larger inter-cluster distances. L_B can be derived as follows:

L_B = \frac{1}{2}\cdot\frac{1}{{}_{n}C_{2}}\cdot\sum_{j=0}^{n-1}\sum_{k=0}^{n-1}\log\!\left(\mathrm{ReLU}\!\left(\frac{\mu_j \cdot \mu_k}{\|\mu_j\|_2\,\|\mu_k\|_2}\right) + 1\right)    (7)

where n denotes the number of clusters and \mu_j and \mu_k are the j-th and k-th centroids of the embedded features from the CAE, respectively. {}_{n}C_{2} represents the number of all combinations when pairing two of the n clusters. An ReLU activation function is used to prevent a negative cosine similarity value.
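A sketch of the two discriminative terms follows, again with hypothetical helper names (center_loss, separability_loss); Eq. (7) is implemented as reconstructed above. The final commented line combines the reconstruction, clustering, and discriminative losses with unit weights, which is an assumption: the letter states that all loss functions are combined into a single global objective but its balancing coefficients are not shown in this excerpt.

```python
# Sketch of the discriminative cluster losses in Sect. 2.3 (Eqs. (6)-(7)).
import torch
import torch.nn.functional as F

def center_loss(z, mu, labels):
    # Eq. (6): L_W = (1/2) * sum_i ||z_i - mu_{y_i}||_2^2
    # z: m x d embedded features, mu: n x d centroids, labels: m predicted labels y_i.
    return 0.5 * ((z - mu[labels]) ** 2).sum()

def separability_loss(mu):
    # Eq. (7): (1/2) * (1/nC2) * sum_{j,k} log(ReLU(cos(mu_j, mu_k)) + 1);
    # a smaller L_B corresponds to larger angles between cluster centroids.
    n = mu.shape[0]
    mu_norm = F.normalize(mu, dim=1)      # rows scaled to unit L2 norm
    cos = mu_norm @ mu_norm.t()           # n x n cosine similarities
    pair_count = n * (n - 1) / 2          # nC2 combinations of two clusters
    return 0.5 / pair_count * torch.log(F.relu(cos) + 1.0).sum()

# Example with the quantities from the previous sketches.
z = torch.randn(8, 10)
mu = torch.randn(3, 10)
labels = torch.randint(0, 3, (8,))
l_w = center_loss(z, mu, labels)
l_b = separability_loss(mu)
# Global objective (assumed unit weights), with l_r and l_c from the earlier sketches:
# total_loss = l_r + l_c + l_w + l_b
```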