Density Estimation Over Data Stream

Aoying Zhou, Zhiyuan Cai, Li Wei
Dept. of Computer Science, Fudan University
220 Handan Rd., Shanghai, 200433, P.R. China
[email protected], [email protected], [email protected]

ABSTRACT

Density estimation is an important but costly operation for applications that need to know the distribution of a data set. Moreover, when the data comes as a stream, traditional density estimation methods cannot cope with it efficiently. In this paper, we examine the problem of computing a density function over data streams and develop a novel method to solve it. Our algorithm uses a new concept, the M-Kernel, and has the following characteristics: (1) its running time is linear in the data size; (2) the whole computation is kept within a limited amount of main memory; (3) its accuracy is comparable to that of traditional methods; (4) a usable density model is available at any time during processing; and (5) it is flexible and can fit different stream models. Analytical and experimental results show the efficiency of the proposed algorithm.

Keywords

Data stream, kernel density estimation

1. INTRODUCTION

Recently, the processing of data streams has been found to be very important in a wide range of scientific and commercial applications. A data stream is a model in which a large volume of data arrives continuously and it is either unnecessary or impractical to store the data in any form. For example, bank transactions, call records of a telecommunications company, and hit logs of a web server are all data of this kind. In these applications, decisions should be made as soon as the events (data) are received; processing accumulated data periodically in batches is usually not an option. Moreover, data streams are also regarded as a model for accessing large data sets stored in secondary memory, where performance requirements necessitate access by linear scans. In most cases, compared to the total size of the data in auxiliary storage, main memory is so small that only a small portion of the data can be loaded for processing at any time, and under the time constraints only one scan of the data is allowed. Therefore, despite all our efforts in scaling up data mining algorithms, they still cannot handle such data efficiently.

How to organize and extract useful information from a data stream efficiently is a problem faced by researchers in almost all fields. Though there are many algorithms for data mining, they are not designed for data streams.

To process high-volume, open-ended data streams, a method should meet some stringent criteria. In [6], Domingos presents a series of design criteria, which are summarized as follows:

1. The time needed by the algorithm to process each data record in the stream must be small and constant; otherwise, it is impossible for the algorithm to keep pace with the data.

2. Regardless of the number of records the algorithm has seen, the amount of main memory used must be fixed.

3. It must be a one-pass algorithm, since in most applications either the data is no longer available or there is no time to revisit old data.

4. It must be able to make a usable model available at any time, since we may never reach the end of the stream.

5. The model must be up-to-date at any point in time; that is, it must keep up with changes in the data.

The first two criteria are the most important and the hardest to achieve.
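These criteria can be illustrated with a toy one-pass summary. The sketch below, a simple fixed-width streaming histogram written purely as an illustration (it is not the method proposed in this paper), processes each record in constant time, uses memory independent of the stream length, and can report a coarse density model at any moment; it assumes the value range and bin count are known in advance.

```python
class StreamingHistogram:
    """One-pass, fixed-memory summary of a numeric stream:
    a fixed number of equal-width bins over a known value range.
    Constant work per record; memory independent of stream length."""

    def __init__(self, lo, hi, n_bins=64):
        self.lo, self.hi, self.n_bins = lo, hi, n_bins
        self.counts = [0] * n_bins
        self.total = 0

    def add(self, x):
        # Constant-time update; out-of-range values are clamped
        # into the boundary bins.
        i = int((x - self.lo) / (self.hi - self.lo) * self.n_bins)
        i = min(max(i, 0), self.n_bins - 1)
        self.counts[i] += 1
        self.total += 1

    def density(self, x):
        # A usable (if coarse) density model, available at any time.
        if self.total == 0:
            return 0.0
        width = (self.hi - self.lo) / self.n_bins
        i = min(max(int((x - self.lo) / width), 0), self.n_bins - 1)
        return self.counts[i] / (self.total * width)
```

Such a histogram satisfies criteria 1–4 but only crudely; the rest of the paper develops a far more accurate stream-ready estimator.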
Although much work has been done on scalable data mining algorithms, most algorithms still require main memory that grows in proportion to the data size, and their computational complexity is much higher than linear in the data size. They are therefore not equipped to cope with data streams, because sooner or later they will exhaust all available main memory or fall behind the data.

Recently proposed techniques include clustering algorithms for objects that arrive as a stream [8, 14], computing decision tree classifiers when the classification examples arrive as a stream [5, 10, 7], and approximately computing medians and quantiles in one pass [12, 13, 2].

Another common but useful technique for data streams is density estimation. Given a sequence of independent random variables drawn identically from a specific distribution, the density estimation problem is to construct a density function of the distribution based on the data drawn from it. Density estimation is a very important problem in numerical analysis, data mining, and many fields of scientific research [9, 15]. Knowing the density gives us an idea of the distribution of the data set. Moreover, based on the density we can quickly find the dense or sparse areas in the data set, and medians and other quantiles can be easily calculated. So in many data mining applications, such as density-biased sampling and density-based clustering, density estimation is an inevitable step [3, 11]. Kernel density estimation is a widely studied nonparametric density estimation method [4], but it becomes computationally expensive when large data sets are involved. Zhang et al. provide a method to obtain a fast kernel estimate of the density in very large data sets by constructing a CF-tree on the data [16]. Calculating a density function over a data stream has many practical applications; a straightforward one is verifying whether two or more data streams are drawn from the same distribution.

1.1 Our Contribution

The contributions of the paper can be summarized as:

1. To the best of our knowledge, this is the earliest work on density estimation over data streams.

2. We put forward a new concept, the M-Kernel, which resolves the incompatibility between limited main memory and continuously arriving data, and we propose an algorithm based on it to solve the density estimation problem over data streams.

3. Analytical and experimental results prove the effectiveness and efficiency of our algorithm. It is a one-pass algorithm and needs only a fixed-size main memory. Its running time is linear in the size of the data stream. Meanwhile, the algorithm has an acceptable error rate when compared with traditional kernel density estimation. Another advantage is that it can maintain a usable model at any point in time.

4. What we provide is a basic approach to handling data streams, which can be extended and integrated into many data mining applications such as density-biased sampling.

1.2 Paper Organization

The rest of the paper is organized as follows: in the next section (Section 2) we formally define the problem. A new density estimation method that can handle data streams is presented in Section 3.

[Figure 1: Construction of a fixed weight kernel density estimate (solid curve). The normal kernels are shown as the dotted lines.]

2. PROBLEM DESCRIPTION

In this section, we briefly review the kernel density estimation method and show the problems that arise when using it on stream data.

Given a sequence of independent random variables x1, x2, ... identically distributed with density f, the density estimation problem is to construct a sequence of estimators f^n(x) of f(x) based on the sample (x1, ..., xn).

The kernel method is a widely studied nonparametric density estimation method. Its estimator based on n data points is defined as:

    f^n(x) = (1/(nh)) * sum_{i=1..n} K((x - Xi)/h) = (1/n) * sum_{i=1..n} K_h(x - Xi)    (1)

where K_h(t) = h^(-1) K(h^(-1) t) for h > 0. Clearly, f^(x) is nonnegative, integrates to one, and is a density function. Figure 1 demonstrates the construction of such an estimate: the dashed lines denote the individual standard normal shaped kernels, and the solid line the resulting estimate. Kernel density estimation can be thought of as placing a "bump" at each data point and then summing the heights of all bumps at each point on the X-axis. The shape of the bump is defined by a kernel function, K(x), taken to be a unimodal, symmetric, nonnegative function that centers at zero and integrates to one. The spread of the bump is determined by a window or bandwidth, h, which controls the degree of smoothing of the estimate. Kernel density estimation has many desirable properties [16]: it is simple, suffers no curse of dimension, needs no advance knowledge of the data range, and the estimate is asymptotically unbiased, consistent in a mean-square sense, and uniformly consistent in probability.

For low to medium data sizes, kernel estimation is a good practical choice. However, if kernel density estimation is applied to very large data sets, it becomes computationally expensive and space intensive, because there are n distinct "bumps", or kernel functions, in f^(x), generally speaking, it
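For concreteness, Equation (1) with a standard normal kernel can be sketched as follows; the sample values and the bandwidth in the usage line are arbitrary choices for illustration, not values from the paper.

```python
import math

def kde(samples, h):
    """Return a fixed-bandwidth Gaussian kernel density estimate
    following Eq. (1): each data point contributes a normal-shaped
    'bump' of width h, and the bumps are averaged."""
    n = len(samples)

    def gaussian_kernel(t):
        # Standard normal density: unimodal, symmetric, centered at
        # zero, integrates to one.
        return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

    def f_hat(x):
        # Eq. (1): (1/(nh)) * sum of K((x - Xi)/h) over all samples.
        return sum(gaussian_kernel((x - xi) / h) for xi in samples) / (n * h)

    return f_hat

# Usage: estimate the density of a small sample at a query point.
f = kde([0.0, 0.5, 1.0, 1.5], h=0.5)
print(f(0.75))
```

Note that evaluating `f_hat` touches every sample, which is exactly the cost problem discussed next: with n points there are n bumps to store and sum.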
