Quantiles and selection in multiple passes
You can read the notes from the previous lecture of Chandra Chekuri's course, on $\ell_0$ sampling and priority sampling, here.
Suppose we have a stream $x_1, x_2, \ldots, x_n$ of objects from an ordered universe. For simplicity we will assume that they are real numbers and moreover that they are distinct. We would like to find the $k$'th ranked element for some $1 \le k \le n$; in particular we may be interested in the median element. We will discuss exact and approximate versions of these problems. Another terminology for finding ranked elements is quantiles. Given a number $\phi \in (0,1]$ we would like to return an element of rank $\lceil \phi n \rceil$. This normalization allows us to talk about $\varepsilon$-approximate quantiles for $\varepsilon \in (0,1)$. An $\varepsilon$-approximate $\phi$-quantile is an element whose rank is at least $\lceil (\phi - \varepsilon) n \rceil$ and at most $\lceil (\phi + \varepsilon) n \rceil$. In other words we are allowing an additive error of $\varepsilon n$ in the rank. There is a large amount of literature on quantiles and quantile queries. In fact one of the earliest "streaming" papers is the one by Munro and Paterson [5], who described a $p$-pass algorithm for selection using $O(n^{1/p} \log^{2-2/p} n)$ space for any $p \ge 2$. They also considered the "random-order" streaming setting which has become quite popular in recent years.
The material for these lectures is taken mainly from the excellent chapter/survey by Greenwald and Khanna [1]. We mainly refer to that chapter and describe here the outline of what we covered in lectures. We will omit proofs or give sketchy arguments and refer the reader to [1].
1. Approximate Quantiles and Summaries
Suppose we want to be able to answer $\varepsilon$-approximate quantile queries over an ordered set $S$ of $n$ elements. It is easy to see that we can simply pick the elements of rank $\lceil i \varepsilon n \rceil$ for $1 \le i \le \lceil 1/\varepsilon \rceil$, store them as a summary, and use the summary to answer any quantile query with an additive $\varepsilon n$ error. However, to obtain these elements we first need to do selection, which we can do in the offline setting if all we want is a concise summary. The question we will address is how to find such a summary in the streaming setting. In the sequel we will count space usage in terms of "words". Thus, the space usage for the preceding offline summary is $O(1/\varepsilon)$.
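The offline construction above can be sketched in a few lines. This is a minimal Python sketch; the function names `offline_summary` and `quantile_query` are illustrative, and ranks are stored explicitly alongside values:

```python
import math

def offline_summary(elements, eps):
    # Offline construction: sort, then keep roughly every (eps*n)-th
    # element together with its exact rank.
    s = sorted(elements)
    n = len(s)
    step = max(1, math.ceil(eps * n))
    idx = list(range(0, n, step))
    if idx[-1] != n - 1:
        idx.append(n - 1)                 # always keep the maximum as well
    return [(s[i], i + 1) for i in idx]   # (value, rank) pairs

def quantile_query(summary, phi, n):
    # Return the stored element whose rank is closest to ceil(phi*n);
    # the answer is within an additive eps*n of the requested rank.
    target = math.ceil(phi * n)
    return min(summary, key=lambda vr: abs(vr[1] - target))[0]
```

The summary holds $O(1/\varepsilon)$ pairs, matching the word count above.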
We will see two algorithms. The first creates an $\varepsilon$-approximate summary with $O(\frac{1}{\varepsilon}\log^2(\varepsilon n))$ space and is inspired by ideas from the work of Munro and Paterson. Greenwald and Khanna [2] gave a different summary that uses $O(\frac{1}{\varepsilon}\log(\varepsilon n))$ space.
Following [1] we will think of a quantile summary $Q$ as storing a set of elements $e_1 < e_2 < \cdots < e_k$ from $S$ along with an interval $[r_{\min}(e_i), r_{\max}(e_i)]$ for each $e_i$: $r_{\min}(e_i)$ is a lower bound on the rank of $e_i$ in $S$ and $r_{\max}(e_i)$ is an upper bound on the rank of $e_i$ in $S$. It is convenient to assume that $r_{\min}(e_1) = r_{\max}(e_1) = 1$ and $r_{\min}(e_k) = r_{\max}(e_k) = n$, and moreover that $e_1$ is the minimum element in $S$ and that $e_k$ is the maximum element in $S$. For ease of notation we will simply use $Q$ to refer to the quantile summary and also to the (ordered) set $\{e_1, \ldots, e_k\}$.
Our first question is to ask when a quantile summary can be used to answer $\varepsilon$-approximate quantile queries. The following is intuitive and is worthwhile proving for oneself.
Lemma 1 Suppose $Q$ is a quantile summary for $S$ such that $r_{\max}(e_{i+1}) - r_{\min}(e_i) \le 2\varepsilon n$ for $1 \le i < k$. Then $Q$ can be used to answer $\varepsilon$-approximate quantile queries over $S$.
In the following, when we say that $Q$ is an $\varepsilon$-approximate quantile summary we will implicitly be using the condition in the preceding lemma.
Given a quantile summary $Q'$ for a multiset $S'$ and a quantile summary $Q''$ for a multiset $S''$ we would like to combine $Q'$ and $Q''$ into a quantile summary $Q$ for $S' \cup S''$; by union here we view $S' \cup S''$ as a multiset. Of course we would like to keep the approximation quality of the resulting summary similar to those of $Q'$ and $Q''$. Here is a lemma which shows that indeed we can easily combine.
Lemma 2 Let $Q'$ be an $\varepsilon'$-approximate quantile summary for multiset $S'$ and $Q''$ be an $\varepsilon''$-approximate quantile summary for multiset $S''$. Then combining $Q'$ and $Q''$ as described below yields an $\varepsilon$-approximate quantile summary for $S = S' \cup S''$ where $\varepsilon = \frac{n'\varepsilon' + n''\varepsilon''}{n' + n''} \le \max(\varepsilon', \varepsilon'')$, where $n' = |S'|$ and $n'' = |S''|$.
We will not prove the correctness but describe how the intervals are constructed for $Q$ from those for $Q'$ and $Q''$. Suppose $Q' = x_1, \ldots, x_a$ and $Q'' = y_1, \ldots, y_b$. Let $Q = Q' \cup Q''$. Consider some $z \in Q$ and suppose $z = x_i$ for some $i$. Let $y_r$ be the largest element of $Q''$ smaller than $x_i$ and let $y_s$ be the smallest element of $Q''$ larger than $x_i$. We will ignore the cases where one or both of $y_r, y_s$ are not defined. We set

$r_{\min}(z) = r_{\min}(x_i) + r_{\min}(y_r)$

and

$r_{\max}(z) = r_{\max}(x_i) + r_{\max}(y_s) - 1.$
It is easy to justify that the above are valid intervals. One can then prove that with these settings, for $1 \le i < a + b$ the following holds: $r_{\max}(z_{i+1}) - r_{\min}(z_i) \le 2(\varepsilon' n' + \varepsilon'' n'') = 2\varepsilon n$, where $z_1, \ldots, z_{a+b}$ denote the elements of $Q$ in sorted order and $n = n' + n''$.
We will refer to the above operation as COMBINE$(Q', Q'')$. The following is easy to see.
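The COMBINE rule can be sketched concretely. In this minimal Python sketch a summary is a hypothetical representation: a sorted list of `(value, rmin, rmax)` triples covering its multiset (including its min and max), and all values are assumed distinct as in the text:

```python
import bisect

def combine(Q1, Q2):
    # COMBINE two quantile summaries: for each element of one summary,
    # locate its neighbours in the other and add their rank bounds.
    out = []
    for Q, R in ((Q1, Q2), (Q2, Q1)):
        vals = [t[0] for t in R]
        for v, rmin, rmax in Q:
            j = bisect.bisect_left(vals, v)   # R[j-1] < v < R[j]
            lo = rmin + (R[j - 1][1] if j > 0 else 0)
            if j < len(R):
                hi = rmax + R[j][2] - 1
            else:
                hi = rmax + R[-1][2]          # v exceeds everything in R
            out.append((v, lo, hi))
    return sorted(out)
```

On two exact summaries the result is again exact, which is a useful sanity check for the rank arithmetic.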
Lemma 3 Let $Q_1, \ldots, Q_q$ be $\varepsilon$-approximate quantile summaries for disjoint multisets $S_1, \ldots, S_q$ respectively. Then $Q_1, \ldots, Q_q$ can be combined in an arbitrary order to obtain an $\varepsilon$-approximate quantile summary for $S_1 \cup \cdots \cup S_q$.
Next we discuss the PRUNE operation on a quantile summary $Q$ that reduces the size of $Q$ to $K+1$ while losing a bit in the approximation quality.
Lemma 4 Let $Q$ be an $\varepsilon$-approximate quantile summary for $S$. Given an integer parameter $K$, there is a quantile summary $Q' \subseteq Q$ for $S$ such that $|Q'| \le K+1$ and $Q'$ is $(\varepsilon + \frac{1}{2K})$-approximate.
We sketch the proof. We simply query $Q$ for the ranks $1, \lceil n/K \rceil, \lceil 2n/K \rceil, \ldots, n$ and choose the answers to these $K+1$ queries to be the elements of $Q'$. We retain their $r_{\min}$ and $r_{\max}$ values from $Q$.
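A minimal sketch of PRUNE under the same hypothetical `(value, rmin, rmax)` representation; for each target rank we keep the stored element whose rank interval is centred closest to it, which plays the role of querying $Q$ at that rank:

```python
def prune(Q, K, n):
    # Q: sorted (value, rmin, rmax) triples summarizing n elements.
    # Keep one element per target rank i*n/K, i = 0..K, so at most
    # K+1 elements survive; each surviving rank interval is unchanged.
    out, seen = [], set()
    for i in range(K + 1):
        target = max(1, round(i * n / K))
        t = min(Q, key=lambda e: abs((e[1] + e[2]) / 2 - target))
        if t[0] not in seen:
            seen.add(t[0])
            out.append(t)
    return sorted(out)
```

Gaps between consecutive kept elements are about $n/K$, which is where the extra $\frac{1}{2K}$ term in Lemma 4 comes from.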
1.1. An $O(\frac{1}{\varepsilon}\log^2(\varepsilon n))$ space algorithm
The idea is inspired by the Munro-Paterson algorithm and was abstracted in the paper by Manku et al. [4]. We will describe the idea in an offline fashion though it can be implemented in the streaming setting. We will use several quantile summaries with $K+1$ elements each for some parameter $K$, say $b$ of them. Each summary of size $K+1$ will be called a buffer. We will need to reuse these buffers as more elements of the stream arrive; buffers will be combined and pruned, in other words "collapsed", into a single buffer of the same size. Pruning introduces error.
Assume for simplicity that the stream length $N$ is a power of $2$. Consider a complete binary tree with $N/K$ leaves where each leaf corresponds to $K$ consecutive elements of the stream. Think of assigning a buffer of size $K$ to each leaf to obtain a $0$-error quantile summary for those $K$ elements; technically we need $K+1$ elements but we will ignore this minor additive issue for the sake of clarity of exposition. Now each internal node $v$ of the tree corresponds to a subset of the elements of the stream. Imagine assigning a buffer of size $K$ to each internal node $v$ to maintain an approximate quantile summary for the elements of the stream in the sub-tree rooted at $v$. To obtain a summary at node $v$ we combine the summaries of its two children $u$ and $w$ and prune the result back to size $K$, introducing an additional $\frac{1}{2K}$ error in the approximation. The quantile summary at the root, of size $K$, will be our final summary that we output for the stream.
Our first observation is that in fact we can implement the tree-based scheme with $h+1$ buffers where $h$ is the height of the tree. Note that $h = \log(N/K)$. The reason we only need $h+1$ buffers is that if we need a new buffer for the next $K$ elements in the stream we can collapse two buffers corresponding to the children of an internal node; hence, at any time we need to maintain only one buffer per level of the tree (plus a temporary buffer to do the collapse operation).
Consider the quantile summaries at the leaves. They have error $0$ since we store all the elements in the buffer. However at each level the error increases by $\frac{1}{2K}$. Hence the error of the summary at the root is at most $\frac{h}{2K} = \frac{\log(N/K)}{2K}$. Thus, to obtain an $\varepsilon$-approximate quantile summary we need $\frac{\log(N/K)}{2K} \le \varepsilon$. One can see that for this to work out it suffices to choose $K = \Theta(\frac{1}{\varepsilon}\log(\varepsilon N))$.
The total space usage is $O(Kh)$ and $h = O(\log(\varepsilon N))$, and thus the space usage is $O(\frac{1}{\varepsilon}\log^2(\varepsilon N))$.
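The one-buffer-per-level bookkeeping can be sketched as follows. This is a simplified Python sketch, not the algorithm with full rank intervals: a buffer is just a sorted list of $K$ values, "collapse" merges two buffers and keeps every other element (a combine-then-prune for equal-weight buffers), and the stream length is assumed to be $K$ times a power of two:

```python
def collapse(b1, b2):
    # Merge two sorted buffers of size K and keep every other element;
    # each collapse adds roughly 1/(2K) relative rank error.
    return sorted(b1 + b2)[1::2]

def tree_summary(stream, K):
    # One buffer per tree level: fill a leaf buffer with K elements,
    # then repeatedly collapse equal-level buffers, as in binary addition.
    levels, buf = {}, []
    for x in stream:
        buf.append(x)
        if len(buf) == K:
            b, lvl = sorted(buf), 0
            buf = []
            while lvl in levels:           # carry: two buffers at one level
                b = collapse(levels.pop(lvl), b)
                lvl += 1
            levels[lvl] = b
    (root,) = levels.values()              # single buffer left at the top
    return root
```

The dictionary `levels` never holds more than one buffer per level, matching the $h+1$ buffer bound.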
One can choose $t$-ary trees instead of binary trees, and some optimization can be done to improve the constants, but the asymptotic dependence on $\varepsilon$ and $N$ does not improve with this high-level scheme.
1.2. An $O(\frac{1}{\varepsilon}\log(\varepsilon n))$ space algorithm
We now briefly describe the Greenwald-Khanna algorithm that obtains an improved space bound. The GK algorithm maintains a quantile summary as a collection of $s$ tuples $t_1, \ldots, t_s$ where each tuple $t_i$ is a triple $(v_i, g_i, \Delta_i)$: (i) a value $v_i$ that is an element of the ordered set $S$, (ii) the value $g_i$ which is equal to $r_{\min}(v_i) - r_{\min}(v_{i-1})$ (for $i \ge 2$; $g_1 = 1$), and (iii) the value $\Delta_i$ which equals $r_{\max}(v_i) - r_{\min}(v_i)$. The elements $v_1, \ldots, v_s$ are in ascending order and moreover $v_1$ will be the minimum element in $S$ and $v_s$ is the maximum element in $S$. Note that $\sum_{j \le s} g_j = n$. The summary also stores the number of elements $n$ seen so far. With this set up we note that $r_{\min}(v_i) = \sum_{j \le i} g_j$ and $r_{\max}(v_i) = \sum_{j \le i} g_j + \Delta_i$.
Lemma 5 Suppose $Q$ is a quantile summary for a set $S$ such that $\max_i (g_i + \Delta_i) \le 2\varepsilon n$; then it can be used to answer quantile queries with $\varepsilon n$ additive error.
The query can be answered as follows. Given rank $r$, find $i$ such that $r - r_{\min}(v_i) \le \varepsilon n$ and $r_{\max}(v_i) - r \le \varepsilon n$, and output $v_i$; here $n$ is the current size of $S$.
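The query procedure can be sketched directly from the prefix-sum identities above. A minimal Python sketch; `gk_query` is an illustrative name and the summary is a list of `(v, g, delta)` tuples in ascending order of value:

```python
def gk_query(tuples, r, n, eps):
    # Answer a rank-r query: scan the tuples, maintaining rmin as the
    # running sum of the g values; rmax is rmin plus the tuple's delta.
    rmin = 0
    for v, g, delta in tuples:
        rmin += g                      # rmin of this tuple
        rmax = rmin + delta
        if r - rmin <= eps * n and rmax - r <= eps * n:
            return v
    raise ValueError("summary does not satisfy the invariant of Lemma 5")
```

If the invariant $\max_i(g_i + \Delta_i) \le 2\varepsilon n$ holds, some tuple always satisfies both inequalities, so the scan cannot fall through.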
The quantile summary is updated via two operations. When a new element arrives it is inserted into the summary. The quantile summary is compressed by merging consecutive elements to keep the summary within the desired space bounds.
We now describe the INSERT operation that takes a quantile summary $Q$ and inserts a new element $v$. First we search over the elements in $Q$ to find an $i$ such that $v_i < v \le v_{i+1}$; the cases when $v$ is the new smallest element or the new largest element are handled easily. A new tuple $(v, 1, \lfloor 2\varepsilon n \rfloor - 1)$ is added to the summary and becomes the new $(i+1)$'st tuple. Note that $n$ here is the current size of the stream. It is not hard to see that if the summary before the arrival of $v$ satisfied the condition in Lemma 5 then it satisfies the condition after inserting the tuple (note that $n$ increased by $1$). We note that the first $\lceil \frac{1}{2\varepsilon} \rceil$ elements are inserted into the summary with $\Delta = 0$.
Compression is the main ingredient. To understand the operation it is helpful to define the notion of the capacity of a tuple. Note that when $v$ arrived it was inserted with $\Delta = \lfloor 2\varepsilon n' \rfloor - 1$ where $n'$ is the time when $v$ arrived. At time $n$ the capacity of the tuple $t_i$ is defined as $\lfloor 2\varepsilon n \rfloor - \Delta_i$. As $n$ grows, the capacity of the tuple increases since we are allowed to have more error. We can merge tuple $t_i$ into $t_{i+1}$ at time $n$ (which means we eliminate $t_i$) while ensuring the desired precision if $g_i + g_{i+1} + \Delta_{i+1} \le 2\varepsilon n$; here $g_{i+1}$ is updated to $g_i + g_{i+1}$ and $\Delta_{i+1}$ does not change. Note that this means that the $\Delta$ of a tuple does not change once it is inserted.
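A stripped-down sketch of insertion and merging under these rules. This deliberately omits GK's careful band-based merging schedule, so it preserves the correctness invariant of Lemma 5 but not the $O(\frac{1}{\varepsilon}\log(\varepsilon n))$ space bound; function names are illustrative:

```python
import bisect
import math

def gk_insert(T, v, n, eps):
    # Insert v as a new (v, 1, delta) tuple; n is the stream length
    # after this insertion; T stays sorted by value.  A new min/max
    # (and the first ~1/(2*eps) elements) gets delta = 0.
    vals = [t[0] for t in T]
    i = bisect.bisect_left(vals, v)
    if i == 0 or i == len(T) or n < 1 / (2 * eps):
        delta = 0
    else:
        delta = math.floor(2 * eps * n) - 1
    T.insert(i, (v, 1, delta))

def gk_compress(T, n, eps):
    # Merge tuple i into i+1 whenever g_i + g_{i+1} + delta_{i+1} <= 2*eps*n,
    # never merging away the minimum (index 0).
    i = len(T) - 2
    while i >= 1:
        _, g1, _ = T[i]
        v2, g2, d2 = T[i + 1]
        if g1 + g2 + d2 <= 2 * eps * n:
            T[i:i + 2] = [(v2, g1 + g2, d2)]
        i -= 1
```

After any sequence of operations, the $g$ values still sum to $n$ and every tuple satisfies $g_i + \Delta_i \le 2\varepsilon n$, so Lemma 5 continues to apply.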
Note that the insertion and merging operations preserve correctness of the summary. In order to obtain the desired space bound the merging/compression has to be done rather carefully. We will not go into details but mention that one of the key ideas is to keep track of the capacity of the tuples in geometrically increasing intervals and to ensure that the summary retains only a small number of tuples per interval.
2. Exact Selection
We will now describe a $p$-pass deterministic algorithm to select the rank-$k$ element in a stream using $O(n^{1/p} \log^{2-2/p} n)$ space; here $p \ge 2$. It is not hard to show that for $p = 1$ any deterministic algorithm needs $\Omega(n)$ space; one has to be a bit careful in arguing about bits vs words and the precise model but a near-linear lower bound is easy. Munro and Paterson described an $O(n^{1/p} \log^{2-2/p} n)$-space algorithm using $p$ passes. We will not describe their precise algorithm but instead use the approximate-quantile-based analysis.
We will show that, given space $s$ and a stream of $n$ items, the problem can be effectively reduced in one pass to selecting from $O(\frac{n}{s}\log^2 n)$ items.
Suppose we can do the above. Choose $s = \Theta(n^{1/p}\log^{2-2/p} n)$. After each pass the number of candidate elements shrinks by a factor of $O(\frac{\log^2 n}{s})$, so after $(p-1)$ passes the problem is reduced to $n \cdot O(\frac{\log^2 n}{s})^{p-1} = O(s)$ elements. Thus all of them can be stored and selection can be done offline in the $p$'th pass.
We now describe how to use one pass to reduce the effective size of the set of elements under consideration to $O(\frac{n}{s}\log^2 n)$. The idea is that in one pass we will be able to select two elements $a$ and $b$ from the stream such that $a \le b$ and the $k$'th ranked element is guaranteed to be in the interval $[a,b]$. Moreover, we are also guaranteed that the number of elements of the stream in the interval $[a,b]$ is $O(\frac{n}{s}\log^2 n)$; $a_1 = a$ and $b_1 = b$ are the left and right filters after pass $1$. Initially $a_0 = -\infty$ and $b_0 = +\infty$. After $i$ passes we will have filters $a_i$ and $b_i$. Note that during the $(i+1)$'st pass we can compute the exact ranks of $a_i$ and $b_i$.
How do we find $a$ and $b$? We saw how to obtain an $\varepsilon$-approximate summary using $O(\frac{1}{\varepsilon}\log^2(\varepsilon n))$ space. Thus, if we have space $s$, we can set $\varepsilon = \Theta(\frac{\log^2 n}{s})$. Let $Q$ be an $\varepsilon$-approximate quantile summary for the stream. We query $Q$ for the ranks $k - 2\varepsilon n$ and $k + 2\varepsilon n$ and obtain $a$ and $b$ as the answers to the queries (here we are ignoring the corner cases where $k \le 2\varepsilon n$ or $k \ge n - 2\varepsilon n$). Then, by the $\varepsilon$-approximate guarantee of $Q$, the rank-$k$ element lies in the interval $[a,b]$ and moreover there are at most $O(\varepsilon n)$ elements in this range.
It is useful to work out the algebra for $p = 2$, which shows that the median can be computed in two passes with $O(\sqrt{n}\log n)$ space.
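The pass that runs with a pair of filters in hand can be sketched as follows. A minimal Python sketch of the final pass only; `second_pass` is a hypothetical helper, and the filters `a, b` are assumed to have come from querying the previous pass's summary at ranks $k \pm 2\varepsilon n$:

```python
def second_pass(stream, k, a, b):
    # Given filters a <= b known to bracket the rank-k element, compute
    # its exact value while storing only the elements inside [a, b].
    below = 0          # exact count of stream elements smaller than a
    window = []        # stream elements falling inside [a, b]
    for x in stream:
        if x < a:
            below += 1
        elif x <= b:
            window.append(x)
    window.sort()
    return window[k - below - 1]   # rank k overall = rank k - below in window
```

Only the $O(\varepsilon n)$ elements inside the filters are ever stored, which is exactly the per-pass reduction used in the analysis above.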
2.1. Random Order Streams
Munro and Paterson also consider the random-order stream model in their paper. Here we assume that the stream is a uniformly random permutation of an ordered set. It is also convenient to use a different model where the $i$'th element is a real number drawn independently and uniformly from the interval $[0,1]$. We can ask whether the randomness can be taken advantage of. Indeed it can. They showed that with $O(\sqrt{n \log n})$ space one can find the median in a single pass with high probability. More generally they showed that in $p$ passes one can find the median with space $O(n^{1/(2p)})$, up to logarithmic factors. Even though this space bound is better than the $n^{\Theta(1/p)}$ bound for adversarial streams, it still requires $\Omega(\log n / \log\log n)$ passes if we have only poly-logarithmic space, essentially the same as in the adversarial setting. Guha and McGregor [3] showed that in fact $O(\log\log n)$ passes suffice (with high probability).
Here we describe the Munro-Paterson algorithm; see also http://polylogblog.wordpress.com/2009/08/30/bite-sized-streams-exact-median-of-a-random-order-stream/
The algorithm maintains a set $S$ of up to $s$ consecutively ranked elements among the stream elements seen so far. It maintains two counters: $\ell$, the number of elements seen so far which are less than $\min(S)$, and $h$, the number of elements seen so far which are larger than $\max(S)$. It tries to maintain $\ell - h$ as close to $0$ as possible so that $S$ "captures" the median.
While (stream is not empty) do
    $x \leftarrow$ next element of the stream
    If ($|S| < s$) then insert $x$ into $S$
    Else if ($x < \min(S)$) then $\ell \leftarrow \ell + 1$
    Else if ($x > \max(S)$) then $h \leftarrow h + 1$
    Else
        insert $x$ into $S$
        If ($\ell < h$) then discard $\min(S)$ from $S$ and set $\ell \leftarrow \ell + 1$
        Else discard $\max(S)$ from $S$ and set $h \leftarrow h + 1$
endWhile
If ($\ell < \lceil n/2 \rceil \le \ell + |S|$) then return the element of rank $\lceil n/2 \rceil - \ell$ in $S$
Else return FAIL
To analyze the algorithm we consider the random variable $D = \ell - h$, which starts at $0$. In the first $s$ iterations we simply fill up $S$ to capacity and $D$ remains $0$. After that, in each step $D$ is either incremented or decremented by $1$. Consider the start of iteration $t$ when the algorithm sees the $t$'th element $x_t$; the total number of elements seen prior to $x_t$ is $t - 1 = \ell + s + h$. In iteration $t$, since the permutation is random, the probability that $x_t$ will be larger than $\max(S)$ is precisely $\frac{h+1}{t}$. The probability that $x_t$ will be smaller than $\min(S)$ is precisely $\frac{\ell+1}{t}$, and thus with probability $\frac{s-1}{t}$ the element $x_t$ will be added to $S$.
Note that the algorithm fails only if at the end of the stream $|D| = |\ell - h|$ is about $s$ or greater. A sufficient condition for success is that $|D| \le s/2$ throughout the algorithm. Let $q$ be the probability that $D$ increases by $1$ in a given step, conditioned on $0 < D < s/2$. Then we see that $q \le 1/2$; that is, the walk is biased towards $0$ whenever it is away from $0$. Thus the process can be seen to be similar to a random walk on the line, and some analysis shows that if we choose $s = \Theta(\sqrt{n \log n})$ then $|D| \le s/2$ with high probability throughout. Thus, $O(\sqrt{n \log n})$ space suffices to find the median with high probability when the stream is in random order.
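The procedure above can be written as a short runnable sketch. This is a minimal Python sketch assuming $n$ is odd and the window size `s` is chosen generously; `random_order_median` is an illustrative name:

```python
import bisect
import random

def random_order_median(stream, n, s):
    # Window S of up to s consecutively ranked elements; l and h count
    # discarded elements below and above the window respectively.
    S, l, h = [], 0, 0
    for x in stream:
        if len(S) == s and x < S[0]:
            l += 1
        elif len(S) == s and x > S[-1]:
            h += 1
        else:
            bisect.insort(S, x)
            if len(S) > s:              # evict from the end that rebalances l - h
                if l < h:
                    S.pop(0); l += 1
                else:
                    S.pop(); h += 1
    target = (n + 1) // 2               # rank of the median (n odd)
    if l < target <= l + len(S):
        return S[target - l - 1]
    return None                         # FAIL: median drifted out of the window
```

With $s = \Theta(\sqrt{n \log n})$ the failure branch is reached with only vanishing probability on a random-order stream.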
Connection to CountMin sketch and deletions: Note that when we were discussing frequency moments we assumed that the elements were drawn from a universe $[m]$ where $m$ was known in advance, while here we did not assume anything about the elements other than the fact that they come from an ordered universe (apologies for the confusion in notation: in earlier lectures the universe size was denoted $n$, while here $n$ denotes the length of the stream). If we know the range of the elements in advance and it is small compared to the length of the stream, then CountMin and related techniques are better suited and provide the ability to handle deletions. The GK summary can also handle some deletions. We refer the reader to [1] for more details.
Lower Bounds: For median selection Munro and Paterson showed a lower bound of $\Omega(n^{1/p})$ on the space for $p$ passes in a restricted model of computation. Guha and McGregor showed a lower bound of $n^{\Omega(1/p)}$ bits without any restriction. For random-order streams, $O(\log\log n)$ passes suffice with polylog space for exact selection with high probability [3]. Moreover $\Omega(\log\log n)$ passes are indeed necessary; see [3] for references and discussion.
Bibliographic Notes: See the references for more information.
References
[1] Michael Greenwald and Sanjeev Khanna. Quantiles and equidepth histograms over streams. Available at http://www.cis.upenn.edu/~mbgreen/papers/chapter.pdf. To appear as a chapter in a forthcoming book on data stream management.
[2] Michael Greenwald and Sanjeev Khanna. Space-efficient online computation of quantile summaries. In ACM SIGMOD Record, volume 30, pages 58-66. ACM, 2001.
[3] Sudipto Guha and Andrew McGregor. Stream order and order statistics: Quantile estimation in random-order streams. SIAM Journal on Computing, 38(5):2044-2059, 2009.
[4] Gurmeet Singh Manku, Sridhar Rajagopalan, and Bruce G. Lindsay. Approximate medians and other quantiles in one pass and with limited memory. In ACM SIGMOD Record, volume 27, pages 426-435. ACM, 1998.
[5] J. Ian Munro and Mike S. Paterson. Selection and sorting with limited storage. Theoretical Computer Science, 12(3):315-323, 1980.