$\ell_0$ Sampling and Priority Sampling

You can read the notes from the previous lecture of Chandra Chekuri's course on Count and Count-Min Sketches here.

1. Priority Sampling and Sum Queries

Suppose we have a stream $a_1, a_2, \ldots, a_n$ (note that we are switching notation here, using $n$ instead of $m$ for the length of the stream) of objects, where each $a_i$ has a non-negative weight $w_i$. We want to store a representative sample $S \subseteq [n]$ of the items so that we can answer subset-sum queries: given a query $I \subseteq [n]$, we would like to report $\sum_{i \in I} w_i$. One way to do this is the following. Pick $S$ by sampling each $i \in [n]$ independently with probability $p_i$, and if $i$ is chosen, set a scaled weight $\hat{w}_i = w_i / p_i$. Given a query $I$, output the estimate $\sum_{i \in I \cap S} \hat{w}_i$; the expectation of this estimate is $w(I)$. The main disadvantage of this scheme is that we cannot control the size of the sample, which means we cannot fully utilize the available memory. Relatedly, if we do not know the length of the stream a priori, we cannot even set the sampling rates, however flexible we are willing to be about the memory. An elegant scheme of Duffield, Lund and Thorup, called priority sampling, overcomes these limitations. They considered the setting where we are given a parameter $k$ for the size of the sample $S$, and the goal is to maintain a $k$-sample $S$ along with weights $\hat{w}_i$ for $i \in S$ so that we can answer subset-sum queries.
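To make the baseline concrete before moving on, here is a minimal (non-streaming) sketch of independent sampling with Horvitz-Thompson scaling. The function names, the example weights, and the uniform choice of $p_i = 1/2$ are our own illustration, not part of any specific scheme.

```python
import random

def independent_sample(weights, probs):
    """Sample index i with probability probs[i]; keep the scaled weight w_i / p_i."""
    sample = {}
    for i, (w, p) in enumerate(zip(weights, probs)):
        if random.random() < p:
            sample[i] = w / p  # Horvitz-Thompson scaled weight
    return sample

def subset_sum_estimate(sample, query):
    """Unbiased estimate of sum_{i in query} w_i from the sample."""
    return sum(w_hat for i, w_hat in sample.items() if i in query)

# Example: estimate the total weight of the even-indexed items (true answer: 10.0).
weights = [5.0, 1.0, 3.0, 7.0, 2.0]
sample = independent_sample(weights, [0.5] * len(weights))
print(subset_sum_estimate(sample, {0, 2, 4}))
```

The estimate is unbiased for every query, but the sample size is a random variable, which is exactly the drawback discussed above. Priority sampling, described next, fixes the sample size at $k$.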
Their scheme is the following, described as if $a_1, a_2, \ldots, a_n$ are available offline.
  1. For each $i \in [n]$ set priority $q_i = w_i / u_i$ where $u_i$ is chosen uniformly at random from $[0,1]$, independently of the other items.
  2. $S$ is the set of items with the $k$ highest priorities.
  3. $\tau$ is the $(k+1)$-st highest priority. If $k \geq n$ we set $\tau = 0$.
  4. If $i \in S$, set $\hat{w}_i = \max\{w_i, \tau\}$; otherwise set $\hat{w}_i = 0$.
We observe that the above sampling can be implemented in the streaming setting by simply keeping the current sample $S$ and the current threshold $\tau$. We leave it as an exercise to show that this information can be updated when a new item arrives.
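One natural streaming implementation keeps a min-heap of the $k+1$ highest-priority items seen so far: the top $k$ of these form $S$ and the smallest of the $k+1$ priorities is $\tau$. The sketch below is one way to do this; the class name and interface are ours.

```python
import heapq
import random

class PrioritySampler:
    """Streaming priority sampling: keep the k+1 items with the highest
    priorities q_i = w_i / u_i; the smallest of these priorities is tau and
    the remaining k items form the sample S."""

    def __init__(self, k):
        self.k = k
        self.heap = []  # min-heap of (priority, index, weight), size at most k + 1

    def add(self, index, weight):
        u = 1.0 - random.random()                # u_i uniform in (0, 1]
        heapq.heappush(self.heap, (weight / u, index, weight))
        if len(self.heap) > self.k + 1:
            heapq.heappop(self.heap)             # drop the lowest-priority item

    def sample(self):
        """Return (S, tau) where S maps index -> adjusted weight max(w_i, tau)."""
        if len(self.heap) <= self.k:             # fewer than k+1 items seen: tau = 0
            return {i: w for _, i, w in self.heap}, 0.0
        tau = self.heap[0][0]                    # (k+1)-st highest priority
        top_k = heapq.nlargest(self.k, self.heap)
        return {i: max(w, tau) for _, i, w in top_k}, tau
```

Each item is processed in $O(\log k)$ time and the state consists of just the $k+1$ stored triples, consistent with the observation above.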
We show some nice and non-obvious properties of priority sampling. We will assume for simplicity that $1 < k < n$. The first one is the basic property that we would want.
Lemma 1 $\mathbf{E}[\hat{w}_i] = w_i$.
Proof: Fix $i$. Let $A(\tau')$ be the event that the $k$-th highest priority among the items $j \neq i$ is $\tau'$. Note that $i \in S$ if and only if $q_i = w_i / u_i \geq \tau'$, and if $i \in S$ then $\hat{w}_i = \max\{w_i, \tau'\}$, otherwise $\hat{w}_i = 0$. To evaluate $\Pr[i \in S \mid A(\tau')]$ we consider two cases.
Case 1: $w_i \geq \tau'$. Here we have $\Pr[i \in S \mid A(\tau')] = 1$ and $\hat{w}_i = w_i$.
Case 2: $w_i < \tau'$. Then $\Pr[i \in S \mid A(\tau')] = \Pr[u_i \leq w_i/\tau'] = \frac{w_i}{\tau'}$ and $\hat{w}_i = \tau'$.
In both cases we see that $\mathbf{E}[\hat{w}_i \mid A(\tau')] = w_i$; averaging over $\tau'$ gives $\mathbf{E}[\hat{w}_i] = w_i$.
The previous claim shows that the estimator $\sum_{i \in I \cap S} \hat{w}_i$ has expectation equal to $w(I)$. We can also estimate the variance of $\hat{w}_i$ via the threshold $\tau$.
Lemma 2 $\mathbf{Var}[\hat{w}_i] = \mathbf{E}[\hat{v}_i]$ where
$$\hat{v}_i = \begin{cases} \tau \max\{0, \tau - w_i\} & \text{if } i \in S \\ 0 & \text{if } i \notin S. \end{cases}$$
Proof: Fix $i$. As before, we define $A(\tau')$ to be the event that $\tau'$ is the $k$-th highest priority among the items $j \neq i$. The proof is based on showing that
$$\mathbf{E}[\hat{v}_i \mid A(\tau')] = \mathbf{E}[\hat{w}_i^2 \mid A(\tau')] - w_i^2.$$
Using the case analysis from the proof of the preceding lemma, we evaluate the left-hand side as
$$\begin{aligned} \mathbf{E}[\hat{v}_i \mid A(\tau')] &= \Pr[i \in S \mid A(\tau')] \times \mathbf{E}[\hat{v}_i \mid i \in S \wedge A(\tau')] \\ &= \min\{1, w_i/\tau'\} \times \tau' \max\{0, \tau' - w_i\} \\ &= \max\{0, w_i \tau' - w_i^2\}. \end{aligned}$$
Now we analyze the right-hand side:
$$\begin{aligned} \mathbf{E}[\hat{w}_i^2 \mid A(\tau')] &= \Pr[i \in S \mid A(\tau')] \times \mathbf{E}[\hat{w}_i^2 \mid i \in S \wedge A(\tau')] \\ &= \min\{1, w_i/\tau'\} \times (\max\{w_i, \tau'\})^2 \\ &= \max\{w_i^2, w_i \tau'\}. \end{aligned}$$
Subtracting $w_i^2$ gives exactly $\max\{0, w_i\tau' - w_i^2\}$, which matches the left-hand side. Taking expectation over $\tau'$ and using $\mathbf{E}[\hat{w}_i] = w_i$ yields $\mathbf{Var}[\hat{w}_i] = \mathbf{E}[\hat{w}_i^2] - w_i^2 = \mathbf{E}[\hat{v}_i]$.
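As a sanity check on Lemmas 1 and 2, the following small Monte Carlo simulation compares the empirical mean and variance of $\hat{w}_i$ with $w_i$ and with the empirical mean of $\hat{v}_i$. The weights, the choice of $k$, and all names are arbitrary illustrations of ours.

```python
import random

def priority_sample(weights, k):
    """One draw of priority sampling: returns the adjusted weights w_hat and the
    variance terms v_hat (both indexed by item, zero outside the sample)."""
    n = len(weights)
    # priorities q_i = w_i / u_i with u_i uniform in (0, 1]
    pri = sorted(((w / (1.0 - random.random()), i) for i, w in enumerate(weights)), reverse=True)
    tau = pri[k][0] if k < n else 0.0          # (k+1)-st highest priority
    w_hat, v_hat = [0.0] * n, [0.0] * n
    for _, i in pri[:k]:                       # the k highest-priority items form S
        w_hat[i] = max(weights[i], tau)
        v_hat[i] = tau * max(0.0, tau - weights[i])
    return w_hat, v_hat

weights, k, trials = [4.0, 1.0, 2.5, 0.5, 3.0], 3, 200000
n = len(weights)
sum_w, sum_w2, sum_v = [0.0] * n, [0.0] * n, [0.0] * n
for _ in range(trials):
    w_hat, v_hat = priority_sample(weights, k)
    for i in range(n):
        sum_w[i] += w_hat[i]
        sum_w2[i] += w_hat[i] ** 2
        sum_v[i] += v_hat[i]
for i, w in enumerate(weights):
    mean = sum_w[i] / trials
    var = sum_w2[i] / trials - mean ** 2
    print(f"i={i}: E[w_hat]~{mean:.3f} (w_i={w}), Var[w_hat]~{var:.3f}, E[v_hat]~{sum_v[i]/trials:.3f}")
```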
Surprisingly, if $k \geq 2$ then the covariance between $\hat{w}_i$ and $\hat{w}_j$ for any $i \neq j$ is equal to $0$.
Lemma 3 For $i \neq j$, $\mathbf{E}[\hat{w}_i \hat{w}_j] = w_i w_j$; equivalently, $\mathbf{Cov}[\hat{w}_i, \hat{w}_j] = 0$.
In fact the previous lemma is a special case of a more general lemma below.
Lemma 4 $\mathbf{E}[\prod_{i \in I} \hat{w}_i] = \prod_{i \in I} w_i$ if $|I| \leq k$, and $\mathbf{E}[\prod_{i \in I} \hat{w}_i] = 0$ if $|I| > k$.
Proof: It is easy to see that if $|I| > k$ the product is $0$, since at least one item of $I$ is not in the sample. We now assume $|I| \leq k$ and prove the claim by induction on $|I|$. In fact we need a stronger hypothesis. Let $A(\tau'')$ be the event that $\tau''$ is the $(k - |I| + 1)$-st highest priority among the items $j \notin I$. We will condition on this event and prove that $\mathbf{E}[\prod_{i \in I} \hat{w}_i \mid A(\tau'')] = \prod_{i \in I} w_i$. The base case $|I| = 1$ is exactly the proof of Lemma 1.
Case 1: There is $h \in I$ such that $w_h > \tau''$. Then clearly $h \in S$ and $\hat{w}_h = w_h$. In this case
$$\mathbf{E}\Big[\prod_{i \in I} \hat{w}_i \mid A(\tau'')\Big] = w_h \cdot \mathbf{E}\Big[\prod_{i \in I \setminus \{h\}} \hat{w}_i \mid A(\tau'')\Big],$$
and we apply induction to $I' = I \setminus \{h\}$. Technically, the term $\mathbf{E}[\prod_{i \in I'} \hat{w}_i \mid A(\tau'')]$ makes sense because $\tau''$ is also the $(k - |I'| + 1)$-st highest priority among the items outside $I'$ (item $h$ lies outside $I'$ and has priority $q_h > \tau''$), so the inductive hypothesis applies.
Case 2: For all $h \in I$, $w_h < \tau''$. Let $q$ be the minimum priority among the items in $I$. If $q < \tau''$ then $\hat{w}_j = 0$ for some $j \in I$ and the entire product is $0$, so this case contributes nothing to the expectation. Thus we consider the case $q \geq \tau''$, which happens with probability $\prod_{i \in I} \frac{w_i}{\tau''}$ (each $i \in I$ independently satisfies $q_i = w_i/u_i \geq \tau''$ with probability $w_i/\tau''$). In this case every $i \in I$ is in $S$ and moreover $\hat{w}_i = \tau''$ for each $i$, so the product equals $(\tau'')^{|I|}$. Hence $\mathbf{E}[\prod_{i \in I} \hat{w}_i \mid A(\tau'')] = (\tau'')^{|I|} \cdot \prod_{i \in I} \frac{w_i}{\tau''} = \prod_{i \in I} w_i$, as desired.
Combining Lemmas 2 and 3, the variance of the estimator $\sum_{i \in I \cap S} \hat{w}_i = \sum_{i \in I} \hat{w}_i$ is
$$\mathbf{Var}\Big[\sum_{i \in I} \hat{w}_i\Big] = \sum_{i \in I} \mathbf{Var}[\hat{w}_i] = \sum_{i \in I} \mathbf{E}[\hat{v}_i].$$
The advantage of this is that the variance of the estimator can itself be estimated from the sample: $\sum_{i \in I \cap S} \hat{v}_i$ is an unbiased estimate of it, and it can be computed by examining $\tau$ and the weights of the elements in $S \cap I$.
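Putting the pieces together, answering a subset-sum query needs only $\tau$ and, for each sampled item, its original weight. The small sketch below (the function name and argument layout are ours) returns both the estimate and its variance estimate.

```python
def subset_sum_query(sample_weights, tau, query):
    """sample_weights: dict mapping each sampled index i (i in S) to its original
    weight w_i; tau: the (k+1)-st highest priority.  Returns the subset-sum
    estimate (sum of w_hat_i) and the variance estimate (sum of v_hat_i) over
    the indices of query that are in the sample."""
    estimate, variance = 0.0, 0.0
    for i in query:
        w = sample_weights.get(i)
        if w is not None:                         # i is in the sample S
            estimate += max(w, tau)               # w_hat_i = max(w_i, tau)
            variance += tau * max(0.0, tau - w)   # v_hat_i
    return estimate, variance
```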

2. $\ell_0$ Sampling

We have seen $\ell_2$ sampling in the streaming setting. The ideas generalize to $\ell_p$ sampling for $p \in (0, 2)$; see [1] for instance. However, $\ell_0$ sampling requires slightly different ideas. $\ell_0$ sampling means that we are sampling near-uniformly from the distinct elements in the stream. Surprisingly, we can do this even in the turnstile setting.
Recall that one of the applications we saw for the Count-Sketch is $\ell_2$-sparse recovery. In particular, we can obtain a $(1+\epsilon)$-approximation for $\mathrm{err}_2^k(\mathbf{x})$ with high probability using $O(k \log n / \epsilon)$ words. If $\mathbf{x}$ is $k$-sparse then $\mathrm{err}_2^k(\mathbf{x}) = 0$! This means that we can detect whether $\mathbf{x}$ is $k$-sparse, and in fact identify the non-zero coordinates of $\mathbf{x}$, with high probability. In fact one can prove the following stronger version.
Lemma 5 For $1 \leq k \leq n$ and $k' = O(k)$ there is a sketch $L: \mathbb{R}^n \rightarrow \mathbb{R}^{k'}$ (generated from $O(k \log n)$ random bits) and a recovery procedure that, on input $L(\mathbf{x})$, has the following features: (i) if $\mathbf{x}$ is $k$-sparse then it outputs $\mathbf{x}' = \mathbf{x}$ with probability $1$, and (ii) if $\mathbf{x}$ is not $k$-sparse then the algorithm detects this with high probability.
We will use the above for $\ell_0$ sampling as follows. We first describe a high-level algorithm that is not streaming friendly, and indicate later how it can be made stream-implementable (a small code sketch follows the description below).
  1. For $h = 1, \ldots, \lfloor \log n \rfloor$ let $I_h$ be a random subset of $[n]$ of cardinality $2^h$. Let $I_0 = [n]$.
  2. Let $k = \lceil 4 \log(1/\delta) \rceil$. For $h = 0, \ldots, \lfloor \log n \rfloor$, run $k$-sparse recovery on $\mathbf{x}$ restricted to the coordinates of $I_h$.
  3. If any of the sparse recoveries succeeds, output a random non-zero coordinate recovered by the first sparse recovery that succeeds.
  4. The algorithm fails if none of the sparse recoveries outputs a valid vector.
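Here is a small, non-streaming illustration of the scheme. The function name and parameterization are ours, the input $\mathbf{x}$ is assumed to be available as a plain list, and sparse recovery on each restriction is simulated by reading $\mathbf{x}$ directly; a streaming implementation would maintain the linear sketch of Lemma 5 for each level instead.

```python
import math
import random

def l0_sample(x, delta):
    """Illustrative, non-streaming version of the l0-sampling scheme.  Sparse
    recovery on each restricted vector is simulated by reading x directly; the
    streaming algorithm would use the sketch of Lemma 5 instead."""
    n = len(x)
    k = math.ceil(4 * math.log(1.0 / delta))
    levels = [list(range(n))]                           # I_0 = [n]
    for h in range(1, int(math.log2(n)) + 1):
        levels.append(random.sample(range(n), 2 ** h))  # I_h: random subset of size 2^h
    for I_h in levels:
        support = [i for i in I_h if x[i] != 0]         # stand-in for k-sparse recovery
        if 1 <= len(support) <= k:                      # recovery succeeds (and is non-empty)
            return random.choice(support)               # uniform over the recovered support
    return None                                         # all recoveries failed
```

Note that this toy version stores the sets $I_h$ explicitly; as discussed below, the streaming algorithm avoids this using Nisan's pseudo-random generator.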
Let $J$ be the index set of the non-zero coordinates of $\mathbf{x}$. We now show that, with probability $(1-\delta)$, the algorithm succeeds in outputting a uniform sample from $J$. Suppose $|J| \leq k$. Then $\mathbf{x}$ is recovered exactly for $h = 0$ and the algorithm outputs a uniform sample from $J$. Suppose $|J| > k$. We observe that $\mathbf{E}[|I_h \cap J|] = 2^h |J| / n$, and hence there is an $h^*$ such that $\mathbf{E}[|I_{h^*} \cap J|] = 2^{h^*} |J| / n$ is between $k/3$ and $2k/3$. By a Chernoff bound, one can show that with probability at least $(1-\delta)$, $1 \leq |I_{h^*} \cap J| \leq k$. For this $h^*$ the sparse recovery succeeds and outputs a random coordinate of $J$. The formal claims are the following:
  • With probability at least $(1-\delta)$ the algorithm outputs a coordinate $i \in [n]$.
  • If the algorithm outputs a coordinate $i$, the only way its distribution can deviate from a uniform random sample is that the sparse recovery algorithm failed (undetected) for some $h$; we can make this probability less than $1/n^c$ for any desired constant $c$.
Thus, in fact, we get a zero-error $\ell_0$ sample.
The algorithm, as described, requires one to sample and store $I_h$ for $h = 0, \ldots, \lfloor \log n \rfloor$. To avoid this we can use Nisan's pseudo-random generator for small-space computation; we skip the details, see [1]. The overall space requirement of the above procedure can be shown to be $O(\log^2 n \log(1/\delta))$, with an error probability bounded by $\delta + O(1/n^c)$. This is near-optimal for constant $\delta$, as shown in [1].
Bibliographic Notes: The material on priority sampling is directly from [2], which describes applications and the relationship to prior sampling techniques, and also contains an experimental evaluation. Priority sampling has been shown to be "optimal" in a strong sense; see [3].
The $\ell_0$ sampling algorithm we described is from the paper by Jowhari, Saglam and Tardos [1]. See the chapter on signals in the McGregor-Muthu draft book for a simpler algorithm.
You can read the notes from the next lecture of Chandra Chekuri's course on Quantiles and Selection in Multiple Passes here.

  1. Hossein Jowhari, Mert Saglam, and Gábor Tardos. Tight bounds for Lp samplers, finding duplicates in streams, and related problems. CoRR, abs/1012.4889, 2010.
  2. Nick Duffield, Carsten Lund, and Mikkel Thorup. Priority sampling for estimation of arbitrary subset sums. Journal of the ACM (JACM), 54(6):32, 2007.
  3. Mario Szegedy. The DLT priority sampling is essentially optimal. In Proceedings of the thirty-eighth annual ACM Symposium on Theory of Computing, pages 150-158. ACM, 2006.
