Four Identities for RL used in TRPO

hamster professor of math
In this blog post, I will present four useful identities that give some theoretical insight into the advantage and value functions in reinforcement learning. These identities are used to prove the theoretical correctness of the policy iteration algorithm with non-decreasing expected return in the paper "Trust Region Policy Optimization" (TRPO) - Schulman et al., 2015, https://arxiv.org/abs/1502.05477. Readers should be familiar with the reinforcement learning terminology and notation from the TRPO paper before reading this post. Finally, I encourage readers to try proving the identities themselves if, like me, they feel the need to keep their math skills sharp now that there are no more problem sets and exams to do.

Identity 1

First, the expected value of the advantage function, with the action sampled from its own policy, is zero. Intuitively this makes sense: the advantage function for a state $s$ and an action $a$ measures how much better that action is relative to the "average" action taken at state $s$, and the "average" action is neither better nor worse than itself. Hence the identity:
\begin{equation}
\mathbb{E}_{a \sim \pi \mid s}\left[A_{\pi}(s, a)\right]=0
\end{equation}
Proof of Identity [1]:
\begin{aligned}
\mathbb{E}_{a \sim \pi \mid s}\left[A_{\pi}(s, a)\right] &= \mathbb{E}_{a \sim \pi \mid s}\left[Q_{\pi}(s, a)-V_{\pi}(s)\right] \\
&= \mathbb{E}_{a \sim \pi \mid s}\left[Q_{\pi}(s, a)-\mathbb{E}_{a \sim \pi \mid s}\left[Q_{\pi}(s, a)\right]\right] \\
&= 0
\end{aligned}
Here we used an identity from probability: $\mathbb{E}[X-\mathbb{E}[X]]=0$. You can reason that this makes sense intuitively, or you can prove it mathematically (hint: $\mathbb{E}[\mathbb{E}[X]]=\mathbb{E}[X]$; see the Appendix of this post).
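If you would like to see Identity [1] hold numerically, below is a minimal sketch (my own addition, not from the TRPO paper). The only ingredients the identity needs are some table of Q-values, a stochastic policy $\pi(a \mid s)$, and the definition $V_\pi(s) = \mathbb{E}_{a \sim \pi \mid s}[Q_\pi(s,a)]$; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 3

# An arbitrary Q-table standing in for Q_pi(s, a).
Q = rng.normal(size=(n_states, n_actions))

# A random stochastic policy pi(a | s): each row sums to 1.
pi = rng.random((n_states, n_actions))
pi /= pi.sum(axis=1, keepdims=True)

# V_pi(s) = E_{a ~ pi|s}[Q_pi(s, a)], and the advantage A = Q - V.
V = (pi * Q).sum(axis=1)      # shape (n_states,)
A = Q - V[:, None]            # shape (n_states, n_actions)

# Identity 1: E_{a ~ pi|s}[A_pi(s, a)] = 0 for every state s.
print((pi * A).sum(axis=1))   # ~[0, 0, 0, 0, 0] up to floating-point error
```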

Identity 2

The second identity I will present shows another way to describe the state-action value function Q:
\begin{equation}
Q_{\pi}(s, a) = r(s) + \gamma\, \mathbb{E}_{s' \sim P(s' \mid s, a)}\left[V_{\pi}(s')\right]
\end{equation}
Proof of Identity [2] (omitting the $\sim \pi$ and $\sim P$ symbols in the subscripts for brevity):
From the definition of Q:
\begin{aligned}
Q_{\pi}(s_{t}, a_{t}) &= \mathbb{E}_{s_{t+1}, a_{t+1}, \ldots}\left[\sum_{l=0}^{\infty} \gamma^{l} r(s_{t+l})\right] \\
&= r(s_t) + \mathbb{E}_{s_{t+1}, a_{t+1}, \ldots}\left[\sum_{l=1}^{\infty} \gamma^{l} r(s_{t+l})\right] \\
&= r(s_t) + \mathbb{E}_{s_{t+1}, a_{t+1}, \ldots}\left[\sum_{l=0}^{\infty} \gamma^{l+1} r(s_{t+1+l})\right] \\
&= r(s_t) + \gamma\, \mathbb{E}_{s_{t+1}, a_{t+1}, \ldots}\left[\sum_{l=0}^{\infty} \gamma^{l} r(s_{t+1+l})\right] \\
&= r(s_t) + \gamma\, \mathbb{E}_{s_{t+1}}\left[\mathbb{E}_{a_{t+1}, \ldots}\left[\sum_{l=0}^{\infty} \gamma^{l} r(s_{t+1+l})\right]\right] \\
&= r(s_t) + \gamma\, \mathbb{E}_{s_{t+1}}\left[V_{\pi}(s_{t+1})\right] && \text{by definition of $V_{\pi}$} \quad \square
\end{aligned}
In plain English: the Q function gives the reward of the current state $s$ plus the expected discounted return over all possible futures that can follow after taking action $a$ in state $s$.
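As a numerical sanity check (again my own sketch, not from the paper), we can build a small random MDP with state-only rewards $r(s)$, compute $V_\pi$ exactly from the Bellman linear system, estimate the left-hand side $Q_\pi(s,a)$ by Monte Carlo rollouts of the discounted return, and compare against $r(s) + \gamma\,\mathbb{E}_{s'}[V_\pi(s')]$. Everything here (sizes, `rollout_return`, the horizon cutoff) is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions, gamma = 4, 2, 0.9

# Random MDP: transition tensor P[s, a, s'] and state-only reward r(s).
P = rng.random((n_states, n_actions, n_states))
P /= P.sum(axis=2, keepdims=True)
r = rng.normal(size=n_states)

# Random stochastic policy pi(a | s).
pi = rng.random((n_states, n_actions))
pi /= pi.sum(axis=1, keepdims=True)

# Exact V_pi from the Bellman equation V = r + gamma * P_pi V.
P_pi = np.einsum("sa,sat->st", pi, P)   # state-to-state kernel under pi
V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r)

def rollout_return(s, a, horizon=100):
    """One Monte Carlo sample of sum_l gamma^l r(s_{t+l}), starting from (s, a)."""
    total, discount = r[s], gamma
    s = rng.choice(n_states, p=P[s, a])
    for _ in range(horizon):            # truncated: gamma**horizon is negligible
        a = rng.choice(n_actions, p=pi[s])
        total += discount * r[s]
        discount *= gamma
        s = rng.choice(n_states, p=P[s, a])
    return total

s0, a0 = 0, 1
lhs = np.mean([rollout_return(s0, a0) for _ in range(5_000)])  # Q_pi(s0, a0), estimated
rhs = r[s0] + gamma * P[s0, a0] @ V                            # r(s) + gamma E_{s'}[V_pi(s')]
print(lhs, rhs)   # should agree up to Monte Carlo noise
```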

Identity 3

The advantage function can be expressed without the state-action value function:
\begin{equation}
A_{\pi}(s, a) = \mathbb{E}_{s' \sim P(s' \mid s, a)}\left[r(s)+\gamma V_{\pi}(s')-V_{\pi}(s)\right]
\end{equation}
Proof of Identity [3]: use Identity [2] and the definition of the advantage function, $A_{\pi}(s,a) = Q_{\pi}(s, a) - V_{\pi}(s)$:
A_{\pi}(s, a) = r(s) + \gamma\, \mathbb{E}_{s' \sim P(s' \mid s, a)}\left[V_{\pi}(s')\right] - V_{\pi}(s)
Then move all terms that are constant with respect to the sampling distribution inside the expectation brackets. $\square$
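Spelled out, that last step is just linearity of expectation plus the fact that $r(s)$ and $V_{\pi}(s)$ are constants with respect to the sampled next state $s'$:

\begin{aligned}
A_{\pi}(s, a) &= r(s) + \gamma\, \mathbb{E}_{s' \sim P(s' \mid s, a)}\left[V_{\pi}(s')\right] - V_{\pi}(s) \\
&= \mathbb{E}_{s' \sim P(s' \mid s, a)}\left[r(s)\right] + \mathbb{E}_{s' \sim P(s' \mid s, a)}\left[\gamma V_{\pi}(s')\right] - \mathbb{E}_{s' \sim P(s' \mid s, a)}\left[V_{\pi}(s)\right] \\
&= \mathbb{E}_{s' \sim P(s' \mid s, a)}\left[r(s)+\gamma V_{\pi}(s')-V_{\pi}(s)\right]
\end{aligned}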

Identity 4

The final identity I will present is for the difference between the expected discounted returns of two policies $\tilde{\pi}$ and $\pi$. This difference is equal to the expected accumulated discounted advantages of policy $\pi$, evaluated along a trajectory actually generated by policy $\tilde{\pi}$.
\begin{equation}
\eta(\tilde{\pi}) - \eta(\pi) = \mathbb{E}_{\tau \sim \tilde{\pi}}\left[\sum_{t=0}^{\infty} \gamma^{t} A_{\pi}(s_{t}, a_{t})\right]
\end{equation}
The intuitive way to read this equation: if $\tilde{\pi}$ is better than $\pi$, then on average (weighted toward earlier states and actions by the discount $\gamma$), the advantage function of the older policy $\pi$ rates the actions chosen by $\tilde{\pi}$ as better than $\pi$'s own average action.
The proof of Identity [4] is in the TRPO paper; it assumes that you accept Identity [3] and know some probability algebra. I will rewrite it here in more detail. I will also explicitly show that the trajectory $\tau$ depends not only on the policy $\tilde{\pi}$, but also on the start-state distribution $\rho(\cdot)$ and the transition function $P(s' \mid s, a)$. This would be a bit excessive to notate in the paper, but I'll notate it here to minimize any chance of confusion.
\begin{aligned}
&\mathbb{E}_{\tau \sim \rho(\cdot), \tilde{\pi}, P}\left[\sum_{t=0}^{\infty} \gamma^{t} A_{\pi}(s_{t}, a_{t})\right] \\
&= \mathbb{E}_{\tau \sim \rho(\cdot), \tilde{\pi}, P}\left[\sum_{t=0}^{\infty} \gamma^{t}\, \mathbb{E}_{s_{t+1} \sim P(s_{t+1} \mid s_{t}, a_{t})}\left[r(s_{t})+\gamma V_{\pi}(s_{t+1})-V_{\pi}(s_{t})\right]\right] && \text{by Identity [3]} \\
&= \mathbb{E}_{\tau \sim \rho(\cdot), \tilde{\pi}, P}\left[\sum_{t=0}^{\infty} \gamma^{t}\left(r(s_{t})+\gamma V_{\pi}(s_{t+1})-V_{\pi}(s_{t})\right)\right] && \text{see Appendix} \\
&= \mathbb{E}_{\tau \sim \rho(\cdot), \tilde{\pi}, P}\left[\sum_{t=0}^{\infty} \gamma^{t} r(s_{t})+\sum_{t=0}^{\infty}\left(\gamma^{t+1} V_{\pi}(s_{t+1})-\gamma^{t} V_{\pi}(s_{t})\right)\right] \\
&= \mathbb{E}_{\tau \sim \rho(\cdot), \tilde{\pi}, P}\left[\sum_{t=0}^{\infty} \gamma^{t} r(s_{t})\right] + \mathbb{E}_{\tau \sim \rho(\cdot), \tilde{\pi}, P}\left[\sum_{t=0}^{\infty}\left(\gamma^{t+1} V_{\pi}(s_{t+1})-\gamma^{t} V_{\pi}(s_{t})\right)\right] \\
&= \mathbb{E}_{\tau \sim \rho(\cdot), \tilde{\pi}, P}\left[\sum_{t=0}^{\infty} \gamma^{t} r(s_{t})\right] + \mathbb{E}_{\tau \sim \rho(\cdot), \tilde{\pi}, P}\left[\lim_{T \rightarrow \infty} \gamma^{T} V_{\pi}(s_{T}) - V_{\pi}(s_{0})\right] && \text{the second sum telescopes} \\
&= \mathbb{E}_{\tau \sim \rho(\cdot), \tilde{\pi}, P}\left[\sum_{t=0}^{\infty} \gamma^{t} r(s_{t})\right] - \mathbb{E}_{s_{0}, a_{0}, s_{1}, \cdots \sim \rho(\cdot), \tilde{\pi}, P}\left[V_{\pi}(s_{0})\right] && \text{since $\gamma \in (0,1)$, so $\gamma^{T} V_{\pi}(s_{T}) \rightarrow 0$} \\
&= \mathbb{E}_{\tau \sim \rho(\cdot), \tilde{\pi}, P}\left[\sum_{t=0}^{\infty} \gamma^{t} r(s_{t})\right] - \mathbb{E}_{s_{0} \sim \rho(\cdot)}\left[V_{\pi}(s_{0})\right] \\
&= \mathbb{E}_{\tau \sim \rho(\cdot), \tilde{\pi}, P}\left[\sum_{t=0}^{\infty} \gamma^{t} r(s_{t})\right] - \mathbb{E}_{s_{0} \sim \rho(\cdot)}\left[\mathbb{E}_{a_{0}, s_{1}, a_{1}, \cdots \sim \pi, P}\left[\sum_{t=0}^{\infty} \gamma^{t} r(s_{t})\right]\right] \\
&= \eta(\tilde{\pi}) - \eta(\pi) \qquad \square
\end{aligned}
Identity [4] is used to theoretically guarantee an improving policy. If you can somehow update your policy from $\pi$ to $\tilde{\pi}$ while guaranteeing that the RHS of Identity [4] is non-negative, then the new policy is, in the worst case, just as good as the old policy.
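To see Identity [4] in action, here is a sketch (my own, not from the paper) that checks both sides exactly on a small random MDP. Rather than sampling trajectories, it uses the fact, also used in the TRPO paper, that the trajectory expectation on the RHS can be rewritten with the (unnormalized) discounted state-visitation frequencies $\rho_{\tilde{\pi}}(s) = \sum_{t=0}^{\infty} \gamma^{t} P(s_t = s \mid \tilde{\pi})$, which a linear solve gives in closed form. All names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_states, n_actions, gamma = 4, 3, 0.9

# Random MDP: P[s, a, s'], state-only rewards r(s), start distribution rho0.
P = rng.random((n_states, n_actions, n_states))
P /= P.sum(axis=2, keepdims=True)
r = rng.normal(size=n_states)
rho0 = rng.random(n_states)
rho0 /= rho0.sum()

def random_policy():
    pi = rng.random((n_states, n_actions))
    return pi / pi.sum(axis=1, keepdims=True)

def value(pi):
    """Exact V_pi from the linear Bellman system V = r + gamma * P_pi V."""
    P_pi = np.einsum("sa,sat->st", pi, P)
    return np.linalg.solve(np.eye(n_states) - gamma * P_pi, r)

pi, pi_tilde = random_policy(), random_policy()

# Advantage of the *old* policy pi, using Identity [3].
V = value(pi)
A = r[:, None] + gamma * P @ V - V[:, None]   # A_pi(s, a)

# LHS: eta(pi~) - eta(pi), with eta(.) = E_{s0 ~ rho0}[V(s0)].
lhs = rho0 @ (value(pi_tilde) - V)

# RHS: E_{tau ~ pi~}[sum_t gamma^t A_pi(s_t, a_t)]
#    = sum_s rho_pi~(s) * sum_a pi~(a|s) * A_pi(s, a),
# where rho_pi~^T = rho0^T (I - gamma * P_pi~)^{-1}.
P_tilde = np.einsum("sa,sat->st", pi_tilde, P)
rho_tilde = np.linalg.solve(np.eye(n_states) - gamma * P_tilde.T, rho0)
rhs = rho_tilde @ (pi_tilde * A).sum(axis=1)

print(lhs, rhs)   # equal up to floating-point error
```

If you additionally make $\tilde{\pi}$ greedy with respect to $A_\pi$, then $\sum_a \tilde{\pi}(a \mid s) A_\pi(s, a) = \max_a A_\pi(s, a) \geq 0$ at every state (by Identity [1] the average is zero, so the max is non-negative), which makes the RHS non-negative and gives exactly the improvement guarantee described above.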

Appendix: Some probability algebra

Below is a very important result of probability algebra that the TRPO paper assumes you know. Assume $\rho_1$ and $\rho_2$ are continuous probability distributions, and that $x$ and $y$ are sampled independently. Then,
\begin{equation}
\mathbb{E}_{x \sim \rho_1,\, y \sim \rho_2}\left[\mathbb{E}_{x \sim \rho_1}\left[f(x, y)\right]\right] = \mathbb{E}_{x \sim \rho_1,\, y \sim \rho_2}\left[f(x, y)\right]
\end{equation}
Proof by rewriting the expectation on the LHS as integrals:
\begin{aligned}
\int_{y} \rho_{2}(y) \int_{x} \rho_{1}(x)\left[\int_{z} \rho_{1}(z) f(z, y)\, dz\right] dx\, dy &= \int_{y} \rho_{2}(y)\left(\int_{x} \rho_{1}(x)\, dx\right)\left[\int_{z} \rho_{1}(z) f(z, y)\, dz\right] dy \\
&= \int_{y} \rho_{2}(y)\left[\int_{z} \rho_{1}(z) f(z, y)\, dz\right] dy
\end{aligned}
because the integral of a probability density over its domain is 1: $\int_{x} \rho_{1}(x)\, dx = 1$. This is the continuous analogue of the discrete case, where the probabilities of all possibilities must sum to 100%. The remaining double integral is exactly the RHS of Identity [5] (with the dummy variable $z$ renamed back to $x$). $\square$
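A quick discrete sanity check of Identity [5] (my own sketch): with finite supports the expectations become sums, and both sides can be evaluated exactly.

```python
import numpy as np

rng = np.random.default_rng(3)
nx, ny = 4, 5

rho1 = rng.random(nx); rho1 /= rho1.sum()   # distribution of x
rho2 = rng.random(ny); rho2 /= rho2.sum()   # distribution of y (independent of x)
f = rng.normal(size=(nx, ny))               # arbitrary f(x, y)

inner = rho1 @ f                            # E_{x~rho1}[f(x, y)] for each y
lhs = sum(rho1[i] * rho2[j] * inner[j] for i in range(nx) for j in range(ny))
rhs = rho1 @ f @ rho2                       # E_{x~rho1, y~rho2}[f(x, y)]
print(lhs, rhs)                             # equal up to floating-point error
```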
