Introduction to Deep Learning

You can read the notes from the previous lecture of Andrew Ng's CS229 course on Kernel Methods here.
We now begin our study of deep learning. In this set of notes, we give an overview of neural networks, discuss vectorization, and discuss training neural networks with backpropagation.

1. Supervised Learning with Non-linear Models

In the supervised learning setting (predicting $y$ from the input $x$), suppose our model/hypothesis is $h_{\theta}(x)$. In the past lectures, we have considered the cases where $h_{\theta}(x)=\theta^{\top} x$ (in linear regression or logistic regression) or $h_{\theta}(x)=\theta^{\top} \phi(x)$ (where $\phi(x)$ is the feature map). A commonality of these two models is that they are linear in the parameters $\theta$. Next we will consider learning a general family of models that are non-linear in both the parameters $\theta$ and the inputs $x$. The most common non-linear models are neural networks, which we will define starting from the next section. For this section, it suffices to think of $h_{\theta}(x)$ as an abstract non-linear model.[1]
Suppose $\left\{\left(x^{(i)}, y^{(i)}\right)\right\}_{i=1}^{n}$ are the training examples. For simplicity, we start with the case where $y^{(i)} \in \mathbb{R}$ and $h_{\theta}(x) \in \mathbb{R}$.
Cost/loss function. We define the least-squares cost function for the $i$-th example $\left(x^{(i)}, y^{(i)}\right)$ as
\begin{equation} J^{(i)}(\theta)=\frac{1}{2}\left(h_{\theta}\left(x^{(i)}\right)-y^{(i)}\right)^{2} \tag{1.1} \end{equation}
and define the mean-square cost function for the dataset as
\begin{equation} J(\theta)=\frac{1}{n} \sum_{i=1}^{n} J^{(i)}(\theta) \tag{1.2} \end{equation}
which is the same as in linear regression except that we introduce a constant $1/n$ in front of the cost function to be consistent with convention. Note that multiplying the cost function by a scalar will not change the local minima or global minima of the cost function. Also note that the underlying parameterization for $h_{\theta}(x)$ is different from the case of linear regression, even though the form of the cost function is the same mean-squared loss. Throughout the notes, we use the words "loss" and "cost" interchangeably.
Optimizers (SGD). Commonly, people use gradient descent (GD), stochastic gradient descent (SGD), or their variants to optimize the loss function $J(\theta)$. GD's update rule can be written as[2]
\begin{equation} \theta:=\theta-\alpha \nabla_{\theta} J(\theta) \tag{1.3} \end{equation}
where $\alpha>0$ is often referred to as the learning rate or step size. Next, we introduce a version of SGD (Algorithm 1), which is slightly different from that in the first lecture notes.
Algorithm 1 Stochastic Gradient Descent
1: Hyperparameters: learning rate $\alpha$, number of total iterations $n_{\text{iter}}$.
2: Initialize $\theta$ randomly.
3: for $i = 1$ to $n_{\text{iter}}$ do
4: $\quad$ Sample $j$ uniformly from $\{1,\ldots,n\}$, and update $\theta$ by \begin{equation} \theta := \theta - \alpha \nabla_{\theta}J^{(j)}(\theta) \tag{1.4} \end{equation}
Oftentimes computing the gradients of $B$ examples simultaneously for the parameter $\theta$ can be faster than computing $B$ gradients separately due to hardware parallelization. Therefore, a mini-batch version of SGD is most commonly used in deep learning, as shown in Algorithm 2 (a minimal numpy sketch follows the algorithm). There are also other variants of SGD or mini-batch SGD with slightly different sampling schemes.
Algorithm 2 Mini-batch Stochastic Gradient Descent
1: Hyperparameters: learning rate $\alpha$, batch size $B$, # iterations $n_{\text{iter}}$.
2: Initialize $\theta$ randomly.
3: for $i = 1$ to $n_{\text{iter}}$ do
4: $\quad$ Sample $B$ examples $j_{1},\ldots,j_{B}$ (without replacement) uniformly from $\{1,\ldots, n\}$, and update $\theta$ by
\begin{equation} \theta := \theta - \frac{\alpha}{B} \sum^{B}_{k=1}\nabla_{\theta}J^{(j_{k})}(\theta) \tag{1.5} \end{equation}
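To make the update rule (1.5) concrete, here is a minimal numpy sketch of mini-batch SGD. It assumes a user-supplied function `grad_J_i(theta, j)` (a hypothetical name) that returns $\nabla_{\theta} J^{(j)}(\theta)$ for the $j$-th example; it illustrates the sampling and averaging steps, not a reference implementation.

```python
import numpy as np

def minibatch_sgd(grad_J_i, theta, n, alpha=0.01, B=32, n_iter=1000, seed=0):
    """Sketch of Algorithm 2. grad_J_i(theta, j) is assumed to return the
    gradient of the per-example loss J^{(j)} at the current parameters."""
    rng = np.random.default_rng(seed)
    for _ in range(n_iter):
        # Sample B example indices without replacement.
        batch = rng.choice(n, size=B, replace=False)
        # Average the per-example gradients and apply the update rule (1.5).
        grad = np.mean([grad_J_i(theta, j) for j in batch], axis=0)
        theta = theta - alpha * grad
    return theta
```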
With these generic algorithms, a typical deep learning model is learned in the following steps: 1. define a neural network parametrization $h_{\theta}(x)$, which we will introduce in Section 2; 2. write the backpropagation algorithm to compute the gradient of the loss function $J^{(j)}(\theta)$ efficiently, which will be covered in Section 3; and 3. run SGD or mini-batch SGD (or other gradient-based optimizers) with the loss function $J(\theta)$.

2. Neural Networks

Neural networks refer to a broad class of non-linear models/parametrizations $h_{\theta}(x)$ that involve combinations of matrix multiplications and other entrywise non-linear operations. We will start small and slowly build up a neural network, step by step.
A Neural Network with a Single Neuron. Recall the housing price prediction problem from before: given the size of the house, we want to predict the price. We will use it as a running example in this subsection.
Previously, we fit a straight line to the graph of size vs. housing price. Now, instead of fitting a straight line, we wish to prevent negative housing prices by setting the absolute minimum price to zero. This produces a "kink" in the graph as shown in Figure 1. How do we represent such a function with a single kink as $h_{\theta}(x)$ with unknown parameters? (After doing so, we can invoke the machinery in Section 1.)
We define a parameterized function $h_{\theta}(x)$ with input $x$, parameterized by $\theta$, which outputs the price of the house $y$. Formally, $h_{\theta}: x \rightarrow y$. Perhaps one of the simplest parametrizations would be
\begin{equation} h_{\theta}(x)=\max (w x+b, 0) \text {, where } \theta=(w, b) \in \mathbb{R}^{2} \tag{2.1} \end{equation}
Here $h_{\theta}(x)$ returns a single value: $(wx+b)$ or zero, whichever is greater. In the context of neural networks, the function $\max \{t, 0\}$ is called a ReLU (pronounced "ray-lu"), or rectified linear unit, and often denoted by $\operatorname{ReLU}(t) \triangleq \max \{t, 0\}$.
Generally, a one-dimensional non-linear function that maps $\mathbb{R}$ to $\mathbb{R}$ such as ReLU is often referred to as an activation function. The model $h_{\theta}(x)$ is said to have a single neuron partly because it has a single non-linear activation function. (We will discuss below why a non-linear activation is called a neuron.)
When the input $x \in \mathbb{R}^{d}$ has multiple dimensions, a neural network with a single neuron can be written as
\begin{equation} h_{\theta}(x)=\operatorname{ReLU}\left(w^{\top} x+b\right) \text {, where } w \in \mathbb{R}^{d}, b \in \mathbb{R} \text {, and } \theta=(w, b) \tag{2.2} \end{equation}
The term $b$ is often referred to as the "bias", and the vector $w$ is referred to as the weight vector. Such a neural network has 1 layer. (We will define what multiple layers mean in the sequel.)
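As a quick illustration of equation (2.2), the following numpy sketch evaluates a single-neuron model; the input values and parameters are made up for the example.

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def h_single_neuron(x, w, b):
    """Single-neuron model h_theta(x) = ReLU(w^T x + b), cf. equation (2.2)."""
    return relu(np.dot(w, x) + b)

# Made-up example with a 2-dimensional input.
x = np.array([2.0, 3.0])
w = np.array([0.5, -1.0])
b = 0.1
print(h_single_neuron(x, w, b))  # 0.0, since 0.5*2 - 1.0*3 + 0.1 < 0
```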
Stacking Neurons. A more complex neural network may take single neurons like the one described above and "stack" them together such that one neuron passes its output as input into the next neuron, resulting in a more complex function.
Let us now deepen the housing prediction example. In addition to the size of the house, suppose that you know the number of bedrooms, the zip code and the wealth of the neighborhood. Building neural networks is analogous to Lego bricks: you take individual bricks and stack them together to build complex structures. The same applies to neural networks: we take individual neurons and stack them together to create complex neural networks.
Given these features (size, number of bedrooms, zip code, and wealth), we might then decide that the price of the house depends on the maximum family size it can accommodate. Suppose the family size is a function of the size of the house and number of bedrooms (see Figure 2). The zip code may provide additional information such as how walkable the neighborhood is (i.e., can you walk to the grocery store or do you need to drive everywhere). Combining the zip code with the wealth of the neighborhood may predict the quality of the local elementary school. Given these three derived features (family size, walkable, school quality), we may conclude that the price of the home ultimately depends on these three features.
Formally, the input to a neural network is a set of input features $x_{1}, x_{2}, x_{3}, x_{4}$. We denote the intermediate variables for "family size", "walkable", and "school quality" by $a_{1}, a_{2}, a_{3}$ (these $a_{i}$'s are often referred to as
Figure 1: Housing prices with a "kink" in the graph.
Figure 2: Diagram of a small neural network for predicting housing prices.
"hidden units" or "hidden neurons"). We represent each of the a i a i a_(i)a_{i} 's as a neural network with a single neuron with a subset of x 1 , , x 4 x 1 , , x 4 x_(1),dots,x_(4)x_{1}, \ldots, x_{4} as inputs. Then as in Figure 1, we will have the parameterization:
\begin{aligned} &a_{1}=\operatorname{ReLU}\left(\theta_{1} x_{1}+\theta_{2} x_{2}+\theta_{3}\right) \\ &a_{2}=\operatorname{ReLU}\left(\theta_{4} x_{3}+\theta_{5}\right) \\ &a_{3}=\operatorname{ReLU}\left(\theta_{6} x_{3}+\theta_{7} x_{4}+\theta_{8}\right) \end{aligned}
where $\left(\theta_{1}, \cdots, \theta_{8}\right)$ are parameters. Now we represent the final output $h_{\theta}(x)$ as another linear function with $a_{1}, a_{2}, a_{3}$ as inputs, and we get[3]
\begin{equation} h_{\theta}(x)=\theta_{9} a_{1}+\theta_{10} a_{2}+\theta_{11} a_{3}+\theta_{12} \tag{2.3} \end{equation}
where $\theta$ contains all the parameters $(\theta_{1},\ldots,\theta_{12})$.
Now we represent the output as a quite complex function of $x$ with parameters $\theta$. Then you can use this parametrization $h_{\theta}$ with the machinery of Section 1 to learn the parameters $\theta$.
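For concreteness, here is a small numpy sketch of the hand-wired parameterization in equation (2.3); the function name is made up, and the indices are shifted by one because Python arrays are 0-indexed.

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def h_housing(x, theta):
    """Hand-wired network of equation (2.3); x = (x1, ..., x4), theta = (theta_1, ..., theta_12)."""
    a1 = relu(theta[0] * x[0] + theta[1] * x[1] + theta[2])   # "family size"
    a2 = relu(theta[3] * x[2] + theta[4])                     # "walkable"
    a3 = relu(theta[5] * x[2] + theta[6] * x[3] + theta[7])   # "school quality"
    return theta[8] * a1 + theta[9] * a2 + theta[10] * a3 + theta[11]
```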
Inspiration from Biological Neural Networks. As the name suggests, artificial neural networks were inspired by biological neural networks. The hidden units $a_{1}, \ldots, a_{m}$ correspond to the neurons in a biological neural network, and the parameters $\theta_{i}$'s correspond to the synapses. However, it's unclear how similar modern deep artificial neural networks are to biological ones. For example, perhaps not many neuroscientists think biological neural networks could have 1000 layers, while some modern artificial neural networks do (we will elaborate more on the notion of layers). Moreover, it's an open question whether human brains update their neural networks in a way similar to the way that computer scientists learn artificial neural networks (using backpropagation, which we will introduce in the next section).
Two-layer Fully-Connected Neural Networks. We constructed the neural network in equation (2.3) using a significant amount of prior knowledge/belief about how the "family size", "walkable", and "school quality" are determined by the inputs. We implicitly assumed that we know the family size is an important quantity to look at and that it can be determined by only the "size" and "# bedrooms". Such prior knowledge might not be available for other applications. It would be more flexible and general to have a generic parameterization. A simple way would be to write the intermediate variable $a_{1}$ as a function of all of $x_{1}, \ldots, x_{4}$:
\begin{align} a_{1}&=\operatorname{ReLU}\left(w_{1}^{\top} x+b_{1}\right), \text { where } w_{1} \in \mathbb{R}^{4} \text { and } b_{1} \in \mathbb{R} \tag{2.4}\\ a_{2}&=\operatorname{ReLU}\left(w_{2}^{\top} x+b_{2}\right), \text { where } w_{2} \in \mathbb{R}^{4} \text { and } b_{2} \in \mathbb{R}\nonumber\\ a_{3}&=\operatorname{ReLU}\left(w_{3}^{\top} x+b_{3}\right), \text { where } w_{3} \in \mathbb{R}^{4} \text { and } b_{3} \in \mathbb{R}\nonumber \end{align}
We still define $h_{\theta}(x)$ using equation (2.3) with $a_{1}, a_{2}, a_{3}$ being defined as above. Thus we have a so-called fully-connected neural network, as visualized in the dependency graph in Figure 3, because all the intermediate variables $a_{i}$'s depend on all the inputs $x_{i}$'s.
For full generality, a two-layer fully-connected neural network with $m$ hidden units and $d$-dimensional input $x \in \mathbb{R}^{d}$ is defined as
\begin{equation} \forall j \in[1, \ldots, m], \quad z_{j}=w_{j}^{[1]^{\top}} x+b_{j}^{[1]} \text { where } w_{j}^{[1]} \in \mathbb{R}^{d}, b_{j}^{[1]} \in \mathbb{R} \tag{2.5} \end{equation}
Figure 3: Diagram of a two-layer fully connected neural network. Each edge from node $x_{i}$ to node $a_{j}$ indicates that $a_{j}$ depends on $x_{i}$. The edge from $x_{i}$ to $a_{j}$ is associated with the weight $\left(w_{j}^{[1]}\right)_{i}$, which denotes the $i$-th coordinate of the vector $w_{j}^{[1]}$. The activation $a_{j}$ can be computed by taking the ReLU of the weighted sum of the $x_{i}$'s, with the weights being those associated with the incoming edges, that is, $a_{j}=\operatorname{ReLU}\left(\sum_{i=1}^{d}\left(w_{j}^{[1]}\right)_{i} x_{i}\right)$.
\begin{align} a_{j} &=\operatorname{ReLU}\left(z_{j}\right),\nonumber \\ a &=\left[a_{1}, \ldots, a_{m}\right]^{\top} \in \mathbb{R}^{m}\nonumber \\ h_{\theta}(x) &=w^{[2]^{\top}} a+b^{[2]} \text { where } w^{[2]} \in \mathbb{R}^{m}, b^{[2]} \in \mathbb{R}, \tag{2.6} \end{align}
Note that by default the vectors in $\mathbb{R}^{d}$ are viewed as column vectors, and in particular $a$ is a column vector with components $a_{1}, a_{2}, \ldots, a_{m}$. The indices $^{[1]}$ and $^{[2]}$ are used to distinguish two sets of parameters: the $w_{j}^{[1]}$'s (each of which is a vector in $\mathbb{R}^{d}$) and $w^{[2]}$ (which is a vector in $\mathbb{R}^{m}$). We will have more of these later.
Vectorization. Before we introduce neural networks with more layers and more complex structures, we will simplify the expressions for neural networks with more matrix and vector notation. Another important motivation for vectorization is speed in the implementation. In order to implement a neural network efficiently, one must be careful when using for loops. The most natural way to implement equation (2.5) in code is perhaps to use a for loop. In practice, the dimensionalities of the inputs and hidden units are high. As a result, code will run very slowly if you use for loops. Leveraging the parallelism in GPUs is/was crucial for the progress of deep learning.
This gave rise to vectorization. Instead of using for loops, vectorization takes advantage of matrix algebra and highly optimized numerical linear algebra packages (e.g., BLAS) to make neural network computations run quickly. Before the deep learning era, a for loop may have been sufficient on smaller datasets, but running modern deep networks on state-of-the-art datasets with for loops would be infeasible.
We vectorize the two-layer fully-connected neural network as below. We define a weight matrix $W^{[1]}$ in $\mathbb{R}^{m \times d}$ as the concatenation of all the vectors $w_{j}^{[1]}$'s in the following way:
\begin{equation} W^{[1]}=\left[\begin{array}{c} -w_{1}^{[1]^{\top}}- \\ -w_{2}^{[1]^{\top}}- \\ \vdots \\ -w_{m}^{[1]^{\top}}- \end{array}\right] \in \mathbb{R}^{m \times d} \tag{2.7} \end{equation}
Now by the definition of matrix-vector multiplication, we can write $z=\left[z_{1}, \ldots, z_{m}\right]^{\top} \in \mathbb{R}^{m}$ as
\begin{equation} \underbrace{\left[\begin{array}{c} z_{1} \\ \vdots \\ \vdots \\ z_{m} \end{array}\right]}_{z \in \mathbb{R}^{m \times 1}}=\underbrace{\left[\begin{array}{c} -w_{1}^{[1]^{\top}}- \\ -w_{2}^{[1]^{\top}}- \\ \vdots \\ -w_{m}^{[1]^{\top}}- \end{array}\right]}_{W^{[1]} \in \mathbb{R}^{m \times d}} \underbrace{\left[\begin{array}{c} x_{1} \\ x_{2} \\ \vdots \\ x_{d} \end{array}\right]}_{x \in \mathbb{R}^{d \times 1}}+\underbrace{\left[\begin{array}{c} b_{1}^{[1]} \\ b_{2}^{[1]} \\ \vdots \\ b_{m}^{[1]} \end{array}\right]}_{b^{[1]} \in \mathbb{R}^{m \times 1}} \tag{2.8} \end{equation}
Or succinctly,
\begin{equation} z=W^{[1]} x+b^{[1]} \tag{2.9} \end{equation}
We remark again that a vector in $\mathbb{R}^{d}$ in these notes, following the conventions previously established, is automatically viewed as a column vector, and can also be viewed as a $d \times 1$ dimensional matrix. (Note that this is different from numpy where a vector is viewed as a row vector in broadcasting.)
Computing the activations $a \in \mathbb{R}^{m}$ from $z \in \mathbb{R}^{m}$ involves an elementwise non-linear application of the ReLU function, which can be computed in parallel efficiently. Overloading ReLU for element-wise application of ReLU (meaning, for a vector $t \in \mathbb{R}^{d}$, $\operatorname{ReLU}(t)$ is a vector such that $\operatorname{ReLU}(t)_{i}=\operatorname{ReLU}\left(t_{i}\right)$), we have
\begin{equation} a=\operatorname{ReLU}(z) \tag{2.10} \end{equation}
Define $W^{[2]}=\left[w^{[2]^{\top}}\right] \in \mathbb{R}^{1\times m}$ similarly. Then, the model in equation (2.6) can be summarized as
\begin{align} a &=\operatorname{ReLU}\left(W^{[1]} x+b^{[1]}\right)\nonumber \\ h_{\theta}(x) &=W^{[2]} a+b^{[2]} \tag{2.11} \end{align}
Here $\theta$ consists of $W^{[1]}, W^{[2]}$ (often referred to as the weight matrices) and $b^{[1]}, b^{[2]}$ (referred to as the biases). The collection of $W^{[1]}, b^{[1]}$ is referred to as the first layer, and $W^{[2]}, b^{[2]}$ the second layer. The activation $a$ is referred to as the hidden layer. A two-layer neural network is also called a one-hidden-layer neural network.
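A minimal numpy sketch of the vectorized forward pass in equation (2.11), with illustrative (made-up) values for $d$ and $m$:

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def two_layer_forward(x, W1, b1, W2, b2):
    """Forward pass of equation (2.11). Shapes: x (d,), W1 (m, d), b1 (m,), W2 (1, m), b2 (1,)."""
    z = W1 @ x + b1      # equation (2.9)
    a = relu(z)          # equation (2.10)
    return W2 @ a + b2   # equation (2.11)

# Tiny example with d = 4 inputs and m = 3 hidden units.
rng = np.random.default_rng(0)
d, m = 4, 3
x = rng.normal(size=d)
W1, b1 = rng.normal(size=(m, d)), np.zeros(m)
W2, b2 = rng.normal(size=(1, m)), np.zeros(1)
print(two_layer_forward(x, W1, b1, W2, b2))
```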
Multi-layer fully-connected neural networks. With this succinct notation, we can stack more layers to get a deeper fully-connected neural network. Let $r$ be the number of layers (weight matrices). Let $W^{[1]}, \ldots, W^{[r]}, b^{[1]}, \ldots, b^{[r]}$ be the weight matrices and biases of all the layers. Then a multi-layer neural network can be written as
\begin{align} a^{[1]} &=\operatorname{ReLU}\left(W^{[1]} x+b^{[1]}\right)\nonumber \\ a^{[2]} &=\operatorname{ReLU}\left(W^{[2]} a^{[1]}+b^{[2]}\right)\nonumber \\ \cdots &\nonumber \\ a^{[r-1]} &=\operatorname{ReLU}\left(W^{[r-1]} a^{[r-2]}+b^{[r-1]}\right)\nonumber \\ h_{\theta}(x) &=W^{[r]} a^{[r-1]}+b^{[r]} \tag{2.12} \end{align}
We note that the weight matrices and biases need to have compatible dimensions for the equations above to make sense. If $a^{[k]}$ has dimension $m_{k}$, then the weight matrix $W^{[k]}$ should be of dimension $m_{k} \times m_{k-1}$, and the bias $b^{[k]} \in \mathbb{R}^{m_{k}}$. Moreover, $W^{[1]} \in \mathbb{R}^{m_{1} \times d}$ and $W^{[r]} \in \mathbb{R}^{1 \times m_{r-1}}$.
The total number of neurons in the network is $m_{1}+\cdots+m_{r}$, and the total number of parameters in this network is $(d+1) m_{1}+\left(m_{1}+1\right) m_{2}+\cdots+\left(m_{r-1}+1\right) m_{r}$.
Sometimes for notational consistency we also write $a^{[0]}=x$ and $a^{[r]}=h_{\theta}(x)$. Then we have the simple recursion
\begin{equation} a^{[k]}=\operatorname{ReLU}\left(W^{[k]} a^{[k-1]}+b^{[k]}\right), \forall k=1, \ldots, r-1 \tag{2.13} \end{equation}
Note that this would have been true for $k=r$ if there were an additional ReLU in equation (2.12), but often people like to make the last layer linear (aka without a ReLU) so that negative outputs are possible and it's easier to interpret the last layer as a linear model. (More on the interpretability at the "connection to kernel method" paragraph of this section.)
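The recursion (2.13), together with a linear last layer, can be written as a short loop; the following numpy sketch assumes the weights and biases are given as Python lists `Ws` and `bs` (hypothetical names chosen for illustration).

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def multilayer_forward(x, Ws, bs):
    """Forward pass of equation (2.12): ReLU on every layer except the last,
    which is kept linear. Ws = [W1, ..., Wr], bs = [b1, ..., br]."""
    a = x
    for W, b in zip(Ws[:-1], bs[:-1]):
        a = relu(W @ a + b)        # recursion (2.13)
    return Ws[-1] @ a + bs[-1]     # linear last layer
```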
Other activation functions. The activation function ReLU can be replaced by many other non-linear functions $\sigma(\cdot)$ that map $\mathbb{R}$ to $\mathbb{R}$, such as
\begin{align} &\sigma(z)=\frac{1}{1+e^{-z}} \quad(\text {sigmoid}) \tag{2.14}\\ &\sigma(z)=\frac{e^{z}-e^{-z}}{e^{z}+e^{-z}} \quad(\text{tanh}) \tag{2.15} \end{align}
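These two activations translate directly into numpy; note that for large $|z|$ the textbook formula for tanh can overflow, and `np.tanh` is the numerically stable choice in practice.

```python
import numpy as np

def sigmoid(z):
    """Equation (2.14)."""
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    """Equation (2.15); equivalent to np.tanh, which is more stable for large |z|."""
    return (np.exp(z) - np.exp(-z)) / (np.exp(z) + np.exp(-z))
```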
Why do we not use the identity function for $\sigma(z)$? That is, why not use $\sigma(z)=z$? Assume for the sake of argument that $b^{[1]}$ and $b^{[2]}$ are zeros. Suppose $\sigma(z)=z$; then for a two-layer neural network, we have that
\begin{align} h_{\theta}(x) &=W^{[2]} a^{[1]} & & \tag{2.16}\\ &=W^{[2]} \sigma\left(z^{[1]}\right) & & \text { by definition } \tag{2.17} \\ &=W^{[2]} z^{[1]} & & \text { since } \sigma(z)=z \tag{2.18}\\ &=W^{[2]} W^{[1]} x & & \text { from Equation }(2.8) \tag{2.19} \\ &=\tilde{W} x & & \text { where } \tilde{W}=W^{[2]} W^{[1]} \tag{2.20} \end{align}
Notice how $W^{[2]} W^{[1]}$ collapsed into $\tilde{W}$.
This is because applying a linear function to another linear function will result in a linear function over the original input (i.e., you can construct a $\tilde{W}$ such that $\tilde{W} x=W^{[2]} W^{[1]} x$). This loses much of the representational power of the neural network, as oftentimes the output we are trying to predict has a non-linear relationship with the inputs. Without non-linear activation functions, the neural network will simply perform linear regression.
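A quick numerical check of the collapse argument, with made-up shapes: with the identity activation, applying $W^{[2]}$ after $W^{[1]}$ is the same as applying the single matrix $\tilde{W}=W^{[2]} W^{[1]}$.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(3, 4))   # first-layer weights
W2 = rng.normal(size=(1, 3))   # second-layer weights
x = rng.normal(size=4)

W_tilde = W2 @ W1              # the collapsed linear map
print(np.allclose(W2 @ (W1 @ x), W_tilde @ x))  # True
```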
Connection to the Kernel Method. In the previous lectures, we covered the concept of feature maps. Recall that the main motivation for feature maps is to represent functions that are non-linear in the input $x$ by $\theta^{\top} \phi(x)$, where $\theta$ are the parameters and $\phi(x)$, the feature map, is a handcrafted function non-linear in the raw input $x$. The performance of the learning algorithms can depend significantly on the choice of the feature map $\phi(x)$. Oftentimes people use domain knowledge to design a feature map $\phi(x)$ that suits the particular application. The process of choosing the feature maps is often referred to as feature engineering.
We can view deep learning as a way to automatically learn the right feature map (sometimes also referred to as "the representation") as follows. Suppose we denote by $\beta$ the collection of the parameters in a fully-connected neural network (equation (2.12)) except those in the last layer. Then we can abstract $a^{[r-1]}$ as a function of the input $x$ and the parameters in $\beta$: $a^{[r-1]}=\phi_{\beta}(x)$. Now we can write the model as
\begin{equation} h_{\theta}(x)=W^{[r]} \phi_{\beta}(x)+b^{[r]} \tag{2.21} \end{equation}
When $\beta$ is fixed, $\phi_{\beta}(\cdot)$ can be viewed as a feature map, and therefore $h_{\theta}(x)$ is just a linear model over the features $\phi_{\beta}(x)$. However, when we train the neural network, both the parameters in $\beta$ and the parameters $W^{[r]}, b^{[r]}$ are optimized, and therefore we are not just learning a linear model in the feature space, but also learning a good feature map $\phi_{\beta}(\cdot)$ itself so that it's possible to predict accurately with a linear model on top of the feature map. Therefore, deep learning tends to depend less on the domain knowledge of the particular application and often requires less feature engineering. The penultimate layer $a^{[r-1]}$ is often (informally) referred to as the learned features or representations in the context of deep learning.
In the example of house price prediction, a fully-connected neural network does not need us to specify intermediate quantities such as "family size", and may automatically discover some useful features in the penultimate layer (the activation $a^{[r-1]}$), and use them to linearly predict the housing price. Often the feature map / representation obtained from one dataset (that is, the function $\phi_{\beta}(\cdot)$) can also be useful for other datasets, which indicates that they contain essential information about the data. However, oftentimes, the neural network will discover complex features which are very useful for predicting the output but may be difficult for a human to understand or interpret. This is why some people refer to neural networks as a black box, as it can be difficult to understand the features it has discovered.

3. Backpropagation

In this section, we introduce backpropagation, or auto-differentiation, which computes the gradient of the loss $\nabla J^{(j)}(\theta)$ efficiently. We will start with an informal theorem that states that as long as a real-valued function $f$ can be efficiently computed/evaluated by a differentiable network or circuit, then its gradient can be efficiently computed in a similar time. We will then show how to do this concretely for fully-connected neural networks.
Because the formality of the general theorem is not the main focus here, we will introduce the terms with informal definitions. By a differentiable circuit or a differentiable network, we mean a composition of a sequence of differentiable arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary differentiable functions (ReLU, $\exp$, $\log$, $\sin$, $\cos$, etc.). Let the size of the circuit be the total number of such operations and elementary functions. We assume that each of the operations and functions, and their derivatives or partial derivatives, can be computed in $O(1)$ time on a computer.
Theorem 3.1: [backpropagation or auto-differentiation, informally stated] Suppose a differentiable circuit of size $N$ computes a real-valued function $f: \mathbb{R}^{\ell} \rightarrow \mathbb{R}$. Then, the gradient $\nabla f$ can be computed in time $O(N)$, by a circuit of size $O(N)$.[4]
We note that the loss function $J^{(j)}(\theta)$ for the $j$-th example can indeed be computed by a sequence of operations and functions involving additions, subtractions, multiplications, and non-linear activations. Thus the theorem suggests that we should be able to compute $\nabla J^{(j)}(\theta)$ in a similar time to that for computing $J^{(j)}(\theta)$ itself. This applies not only to the fully-connected neural network introduced in Section 2, but also to many other types of neural networks.
In the rest of the section, we will showcase how to compute the gradient of the loss efficiently for fully-connected neural networks using backpropagation. Even though auto-differentiation or backpropagation is implemented in all the deep learning packages such as TensorFlow and PyTorch, understanding it is very helpful for gaining insights into the workings of deep learning.

3.1. Preliminary: chain rule

We first recall the chain rule in calculus. Suppose the variable $J$ depends on the variables $\theta_{1}, \ldots, \theta_{p}$ via the intermediate variables $g_{1}, \ldots, g_{k}$:
\begin{align} &g_{j}=g_{j}\left(\theta_{1}, \ldots, \theta_{p}\right), \forall j \in\{1, \cdots, k\} \tag{3.1} \\ &J=J\left(g_{1}, \ldots, g_{k}\right) \tag{3.2} \end{align}
Here we overload the meaning of the $g_{j}$'s: they denote both the intermediate variables and the functions used to compute them. Then, by the chain rule, we have that $\forall i$,
\begin{equation} \frac{\partial J}{\partial \theta_{i}}=\sum_{j=1}^{k} \frac{\partial J}{\partial g_{j}} \frac{\partial g_{j}}{\partial \theta_{i}} \tag{3.3} \end{equation}
For the ease of invoking the chain rule in the following subsections in various ways, we will call $J$ the output variable, $g_{1}, \ldots, g_{k}$ intermediate variables, and $\theta_{1}, \ldots, \theta_{p}$ the input variables in the chain rule.

3.2. Backpropagation for two-layer neural networks

Now we consider the two-layer neural network defined in equation (2.11). Our general approach is to first unpack the vectorized notation to scalar form to apply the chain rule, but as soon as we finish the derivation, we will pack the scalar equations back to a vectorized form to keep the notations succinct.
Recall that the following equations are used for the computation of the loss $J$:
\begin{align} z &=W^{[1]} x+b^{[1]} \nonumber\\ a &=\operatorname{ReLU}(z) \nonumber\\ h_{\theta}(x) \triangleq o &=W^{[2]} a+b^{[2]} \nonumber\\ J &=\frac{1}{2}(y-o)^{2} \tag{3.4} \end{align}
Recall that $W^{[1]} \in \mathbb{R}^{m \times d}$, $W^{[2]} \in \mathbb{R}^{1\times m}$, and $b^{[1]}, z, a \in \mathbb{R}^{m}$, and $o, y, b^{[2]} \in \mathbb{R}$. Recall that a vector in $\mathbb{R}^{d}$ is automatically interpreted as a column vector (like a matrix in $\mathbb{R}^{d \times 1}$) if need be.[5]
Computing $\frac{\partial J}{\partial W^{[2]}}$. Suppose $W^{[2]}=\left[W_{1}^{[2]}, \ldots, W_{m}^{[2]}\right]$. We start by computing $\frac{\partial J}{\partial W_{i}^{[2]}}$ using the chain rule (3.3) with $o$ as the intermediate variable.
\begin{align} \frac{\partial J}{\partial W_{i}^{[2]}}&=\frac{\partial J}{\partial o} \cdot \frac{\partial o}{\partial W_{i}^{[2]}} \nonumber\\ &=(o-y) \cdot \frac{\partial o}{\partial W_{i}^{[2]}} \nonumber\\ &=(o-y) \cdot a_{i} \qquad\qquad (\text {because } o=\sum_{k=1}^{m} W_{k}^{[2]} a_{k}+b^{[2]})\nonumber \end{align}
Vectorized notation. The equation above in vectorized notation becomes
\begin{equation} \frac{\partial J}{\partial W^{[2]}}=(o-y) \cdot a^{\top} \in \mathbb{R}^{1\times m} \tag{3.5} \end{equation}
Similarly, we leave the reader to verify that
\begin{equation} \frac{\partial J}{\partial b^{[2]}}=(o-y) \in \mathbb{R} \tag{3.6} \end{equation}
Clarification for the dimensionality of the partial derivative notation. We will use the notation $\frac{\partial J}{\partial A}$ frequently in the rest of the lecture notes. We note that here we only use this notation for the case when $J$ is a real-valued variable,[6] but $A$ can be a vector or a matrix. Moreover, $\frac{\partial J}{\partial A}$ has the same dimensionality as $A$. For example, when $A$ is a matrix, the $(i, j)$-th entry of $\frac{\partial J}{\partial A}$ is equal to $\frac{\partial J}{\partial A_{i j}}$. If you are familiar with the notion of total derivatives, we note that the convention for dimensionality here is different from that for total derivatives.
Computing $\frac{\partial J}{\partial W^{[1]}}$. Next we compute $\frac{\partial J}{\partial W^{[1]}}$. We first unpack the vectorized notation: let $W_{i j}^{[1]}$ denote the $(i, j)$-th entry of $W^{[1]}$, where $i \in[m]$ and $j \in[d]$. We compute $\frac{\partial J}{\partial W_{i j}^{[1]}}$ using the chain rule (3.3) with $z_{i}$ as the intermediate variable.
\begin{align} \frac{\partial J}{\partial W_{i j}^{[1]}} & =\frac{\partial J}{\partial z_{i}} \cdot \frac{\partial z_{i}}{\partial W_{i j}^{[1]}} \nonumber \\ & =\frac{\partial J}{\partial z_{i}} \cdot x_{j} \qquad (\text {because } z_{i}=\sum_{k=1}^{d} W_{i k}^{[1]} x_{k}+b_{i}^{[1]}) \nonumber \end{align}
Vectorized notation. The equation above can be written compactly as
\begin{equation} \frac{\partial J}{\partial W^{[1]}}=\frac{\partial J}{\partial z} \cdot x^{\top} \tag{3.7} \end{equation}
We can verify that the dimensions match: $\frac{\partial J}{\partial W^{[1]}} \in \mathbb{R}^{m \times d}$, $\frac{\partial J}{\partial z} \in \mathbb{R}^{m \times 1}$, and $x^{\top} \in \mathbb{R}^{1\times d}$.
Abstraction: For future usage, the computations for $\frac{\partial J}{\partial W^{[1]}}$ and $\frac{\partial J}{\partial W^{[2]}}$ above can be abstracted into the following claim:
Claim 3.2: Suppose $J$ is a real-valued output variable, $z \in \mathbb{R}^{m}$ is the intermediate variable, and $W \in \mathbb{R}^{m \times d}, u \in \mathbb{R}^{d}, b \in \mathbb{R}^{m}$ are the input variables, and suppose they satisfy the following:
\begin{align} z &=W u+b \tag{3.8}\\ J &=J(z) \tag{3.9} \end{align}
Then $\frac{\partial J}{\partial W}$ and $\frac{\partial J}{\partial b}$ satisfy:
\begin{align} \frac{\partial J}{\partial W} &=\frac{\partial J}{\partial z} \cdot u^{\top} \tag{3.10} \\ \frac{\partial J}{\partial b} &=\frac{\partial J}{\partial z} \tag{3.11} \end{align}
Computing $\frac{\partial J}{\partial z}$. Equation (3.7) tells us that to compute $\frac{\partial J}{\partial W^{[1]}}$, it suffices to compute $\frac{\partial J}{\partial z}$, which is the goal of the next few derivations.
We invoke the chain rule with $J$ as the output variable, $a_{i}$ as the intermediate variable, and $z_{i}$ as the input variable,
\begin{aligned} \frac{\partial J}{\partial z_{i}} &=\frac{\partial J}{\partial a_{i}} \frac{\partial a_{i}}{\partial z_{i}} \\ &=\frac{\partial J}{\partial a_{i}} \cdot 1\left\{z_{i} \geq 0\right\} \end{aligned}
Vectorization and abstraction. The computation above can be summarized into:
Claim 3.3: Suppose the real-valued output variable $J$ and vectors $z, a \in \mathbb{R}^{m}$ satisfy the following:
\begin{aligned} a &=\sigma(z), \text { where } \sigma \text { is an element-wise activation, } z, a \in \mathbb{R}^{m} \\ J &=J(a) \end{aligned}
Then, we have that
\begin{equation} \frac{\partial J}{\partial z}=\frac{\partial J}{\partial a} \odot \sigma^{\prime}(z) \tag{3.12} \end{equation}
where $\sigma^{\prime}(\cdot)$ is the element-wise derivative of the activation function $\sigma$, and $\odot$ denotes the element-wise product of two vectors of the same dimensionality.
Computing $\frac{\partial J}{\partial a}$. Now it suffices to compute $\frac{\partial J}{\partial a}$. We invoke the chain rule with $J$ as the output variable, $o$ as the intermediate variable, and $a_{i}$ as the input variable,
\begin{align} \frac{\partial J}{\partial a_{i}} & =\frac{\partial J}{\partial o} \frac{\partial o}{\partial a_{i}}\nonumber \\ & =(o-y) \cdot W_{i}^{[2]} \qquad (\text {because } o=\sum_{k=1}^{m} W_{k}^{[2]} a_{k}+b^{[2]}) \nonumber \end{align}
Vectorization. In vectorized notation, we have
\begin{equation} \frac{\partial J}{\partial a}=W^{[2]^{\top}} \cdot(o-y) \tag{3.13} \end{equation}
Abstraction. We now present a more general form of the computation above.
Claim 3.4: Suppose $J$ is a real-valued output variable, $v \in \mathbb{R}^{m}$ is the intermediate variable, and $W \in \mathbb{R}^{m \times d}, u \in \mathbb{R}^{d}, b \in \mathbb{R}^{m}$ are the input variables, and suppose they satisfy the following:
\begin{aligned} v &=W u+b \\ J &=J(v) \end{aligned}
Then,
\begin{equation} \frac{\partial J}{\partial u}=W^{\top} \frac{\partial J}{\partial v} \tag{3.14} \end{equation}
Summary for two-layer neural networks. Now combining the equations above, we arrive at Algorithm 3, which computes the gradients for two-layer neural networks.

3.3. Multi-layer neural networks

In this section, we will derive the backpropagation algorithm for the model defined in (2.12). With the notation $a^{[0]}=x$, recall that we have
\begin{aligned} a^{[1]} &=\operatorname{ReLU}\left(W^{[1]} a^{[0]}+b^{[1]}\right) \\ a^{[2]} &=\operatorname{ReLU}\left(W^{[2]} a^{[1]}+b^{[2]}\right) \\ \cdots & \\ a^{[r-1]} &=\operatorname{ReLU}\left(W^{[r-1]} a^{[r-2]}+b^{[r-1]}\right) \end{aligned}
Algorithm 3 Back-propagation for two-layer neural networks
1: Compute the values of $z \in \mathbb{R}^{m}$, $a \in \mathbb{R}^{m}$, and $o \in \mathbb{R}$.
2: Compute
\begin{align} &\delta^{[2]}\triangleq \frac{\partial J}{\partial o}=(o-y)\in \mathbb{R}\nonumber\\ &\delta^{[1]}\triangleq \frac{\partial J}{\partial z}=(W^{[2]^{\top}}(o-y))\odot 1\{ z\geq 0\}\in \mathbb{R}^{m \times 1} \tag{by eqn. (3.12) and (3.13)} \end{align}
3: Compute
\begin{align} \frac{\partial J}{\partial W^{[2]}}&=\delta^{[2]}a^{\top}\in \mathbb{R}^{1\times m} \tag{by eqn. (3.5)}\\ \frac{\partial J}{\partial b^{[2]}}&=\delta^{[2]}\in \mathbb{R} \tag{by eqn. (3.6)}\\ \frac{\partial J}{\partial W^{[1]}}&=\delta^{[1]}x^{\top}\in \mathbb{R}^{m \times d} \tag{by eqn. (3.7)}\\ \frac{\partial J}{\partial b^{[1]}}&=\delta^{[1]}\in \mathbb{R}^{m} \tag{as an exercise} \end{align}
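Below is a numpy sketch of Algorithm 3 for a single example $(x, y)$, together with a finite-difference check of one entry of $\frac{\partial J}{\partial W^{[1]}}$; the function and variable names are chosen for illustration.

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def two_layer_grads(x, y, W1, b1, W2, b2):
    """Sketch of Algorithm 3. Shapes: x (d,), W1 (m, d), b1 (m,), W2 (1, m), b2 (1,)."""
    # Forward pass, equation (3.4).
    z = W1 @ x + b1
    a = relu(z)
    o = (W2 @ a + b2)[0]
    # Backward pass.
    delta2 = o - y                               # dJ/do
    delta1 = (W2.flatten() * delta2) * (z >= 0)  # dJ/dz, eqns (3.12)-(3.13)
    return {"W2": delta2 * a.reshape(1, -1),     # eqn (3.5)
            "b2": np.array([delta2]),            # eqn (3.6)
            "W1": np.outer(delta1, x),           # eqn (3.7)
            "b1": delta1}

# Finite-difference check of dJ/dW1[0, 0] on random data.
rng = np.random.default_rng(0)
d, m = 4, 3
x, y = rng.normal(size=d), 1.5
W1, b1 = rng.normal(size=(m, d)), rng.normal(size=m)
W2, b2 = rng.normal(size=(1, m)), rng.normal(size=1)

def loss(W1):
    o = (W2 @ relu(W1 @ x + b1) + b2)[0]
    return 0.5 * (y - o) ** 2

eps = 1e-6
W1_pert = W1.copy(); W1_pert[0, 0] += eps
numeric = (loss(W1_pert) - loss(W1)) / eps
print(numeric, two_layer_grads(x, y, W1, b1, W2, b2)["W1"][0, 0])  # approximately equal
```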
\begin{aligned} a^{[r]} &=z^{[r]}=W^{[r]} a^{[r-1]}+b^{[r]} \\ J &=\frac{1}{2}\left(a^{[r]}-y\right)^{2} \end{aligned}
Here we define both $a^{[r]}$ and $z^{[r]}$ as $h_{\theta}(x)$ for notational simplicity.
First, we note that we have the following local abstraction for $k \in\{1, \ldots, r\}$:
\begin{aligned} z^{[k]} &=W^{[k]} a^{[k-1]}+b^{[k]} \\ J &=J\left(z^{[k]}\right) \end{aligned}
Invoking Claim 3.2, we have that
\begin{align} \frac{\partial J}{\partial W^{[k]}} &=\frac{\partial J}{\partial z^{[k]}} \cdot a^{[k-1]^{\top}} \nonumber\\ \frac{\partial J}{\partial b^{[k]}} &=\frac{\partial J}{\partial z^{[k]}} \tag{3.15} \end{align}
Therefore, it suffices to compute $\frac{\partial J}{\partial z^{[k]}}$. For simplicity, let's define $\delta^{[k]} \triangleq \frac{\partial J}{\partial z^{[k]}}$. We compute $\delta^{[k]}$ from $k=r$ down to $1$ inductively. First we have that
\begin{equation} \delta^{[r]} \triangleq \frac{\partial J}{\partial z^{[r]}}=\left(z^{[r]}-y\right) \tag{3.16} \end{equation}
Next, for $k \leq r-1$, suppose we have computed the value of $\delta^{[k+1]}$; then we will compute $\delta^{[k]}$. First, using Claim 3.3, we have that
$$\delta^{[k]} \triangleq \frac{\partial J}{\partial z^{[k]}}=\frac{\partial J}{\partial a^{[k]}} \odot \operatorname{ReLU}^{\prime}\left(z^{[k]}\right)$$
Then we note that the relationship between $a^{[k]}$ and $z^{[k+1]}$ can be abstractly written as
\begin{align} z^{[k+1]} &=W^{[k+1]} a^{[k]}+b^{[k+1]} \tag{3.17}\\ J &=J\left(z^{[k+1]}\right) \tag{3.18} \end{align}
Therefore by Claim 3.4 we have that
\begin{equation} \frac{\partial J}{\partial a^{[k]}}=W^{[k+1]^{\top}} \frac{\partial J}{\partial z^{[k+1]}} \tag{3.19} \end{equation}
It follows that
\begin{aligned} \delta^{[k]} &=\left(W^{[k+1]^{\top}} \frac{\partial J}{\partial z^{[k+1]}}\right) \odot \operatorname{ReLU}^{\prime}\left(z^{[k]}\right) \\ &=\left(W^{[k+1]^{\top}} \delta^{[k+1]}\right) \odot \operatorname{ReLU}^{\prime}\left(z^{[k]}\right) \end{aligned}
Algorithm 4 Back-propagation for multi-layer neural networks.
1: Compute and store the values of a^{[k]}'s and z^{[k]}'s for k = 1, \ldots, r, and J. \qquad ⊳ This is often called the “forward pass”
2: for k = r to 1 do \qquad ⊳ This is often called the “backward pass”
3: \quad if k = r then
4: \quad\quad compute \delta^{[r]} \triangleq \frac{\partial J}{\partial z^{[r]}}
5: \quad else
6: \quad\quad compute
\delta^{[k]} \triangleq \frac{\partial J}{\partial z^{[k]}}=\left(W^{[k+1]^{\top}}\delta^{[k+1]}\right)\odot \operatorname{ReLU}'\left(z^{[k]}\right)
7: \quad Compute
\begin{aligned} \frac{\partial J}{\partial W^{[k]}}&=\delta^{[k]}a^{[k-1]^{\top}}\\ \frac{\partial J}{\partial b^{[k]}}&=\delta^{[k]} \end{aligned}
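To make Algorithm 4 concrete, here is a minimal NumPy sketch of the forward and backward passes for a fully-connected ReLU network trained with the squared loss, under the column-vector convention of these notes. The helper names (`relu`, `forward`, `backward`) and the list-of-matrices representation of the parameters are our own illustrative choices, not part of the notes; following footnote 3, the output layer is kept linear.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0)

def forward(x, Ws, bs):
    """Forward pass: compute and store z^[k] and a^[k] for k = 1, ..., r.
    Ws, bs are lists of the weight matrices and (column-vector) biases;
    no ReLU is applied at the output layer."""
    zs, activations = [], [x]          # activations[0] plays the role of a^[0] = x
    r = len(Ws)
    a = x
    for k in range(r):
        z = Ws[k] @ a + bs[k]
        zs.append(z)
        a = z if k == r - 1 else relu(z)   # last layer stays linear
        activations.append(a)
    return zs, activations

def backward(y, Ws, zs, activations):
    """Backward pass (Algorithm 4) for the squared loss J = 0.5 * ||z^[r] - y||^2."""
    r = len(Ws)
    grads_W, grads_b = [None] * r, [None] * r
    delta = zs[-1] - y                         # delta^[r] = z^[r] - y, cf. (3.16)
    for k in reversed(range(r)):               # index k here corresponds to layer k+1 in the notes
        grads_W[k] = delta @ activations[k].T  # dJ/dW^[k+1] = delta^[k+1] a^[k]^T
        grads_b[k] = delta                     # dJ/db^[k+1] = delta^[k+1]
        if k > 0:
            # delta^[k] = (W^[k+1]^T delta^[k+1]) ⊙ ReLU'(z^[k])
            delta = (Ws[k].T @ delta) * (zs[k - 1] > 0)
    return grads_W, grads_b
```

A quick sanity check is to compare the returned gradients against finite differences of J on a small random network; the two should agree up to numerical precision.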

4. Vectorization Over Training Examples

As we discussed in Section 1, in the implementation of neural networks, we will leverage the parallelism across multiple examples. This means that we will need to write the forward pass (the evaluation of the outputs) of the neural network and the backward pass (backpropagation) for multiple training examples in matrix notation.
The basic idea. The basic idea is simple. Suppose you have a training set with three examples x ( 1 ) , x ( 2 ) , x ( 3 ) x ( 1 ) , x ( 2 ) , x ( 3 ) x^((1)),x^((2)),x^((3))x^{(1)}, x^{(2)}, x^{(3)}. The first-layer activations for each example are as follows:
z [ 1 ] ( 1 ) = W [ 1 ] x ( 1 ) + b [ 1 ] z [ 1 ] ( 2 ) = W [ 1 ] x ( 2 ) + b [ 1 ] z [ 1 ] ( 3 ) = W [ 1 ] x ( 3 ) + b [ 1 ] z [ 1 ] ( 1 ) = W [ 1 ] x ( 1 ) + b [ 1 ] z [ 1 ] ( 2 ) = W [ 1 ] x ( 2 ) + b [ 1 ] z [ 1 ] ( 3 ) = W [ 1 ] x ( 3 ) + b [ 1 ] {:[z^([1](1))=W^([1])x^((1))+b^([1])],[z^([1](2))=W^([1])x^((2))+b^([1])],[z^([1](3))=W^([1])x^((3))+b^([1])]:}\begin{aligned} &z^{[1](1)}=W^{[1]} x^{(1)}+b^{[1]} \\ &z^{[1](2)}=W^{[1]} x^{(2)}+b^{[1]} \\ &z^{[1](3)}=W^{[1]} x^{(3)}+b^{[1]} \end{aligned}
Note the difference between square brackets [\cdot], which refer to the layer number, and parentheses (\cdot), which refer to the training example number. Intuitively, one would implement this using a for loop. It turns out that we can vectorize these operations as well. First, define:
(4.1) X = [ x ( 1 ) x ( 2 ) x ( 3 ) ] R d × 3 (4.1) X = x ( 1 ) x ( 2 ) x ( 3 ) R d × 3 {:(4.1)X=[[∣,∣,∣],[x^((1)),x^((2)),x^((3))],[∣,∣,∣]]inR^(d xx3):}\begin{equation} X=\left[\begin{array}{ccc} \mid & \mid & \mid \\ x^{(1)} & x^{(2)} & x^{(3)} \\ \mid & \mid & \mid \end{array}\right] \in \mathbb{R}^{d \times 3} \tag{4.1} \end{equation}
Note that we are stacking training examples in columns and not rows. We can then combine this into a single unified formulation:
(4.2) Z [ 1 ] = [ z [ 1 ] ( 1 ) z [ 1 ] ( 2 ) z [ 1 ] ( 3 ) ] = W [ 1 ] X + b [ 1 ] (4.2) Z [ 1 ] = z [ 1 ] ( 1 ) z [ 1 ] ( 2 ) z [ 1 ] ( 3 ) = W [ 1 ] X + b [ 1 ] {:(4.2)Z^([1])=[[∣,∣,∣],[z^([1](1)),z^([1](2)),z^([1](3))],[∣,∣,∣]]=W^([1])X+b^([1]):}\begin{equation} Z^{[1]}=\left[\begin{array}{ccc} \mid & \mid & \mid \\ z^{[1](1)} & z^{[1](2)} & z^{[1](3)} \\ \mid & \mid & \mid \end{array}\right]=W^{[1]} X+b^{[1]} \tag{4.2} \end{equation}
You may notice that we are attempting to add b [ 1 ] R 4 × 1 b [ 1 ] R 4 × 1 b^([1])inR^(4xx1)b^{[1]} \in \mathbb{R}^{4 \times 1} to W [ 1 ] X W [ 1 ] X W^([1])X inW^{[1]} X \in R 4 × 3 R 4 × 3 R^(4xx3)\mathbb{R}^{4 \times 3}. Strictly following the rules of linear algebra, this is not allowed. In practice however, this addition is performed using broadcasting. We create an intermediate b ~ [ 1 ] R 4 × 3 b ~ [ 1 ] R 4 × 3 tilde(b)^([1])inR^(4xx3)\tilde{b}^{[1]} \in \mathbb{R}^{4 \times 3} :
(4.3) b ~ [ 1 ] = [ b [ 1 ] b [ 1 ] b [ 1 ] ] (4.3) b ~ [ 1 ] = b [ 1 ] b [ 1 ] b [ 1 ] {:(4.3) tilde(b)^([1])=[[∣,∣,∣],[b^([1]),b^([1]),b^([1])],[∣,∣,∣]]:}\begin{equation} \tilde{b}^{[1]}=\left[\begin{array}{ccc} \mid & \mid & \mid \\ b^{[1]} & b^{[1]} & b^{[1]} \\ \mid & \mid & \mid \end{array}\right] \tag{4.3} \end{equation}
We can then perform the computation: Z^{[1]}=W^{[1]} X+\tilde{b}^{[1]}. Oftentimes, it is not necessary to explicitly construct \tilde{b}^{[1]}: by inspecting the dimensions in (4.2), you can assume b^{[1]} \in \mathbb{R}^{4 \times 1} is correctly broadcast to W^{[1]} X \in \mathbb{R}^{4 \times 3}.
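As a quick illustration of this broadcasting, here is a small NumPy sketch; the dimensions mirror the running example (with a hypothetical input dimension d = 5, 4 hidden units, and 3 examples), and the variable names are ours:

```python
import numpy as np

d, m, n = 5, 4, 3                    # input dimension, hidden units, number of examples
rng = np.random.default_rng(0)

X = rng.standard_normal((d, n))      # examples stacked as columns, as in (4.1)
W1 = rng.standard_normal((m, d))
b1 = rng.standard_normal((m, 1))     # column vector of shape (4, 1)

Z1 = W1 @ X + b1                     # b1 is broadcast across the 3 columns, as in (4.2)

# Equivalent to explicitly forming the tiled matrix b~^[1] of shape (4, 3):
Z1_explicit = W1 @ X + np.tile(b1, (1, n))
assert Z1.shape == (m, n) and np.allclose(Z1, Z1_explicit)
```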
The matricization approach above generalizes easily to multiple layers, with one subtlety, as discussed below.
Complications/Subtlety in the Implementation. All deep learning packages and implementations put the data points in the rows of a data matrix. (If each data point is itself a matrix or tensor, then the data are concatenated along the zeroth dimension.) However, most deep learning papers use notation similar to these notes, where the data points are treated as column vectors.[7] There is a simple conversion to deal with the mismatch: in the implementation, all the column vectors become row vectors, row vectors become column vectors, all the matrices are transposed, and the order of the matrix multiplications is flipped. In the example above, using the row-major convention, the data matrix is X \in \mathbb{R}^{3 \times d}, the first-layer weight matrix has dimensionality d \times m (instead of m \times d as in the two-layer neural net section), and the bias vector is b^{[1]} \in \mathbb{R}^{1 \times m}. The computation for the hidden activation becomes
(4.4) Z [ 1 ] = X W [ 1 ] + b [ 1 ] R 3 × m (4.4) Z [ 1 ] = X W [ 1 ] + b [ 1 ] R 3 × m {:(4.4)Z^([1])=XW^([1])+b^([1])inR^(3xx m):}\begin{equation} Z^{[1]}=X W^{[1]}+b^{[1]} \in \mathbb{R}^{3 \times m} \tag{4.4} \end{equation}
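For comparison, here is a minimal sketch of the same first-layer computation under the row-major convention (again with hypothetical dimensions and variable names of our own choosing); note the transposed shapes and the flipped order of multiplication relative to (4.2):

```python
import numpy as np

d, m, n = 5, 4, 3
rng = np.random.default_rng(0)

X_rows = rng.standard_normal((n, d))   # each row is one training example
W1 = rng.standard_normal((d, m))       # d x m, transposed relative to the column convention
b1 = rng.standard_normal((1, m))       # row vector, broadcast across the n rows

Z1 = X_rows @ W1 + b1                  # shape (3, m), as in (4.4)
assert Z1.shape == (n, m)
```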
You can read the notes from the next lecture from CS229 on Regularization and Model Selection here.

  1. If a concrete example is helpful, perhaps think about the model h θ ( x ) = θ 1 2 x 1 2 + θ 2 2 x 2 2 + h θ ( x ) = θ 1 2 x 1 2 + θ 2 2 x 2 2 + h_(theta)(x)=theta_(1)^(2)x_(1)^(2)+theta_(2)^(2)x_(2)^(2)+h_{\theta}(x)=\theta_{1}^{2} x_{1}^{2}+\theta_{2}^{2} x_{2}^{2}+ + θ d 2 x d 2 + θ d 2 x d 2 cdots+theta_(d)^(2)x_(d)^(2)\cdots+\theta_{d}^{2} x_{d}^{2} in this subsection, even though it's not a neural network. ↩︎
  2. Recall that, as defined in the previous lecture notes, we use the notation " a := b a := b a:=ba:=b" to denote an operation (in a computer program) in which we set the value of a variable a a aa to be equal to the value of b b bb. In other words, this operation overwrites a a aa with the value of b b bb. In contrast, we will write " a = b a = b a=ba=b" when we are asserting a statement of fact, that the value of a a aa is equal to the value of b b bb. ↩︎
  3. Typically, for a multi-layer neural network, we don't apply ReLU at the end, near the output, especially when the output is not necessarily a positive number. ↩︎
  4. We note that if the output of the function f does not depend on some of the input coordinates, then by default we set the gradient w.r.t. those coordinates to zero. Setting them to zero does not count towards the total runtime in our accounting scheme. This is why, when N \leq \ell, we can compute the gradient in O(N) time, which might potentially be even less than \ell. ↩︎
  5. We also note that even though this is the convention in math, it's different from the convention in numpy, where a one-dimensional array is automatically interpreted as a row vector. ↩︎
  6. There is an extension of this notation to a vector- or matrix-valued J. However, in practice, it's often impractical to compute the derivatives of high-dimensional outputs. Thus, we will avoid using the notation \frac{\partial J}{\partial A} for J that is not a real-valued variable. ↩︎
  7. The instructor suspects that this is mostly because in mathematics we naturally multiply a matrix to a vector on the left hand side. ↩︎
