
Friday, May 15, 2015

Implementation Summary : Neural Networks

Training a Neural network:

Pick a network architecture (connectivity pattern between neurons)

No. of input units: Dimension of features x(i)
No. of output units: Number of classes

Reasonable default: 1 hidden layer, or if >1 hidden layer, have same no. of hidden units in every layer (usually the more the better)

Training:

  1. Randomly initialize weights
  2. Implement forward propagation to get hΘ(x(i)) for any x(i)
  3. Implement code to calculate the cost function J(Θ)
  4. Implement Backpropagation to compute the partial derivatives $\frac{\partial}{\partial \Theta^{(l)}_{jk}} J(\Theta)$
for i = 1 to m
     Perform forward propagation and backpropagation using example $(x^{(i)}, y^{(i)})$
     (Get activations $a^{(l)}$ and delta terms $\delta^{(l)}$ for $l = 2, 3, 4, \dots, L$)
     Calculate $\Delta^{(l)} := \Delta^{(l)} + \delta^{(l+1)}(a^{(l)})^T$
     ...
end;

Compute $\frac{\partial}{\partial \Theta^{(l)}_{jk}} J(\Theta)$

     5.  Use Gradient Checking to compare $\frac{\partial}{\partial \Theta^{(l)}_{jk}} J(\Theta)$ computed using backpropagation vs. the numerical estimate of the gradient of $J(\Theta)$
     6.  Use Gradient Descent or an advanced optimization method with backpropagation to try to minimize $J(\Theta)$ as a function of the parameters $\Theta$


Neural Networks: Unrolling Parameters, Gradient Checking & Random Initialization

Neural Networks: Unrolling Parameters

We need to unroll parameters from matrices into vectors to use them in optimization functions like fminunc in MATLAB/Octave.

Advanced Optimization Functions:

function [jVal, gradient] = costFunction(theta)
...
optTheta = fminunc(@costFunction, initialTheta, options);

The above usage assumes that the parameters initialTheta form a vector, not matrices. In Logistic Regression these parameters are vectors; in a Neural Network, however, they are matrices.

For a Neural Network with 4 Layers (L=4):
$\Theta^{(1)}, \Theta^{(2)}, \Theta^{(3)}$ - matrices (Theta1, Theta2, Theta3)
$D^{(1)}, D^{(2)}, D^{(3)}$ - matrices (D1, D2, D3)

"Unroll into Vectors"

Example: let's say we have a 4-layer NN (L=4) with layer sizes
$s_1 = 10, s_2 = 10, s_3 = 10, s_4 = 1$

The dimensions of the matrices $\Theta$ and $D$ are given by:

$\Theta^{(1)} \in \mathbb{R}^{10 \times 11}$, $\Theta^{(2)} \in \mathbb{R}^{10 \times 11}$, $\Theta^{(3)} \in \mathbb{R}^{1 \times 11}$

$D^{(1)} \in \mathbb{R}^{10 \times 11}$, $D^{(2)} \in \mathbb{R}^{10 \times 11}$, $D^{(3)} \in \mathbb{R}^{1 \times 11}$

The commands below combine the matrices and unroll them into a single vector:
thetaVec = [Theta1(:); Theta2(:); Theta3(:)];
DVec = [D1(:); D2(:); D3(:)];

To recover the matrices in their initial format from the unrolled vector:
Theta1 = reshape(thetaVec(1:110),10,11);
Theta2 = reshape(thetaVec(111:220),10,11);
Theta3 = reshape(thetaVec(221:231),1,11);

Learning Algorithm:

Here is how we use the unrolling algorithm:

Have initial parameters $\Theta^{(1)}, \Theta^{(2)}, \Theta^{(3)}$
Unroll to get initialTheta to pass to fminunc(@costFunction, initialTheta, options);

function [jVal, gradientVec] = costFunction(thetaVec)

  • From thetaVec, get $\Theta^{(1)}, \Theta^{(2)}, \Theta^{(3)}$ (reshape to get the original matrices back)
  • Use forward propagation/backpropagation to compute $D^{(1)}, D^{(2)}, D^{(3)}$ and $J(\Theta)$
  • Unroll $D^{(1)}, D^{(2)}, D^{(3)}$ to get gradientVec (see the sketch below)
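
Putting these steps together, here is a minimal sketch of such a cost function, using the dimensions from the example above. The helper forwardAndBackProp is hypothetical (a stand-in for your own forward propagation / backpropagation code), not something from the course:

function [jVal, gradientVec] = costFunction(thetaVec)
  % Recover the weight matrices from the unrolled parameter vector
  Theta1 = reshape(thetaVec(1:110),   10, 11);
  Theta2 = reshape(thetaVec(111:220), 10, 11);
  Theta3 = reshape(thetaVec(221:231), 1,  11);

  % Hypothetical helper: runs forward prop + backprop, returns cost and gradients
  [jVal, D1, D2, D3] = forwardAndBackProp(Theta1, Theta2, Theta3);

  % Unroll the gradient matrices into a single vector for fminunc
  gradientVec = [D1(:); D2(:); D3(:)];
end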




Numerical Gradient Checking:

The backpropagation algorithm is a bit complex, and even though the cost function J might seem to be decreasing, there could be a bug in the algorithm which could give erroneous results. The NN might end up with a high value of J.

How to check the gradient: Numerical Estimation of gradients (Gradient Checking)

We approximate the partial derivative of the function $J(\Theta)$, which is defined as the slope of the curve at that value of $\Theta$, by calculating the slope in a more geometrical way.

We select a point $\Theta + \epsilon$ just ahead of $\Theta$, and a point $\Theta - \epsilon$ just behind it. The slope of the line connecting the values of $J$ at these points is given by:

slope at $\Theta$: $\frac{d}{d\Theta}J(\Theta) \approx \frac{J(\Theta + \epsilon) - J(\Theta - \epsilon)}{2\epsilon}$

A typical value is $\epsilon \approx 10^{-4}$.


Implementation in Matlab or Octave:
gradApprox = (J(theta + EPSILON) - J(theta - EPSILON)) / (2*EPSILON);

This will give a numerical estimate of slope at this point.

General case : parameter vector $\theta \in \mathbb{R}^n$ (e.g. $\theta$ is the 'unrolled' version of $\Theta^{(1)}, \Theta^{(2)}, \Theta^{(3)}, \dots$)
$\theta = [\theta_1, \theta_2, \theta_3, \dots, \theta_n]$

The partial derivative with respect to each of $\theta_1, \theta_2, \dots, \theta_n$ is estimated separately.

In Matlab/Octave, the following equation is implemented:
for i = 1:n
     thetaPlus = theta;
     thetaPlus(i) = thetaPlus(i) + EPSILON;
     thetaMinus = theta;
     thetaMinus(i) = thetaMinus(i) - EPSILON;
     gradApprox(i) = (J(thetaPlus) - J(thetaMinus)) / (2*EPSILON);
end

Check that gradApprox ≈ DVec. If the values are very close, we can be confident that backpropagation is computing the derivatives of J correctly, and that we will correctly optimize Θ.
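
One common way to quantify "very close" (a sketch, not part of the lecture itself) is the relative difference between the two gradient vectors:

% Relative difference between numerical and backprop gradients;
% values around 1e-9 or smaller usually indicate a correct implementation.
relDiff = norm(gradApprox - DVec) / norm(gradApprox + DVec);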

Implementation Note:

  • Implement backprop to compute DVec (unrolled $D^{(1)}, D^{(2)}, D^{(3)}$)
  • Implement numerical gradient checking to calculate gradApprox
  • Make sure they give similar values
  • Turn off gradient checking and use the backprop code for learning
Important:
  • Be sure to disable gradient checking before training your classifier. If you run the numerical gradient computation on every iteration of gradient descent (or in the inner loop of costFunction), your code will be very slow.

Random Initialization of Θ:

Initial value of Θ

What if we set all values of initialTheta = zeros(n,1)? The values of Θ determine the activation values in each layer. If all the weights are the same, the hidden units compute identical activations $a_1, a_2, \dots, a_n$, the δ terms are identical, and the partial derivatives of the cost function are identical as well.

As a result, after each update the parameters corresponding to the inputs going into each of the hidden units remain identical, so the network never learns distinct features.


Random Initialization : Symmetry breaking

Initialize each $\Theta^{(l)}_{ij}$ to a random value in $[-\epsilon, \epsilon]$ (i.e. $-\epsilon \le \Theta^{(l)}_{ij} \le \epsilon$)

E.g.:

Theta1 = rand(10,11) *(2*INIT_EPSILON)-INIT_EPSILON;

Theta2 = rand(1,11) *(2*INIT_EPSILON)-INIT_EPSILON;

rand(10,11) gives a random 10×11 matrix with entries between 0 and 1, so the expressions above yield entries between -INIT_EPSILON and INIT_EPSILON.
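
One reasonable heuristic for choosing INIT_EPSILON (a sketch based on a programming-exercise hint from the course, not stated in this lecture) is to scale it by the number of units adjacent to the layer:

% L_in and L_out are the numbers of units in the two layers that Theta
% maps between; sqrt(6)/sqrt(L_in + L_out) keeps the initial weights small.
L_in  = 10;
L_out = 10;
INIT_EPSILON = sqrt(6) / sqrt(L_in + L_out);
Theta1 = rand(L_out, L_in + 1) * (2*INIT_EPSILON) - INIT_EPSILON;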




Neural Networks: BackPropagation Algorithm

Gradient Computation:

The cost function for a Neural Network: We need to optimize the values of Θ.

$J(\Theta) = -\frac{1}{m}\left[\sum_{i=1}^{m}\sum_{k=1}^{K} y_k^{(i)} \log(h_\Theta(x^{(i)}))_k + (1 - y_k^{(i)})\log(1 - (h_\Theta(x^{(i)}))_k)\right] + \frac{\lambda}{2m}\sum_{l=1}^{L-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}} (\Theta_{ji}^{(l)})^2$

Goal: $\min_\Theta J(\Theta)$

We need to write the code to compute:
- $J(\Theta)$
- $\frac{\partial}{\partial \Theta^{(l)}_{ij}} J(\Theta)$

How to compute the Partial Derivative Terms?


Example : Given One Training Example: Forward Propagation

Given one training example (x,y):

Forward Propagation: Vectorized implementation to calculate the activation values for all layers

$a^{(1)} = x$
$z^{(2)} = \Theta^{(1)} a^{(1)}$

$a^{(2)} = g(z^{(2)})$  (add $a^{(2)}_0$)
$z^{(3)} = \Theta^{(2)} a^{(2)}$

$a^{(3)} = g(z^{(3)})$  (add $a^{(3)}_0$)
$z^{(4)} = \Theta^{(3)} a^{(3)}$

$a^{(4)} = h_\Theta(x) = g(z^{(4)})$
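
A minimal Octave sketch of these steps for a single example (assuming x holds the raw features and the weight matrices Theta1, Theta2, Theta3 already exist):

g = @(z) 1 ./ (1 + exp(-z));   % sigmoid activation

a1 = [1; x];                   % input activations, with bias unit a0 = 1
z2 = Theta1 * a1;
a2 = [1; g(z2)];               % layer-2 activations, plus bias
z3 = Theta2 * a2;
a3 = [1; g(z3)];               % layer-3 activations, plus bias
z4 = Theta3 * a3;
a4 = g(z4);                    % a4 = h_Theta(x)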

Gradient Computation: Backpropagation Algorithm




Intuition : We need to compute $\delta^{(l)}_j$, which is the "error" of node j in layer l, i.e. the error in its activation value.

For each output unit (layer L = 4):
$\delta^{(4)}_j = a^{(4)}_j - y_j$   [where $a^{(4)}_j = (h_\Theta(x))_j$]

This $\delta^{(4)}_j$ is essentially the difference between what our hypothesis outputs and the actual values of y in our training set.

Writing it in vectorized form:
$\delta^{(4)} = a^{(4)} - y$

The next step is to compute the values of δ for the earlier layers:
$\delta^{(3)} = (\Theta^{(3)})^T \delta^{(4)} \:.*\: g'(z^{(3)})$
$\delta^{(2)} = (\Theta^{(2)})^T \delta^{(3)} \:.*\: g'(z^{(2)})$

There is no $\delta^{(1)}$ because the first layer corresponds to the input units, which are used as given; there is no error term associated with them.

$g'(\cdot)$ is the derivative of the sigmoid activation function.
It can be proved that $g'(z^{(3)}) = a^{(3)} .* (1 - a^{(3)})$. Similarly, $g'(z^{(2)}) = a^{(2)} .* (1 - a^{(2)})$.

Finally, $\frac{\partial}{\partial \Theta^{(l)}_{ij}} J(\Theta) = a^{(l)}_j \delta^{(l+1)}_i$  (ignoring regularization, i.e. λ = 0)


Backpropagation Algorithm:

The name Backpropagation comes from the fact that we calculate the δ terms for the output layer first and then use them to calculate the δ terms of the earlier layers, moving backward.

Suppose we have a training set: $\{(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), \dots, (x^{(m)}, y^{(m)})\}$

First, set $\Delta^{(l)}_{ij} = 0$ for all values of l, i, j. These $\Delta^{(l)}_{ij}$ are used as accumulators; eventually they are used to compute the partial derivatives $\frac{\partial}{\partial \Theta^{(l)}_{ij}} J(\Theta)$.

For i = 1 to m
     Set $a^{(1)} = x^{(i)}$ --> activation for the input layer
     Perform forward propagation to compute $a^{(l)}$ for $l = 2, 3, \dots, L$
     Using $y^{(i)}$, compute the error term for the output layer: $\delta^{(L)} = a^{(L)} - y^{(i)}$
     Use the backpropagation equations to compute $\delta^{(L-1)}, \delta^{(L-2)}, \dots, \delta^{(2)}$ (there is no $\delta^{(1)}$)
     Finally, accumulate the partial-derivative terms: $\Delta^{(l)}_{ij} := \Delta^{(l)}_{ij} + a^{(l)}_j \delta^{(l+1)}_i$. In vectorized form (with Δ, a and δ as matrices/vectors): $\Delta^{(l)} := \Delta^{(l)} + \delta^{(l+1)} (a^{(l)})^T$
end;

$D^{(l)}_{ij} := \frac{1}{m}\Delta^{(l)}_{ij} + \lambda\Theta^{(l)}_{ij}$   if $j \neq 0$
$D^{(l)}_{ij} := \frac{1}{m}\Delta^{(l)}_{ij}$             if $j = 0$

Finally, we can calculate the partial derivatives as: $\frac{\partial}{\partial \Theta^{(l)}_{ij}} J(\Theta) = D^{(l)}_{ij}$
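
A compact Octave sketch of this loop for the 4-layer network above (a sketch, not the course's exact code, under the assumptions that X stores one example per row and Y stores the corresponding label rows):

g = @(z) 1 ./ (1 + exp(-z));
m = size(X, 1);

Delta1 = zeros(size(Theta1));
Delta2 = zeros(size(Theta2));
Delta3 = zeros(size(Theta3));

for i = 1:m
  % Forward propagation (bias units prepended at each layer)
  a1 = [1; X(i,:)'];
  a2 = [1; g(Theta1 * a1)];
  a3 = [1; g(Theta2 * a2)];
  a4 = g(Theta3 * a3);

  % Error terms, computed backward; no delta for the input layer
  d4 = a4 - Y(i,:)';
  d3 = (Theta3' * d4) .* (a3 .* (1 - a3));  d3 = d3(2:end);  % drop bias row
  d2 = (Theta2' * d3) .* (a2 .* (1 - a2));  d2 = d2(2:end);

  % Accumulate the gradient contributions
  Delta1 = Delta1 + d2 * a1';
  Delta2 = Delta2 + d3 * a2';
  Delta3 = Delta3 + d4 * a3';
end

D1 = Delta1 / m;   % lambda = 0 case; see the D formulas above for j != 0
D2 = Delta2 / m;
D3 = Delta3 / m;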

---------------------------------------------------------------------------------


Backpropagation Intuition with Example:

Mechanical steps of the Backpropagation Algorithm

Forward Propagation: Consider a simple NN with 2 input units (not counting the bias unit), 2 activation units in each of two hidden layers (not counting the bias units), and one output unit in layer 4.




The input $(x^{(i)}, y^{(i)})$ is represented as below. We first use forward propagation to compute the value of $z^{(2)}_1$, and apply the sigmoid activation function on $z^{(2)}_1$ to get the value of $a^{(2)}_1$. Similarly, the values of $z^{(2)}_2$ and $a^{(2)}_2$ are computed to complete layer 2. The bias unit $a^{(2)}_0 = 1$ is then added to layer 2.




Once the values of layer 2 are calculated, we apply the same methodology to compute the values of layer 3 (a & z)


For layer 3, we have the values of Θ, which are used to compute the values of $z^{(3)}_i$:

$z^{(3)}_1 = \Theta^{(2)}_{10} a^{(2)}_0 + \Theta^{(2)}_{11} a^{(2)}_1 + \Theta^{(2)}_{12} a^{(2)}_2$

What is Backpropagation doing? Backpropagation is almost doing the same thing as forward propagation in the opposite direction (right to left, from output to input)

The cost function again:
$J(\Theta) = -\frac{1}{m}\left[\sum_{i=1}^{m}\sum_{k=1}^{K} y_k^{(i)} \log(h_\Theta(x^{(i)}))_k + (1 - y_k^{(i)})\log(1 - (h_\Theta(x^{(i)}))_k)\right] + \frac{\lambda}{2m}\sum_{l=1}^{L-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}} (\Theta_{ji}^{(l)})^2$

Assume λ = 0 (remove the regularization term, ignoring for now).
Focusing on a single example $(x^{(i)}, y^{(i)})$ and the case of one output unit:

$\text{cost}(i) = y^{(i)}\log h_\Theta(x^{(i)}) + (1 - y^{(i)})\log(1 - h_\Theta(x^{(i)}))$ — the cost associated with the training example $(x^{(i)}, y^{(i)})$

Think of $\text{cost}(i) \approx (h_\Theta(x^{(i)}) - y^{(i)})^2$, i.e. a measure of how well the network is doing on example i. This is just for the purpose of intuition: how close is the output compared to the actual observed value of y?



The δ terms are effectively the partial derivatives of the cost associated with example i. They are a measure of how much the weights Θ need to change to bring the hypothesis output $h_\Theta(x)$ closer to the actual observed value of y.


For the output layer, we first set $\delta^{(4)}_1 = a^{(4)}_1 - y^{(i)}$.
Next, we calculate the values of $\delta^{(3)}$ in layer 3 using $\delta^{(4)}$, and the values of $\delta^{(2)}$ in layer 2 using $\delta^{(3)}$. Note that there is no $\delta^{(1)}$.

Focusing on the value of $\delta^{(2)}_2$: it is a weighted sum of $\delta^{(3)}_1$ and $\delta^{(3)}_2$, the weights being the corresponding parameters Θ:

$\delta^{(2)}_2 = \Theta^{(2)}_{12}\delta^{(3)}_1 + \Theta^{(2)}_{22}\delta^{(3)}_2$
This corresponds to the magenta and red terms in the diagram above.

Similarly, to calculate $\delta^{(3)}_2$:

$\delta^{(3)}_2 = \Theta^{(3)}_{12}\delta^{(4)}_1$

Once we calculate the values of δ, we can use an optimizer to find the optimized values of Θ.














Neural networks : Cost Function

Neural Network Classification:

Above: A Neural Network with 4 layers

Training set: $\{(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), (x^{(3)}, y^{(3)}), \dots, (x^{(m)}, y^{(m)})\}$
L = total number of layers in the network (L = 4 in the above case)
$s_l$ = number of units (not counting bias units) in layer l

There are two types of Neural Network outcomes:

Binary Classification:
y = 0 or 1; 1 output unit
$s_L = 1$, $K = 1$

Multiclass Classification:

Number of output units: K

$y \in \mathbb{R}^K$

E.g. if K = 4:

The output will be one of the K one-hot vectors: $[1\ 0\ 0\ 0]^T$, $[0\ 1\ 0\ 0]^T$, $[0\ 0\ 1\ 0]^T$, $[0\ 0\ 0\ 1]^T$



Cost Function:

Cost function for a Neural Network is a generalization of the Cost Function of a Logistic Regression.

Logistic Regression Cost Function:

$J(\theta) = -\frac{1}{m}\left[\sum_{i=1}^{m} y^{(i)}\log(h_\theta(x^{(i)})) + (1 - y^{(i)})\log(1 - h_\theta(x^{(i)}))\right] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$

Neural Network Cost Function:

A Neural Network outputs vectors in $\mathbb{R}^K$:

$h_\Theta(x) \in \mathbb{R}^K$;
$(h_\Theta(x))_i$ = the $i$th output

Cost Function:
$J(\Theta) = -\frac{1}{m}\left[\sum_{i=1}^{m}\sum_{k=1}^{K} y_k^{(i)} \log(h_\Theta(x^{(i)}))_k + (1 - y_k^{(i)})\log(1 - (h_\Theta(x^{(i)}))_k)\right] + \frac{\lambda}{2m}\sum_{l=1}^{L-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}} (\Theta_{ji}^{(l)})^2$

The summation $\sum_{k=1}^{K}$ is over the K output units, i.e. it sums the cost over each of the K outputs.

Regularization Term : We don't sum over the terms corresponding to the bias units, i.e. the weights $\Theta^{(l)}_{i0}$ multiplying $a_0$ (or $x_0$). Even if we were to include the bias terms, the result would be similar.
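
A vectorized Octave sketch of this cost (a sketch, with assumed names: H is the m×K matrix of hypothesis outputs, Y the m×K matrix of one-hot labels, Thetas a cell array of the weight matrices, and lambda the regularization parameter):

m = size(H, 1);

% Unregularized part: sum the K per-output costs over all m examples
J = -(1/m) * sum(sum( Y .* log(H) + (1 - Y) .* log(1 - H) ));

% Regularization: sum of squared weights, skipping the bias column
reg = 0;
for l = 1:numel(Thetas)
  T = Thetas{l};
  reg = reg + sum(sum(T(:, 2:end) .^ 2));
end
J = J + (lambda / (2*m)) * reg;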


Neural Networks: MultiClass Classification


Tuesday, May 12, 2015

Neural Network: Model Representation

Anatomy of a Neuron:

A single neuron can be described by the following system: dendrites collect and transmit incoming information, the cell body acts as a node/processing unit, and the axon acts as an output wire.





We can mimic a simple Logistic Model as a neuron by the following diagram:



The output function $h_\theta(x)$ is defined by the sigmoid (logistic) activation function $\frac{1}{1 + e^{-\theta^T x}}$

The vector of weights/parameters is $\theta = [\theta_0\ \theta_1\ \theta_2\ \theta_3]^T$, and the input vector is $x = [x_0\ x_1\ x_2\ x_3]^T$

Sigmoid (logistic) activation function:

$g(z) = \frac{1}{1 + e^{-z}}$

The representation above is a very simple, basic network: the inputs feed a single output unit, with no hidden layers. Typically, a Neural Network has multiple input units, multiple hidden layers with multiple units, and multiple output units for multi-class classification.

The input layer has an additional unit called 'The Bias Unit' ($x_0$), which is always equal to 1. The hidden (activation) layers also have an additional bias unit equal to one.

Neural Network:



Details:

The Neural Network above consists of three layers : an Input Layer (Layer 1), a Hidden Layer (Layer 2), and an Output Layer (Layer 3). Both the Input and Hidden Layers (Layers 1 and 2) contain a bias unit, $x_0$ and $a^{(2)}_0$ respectively.

The Hidden Layer: Layer 2, the 'activation layer', consists of activation units $a^{(j)}_i$ whose values are computed from the input layer. Each input unit feeds into each activation unit, and the interaction is characterized by the weight parameters Θ.

Number of Units: 4 (including a bias input unit) in Layer 1, 4 (including a bias activation unit) in Layer 2, and 1 in the Output Layer (Layer 3).

Definitions:
$a^{(j)}_i$: activation of unit i in layer j;
$\Theta^{(j)}$: matrix of weights controlling the function mapping from layer j to layer j+1.

$\Theta^{(1)}$ is 3×4: one row for each computed activation unit $a^{(2)}_1, a^{(2)}_2, a^{(2)}_3$ (the bias unit $a^{(2)}_0$ is simply set to 1 and is not computed from the inputs), and one column for each input unit $x_0, x_1, x_2, x_3$ (including the bias input unit).

The subscript denotes the unit (0, 1, 2 or 3), and the superscript denotes the layer number (1, 2 or 3).

$a^{(2)}_1$ denotes the first unit in the second layer.

The sigmoid function g(z) is defined by $g(z) = \frac{1}{1 + e^{-z}}$

z is defined as follows:

Layer 2: the activation layer
$a^{(2)}_1 = g(\Theta^{(1)}_{10}x_0 + \Theta^{(1)}_{11}x_1 + \Theta^{(1)}_{12}x_2 + \Theta^{(1)}_{13}x_3)$
$a^{(2)}_2 = g(\Theta^{(1)}_{20}x_0 + \Theta^{(1)}_{21}x_1 + \Theta^{(1)}_{22}x_2 + \Theta^{(1)}_{23}x_3)$
$a^{(2)}_3 = g(\Theta^{(1)}_{30}x_0 + \Theta^{(1)}_{31}x_1 + \Theta^{(1)}_{32}x_2 + \Theta^{(1)}_{33}x_3)$

Layer 3: the output layer
$h_\Theta(x) = a^{(3)}_1 = g(\Theta^{(2)}_{10}a^{(2)}_0 + \Theta^{(2)}_{11}a^{(2)}_1 + \Theta^{(2)}_{12}a^{(2)}_2 + \Theta^{(2)}_{13}a^{(2)}_3)$

Dimensions of $\Theta^{(j)}$: if the network has $s_j$ units in layer j and $s_{j+1}$ units in layer j+1, then $\Theta^{(j)}$ will be of dimension $s_{j+1} \times (s_j + 1)$.

The value of z:
$a^{(2)}_1 = g(z^{(2)}_1)$
$a^{(2)}_2 = g(z^{(2)}_2)$
$a^{(2)}_3 = g(z^{(2)}_3)$

$a^{(3)}_1 = g(z^{(3)})$


Forward Propagation Model: Vectorized Implementation:

Define x, Θ, $z^{(j)}_i$, $a^{(j)}_i$ in vector notation:

$x = [x_0\ x_1\ x_2\ x_3]^T = [a^{(1)}_0\ a^{(1)}_1\ a^{(1)}_2\ a^{(1)}_3]^T$

$a^{(1)} = x$

$\Theta^{(1)} = \begin{bmatrix} \Theta^{(1)}_{10} & \Theta^{(1)}_{11} & \Theta^{(1)}_{12} & \Theta^{(1)}_{13} \\ \Theta^{(1)}_{20} & \Theta^{(1)}_{21} & \Theta^{(1)}_{22} & \Theta^{(1)}_{23} \\ \Theta^{(1)}_{30} & \Theta^{(1)}_{31} & \Theta^{(1)}_{32} & \Theta^{(1)}_{33} \end{bmatrix}$  (3×4)

$z^{(2)} = \Theta^{(1)} a^{(1)}$

$\Theta^{(1)}: 3\times4$,  $a^{(1)}: 4\times1$;  $z^{(2)}: 3\times1$

$a^{(2)} = g(z^{(2)}) \in \mathbb{R}^3$

Add $a^{(2)}_0 = 1$ as a bias unit in the activation layer (layer 2), making $a^{(2)}$ 4×1

$\Theta^{(2)} = [\Theta^{(2)}_{10}\ \Theta^{(2)}_{11}\ \Theta^{(2)}_{12}\ \Theta^{(2)}_{13}]$  (1×4)

$z^{(3)} = \Theta^{(2)} a^{(2)}$

$\Theta^{(2)}: 1\times4$,  $a^{(2)}: 4\times1$;  $z^{(3)}: 1\times1$

$h_\Theta(x) = g(\Theta^{(2)}_{10}a^{(2)}_0 + \Theta^{(2)}_{11}a^{(2)}_1 + \Theta^{(2)}_{12}a^{(2)}_2 + \Theta^{(2)}_{13}a^{(2)}_3)$

$h_\Theta(x) = a^{(3)} = g(z^{(3)})$
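
In Octave, this vectorized pipeline is only a few lines (a sketch assuming Theta1 (3×4) and Theta2 (1×4) are given, and x holds the three raw features $x_1, x_2, x_3$):

g = @(z) 1 ./ (1 + exp(-z));   % sigmoid

a1 = [1; x];                   % prepend the bias unit x0 = 1 -> 4x1
z2 = Theta1 * a1;              % 3x1
a2 = [1; g(z2)];               % add bias unit a2_0 = 1 -> 4x1
z3 = Theta2 * a2;              % 1x1
h  = g(z3);                    % h_Theta(x)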


Neural Network learning its own features: Forward Propagation

This is called forward propagation because we map from layer 1 to layer 2, then from layer 2 to layer 3, and so on. Each layer's activations, together with the parameters Θ, serve as the input to the next layer, until we reach the output layer.


Network Architecture: 

A Neural Network can be more complex than the one shown above: it can have multiple hidden (activation) layers $a^{(j)}$, and multiple output units.








Neural Networks: Introduction

Neural Networks: Introduction - How is it different from Logistic Regression

Neural Networks are a class of algorithms used widely for many purposes. They mimic the functioning of the brain (neurons, hence the name) and try to simulate the brain's network in order to teach/train a computer.

Why not logistic regression: Logistic regression is a class of algorithms used for much the same set of problems as neural networks, but it has various constraints. Logistic Regression works best with a small set of features, where all the polynomial terms can be included in the model ($x_1^2 x_2$, $x_1 x_2^2$, etc.). The problem occurs when we have too many features: if we have 100 variables and we need every combination of all the variables, we end up with an enormous number of high-order terms (all the pairwise products, cubic terms, and so on). It becomes extremely hard to create and analyze these features, it leads to overfitting, and the computations become extremely expensive. If we instead reduce the number of features, we lose information about the data.

Neural Networks aim to solve this issue. They are built to handle a huge number of features, which is the typical case in image recognition or handwriting recognition problems. They can be used for simple classification, multiclass classification, or prediction models.






Tuesday, May 5, 2015

Regularization : Logistic Regression

Regularization: Logistic Regression

The problem of overfitting can occur in a Logistic Regression model when the model includes high-order polynomial terms, as in the following equation:

$h_\theta(x) = g(\theta_0 + \theta_1 x_1 + \theta_2 x_1^2 + \theta_3 x_1^2 x_2 + \theta_4 x_1^2 x_2^2 + \theta_5 x_1^2 x_2^3 + \dots)$

The cost function of a Logistic Regression model is given by:

$J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[ y^{(i)}\log(h_\theta(x^{(i)})) + (1 - y^{(i)})\log(1 - h_\theta(x^{(i)})) \right]$

In a similar way as regularization for Linear Regression, we add a regularization term to the cost function, defined as $\frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$

We do not include $\theta_0$ in the regularization term; the regularization is defined over $\theta_1, \theta_2, \theta_3, \dots, \theta_n$

The cost function for regularized Logistic Regression becomes:

$J(\theta) = -\left[\frac{1}{m}\sum_{i=1}^{m} y^{(i)}\log(h_\theta(x^{(i)})) + (1 - y^{(i)})\log(1 - h_\theta(x^{(i)}))\right] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$

Gradient Descent with Regularization:
$\theta_0 := \theta_0 - \alpha\left[\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})\, x^{(i)}_0\right]$

$\theta_j := \theta_j - \alpha\left[\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})\, x^{(i)}_j + \frac{\lambda}{m}\theta_j\right]$  (simultaneously update all $\theta_j$)

The value of $\theta_0$ is calculated separately, without the regularization term. The index j runs over $1 \dots n$ in the regularized update.

Regularization with Advanced Optimization:

Estimating θ using advanced optimization


Code:

function [jVal, gradient] = costFunction(theta)

jVal = [code to compute $J(\theta)$]

$J(\theta) = -\left[\frac{1}{m}\sum_{i=1}^{m} y^{(i)}\log(h_\theta(x^{(i)})) + (1 - y^{(i)})\log(1 - h_\theta(x^{(i)}))\right] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$

gradient(1) = [code to compute $\frac{\partial}{\partial\theta_0} J(\theta)$]
No regularization term for $\theta_0$: $\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})\, x^{(i)}_0$

gradient(2) = [code to compute $\frac{\partial}{\partial\theta_1} J(\theta)$]

$\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})\, x^{(i)}_1 + \frac{\lambda}{m}\theta_1$

gradient(3) = [code to compute $\frac{\partial}{\partial\theta_2} J(\theta)$]

$\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})\, x^{(i)}_2 + \frac{\lambda}{m}\theta_2$
...

gradient(n+1) = [code to compute $\frac{\partial}{\partial\theta_n} J(\theta)$]

$\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})\, x^{(i)}_n + \frac{\lambda}{m}\theta_n$
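
A vectorized Octave sketch of this cost function (a sketch, assuming X is m×(n+1) with a leading column of ones and y is m×1):

function [jVal, gradient] = costFunction(theta, X, y, lambda)
  m = length(y);
  g = @(z) 1 ./ (1 + exp(-z));
  h = g(X * theta);

  % Regularized cost; theta(1) (i.e. theta_0) is excluded from the penalty
  jVal = -(1/m) * (y' * log(h) + (1 - y)' * log(1 - h)) ...
         + (lambda/(2*m)) * sum(theta(2:end).^2);

  % Gradient; again, no regularization for theta(1)
  gradient = (1/m) * X' * (h - y);
  gradient(2:end) = gradient(2:end) + (lambda/m) * theta(2:end);
end

Since fminunc passes only theta, the extra arguments can be bound with an anonymous function: fminunc(@(t) costFunction(t, X, y, lambda), initialTheta, options);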






Monday, May 4, 2015

Regularization : Linear Regression

Regularization : Linear Regression

$J(\theta) = \frac{1}{2m}\left[\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})^2 + \lambda\sum_{j=1}^{n}\theta_j^2\right]$

Goal: minimize J(θ)

Gradient Descent:

GD Algorithm without the regularization term:

repeat until convergence {

$\theta_0 := \theta_0 - \alpha\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})\, x^{(i)}_0$

$\theta_j := \theta_j - \alpha\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})\, x^{(i)}_j$

}

Gradient Descent with Regularization term λ added:

Since we are not regularizing θ0, it is updated separately and the regularization term is not included in the calculation of θ0

repeat until convergence {

$\theta_0 := \theta_0 - \alpha\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})\, x^{(i)}_0$

$\theta_j := \theta_j - \alpha\left[\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})\, x^{(i)}_j + \frac{\lambda}{m}\theta_j\right]$
(j = 1, 2, 3, …, n)
}

Simplifying:

$\theta_j := \theta_j\left(1 - \alpha\frac{\lambda}{m}\right) - \alpha\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})\, x^{(i)}_j$


The term $\left(1 - \alpha\frac{\lambda}{m}\right)$ is slightly less than 1 (e.g. 0.9 or 0.95), so each step shrinks the value of $\theta_j$ by repeatedly multiplying it with a number less than 1, thus achieving the goal of regularization.


Normal Equation Method:

Matrices:

X : m × (n+1) matrix
y : m-dimensional vector

Without Regularization:

$\theta = (X^TX)^{-1}X^Ty$

With Regularization:

$\theta = \left(X^TX + \lambda\begin{bmatrix} 0 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{bmatrix}\right)^{-1} X^Ty$

The matrix multiplied by λ is an (n+1)×(n+1) matrix whose first diagonal element and all off-diagonal elements are 0; the remaining elements on the main diagonal are 1.

The Normal equation method with regularization takes care of the non-invertibility of the matrix XTX
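
In Octave, this regularized solve is only a few lines (a sketch; X is m×(n+1) with the leading column of ones):

n = size(X, 2) - 1;        % number of features
L = eye(n + 1);
L(1, 1) = 0;               % do not regularize theta_0
theta = (X' * X + lambda * L) \ (X' * y);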

Regularization for Linear & Logistic Regression : Overfitting & Cost Function

The Problem of Over fitting:

The parameters that generate a model can produce three types of results: overfit, right fit, and underfit.


Overfitting:
  • High Variance scenario: the model fits the training data too well
  • Usually occurs due to the presence of a large number of higher-order parameters
  • Fits the training examples quite well, but fails to generalize to new examples
Underfitting:
  • High Bias scenario: the model fits the data poorly and is overly generalized (roughly the same result for all data)
  • Doesn't fit the training data well

Addressing Overfitting:

  1. Reduce the number of features
    1. Manually Select which features to keep
    2. Model selection algorithm
  2. Regularization
    1. Keep all the features, but reduce the magnitude/values of parameters θj
    2. Works well when we have a lot of features, each one of which contributes a bit in predicting y

 Regularization: Cost Function

Intuition:

$h_\theta(x)$ for an overfitted function:

$\theta_0 + \theta_1 x + \theta_2 x^2 + \theta_3 x^3 + \theta_4 x^4$

We can penalize $\theta_3$ and $\theta_4$ to make them very small by changing the cost function to:

$J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})^2 + 1000\,\theta_3^2 + 1000\,\theta_4^2$

Multiplying $\theta_3^2$ and $\theta_4^2$ by 1000 in the cost function forces the algorithm to reduce the values of both $\theta_3$ and $\theta_4$ very close to 0.

The smaller values of all θ's will generate a simpler hypothesis which will be less prone to overfitting.

Implementation:

The Regularization Parameter λ:
 
The cost function can be rewritten as :

$J(\theta) = \frac{1}{2m}\left[\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})^2 + \lambda\sum_{j=1}^{n}\theta_j^2\right]$

Points to note:
  • An extra term is added to regularize all the θ parameters; $\theta_0$ is not penalized (the summation starts at j=1 and not at j=0, leaving out $\theta_0$)
  • The parameter λ controls a tradeoff between the two goals of the equation: fitting the data well (first part), and keeping the coefficients small (second part)
  • Selecting the value of λ: if λ is too large, the parameters θ will be penalized very heavily and become very close to 0. The hypothesis then reduces to $h_\theta(x) = \theta_0$, which is highly biased
  • If λ is too large, the Gradient Descent algorithm may also fail to converge

Logistic Regression : One Vs All Classification

MultiClass Classification : Logistic Regression

Examples:

  • Email Folder tagging: Work (y=1), Friends (y=2), Family (y=3), Travel (y=4)
  • Weather : Sunny (y=1), Cloudy (y=2), Rain (y=3)
The outcome variable y is not restricted to only two outcomes ($y \in \{0, 1\}$) but is defined by $y \in \{1, 2, 3, 4\}$, depending on the number of classifications/groups we need to make.


In multiclass classification, we train the model separately for y=1, y=2, y=3 and so on, and for each new input we select the class that maximizes $h^{(i)}_\theta(x)$

$h^{(i)}_\theta(x) = P(y = i \mid x; \theta)$

Train a logistic regression classifier $h^{(i)}_\theta(x)$ for each class i to predict the probability that y = i.

On a new input x, to make a prediction, pick the class i that maximizes $h^{(i)}_\theta(x)$
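
A hedged Octave sketch of the prediction step (assuming all_theta stores one trained θ per row, one row per class, and X has the leading bias column):

g = @(z) 1 ./ (1 + exp(-z));

% Probability of each class for every example (m x K), then pick the max
probs = g(X * all_theta');
[~, predictions] = max(probs, [], 2);   % predictions(i) is the chosen class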




Logistic Regression : Advanced Optimization

Advanced Optimization for Logistic Regression : Finding the values of θ

The Gradient Descent algorithm is one way to calculate the values of the parameters θ. However, it requires selecting an appropriate value of α, which may take multiple attempts.

There are various other optimization algorithms for minimizing a cost function J(θ). They are a bit more complex, but there is no need to manually pick a value of α, and they are often much faster than Gradient Descent. Examples are Conjugate Gradient, BFGS and L-BFGS.

Coding the Advanced Optimization Algorithms in MATLAB/Octave:

Example:

Say we have to optimize $\theta_1$ and $\theta_2$, and J(θ) is given by
$J(\theta) = (\theta_1 - 5)^2 + (\theta_2 - 5)^2$
$\frac{\partial}{\partial\theta_1} J(\theta) = 2(\theta_1 - 5)$
$\frac{\partial}{\partial\theta_2} J(\theta) = 2(\theta_2 - 5)$

We write the function in Matlab/Octave which calculates the value of the cost function J(θ) and the partial derivatives (gradient 1 for θ1 and gradient 2 for θ2)

Note: Indexing in Octave/MATLAB starts from 1, so $\theta_0, \theta_1, \dots, \theta_n$ in the equations corresponds to theta(1), theta(2), …, theta(n+1) in code.

Code:
function [jVal, gradient] = costFunction(theta)

jVal = (theta(1)-5)^2 + (theta(2)-5)^2;
gradient = zeros(2,1);
gradient(1) = 2*(theta(1)-5);
gradient(2) = 2*(theta(2)-5);
...

Once the code for calculating jVal (the cost function) and gradient is written, the values of θ are optimized by the following code:

options = optimset('GradObj', 'on', 'MaxIter', 100);
initialTheta = zeros(2,1);
[optTheta, functionVal, exitFlag] ...
     = fminunc(@costFunction, initialTheta, options);



Recap:

theta = $[\theta_0\ \theta_1\ \dots\ \theta_n]^T$
function [jVal, gradient] = costFunction(theta)

jVal = [code to compute $J(\theta)$]

gradient(1) = [code to compute $\frac{\partial}{\partial\theta_0} J(\theta)$]
gradient(2) = [code to compute $\frac{\partial}{\partial\theta_1} J(\theta)$]
...
gradient(n+1) = [code to compute $\frac{\partial}{\partial\theta_n} J(\theta)$]

Logistic Regression : Cost Function & Gradient Descent

Logistic Regression: Cost Function

The Logistic Regression hypothesis function is given by
$h_\theta(x) = \frac{1}{1 + e^{-\theta^Tx}}$

The question is : how do we choose the parameters θ?

Recap: Linear Regression Cost Function

$J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\frac{1}{2}(h_\theta(x^{(i)}) - y^{(i)})^2$

Logistic Regression Cost Function:

In Logistic Regression, $h_\theta(x) = \frac{1}{1 + e^{-\theta^Tx}}$, as opposed to Linear Regression, where $h_\theta(x) = \theta_0 + \theta_1x_1 + \dots$

The problem with using $\frac{1}{2}(h_\theta(x^{(i)}) - y^{(i)})^2$ as the cost for Logistic Regression is that the resulting curve of J(θ) is non-convex, i.e. it has multiple local minima which cannot be reliably optimized by Gradient Descent. For Gradient Descent to converge, the cost function has to be convex, as is the case with Linear Regression.


Cost Function :

$\text{Cost}(h_\theta(x), y) = -\log(h_\theta(x))$  if y = 1
$\text{Cost}(h_\theta(x), y) = -\log(1 - h_\theta(x))$  if y = 0

or

$\text{Cost}(h_\theta(x), y) = -y\log(h_\theta(x)) - (1 - y)\log(1 - h_\theta(x))$

If y = 1:
Cost = 0 if y = 1 and $h_\theta(x) = 1$ (i.e. the actual value of y is 1 and the predicted value of y is also 1)

But as $h_\theta(x) \to 0$, Cost $\to \infty$

If y = 0:
Cost = 0 if y = 0 and $h_\theta(x) = 0$ (i.e. the actual value of y is 0 and the predicted value of y is also 0)

But as $h_\theta(x) \to 1$, Cost $\to \infty$

Simplified Cost Function:

$J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\text{Cost}(h_\theta(x^{(i)}), y^{(i)})$

$J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[ y^{(i)}\log(h_\theta(x^{(i)})) + (1 - y^{(i)})\log(1 - h_\theta(x^{(i)})) \right]$

Now, to fit the parameters θ, we need to minimize J(θ).

To make a prediction given a new x:
Output $h_\theta(x) = \frac{1}{1 + e^{-\theta^Tx}}$

Gradient Descent Function for Logistic Regression:

Pseudocode : repeat until convergence {

$\theta_j := \theta_j - \alpha\frac{\partial}{\partial\theta_j} J(\theta)$

}

Putting in the derivative of the cost function:

$\theta_j := \theta_j - \alpha\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})\, x^{(i)}_j$  (simultaneously update all $\theta_j$)

The Gradient Descent algorithm for Logistic Regression looks the same as for Linear Regression; the only difference is the hypothesis $h_\theta(x)$, which in this case is $h_\theta(x) = \frac{1}{1 + e^{-\theta^Tx}}$ instead of $h_\theta(x) = \theta^Tx$ for Linear Regression.
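
A hedged Octave sketch of this update in vectorized form (assumed setup: X is m×(n+1) with a leading column of ones, y is m×1, alpha and num_iters chosen by the user):

g = @(z) 1 ./ (1 + exp(-z));

for iter = 1:num_iters
  h = g(X * theta);                           % logistic hypothesis
  theta = theta - (alpha/m) * X' * (h - y);   % simultaneous update of all theta_j
end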





Logistic Regression: Decision Boundary

Logistic Regression : Decision Boundary

$h_\theta(x) = g(\theta^Tx)$

$g(z) = \frac{1}{1 + e^{-z}}$

Threshold:

Predict y = 1 if $h_\theta(x) \ge 0.5$, i.e. $\theta^Tx \ge 0$
($g(z) \ge 0.5$ when $z \ge 0$)

Predict y = 0 if $h_\theta(x) < 0.5$, i.e. $\theta^Tx < 0$
($g(z) < 0.5$ when $z < 0$)
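
In Octave this threshold rule is a one-liner (a sketch; X includes the bias column):

g = @(z) 1 ./ (1 + exp(-z));
p = g(X * theta) >= 0.5;   % logical vector: 1 predicts y=1, 0 predicts y=0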

Decision Boundary:

The Decision Boundary is a property of the hypothesis and its parameters, not a property of the dataset.



In the above example, the two datasets (red cross and blue circles) can be separated by a decision boundary whose equation is given by:

$h_\theta(x) = g(\theta_0 + \theta_1x_1 + \theta_2x_2)$

Suppose the parameters θ are given by the vector

$\theta = [-3\ \ 1\ \ 1]^T$

The model becomes:

Predict "y = 1" if $-3 + x_1 + x_2 \ge 0$,
or $x_1 + x_2 \ge 3$

Similarly, predict "y = 0" if $x_1 + x_2 < 3$

At the decision boundary, where $x_1 + x_2 = 3$, $h_\theta(x) = 0.5$.

A decision boundary can be nonlinear, depending on the parameters. For example, a logistic regression with the following hypothesis will produce a circular decision boundary:

$h_\theta(x) = g(\theta_0 + \theta_1x_1 + \theta_2x_2 + \theta_3x_1^2 + \theta_4x_2^2)$
where the vector θ is given by $\theta = [-1\ 0\ 0\ 1\ 1]^T$

In this case, predict "y = 1" when $z \ge 0$, where $z = -1 + x_1^2 + x_2^2$ (i.e. when $x_1^2 + x_2^2 \ge 1$).

The cost function and the Gradient Descent algorithm work in much the same manner as for Linear Regression, with the exception of the hypothesis $h_\theta(x)$, which is different for Linear and Logistic Regression.




Classification using Logistic Regression

Classification using Logistic Regression:

Classification is different from linear regression in that it classifies the data into two or more categories. It is still called regression: it takes input from the training data and fits a model much like linear regression, but since it uses the logistic function to classify the data points, it is named Logistic Regression.

Why can't we use Linear Regression for classification? Linear regression can separate two sets of data points (using a higher-order polynomial in the case of a non-linear decision boundary), but the presence of any outlier seriously distorts the classification. This problem is easily handled by Logistic Regression.

Logistic regression examples:

  • Emails: Spam/Not Spam
  • Online Transactions: Fraud/Not Fraud
  • Tumor: Malignant/Benign
Typically the target variable (outcome) is classified as

$y \in \{0, 1\}$
0: Negative Class (e.g. benign tumor)
1: Positive Class (e.g. malignant tumor)

Differences with Linear Regression:

Logistic: y = 0 or 1

Logistic: $0 \le h_\theta(x) \le 1$
Linear: $h_\theta(x)$ can be > 1 or < 0

Logistic Regression Model:

Want: $0 \le h_\theta(x) \le 1$

$h_\theta(x) = g(\theta^Tx)$

$g(z) = \frac{1}{1 + e^{-z}}$

where $z = \theta^Tx$

The function g(z) is called the sigmoid function or the logistic function.


Interpretation of Hypothesized Output:

hθ(x) = estimated probability that y=1 on input x

Example: If hθ(x) = 0.7, there is a 70% chance that y = 1

$h_\theta(x) = P(y = 1 \mid x; \theta)$




Thursday, April 30, 2015

Linear Regression : Implementation using Normal Equation

Normal Equation: Intuition

The Normal Equation is a method to solve for θ analytically.

The first step is to convert or represent the dataset in a Matrix and vector form.

Consider the dataset:


The variables here are the predictors ($x_1, x_2, x_3, x_4$) and the outcome y. The coefficients are $\theta_0, \theta_1, \theta_2, \theta_3$ and $\theta_4$.

The hypothesis:
$h_\theta(x) = \theta_0x_0 + \theta_1x_1 + \theta_2x_2 + \theta_3x_3 + \theta_4x_4$

Another column needs to be added for x0 which will just be filled with 1's to make the dataset look like:



The matrix X will be denoted as:

$X = \begin{bmatrix} 1 & 2104 & 4 & 2 & 3 \\ 1 & 1400 & 2 & 1 & 5 \\ 1 & 3500 & 5 & 2 & 3 \\ 1 & 960 & 1 & 1 & 7 \end{bmatrix}$
m × (n+1)

The vector y will be denoted as
$y = \begin{bmatrix} 1465 \\ 900 \\ 1000 \\ 435 \end{bmatrix}$
m-dimensional vector

m training examples, (n+1) features

$\theta = (X^TX)^{-1}X^Ty$
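
In Octave this solve is a one-liner (pinv is used rather than inv so that a non-invertible $X^TX$ is still handled):

theta = pinv(X' * X) * X' * y;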

When to use Normal Equation and when to use Gradient Descent:

The Gradient Descent algorithm needs an arbitrary parameter α, which is not needed with the Normal Equation. There is also no need for feature normalization with the Normal Equation method. However, if the number of features is too large (n > 10,000), the Normal Equation method becomes too slow because of the difficulty of computing the inverse of a very large matrix. Gradient Descent works well even when the number of features is on the order of $10^6$.


Wednesday, April 29, 2015

Linear Regression : Learning Rate

Linear Regression : Learning Rate

How to correctly choose the Learning Rate α?

The correct value of α makes the Gradient Descent algorithm converge and the cost J reach a minimum. If α is too large, the algorithm may not converge; if α is too small, the algorithm will be very slow. It can be shown mathematically that for a sufficiently small value of α the algorithm will always converge, though it may be slow.

The practical approach is to look at the value of the cost function every few iterations and check that J is decreasing. For example, compare the values of J after 100, 200 and 300 iterations (it should be decreasing for the algorithm to converge). One rule of thumb is to declare convergence when J decreases by less than $10^{-3}$ between subsequent steps.

To choose α, try setting it to 0.001, 0.01, 0.1, 1, 10, 100 and so on in multiples of 10; or set α to 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, … in multiples of roughly 3. A small sketch of such a sweep is shown below.
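
A hedged sketch of the sweep (gradientDescent is a hypothetical helper that runs GD and returns the cost at every iteration; it is not defined in these notes):

alphas = [0.001 0.003 0.01 0.03 0.1 0.3 1];
for k = 1:numel(alphas)
  % hypothetical helper: returns the final theta and the cost history
  [theta, J_history] = gradientDescent(X, y, zeros(size(X,2),1), alphas(k), 300);
  fprintf('alpha = %.3f -> final J = %.4f\n', alphas(k), J_history(end));
end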


Cost Function : Multivariate Linear Regression

The concepts for calculating the cost function J, and estimating the parameters θ0 and θ1 can be easily extended to the case where we have multiple features in the dataset, i.e. instead of only one variable x we have multiple variables x1,x2,x3... and so on.

Notations:

Consider a training dataset with 5 variables $x_1, x_2, x_3, x_4$ and $x_5$. The outcome variable is still y. There are m examples in the dataset (number of rows).

n = number of features
$x^{(i)}$ = input (features) of the $i$th training example
$x^{(i)}_j$ = value of feature j in the $i$th training example
Linear Regression : Multivariate [Multiple features]

Hypothesis:
Univariate: $h_\theta(x) = \theta_0 + \theta_1x$
Multivariate: $h_\theta(x) = \theta_0 + \theta_1x_1 + \theta_2x_2 + \theta_3x_3 + \theta_4x_4 + \dots + \theta_nx_n$

For convenience of notation, define $x_0 = 1$:

$h_\theta(x) = \theta_0x_0 + \theta_1x_1 + \theta_2x_2 + \theta_3x_3 + \theta_4x_4 + \dots + \theta_nx_n$

The vector x contains $[x_0, x_1, x_2, \dots, x_n]$ and has dimension (n+1)
The vector θ contains $[\theta_0, \theta_1, \theta_2, \dots, \theta_n]$ and has dimension (n+1)

The hypothesis can be written as

$h_\theta(x) = \theta^Tx$

Cost Function & Gradient Descent Algorithm for a Multivariate Linear Regression

Cost Function:
$J(\theta_0, \theta_1, \dots, \theta_n) = \frac{1}{2m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})^2$

Gradient Descent for $\theta_0, \theta_1, \theta_2, \dots, \theta_n$:
repeat until convergence {

$\theta_j := \theta_j - \alpha\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})\, x^{(i)}_j$

} Simultaneously update $\theta_j$ for j = 0, 1, …, n


The value of $x_0$ is always 1, so this generalizes the Gradient Descent algorithm for univariate as well as multivariate linear regression. A vectorized form is sketched below.
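
The same update in vectorized Octave form (a sketch; X is m×(n+1) with a leading column of ones, y is m×1):

for iter = 1:num_iters
  h = X * theta;                              % predictions, m x 1
  theta = theta - (alpha/m) * X' * (h - y);   % update all theta_j at once
end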

Cost Function : Gradient Descent

Understanding the Gradient Descent Algorithm for minimizing the cost function of a Linear Regression Model

Gradient Descent is a generic algorithm used in many scenarios beyond finding parameters that minimize a cost function in linear regression. It's a robust algorithm, though a little tricky to implement compared to other methods (like the Normal Equation method). It generalizes to univariate and multivariate models alike, finding the parameters $\theta_0, \theta_1, \theta_2, \dots$ corresponding to the variables $x_0, x_1, x_2, \dots$ that minimize the cost function $J(\theta_0, \theta_1, \dots)$.

More about the Cost Function : Understanding Linear Regression

Gradient Descent Algorithm :

Case: GD algorithm applied for minimizing θ0andθ1. The same case can be generalized over multiple values of θ

The Gradient Descent (GD) Algorithm works in the following way:

  1. Have some function $J(\theta_0, \theta_1)$
  2. Want to minimize $J(\theta_0, \theta_1)$
  3. Outline:
    1. Start with some values of $\theta_0, \theta_1$
    2. Keep changing $\theta_0, \theta_1$ to reduce $J(\theta_0, \theta_1)$ until we hopefully end at a minimum

Gradient Descent starts at a point on the surface generated by plotting $J(\theta_0, \theta_1)$ against $\theta_0$ and $\theta_1$. It then takes a small step downhill to find a point where the value of J is lower than the current one, and resets $\theta_0$ and $\theta_1$ to the new values. It repeats these downhill steps until it reaches a local minimum of the surface. The values of $\theta_0$ and $\theta_1$ are read off at this minimum value of J.

Key pointers:
  • How do we decide how big a step to take in the downhill direction? This is defined by the parameter α, which is called the Learning Rate of the GD algorithm
  • The surface above shows two local minima. What if the GD algorithm reaches a local minimum that is not the global minimum? The reason GD works for Linear Regression is that its cost function is convex, with a single global minimum, so irrespective of where you start, you will reach the global minimum of $J(\theta_0, \theta_1)$



Convex function:


Gradient Descent Algorithm: Pseudocode

repeat until convergence {

$\theta_j := \theta_j - \alpha\frac{\partial}{\partial\theta_j} J(\theta_0, \theta_1)$  for j = 0 and j = 1

}

Simultaneously update both $\theta_0$ and $\theta_1$, i.e. don't plug the value of $\theta_0$ calculated in one step into the function J when calculating the value of $\theta_1$ within the same step.

The Learning Rate α: This defines how fast an algorithm converges, i.e. are we taking a small step or a big step in moving to the next minimum value of J. If the value of α is too small, the steps will be small and the algorithm will be slow. If the value of α is large, the steps will be big and we might overshoot the minimum value of J.

Also, as we approach a minimum, the GD algorithm automatically takes smaller steps (the derivative term shrinks), so there is no need to readjust or decrease α over time.

The partial derivative term: this is the slope of the curve at the current point $(\theta_0, \theta_1)$. It can be negative or positive, and it pulls the values of $\theta_0$ and $\theta_1$ towards the minimum of the curve.


Application : Gradient Descent Algorithm for Linear Regression

Linear Regression Model: $h_\theta(x) = \theta_0 + \theta_1x$
Cost Function: $J(\theta_0, \theta_1) = \frac{1}{2m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})^2$


Step 1: Calculate the partial derivatives of J w.r.t. $\theta_0$ and $\theta_1$

j = 0 : $\frac{\partial}{\partial\theta_0} J(\theta_0, \theta_1) = \frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})$

j = 1 : $\frac{\partial}{\partial\theta_1} J(\theta_0, \theta_1) = \frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})\, x^{(i)}$

This is simple partial differentiation, once with respect to $\theta_0$ and again with respect to $\theta_1$, keeping the other parameter constant. If the calculus is unfamiliar, the results above are all that is needed.

Step 2: The Gradient Descent Algorithm now becomes:

repeat until convergence {

$\theta_0 := \theta_0 - \alpha\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})$

$\theta_1 := \theta_1 - \alpha\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})\, x^{(i)}$

} Update $\theta_0$ and $\theta_1$ simultaneously

Batch Gradient Descent:  Each step of gradient Descent uses all the training examples

The Normal Equation Method: there exists a method in Linear Algebra in which matrices and vectors are used to calculate the parameters $\theta_0$ and $\theta_1$ without iteration. This method is efficient, except in the case of very large datasets, where it becomes computationally expensive due to the calculation of the inverse of very large matrices.

To use the Normal Equation method of solving for θ0 and θ1 , the dataset has to be represented in a Matrix format.


Tuesday, April 28, 2015

Linear Regression : Understanding the Cost Function

Defining the Cost:

When we fit a regression line through existing data, the line passes through the data but not all points lie on it (they would only in the case of a perfect correlation between X and Y). Some points lie above the line and some below it.

The whole idea of regression is to fit the regression line such that the total distance between the existing data points and the line is minimal (not the orthogonal distance, but the distance measured parallel to the y-axis, as shown in the figure above). The points above the line yield positive distances, while the points below the line yield negative distances.

The residual e is defined as $e_i = \hat{y}_i - y_i$

The errors are squared to eliminate negative values, and the Sum of Squared Errors (SSE) is minimized to obtain the coefficients of X in the equation $y = \theta_0 + \theta_1x + e$

$SSE = \sum_{i=1}^{m}(\hat{y}_i - y_i)^2$

This method is known as Ordinary Least Squares (OLS)

The hypothesis $h_\theta(x)$ is written as

$h_\theta(x) = \theta_0 + \theta_1x$



The Cost Function (J) is typically written as
$J(\theta_0, \theta_1) = \frac{1}{2m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})^2$

The goal of the regression analysis is to find the coefficients $\theta_0$ and $\theta_1$ that minimize this cost for a univariate regression. For multivariate regression, there are multiple variables for X ($x_1, x_2, x_3, \dots$) with their respective coefficients ($\theta_1, \theta_2, \theta_3, \dots$); the coefficient $\theta_0$ is the constant term in the equation.
--------------------------------------------------------------------------------------------------
To summarize:
Hypothesis: $h_\theta(x) = \theta_0 + \theta_1x$
Parameters: $\theta_0, \theta_1$
Cost Function:

$J(\theta_0, \theta_1) = \frac{1}{2m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})^2$

Goal: $\min_{\theta_0, \theta_1} J(\theta_0, \theta_1)$
--------------------------------------------------------------------------------------------------
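
Computing this cost in Octave is a one-liner (a sketch; X is the m×2 matrix [ones(m,1) x] and theta is 2×1):

m = length(y);
J = 1/(2*m) * sum((X * theta - y) .^ 2);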





Monday, April 27, 2015

Linear Regression : The Basics



Some Definitions of Linear Regression:
  • When a player plays well, the team tends to win. Given that the player has played well in the next match, it's highly probable that the team will win again
  • Predicting the value of a variable by observing its relationship with another variable.
  • Wiki : Linear Regression is an approach for modeling the relationship between a scalar dependent variable y and one or more explanatory variables (or independent variables) denoted X
  • Example: the price of a house depends on its area and number of bedrooms. Given data about the area of houses and their number of bedrooms (X: area ($x_1$) & num_bedrooms ($x_2$)) and their prices (Y), find a relationship between the price of a house and its area & number of bedrooms. Then, given a new house's area and number of bedrooms, predict the price of that house based on the historical data
All the above statements and examples are from the Supervised Learning paradigm of Machine Learning. They take some initial data, understand the pattern in it, and use the pattern on the new data to figure out the outcome


The predictor variables (X) are known as predictors or independent variables, since they are independent: their values can change without being affected by any other variable.

The outcome variable (Y) is known as the dependent variable, since its value is 'dependent' on the values of the independent variables (X's); if any of the X's change, the value of Y changes.

An equation is typically written as $y = \beta_0 + \beta_1x_1 + \dots + \beta_nx_n + \epsilon$, where

y: Outcome Variable
X: vector of Predictor variables (independent variables)
ϵ: error term
β: Coefficients of the terms


Another way of representing the model is :

The Training Dataset goes into a learning model, which proposes a hypothesis 'h' which is used to take independent variables (X's) as input and produce a result (Outcome variable/Dependent Variable)

What if there is no relationship? How accurate is our model, and how do we measure that? What is the error term? Every model comes with a cost function that estimates the difference between the actual values of y in the training data and the values predicted by the model. This gap between the actual and estimated values of Y is known as the Cost.

All optimization objectives aim at reducing the cost: the lower the cost, the better the model (though there are other factors too).