Here’s the original paper that proposes the algorithm that we’re going to implement.
SVMs are a very popular classification learning tool, and in their original form, the task of learning an SVM is a loss minimization problem with a penalty term for the norm of the classifier being learned. So for a given training set of $m$ training examples $S = \{(x_i, y_i)\}_{i=1}^{m}$, where $x_i \in \mathbb{R}^n$ and $y_i \in \{+1, -1\}$, we want to build a minimizer for the following function (note that here $w$ and $x_i$ are vectors):

$$f(w) = \frac{\lambda}{2} \|w\|^2 + \frac{1}{m} \sum_{(x, y) \in S} \ell(w; (x, y))$$
where $\ell$ is the loss function, given by

$$\ell(w; (x, y)) = \max\{0,\ 1 - y \langle w, x \rangle\}$$

where $\langle w, x \rangle$ denotes the inner product of the two vectors. To get an intuitive insight into why this loss function works, let's consider two examples:
The variable $y$ carries the classification information: $y = +1$ means belonging to the class, and $y = -1$ means not belonging to the class for which the classifier is being trained. Therefore, a positive value of $\langle w, x \rangle$ along with $y = +1$, or a negative value of $\langle w, x \rangle$ along with $y = -1$, is favorable, because both these cases denote that the prediction works right. It also means that the loss function will be minimized in these two scenarios (it is exactly $0$ once $y \langle w, x \rangle \geq 1$; work it out if needed), which is a part of our main function that we need to build the minimizer for.
PEGASOS stands for Primal Estimated sub-GrAdient SOlver for SVM. It is a stochastic method, where stochastic means having a random probability distribution or pattern that may be analysed statistically but may not be predicted precisely. Let's dive into the algorithm and see why it's stochastic. The PEGASOS algorithm performs stochastic gradient descent on the primal objective function with a carefully chosen step size: at iteration $t$, an example $(x_{i_t}, y_{i_t})$ is picked uniformly at random and the objective is approximated as

$$f(w; i_t) = \frac{\lambda}{2} \|w\|^2 + \ell(w; (x_{i_t}, y_{i_t}))$$

whose subgradient is

$$\nabla_t = \lambda w_t - \mathbb{1}[y_{i_t} \langle w_t, x_{i_t} \rangle < 1]\, y_{i_t} x_{i_t}$$

and the update $w_{t+1} = w_t - \eta_t \nabla_t$ is performed with step size $\eta_t = \frac{1}{\lambda t}$.
Here $\mathbb{1}[y_{i_t} \langle w_t, x_{i_t} \rangle < 1]$ is the indicator function, which takes the value $1$ if its argument is true, and $0$ otherwise, so we know that the value will be $1$ only when $w_t$ yields some non-zero loss on the example $(x_{i_t}, y_{i_t})$.
Let’s try to see why this approximation is okay for a suitably chosen step size. For this, we’ll look at the value of the complete gradient, and the subgradient computed with the approximated function above, and see a relation between them. The complete (sub)gradient of $f(w)$ will be

$$\nabla f(w_t) = \lambda w_t - \frac{1}{m} \sum_{i=1}^{m} \mathbb{1}[y_i \langle w_t, x_i \rangle < 1]\, y_i x_i$$
Now what will be the expected value of the subgradient that we computed above? To find it, we observe that the example used to approximate the objective is chosen uniformly at random, which means that the probability of any particular example being selected is $1/m$. Thus the expected value of the subgradient turns out to be equal to the complete gradient of $f(w)$, our primal objective function. And that is why, intuitively, this approximation is expected to work well enough. The next section deals with mini-batch iterations, which approximate the objective with more determinism.
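The expectation argument can be written out in one line, conditioning on the current iterate $w_t$ and averaging the subgradient over the $m$ equally likely choices of $i_t$:

```latex
\mathbb{E}\left[\nabla_t \mid w_t\right]
  = \sum_{i=1}^{m} \frac{1}{m} \Bigl( \lambda w_t
      - \mathbb{1}[y_i \langle w_t, x_i \rangle < 1]\, y_i x_i \Bigr)
  = \lambda w_t - \frac{1}{m} \sum_{i=1}^{m}
      \mathbb{1}[y_i \langle w_t, x_i \rangle < 1]\, y_i x_i
  = \nabla f(w_t)
```

So on average the stochastic step moves in exactly the direction of the full subgradient.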
As an extension of the basic procedure, we now select a subset of examples, rather than a single example, for approximating the objective. So for a given $k$, we choose a subset $A_t \subseteq \{1, \ldots, m\}$ of size $k$ and approximate the objective as before. Note that $k = 1$ is the case we already saw above. So now the objective can be written as

$$f(w; A_t) = \frac{\lambda}{2} \|w\|^2 + \frac{1}{k} \sum_{i \in A_t} \ell(w; (x_i, y_i))$$

where $A_t$ is the subset chosen in iteration $t$.
A potential variation of the above algorithm is to limit the set of admissible solutions to a ball of radius $1/\sqrt{\lambda}$. To enforce this, we project $w_{t+1}$ after each iteration onto this sphere as

$$w_{t+1} \leftarrow \min\left\{1,\ \frac{1/\sqrt{\lambda}}{\|w_{t+1}\|}\right\} w_{t+1}$$

The revised analysis as presented in the paper does not compulsorily require this projection step. It mentions this as an optional step because no major difference was found during the experiments between the projected and the unprojected variants.
It is proved in the paper that the number of iterations required to obtain a solution of accuracy $\epsilon$ is $\tilde{O}(1/\epsilon)$, where each iteration operates on a single training example. In contrast, previous analyses of stochastic gradient descent methods for SVMs required $\Omega(1/\epsilon^2)$ iterations. As in previously devised SVM solvers, the number of iterations also scales linearly with $1/\lambda$, where $\lambda$ is the regularization parameter of the SVM. Since PEGASOS works on a stochastic approximation, the runtime of the algorithm does not depend directly on the number of training examples $m$; it depends only on $k$, the size of the subset we are taking, and $T$, the number of iterations that we are making. The implemented code (in C++) will be put up later when I’m not feeling lazy.