[SPARK-1506][MLLIB] Documentation improvements for MLlib 1.0
    Preview: http://54.82.240.23:4000/mllib-guide.html
    
    Table of contents:
    
    * Basics
      * Data types
      * Summary statistics
    * Classification and regression
      * linear support vector machine (SVM)
      * logistic regression
  * linear least squares, Lasso, and ridge regression
      * decision tree
      * naive Bayes
    * Collaborative Filtering
      * alternating least squares (ALS)
    * Clustering
      * k-means
    * Dimensionality reduction
      * singular value decomposition (SVD)
      * principal component analysis (PCA)
    * Optimization
      * stochastic gradient descent
      * limited-memory BFGS (L-BFGS)
    
    Author: Xiangrui Meng <meng@databricks.com>
    
    Closes #422 from mengxr/mllib-doc and squashes the following commits:
    
    944e3a9 [Xiangrui Meng] merge master
    f9fda28 [Xiangrui Meng] minor
    9474065 [Xiangrui Meng] add alpha to ALS examples
    928e630 [Xiangrui Meng] initialization_mode -> initializationMode
    5bbff49 [Xiangrui Meng] add imports to labeled point examples
    c17440d [Xiangrui Meng] fix python nb example
    28f40dc [Xiangrui Meng] remove localhost:4000
    369a4d3 [Xiangrui Meng] Merge branch 'master' into mllib-doc
    7dc95cc [Xiangrui Meng] update linear methods
    053ad8a [Xiangrui Meng] add links to go back to the main page
    abbbf7e [Xiangrui Meng] update ALS argument names
    648283e [Xiangrui Meng] level down statistics
    14e2287 [Xiangrui Meng] add sample libsvm data and use it in guide
    8cd2441 [Xiangrui Meng] minor updates
    186ab07 [Xiangrui Meng] update section names
    6568d65 [Xiangrui Meng] update toc, level up lr and svm
    162ee12 [Xiangrui Meng] rename section names
    5c1e1b1 [Xiangrui Meng] minor
    8aeaba1 [Xiangrui Meng] wrap long lines
    6ce6a6f [Xiangrui Meng] add summary statistics to toc
    5760045 [Xiangrui Meng] claim beta
    cc604bf [Xiangrui Meng] remove classification and regression
    92747b3 [Xiangrui Meng] make section titles consistent
    e605dd6 [Xiangrui Meng] add LIBSVM loader
    f639674 [Xiangrui Meng] add python section to migration guide
    c82ffb4 [Xiangrui Meng] clean optimization
    31660eb [Xiangrui Meng] update linear algebra and stat
    0a40837 [Xiangrui Meng] first pass over linear methods
    1fc8271 [Xiangrui Meng] update toc
    906ed0a [Xiangrui Meng] add a python example to naive bayes
    5f0a700 [Xiangrui Meng] update collaborative filtering
    656d416 [Xiangrui Meng] update mllib-clustering
    86e143a [Xiangrui Meng] remove data types section from main page
    8d1a128 [Xiangrui Meng] move part of linear algebra to data types and add Java/Python examples
    d1b5cbf [Xiangrui Meng] merge master
    72e4804 [Xiangrui Meng] one pass over tree guide
    64f8995 [Xiangrui Meng] move decision tree guide to a separate file
    9fca001 [Xiangrui Meng] add first version of linear algebra guide
    53c9552 [Xiangrui Meng] update dependencies
    f316ec2 [Xiangrui Meng] add migration guide
    f399f6c [Xiangrui Meng] move linear-algebra to dimensionality-reduction
    182460f [Xiangrui Meng] add guide for naive Bayes
    137fd1d [Xiangrui Meng] re-organize toc
    a61e434 [Xiangrui Meng] update mllib's toc
---
layout: global
title: <a href="mllib-guide.html">MLlib</a> - Optimization
---

* Table of contents
{:toc}

\[ \newcommand{\R}{\mathbb{R}} \newcommand{\E}{\mathbb{E}} \newcommand{\x}{\mathbf{x}} \newcommand{\y}{\mathbf{y}} \newcommand{\wv}{\mathbf{w}} \newcommand{\av}{\mathbf{\alpha}} \newcommand{\bv}{\mathbf{b}} \newcommand{\N}{\mathbb{N}} \newcommand{\id}{\mathbf{I}} \newcommand{\ind}{\mathbf{1}} \newcommand{\0}{\mathbf{0}} \newcommand{\unit}{\mathbf{e}} \newcommand{\one}{\mathbf{1}} \newcommand{\zero}{\mathbf{0}} \]

## Mathematical description

### Gradient descent

The simplest method to solve optimization problems of the form $\min_{\wv \in\R^d} \; f(\wv)$ is gradient descent. Such first-order optimization methods (including gradient descent and stochastic variants thereof) are well-suited for large-scale and distributed computation.

Gradient descent methods aim to find a local minimum of a function by iteratively taking steps in the direction of steepest descent, which is the negative of the derivative (called the gradient) of the function at the current point, i.e., at the current parameter value. If the objective function $f$ is not differentiable at all arguments, but still convex, then a sub-gradient is the natural generalization of the gradient, and assumes the role of the step direction. In any case, computing a gradient or sub-gradient of $f$ is expensive --- it requires a full pass through the complete dataset, in order to compute the contributions from all loss terms.
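
Concretely, writing $\gamma > 0$ for a step-size, one full (batch) gradient descent iteration updates the current iterate $\wv^{(t)}$ as
\[
\wv^{(t+1)} := \wv^{(t)} - \gamma \; f'_{\wv^{(t)}} \ ,
\]
where $f'_{\wv^{(t)}}$ is a (sub)gradient of $f$ at $\wv^{(t)}$. The stochastic variants discussed next keep exactly this form of update but replace the exact (sub)gradient by a much cheaper unbiased estimate of it.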

### Stochastic gradient descent (SGD)

Optimization problems whose objective function $f$ is written as a sum are particularly suitable to be solved using stochastic gradient descent (SGD). In our case, for the optimization formulations commonly used in supervised machine learning, \begin{equation} f(\wv) := \lambda\, R(\wv) + \frac1n \sum_{i=1}^n L(\wv;\x_i,y_i) \label{eq:regPrimal} \ . \end{equation} this is especially natural, because the loss is written as an average of the individual losses coming from each datapoint.

A stochastic subgradient is a randomized choice of a vector, such that in expectation, we obtain a true subgradient of the original objective function. Picking one datapoint $i\in[1..n]$ uniformly at random, we obtain a stochastic subgradient of $\eqref{eq:regPrimal}$, with respect to $\wv$ as follows: \[ f'_{\wv,i} := L'_{\wv,i} + \lambda\, R'_\wv \ , \] where $L'_{\wv,i} \in \R^d$ is a subgradient of the part of the loss function determined by the $i$-th datapoint, that is $L'_{\wv,i} \in \frac{\partial}{\partial \wv} L(\wv;\x_i,y_i)$. Furthermore, $R'_\wv$ is a subgradient of the regularizer $R(\wv)$, i.e. $R'_\wv \in \frac{\partial}{\partial \wv} R(\wv)$. The term $R'_\wv$ does not depend on which random datapoint is picked. Clearly, in expectation over the random choice of $i\in[1..n]$, we have that $f'_{\wv,i}$ is a subgradient of the original objective $f$, meaning that $\E\left[f'_{\wv,i}\right] \in \frac{\partial}{\partial \wv} f(\wv)$.

Running SGD now simply becomes walking in the direction of the negative stochastic subgradient $f'_{\wv,i}$, that is
\begin{equation}\label{eq:SGDupdate}
\wv^{(t+1)} := \wv^{(t)} - \gamma \; f'_{\wv,i} \ .
\end{equation}

**Step-size.** The parameter $\gamma$ is the step-size, which in the default implementation is chosen decreasing with the square root of the iteration counter, i.e. $\gamma := \frac{s}{\sqrt{t}}$ in the $t$-th iteration, with the input parameter $s=$ `stepSize`. Note that selecting the best step-size for SGD methods can often be delicate in practice and is a topic of active research.
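
To make the update $\eqref{eq:SGDupdate}$ and the default step-size schedule concrete, here is a minimal single-machine sketch in plain Scala for the unregularized least-squares objective. All names are purely illustrative and not part of the MLlib API:

{% highlight scala %}
import scala.util.Random

// Toy SGD for least squares, f(w) = (1/n) * sum_i (w . x_i - y_i)^2 / 2, without a regularizer.
// Illustrative only; not the MLlib implementation.
object ToySGD {
  def run(
      data: Array[(Array[Double], Double)],  // (features x_i, label y_i)
      stepSize: Double,                      // the constant s in gamma = s / sqrt(t)
      numIterations: Int): Array[Double] = {
    val d = data.head._1.length
    val w = Array.fill(d)(0.0)
    val rnd = new Random(42)
    for (t <- 1 to numIterations) {
      val (x, y) = data(rnd.nextInt(data.length))               // pick one datapoint uniformly at random
      val pred = w.zip(x).map { case (wj, xj) => wj * xj }.sum  // w . x
      val gamma = stepSize / math.sqrt(t)                       // decreasing step-size gamma = s / sqrt(t)
      // stochastic (sub)gradient of the squared loss at (x_i, y_i) is (w . x_i - y_i) * x_i
      for (j <- 0 until d) {
        w(j) -= gamma * (pred - y) * x(j)
      }
    }
    w
  }
}
{% endhighlight %}

MLlib's own implementation differs mainly in that the gradient computation is distributed over an RDD, as described below.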

**Gradients.** A table of (sub)gradients of the machine learning methods implemented in MLlib is available in the classification and regression section.

**Proximal Updates.** As an alternative to using just the subgradient $R'(\wv)$ of the regularizer in the step direction, an improved update can be obtained in some cases by using the proximal operator instead. For the L1 regularizer, the proximal operator is given by soft thresholding, as implemented in `L1Updater`.
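
Concretely, soft thresholding shrinks each weight toward zero by a fixed amount (typically the current step-size times the regularization parameter) and clips at zero. A minimal coordinate-wise sketch, not the actual `L1Updater` code, looks like:

{% highlight scala %}
// Soft thresholding: the proximal operator of the L1 norm, applied coordinate-wise.
// `shrinkage` would typically be the current step-size times regParam.
// Illustrative sketch only, not the MLlib L1Updater implementation.
def softThreshold(weights: Array[Double], shrinkage: Double): Array[Double] =
  weights.map(wj => math.signum(wj) * math.max(math.abs(wj) - shrinkage, 0.0))
{% endhighlight %}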

### Update schemes for distributed SGD

The SGD implementation in `GradientDescent` uses a simple (distributed) sampling of the data examples. We recall that the loss part of the optimization problem $\eqref{eq:regPrimal}$ is $\frac1n \sum_{i=1}^n L(\wv;\x_i,y_i)$, and therefore $\frac1n \sum_{i=1}^n L'_{\wv,i}$ would be the true (sub)gradient. Since this would require access to the full data set, the parameter `miniBatchFraction` specifies which fraction of the full data to use instead. The average of the gradients over this subset, i.e. \[ \frac1{|S|} \sum_{i\in S} L'_{\wv,i} \ , \] is a stochastic gradient. Here $S$ is the sampled subset of size $|S|=$ `miniBatchFraction` $\cdot n$.

In each iteration, the sampling over the distributed dataset (RDD), as well as the computation of the sum of the partial results from each worker machine, is performed by the standard Spark routines.

If the fraction of points `miniBatchFraction` is set to 1 (the default), then the resulting step in each iteration is exact (sub)gradient descent. In this case there is no randomness and no variance in the used step directions. At the other extreme, if `miniBatchFraction` is chosen very small, such that only a single point is sampled, i.e. $|S|=$ `miniBatchFraction` $\cdot n = 1$, then the algorithm is equivalent to standard SGD. In that case, the step direction depends on the uniformly random sampling of the point.
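
In Spark terms, one such mini-batch (sub)gradient can be sketched by sampling the RDD and averaging per-example gradients. The `gradientAt` function and the other names below are placeholders, and the actual logic inside `GradientDescent` differs in its details:

{% highlight scala %}
import org.apache.spark.rdd.RDD

// Compute one mini-batch stochastic (sub)gradient, sketched with plain RDD operations.
// `gradientAt(w, (label, features))` returns the per-example (sub)gradient L'_{w,i};
// it is a placeholder, not part of the MLlib API. Assumes the sample is non-empty.
def miniBatchGradient(
    data: RDD[(Double, Array[Double])],  // (label, features) pairs
    weights: Array[Double],
    miniBatchFraction: Double,
    gradientAt: (Array[Double], (Double, Array[Double])) => Array[Double]): Array[Double] = {
  val sampled = data.sample(false, miniBatchFraction, 42)  // fixed seed only for illustration
  val (gradSum, count) = sampled
    .map(example => (gradientAt(weights, example), 1L))
    .reduce { case ((g1, n1), (g2, n2)) =>
      (g1.zip(g2).map { case (a, b) => a + b }, n1 + n2)
    }
  gradSum.map(_ / count)  // average over the sampled subset S
}
{% endhighlight %}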

## Implementation in MLlib

Gradient descent methods, including stochastic subgradient descent (SGD), are included as a low-level primitive in MLlib, upon which various ML algorithms are developed; see the linear methods section for an example.

The SGD method `GradientDescent.runMiniBatchSGD` has the following parameters (a usage sketch follows the list):

* `gradient` is a class that computes the stochastic gradient of the function being optimized, i.e., with respect to a single training example, at the current parameter value. MLlib includes gradient classes for common loss functions, e.g., hinge, logistic, least-squares. The gradient class takes as input a training example, its label, and the current parameter value.
* `updater` is a class that performs the actual gradient descent step, i.e. updating the weights in each iteration, for a given gradient of the loss part. The updater is also responsible for performing the update from the regularization part. MLlib includes updaters for cases without regularization, as well as L1 and L2 regularizers.
* `stepSize` is a scalar value denoting the initial step size for gradient descent. All updaters in MLlib use a step size at the t-th step equal to `stepSize` $/ \sqrt{t}$.
* `numIterations` is the number of iterations to run.
* `regParam` is the regularization parameter when using L1 or L2 regularization.
* `miniBatchFraction` is the fraction of the total data that is sampled in each iteration, to compute the gradient direction.
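
Putting these parameters together, a direct call could look roughly as follows. This is only a hedged sketch: it assumes an existing `RDD[(Double, Vector)]` of (label, features) pairs named `trainingData` with 0/1 labels, and the exact argument list and return value may differ between Spark versions:

{% highlight scala %}
import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.mllib.optimization.{GradientDescent, LogisticGradient, SquaredL2Updater}
import org.apache.spark.rdd.RDD

// trainingData is assumed to exist: an RDD of (label, feature vector) pairs with 0/1 labels.
val trainingData: RDD[(Double, Vector)] = ???
val numFeatures = 10  // illustrative; must match the dimension of the feature vectors

val (weights, lossHistory) = GradientDescent.runMiniBatchSGD(
  trainingData,
  new LogisticGradient(),  // gradient for the logistic loss
  new SquaredL2Updater(),  // updater performing L2-regularized updates
  1.0,                     // stepSize
  100,                     // numIterations
  0.1,                     // regParam
  1.0,                     // miniBatchFraction: 1.0 means the exact (sub)gradient is used
  Vectors.dense(new Array[Double](numFeatures)))  // initial weights, here all zeros
{% endhighlight %}

In practice one would normally not call the optimizer directly but rather use it through the linear method classes, as described in the linear methods section.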

Available algorithms for gradient descent: