[MINOR][DOC] Fix supported hive version in doc
    Dongjoon Hyun authored
    ## What changes were proposed in this pull request?
    
Today, Spark 1.6.1 and its updated docs were released. Unfortunately, the docs still carry obsolete Hive version information: [Building Spark](http://spark.apache.org/docs/latest/building-spark.html#building-with-hive-and-jdbc-support). This PR fixes the following two lines.
    ```
    -By default Spark will build with Hive 0.13.1 bindings.
    +By default Spark will build with Hive 1.2.1 bindings.
    -# Apache Hadoop 2.4.X with Hive 13 support
    +# Apache Hadoop 2.4.X with Hive 1.2.1 support
    ```
The `sql/README.md` file also describes this.
    
    ## How was this patch tested?
    
    Manual.
    
    
    Author: Dongjoon Hyun <dongjoon@apache.org>
    
    Closes #11639 from dongjoon-hyun/fix_doc_hive_version.
building-spark.md:

---
layout: global
title: Building Spark
redirect_from: "building-with-maven.html"
---

* This will become a table of contents (this text will be scraped).
{:toc}

Building Spark using Maven requires Maven 3.3.9 or newer and Java 7+. The Spark build can supply a suitable Maven binary; see below.
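For example, you can confirm that the local toolchain meets these minimums before starting (a minimal check, assuming `mvn` and `java` are already on your `PATH`):

{% highlight bash %}
# Both commands print version information; the build needs
# Maven 3.3.9 or newer and Java 7 or newer.
mvn -version
java -version
{% endhighlight %}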

## Building with `build/mvn`

Spark now comes packaged with a self-contained Maven installation, located under the `build/` directory, to ease building and deploying Spark from source. The `build/mvn` script automatically downloads and sets up all necessary build requirements (Maven, Scala, and Zinc) locally within the `build/` directory itself. It honors any `mvn` binary already present, but pulls down its own copy of Scala and Zinc regardless, to ensure the proper version requirements are met. `build/mvn` acts as a pass-through to the `mvn` call, allowing an easy transition from previous build methods. As an example, one can build a version of Spark as follows:

{% highlight bash %}
build/mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package
{% endhighlight %}

Other build examples can be found below.

Note: When building on an encrypted filesystem (for example, if your home directory is encrypted), the Spark build might fail with a "Filename too long" error. As a workaround, add the following to the configuration args of the scala-maven-plugin in the project `pom.xml`:

    <arg>-Xmax-classfile-name</arg>
    <arg>128</arg>

and in `project/SparkBuild.scala` add:

    scalacOptions in Compile ++= Seq("-Xmax-classfile-name", "128"),

to the `sharedSettings` val. See also this PR if you are unsure of where to add these lines.

## Building a Runnable Distribution

To create a Spark distribution like those distributed by the [Spark Downloads](http://spark.apache.org/downloads.html) page, and that is laid out so as to be runnable, use `./dev/make-distribution.sh` in the project root directory. It can be configured with Maven profile settings and so on, like the direct Maven build. Example:

    ./dev/make-distribution.sh --name custom-spark --tgz -Psparkr -Phadoop-2.4 -Phive -Phive-thriftserver -Pyarn

For more information on usage, run `./dev/make-distribution.sh --help`.
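As a quick sanity check, you can list the archive's contents afterwards; the file name below is hypothetical, since the actual name depends on your Spark version and the `--name` flag you passed:

{% highlight bash %}
# Hypothetical archive name: make-distribution.sh names the tarball
# after the Spark version and the --name flag.
tar tzf spark-1.6.1-bin-custom-spark.tgz | head
{% endhighlight %}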

## Setting up Maven's Memory Usage

You'll need to configure Maven to use more memory than usual by setting `MAVEN_OPTS`. We recommend the following settings:

{% highlight bash %}
export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
{% endhighlight %}

If you don't run this, you may see errors like the following:

    [INFO] Compiling 203 Scala sources and 9 Java sources to /Users/me/Development/spark/core/target/scala-{{site.SCALA_BINARY_VERSION}}/classes...
    [ERROR] PermGen space -> [Help 1]

    [INFO] Compiling 203 Scala sources and 9 Java sources to /Users/me/Development/spark/core/target/scala-{{site.SCALA_BINARY_VERSION}}/classes...
    [ERROR] Java heap space -> [Help 1]

You can fix this by setting the `MAVEN_OPTS` variable as discussed before.

Note:

* For Java 8 and above this step is not required; see the sketch after this list.
* If using `build/mvn` with no `MAVEN_OPTS` set, the script will automate this for you.
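Java 8 removed the permanent generation, so the `-XX:MaxPermSize` flag is unnecessary there; a minimal sketch of an equivalent setting for Java 8:

{% highlight bash %}
# Java 8+ removed PermGen, so -XX:MaxPermSize is not needed
# (the JVM only prints a warning if it is set).
export MAVEN_OPTS="-Xmx2g -XX:ReservedCodeCacheSize=512m"
{% endhighlight %}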

## Specifying the Hadoop Version

Because HDFS is not protocol-compatible across versions, if you want to read from HDFS, you'll need to build Spark against the specific HDFS version in your environment. You can do this through the `hadoop.version` property. If unset, Spark will build against Hadoop 2.2.0 by default. Note that certain build profiles are required for particular Hadoop versions:

| Hadoop version      | Profile required |
| ------------------- | ---------------- |
| 2.2.x               | hadoop-2.2       |
| 2.3.x               | hadoop-2.3       |
| 2.4.x               | hadoop-2.4       |
| 2.6.x               | hadoop-2.6       |
| 2.7.x and later 2.x | hadoop-2.7       |

You can enable the `yarn` profile and optionally set the `yarn.version` property if it is different from `hadoop.version`. Spark only supports YARN versions 2.2.0 and later.

Examples:

{% highlight bash %}