diff --git a/docs/building-spark.md b/docs/building-spark.md
index e478954c6267b88d76929b5d06e7ec7f2aba8c88..1e202acb9e2cfdb92990cc798ac75694b9c55d44 100644
--- a/docs/building-spark.md
+++ b/docs/building-spark.md
@@ -98,8 +98,11 @@ mvn -Pyarn -Phadoop-2.3 -Dhadoop.version=2.3.0 -DskipTests clean package
 # Apache Hadoop 2.4.X or 2.5.X
 mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=VERSION -DskipTests clean package
 
-Versions of Hadoop after 2.5.X may or may not work with the -Phadoop-2.4 profile (they were
-released after this version of Spark).
+# Apache Hadoop 2.6.X
+mvn -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -DskipTests clean package
+
+# Apache Hadoop 2.7.X and later
+mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=VERSION -DskipTests clean package
 
 # Different versions of HDFS and YARN.
 mvn -Pyarn -Phadoop-2.3 -Dhadoop.version=2.3.0 -Dyarn.version=2.2.0 -DskipTests clean package
@@ -140,10 +143,10 @@ It's possible to build Spark sub-modules using the `mvn -pl` option.
 
 For instance, you can build the Spark Streaming module using:
 
 {% highlight bash %}
-mvn -pl :spark-streaming_2.10 clean install
+mvn -pl :spark-streaming_2.11 clean install
 {% endhighlight %}
 
-where `spark-streaming_2.10` is the `artifactId` as defined in `streaming/pom.xml` file.
+where `spark-streaming_2.11` is the `artifactId` as defined in `streaming/pom.xml` file.
 
 # Continuous Compilation
diff --git a/docs/index.md b/docs/index.md
index 9dfc52a2bdc9bd998d3a7abc823fd4cef76ec228..20eab567a50df07e36936e9514cd9151d48456dd 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -130,8 +130,8 @@ options for deployment:
 * [StackOverflow tag `apache-spark`](http://stackoverflow.com/questions/tagged/apache-spark)
 * [Mailing Lists](http://spark.apache.org/mailing-lists.html): ask questions about Spark here
 * [AMP Camps](http://ampcamp.berkeley.edu/): a series of training camps at UC Berkeley that featured talks and
-  exercises about Spark, Spark Streaming, Mesos, and more. [Videos](http://ampcamp.berkeley.edu/3/),
-  [slides](http://ampcamp.berkeley.edu/3/) and [exercises](http://ampcamp.berkeley.edu/3/exercises/) are
+  exercises about Spark, Spark Streaming, Mesos, and more. [Videos](http://ampcamp.berkeley.edu/6/),
+  [slides](http://ampcamp.berkeley.edu/6/) and [exercises](http://ampcamp.berkeley.edu/6/exercises/) are
   available online for free.
 * [Code Examples](http://spark.apache.org/examples.html): more are also available in the `examples` subfolder of Spark ([Scala]({{site.SPARK_GITHUB_URL}}/tree/master/examples/src/main/scala/org/apache/spark/examples),
 [Java]({{site.SPARK_GITHUB_URL}}/tree/master/examples/src/main/java/org/apache/spark/examples),
diff --git a/docs/running-on-mesos.md b/docs/running-on-mesos.md
index 3a832de95f10db0fcc378453d247ca7e28999dff..293a82882e41294fa9428ce131fac815571b09f9 100644
--- a/docs/running-on-mesos.md
+++ b/docs/running-on-mesos.md
@@ -167,8 +167,8 @@ For example:
 ./bin/spark-submit \
   --class org.apache.spark.examples.SparkPi \
   --master mesos://207.184.161.138:7077 \
-  --deploy-mode cluster
-  --supervise
+  --deploy-mode cluster \
+  --supervise \
   --executor-memory 20G \
   --total-executor-cores 100 \
   http://path/to/examples.jar \
diff --git a/docs/running-on-yarn.md b/docs/running-on-yarn.md
index 8045f8c5b84837090c12c1c2c2ac14f1e89d1ad6..c775fe710ffd593de1ff7e69aef27533164c467c 100644
--- a/docs/running-on-yarn.md
+++ b/docs/running-on-yarn.md
@@ -49,8 +49,8 @@ In `cluster` mode, the driver runs on a different machine than the client, so `S
 $ ./bin/spark-submit --class my.main.Class \
     --master yarn \
     --deploy-mode cluster \
-    --jars my-other-jar.jar,my-other-other-jar.jar \
-    my-main-jar.jar
+    --jars my-other-jar.jar,my-other-other-jar.jar \
+    my-main-jar.jar \
     app_arg1 app_arg2
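The `running-on-mesos.md` and `running-on-yarn.md` hunks fix the same class of bug: missing trailing backslashes, which end the `spark-submit` command early so the remaining options are executed as separate (nonexistent) shell commands. A minimal sketch of why the continuations matter, using a hypothetical stub `cmd` function in place of `spark-submit` (not the real tool):

```shell
#!/bin/sh
# Stub standing in for spark-submit: reports how many arguments reached it.
cmd() { echo "$#"; }

# With trailing backslashes the shell joins the lines into one command,
# so every option reaches cmd.
joined=$(cmd --deploy-mode cluster \
  --supervise \
  --executor-memory 20G)
echo "arguments seen with continuations: $joined"
# prints "arguments seen with continuations: 5"

# Without the backslash after "cluster" (as in the pre-fix docs), the command
# would end at that line; "--supervise" would then run as a separate command
# and fail with "command not found".
```

The same reasoning applies to the YARN example: without the backslash after `my-main-jar.jar`, `app_arg1 app_arg2` would never be passed to `spark-submit` as application arguments.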