Commit 9ee1e9db authored by Matei Zaharia

Doc improvements

parent 141f5427
@@ -315,7 +315,7 @@ Apart from these, the following properties are also available, and may be useful

# Environment Variables

Certain Spark settings can also be configured through environment variables, which are read from the `conf/spark-env.sh`
script in the directory where Spark is installed (or `conf/spark-env.cmd` on Windows). These variables are meant for
machine-specific settings, such as library search paths. While Java system properties can also be set here, for
application settings, we recommend setting these properties within the application instead of in `spark-env.sh` so that
different applications can use different settings.
@@ -325,6 +325,8 @@ Note that `conf/spark-env.sh` does not exist by default when Spark is installed.

The following variables can be set in `spark-env.sh`:
* `JAVA_HOME`, the location where Java is installed (if it's not on your default `PATH`)
* `PYSPARK_PYTHON`, the Python binary to use for PySpark
* `SPARK_LOCAL_IP`, to configure which IP address of the machine to bind to.
* `SPARK_LIBRARY_PATH`, to add search directories for native libraries.
* `SPARK_CLASSPATH`, to add elements to Spark's classpath that you want to be present for _all_ applications.
...
@@ -11,6 +11,8 @@ Spark can run on the Apache Mesos cluster manager, Hadoop YARN, Amazon EC2, or w

Get Spark by visiting the [downloads page](http://spark.incubator.apache.org/downloads.html) of the Apache Spark site. This documentation is for Spark version {{site.SPARK_VERSION}}.
Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS). All you need to run it is to have `java` installed on your system `PATH`, or the `JAVA_HOME` environment variable pointing to a Java installation.
# Building

Spark uses [Simple Build Tool](http://www.scala-sbt.org), which is bundled with it. To compile the code, go into the top-level Spark directory and run

@@ -50,6 +52,8 @@ In addition, if you wish to run Spark on [YARN](running-on-yarn.md), set

    SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt/sbt assembly
(Note that on Windows, you need to set the environment variables on separate lines, e.g., `set SPARK_HADOOP_VERSION=1.2.1`.)
# Where to Go from Here

**Programming guides:**

...
@@ -4,7 +4,7 @@ title: Python Programming Guide
---

The Spark Python API (PySpark) exposes the Spark programming model to Python.
To learn the basics of Spark, we recommend reading through the
[Scala programming guide](scala-programming-guide.html) first; it should be
easy to follow even if you don't know Scala.

@@ -15,12 +15,8 @@ This guide will show how to use the Spark features described there in Python.

There are a few key differences between the Python and Scala APIs:
* Python is dynamically typed, so RDDs can hold objects of multiple types (see the short sketch after this list).
* PySpark does not yet support a few API calls, such as `lookup`, `sort`, and `persist` at custom storage levels. See the [API docs](api/pyspark/index.html) for details.
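As a minimal sketch of the first point (assuming `sc` is an existing `SparkContext`, as in the examples below), a single RDD can mix value types freely:

{% highlight python %}
# An RDD holding values of several Python types in the same collection
mixed = sc.parallelize([1, "two", 3.0, ("four", 4)])
names = mixed.map(lambda x: type(x).__name__).collect()   # ['int', 'str', 'float', 'tuple']
{% endhighlight %}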
In PySpark, RDDs support the same methods as their Scala counterparts but take Python functions and return Python collection types.
Short functions can be passed to RDD methods using Python's [`lambda`](http://www.diveintopython.net/power_of_introspection/lambda_functions.html) syntax:

@@ -30,7 +26,7 @@ logData = sc.textFile(logFile).cache()
errors = logData.filter(lambda line: "ERROR" in line)
{% endhighlight %}

You can also pass functions that are defined with the `def` keyword; this is useful for longer functions that can't be expressed using `lambda`:

{% highlight python %}
def is_error(line):
@@ -38,7 +34,7 @@ def is_error(line):
errors = logData.filter(is_error)
{% endhighlight %}

Functions can access objects in enclosing scopes, although modifications to those objects within RDD methods will not be propagated back:

{% highlight python %}
error_keywords = ["Exception", "Error"]
@@ -51,17 +47,20 @@ PySpark will automatically ship these functions to workers, along with any objec

Instances of classes will be serialized and shipped to workers by PySpark, but classes themselves cannot be automatically distributed to workers.
The [Standalone Use](#standalone-use) section describes how to ship code dependencies to workers.
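As a sketch of this pattern (the module, class, and method names here are hypothetical):

{% highlight python %}
# matchers.py must be shipped to the workers (e.g. via the pyFiles argument to SparkContext
# or sc.addPyFile), since only the *instance* below is pickled, not the class definition.
from matchers import ErrorMatcher   # hypothetical class defined in matchers.py

matcher = ErrorMatcher("ERROR")
errors = logData.filter(lambda line: matcher.matches(line))
{% endhighlight %}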
In addition, PySpark fully supports interactive use---simply run `./pyspark` to launch an interactive shell.
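For instance, a quick session in the shell (assuming it provides a `SparkContext` as `sc`; the file name is illustrative) might look like:

{% highlight python %}
# Split a text file into words and count how often "Spark" appears
words = sc.textFile("README.md").flatMap(lambda line: line.split())
words.filter(lambda w: w == "Spark").count()
{% endhighlight %}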
# Installing and Configuring PySpark

PySpark requires Python 2.6 or higher.
PySpark jobs are executed using a standard CPython interpreter in order to support Python modules that use C extensions.
We have not tested PySpark with Python 3 or with alternative Python interpreters, such as [PyPy](http://pypy.org/) or [Jython](http://www.jython.org/).
By default, PySpark requires `python` to be available on the system `PATH` and uses it to run programs; an alternate Python executable may be specified by setting the `PYSPARK_PYTHON` environment variable in `conf/spark-env.sh` (or `.cmd` on Windows).
All of PySpark's library dependencies, including [Py4J](http://py4j.sourceforge.net/), are bundled with PySpark and automatically imported.

Standalone PySpark jobs should be run using the `pyspark` script, which automatically configures the Java and Python environment using the settings in `conf/spark-env.sh` or `.cmd`.
The script automatically adds the `pyspark` package to the `PYTHONPATH`.
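As a hedged sketch, a minimal standalone job might look like the following (the file name, master URL, and input path are illustrative); it would be launched with `./pyspark my_job.py`:

{% highlight python %}
# my_job.py: count lines mentioning "ERROR" in a log file
from pyspark import SparkContext

sc = SparkContext("local", "My Job")
logData = sc.textFile("logs.txt")
print logData.filter(lambda line: "ERROR" in line).count()
{% endhighlight %}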
@@ -101,7 +100,7 @@ $ MASTER=local[4] ./pyspark

## IPython

It is also possible to launch PySpark in [IPython](http://ipython.org), the enhanced Python interpreter.
To do this, set the `IPYTHON` variable to `1` when running `pyspark`:

{% highlight bash %}
$ IPYTHON=1 ./pyspark

@@ -132,15 +131,16 @@ sc = SparkContext("local", "Job Name", pyFiles=['MyFile.py', 'lib.zip', 'app.egg

Files listed here will be added to the `PYTHONPATH` and shipped to remote worker machines.
Code dependencies can be added to an existing SparkContext using its `addPyFile()` method.
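For example, a short sketch of `addPyFile()` (the file and helper-function names are hypothetical):

{% highlight python %}
sc.addPyFile("helpers.py")          # ship helpers.py to the workers

def clean(line):
    import helpers                  # importable on workers once helpers.py is on their PYTHONPATH
    return helpers.normalize(line)  # hypothetical helper function

cleaned = sc.textFile("logs.txt").map(clean)
{% endhighlight %}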
# API Docs
[API documentation](api/pyspark/index.html) for PySpark is available as Epydoc.
Many of the methods also contain [doctests](http://docs.python.org/2/library/doctest.html) that provide additional usage examples.
# Where to Go from Here

PySpark also includes several sample programs in the [`python/examples` folder](https://github.com/apache/incubator-spark/tree/master/python/examples).
You can run them by passing the files to `pyspark`; e.g.:

    ./pyspark python/examples/wordcount.py

Each program prints usage help when run without arguments.