Commit 5227043f authored by Reynold Xin

Documentation update for compression codec.

parent ad7e9d0d
@@ -35,7 +35,7 @@ for these variables.
* `SPARK_JAVA_OPTS`, to add JVM options. This includes any system properties that you'd like to pass with `-D`.
* `SPARK_CLASSPATH`, to add elements to Spark's classpath.
* `SPARK_LIBRARY_PATH`, to add search directories for native libraries.
* `SPARK_MEM`, to set the amount of memory used per node. This should be in the same format as the
  JVM's -Xmx option, e.g. `300m` or `1g`. Note that this option will soon be deprecated in favor of
  the `spark.executor.memory` system property, so we recommend using that in new code.
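Since the docs recommend the `spark.executor.memory` system property over `SPARK_MEM`, a minimal sketch of setting it programmatically may help; properties must be set before the SparkContext is created (the context-creation line is illustrative and assumes Spark on the classpath):

```java
// Sketch: configure per-executor memory via a Java system property
// instead of the SPARK_MEM environment variable.
public class MemoryConfig {
    public static void main(String[] args) {
        // Must run before the SparkContext is constructed.
        System.setProperty("spark.executor.memory", "1g");
        // JavaSparkContext sc = new JavaSparkContext(...);  // would pick this up
        System.out.println(System.getProperty("spark.executor.memory"));
    }
}
```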
@@ -77,7 +77,7 @@ there are at least five properties that you will commonly want to control:
    Class to use for serializing objects that will be sent over the network or need to be cached
    in serialized form. The default of Java serialization works with any Serializable Java object but is
    quite slow, so we recommend <a href="tuning.html">using <code>spark.KryoSerializer</code>
    and configuring Kryo serialization</a> when speed is necessary. Can be any subclass of
    <a href="api/core/index.html#spark.Serializer"><code>spark.Serializer</code></a>.
  </td>
</tr>
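As with the memory property above, Kryo can be enabled by setting `spark.serializer` before the context is created; a minimal sketch using the class name from the table:

```java
// Sketch: opt into Kryo serialization via the spark.serializer property.
public class SerializerConfig {
    public static void main(String[] args) {
        // "spark.KryoSerializer" is the serializer class named in the table above.
        System.setProperty("spark.serializer", "spark.KryoSerializer");
        System.out.println(System.getProperty("spark.serializer"));
    }
}
```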
@@ -86,7 +86,7 @@ there are at least five properties that you will commonly want to control:
  <td>(none)</td>
  <td>
    If you use Kryo serialization, set this class to register your custom classes with Kryo.
    You need to set it to a class that extends
    <a href="api/core/index.html#spark.KryoRegistrator"><code>spark.KryoRegistrator</code></a>.
    See the <a href="tuning.html#data-serialization">tuning guide</a> for more details.
  </td>
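A sketch of wiring up a registrator may be useful; the class name `mypkg.MyRegistrator` below is hypothetical, and the shape of a real registrator (which extends `spark.KryoRegistrator`) is shown only in comments since it requires Spark and Kryo on the classpath:

```java
// Sketch: point spark.kryo.registrator at a custom registrator class.
// A real registrator would look roughly like (hypothetical class names):
//
//   public class MyRegistrator implements spark.KryoRegistrator {
//       public void registerClasses(com.esotericsoftware.kryo.Kryo kryo) {
//           kryo.register(MyClass.class);
//       }
//   }
public class RegistratorConfig {
    public static void main(String[] args) {
        System.setProperty("spark.kryo.registrator", "mypkg.MyRegistrator");
    }
}
```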
@@ -180,6 +180,21 @@ Apart from these, the following properties are also available, and may be useful
    Can save substantial space at the cost of some extra CPU time.
  </td>
</tr>
<tr>
<td>spark.io.compression.codec</td>
<td>spark.io.SnappyCompressionCodec</td>
<td>
The compression codec class to use for various compressions. By default, Spark provides two
codecs: <code>spark.io.LZFCompressionCodec</code> and <code>spark.io.SnappyCompressionCodec</code>.
</td>
</tr>
<tr>
<td>spark.io.compression.snappy.block.size</td>
<td>32768</td>
<td>
    Block size (in bytes) used in Snappy compression, when the Snappy compression codec is used.
</td>
</tr>
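The two compression properties added above can likewise be set as system properties before the context is created; a minimal sketch using the default codec class and a non-default block size (65536 is an arbitrary illustrative value):

```java
// Sketch: select the compression codec and tune the Snappy block size.
public class CompressionConfig {
    public static void main(String[] args) {
        // Keep the Snappy default explicitly (spark.io.LZFCompressionCodec
        // is the documented alternative).
        System.setProperty("spark.io.compression.codec",
                           "spark.io.SnappyCompressionCodec");
        // Double the documented 32768-byte default block size.
        System.setProperty("spark.io.compression.snappy.block.size", "65536");
    }
}
```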
<tr>
  <td>spark.reducer.maxMbInFlight</td>
  <td>48</td>