Commit 32fa611b authored by Dice, committed by Sean Owen

[SPARK-7704] Updating Programming Guides per SPARK-4397

The change for SPARK-4397 lets the compiler find the implicit objects in SparkContext automatically, so we no longer need to import o.a.s.SparkContext._ explicitly and can remove the statements about "implicit conversions" from the latest Programming Guides (1.3.0 and higher).

Author: Dice <poleon.kd@gmail.com>

Closes #6234 from daisukebe/patch-1 and squashes the following commits:

b77ecd9 [Dice] fix a typo
45dfcd3 [Dice] rewording per Sean's advice
a094bcf [Dice] Adding a note for users on any previous releases
a29be5f [Dice] Updating Programming Guides per SPARK-4397
parent 6845cb2f
@@ -41,14 +41,15 @@ In addition, if you wish to access an HDFS cluster, you need to add a dependency

     artifactId = hadoop-client
     version = <your-hdfs-version>

-Finally, you need to import some Spark classes and implicit conversions into your program. Add the following lines:
+Finally, you need to import some Spark classes into your program. Add the following lines:

 {% highlight scala %}
 import org.apache.spark.SparkContext
-import org.apache.spark.SparkContext._
 import org.apache.spark.SparkConf
 {% endhighlight %}

+(Before Spark 1.3.0, you need to explicitly `import org.apache.spark.SparkContext._` to enable essential implicit conversions.)
+
 </div>

 <div data-lang="java" markdown="1">
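The hunk above relies on the SPARK-4397 change: once the conversions live in the implicit scope of the RDD types, the compiler finds them without any explicit import. A minimal plain-Scala sketch of that mechanism (the names `Rdd`, `PairOps`, and `toPairOps` are hypothetical stand-ins, not Spark's actual internals):

```scala
import scala.language.implicitConversions

class Rdd[T](val data: Seq[T])

object Rdd {
  // The conversion lives in Rdd's companion object, so it is in the
  // implicit scope of Rdd and is found with no explicit import --
  // the same idea as SPARK-4397.
  implicit def toPairOps[K, V](rdd: Rdd[(K, V)]): PairOps[K, V] =
    new PairOps(rdd)
}

class PairOps[K, V](rdd: Rdd[(K, V)]) {
  // Toy reduceByKey over an in-memory Seq, for illustration only.
  def reduceByKey(f: (V, V) => V): Map[K, V] =
    rdd.data.groupBy(_._1).map { case (k, vs) => k -> vs.map(_._2).reduce(f) }
}

object ImplicitScopeDemo {
  def main(args: Array[String]): Unit = {
    // reduceByKey resolves via the companion-object implicit, no import needed.
    val counts = new Rdd(Seq(("a", 1), ("b", 1), ("a", 1))).reduceByKey(_ + _)
    println(counts)
  }
}
```

Before 1.3.0 the conversion was only on the `SparkContext` object, which is not in any RDD's implicit scope, hence the old `import org.apache.spark.SparkContext._` requirement.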
@@ -821,11 +822,9 @@ by a key.

 In Scala, these operations are automatically available on RDDs containing
 [Tuple2](http://www.scala-lang.org/api/{{site.SCALA_VERSION}}/index.html#scala.Tuple2) objects
-(the built-in tuples in the language, created by simply writing `(a, b)`), as long as you
-import `org.apache.spark.SparkContext._` in your program to enable Spark's implicit
-conversions. The key-value pair operations are available in the
+(the built-in tuples in the language, created by simply writing `(a, b)`). The key-value pair operations are available in the
 [PairRDDFunctions](api/scala/index.html#org.apache.spark.rdd.PairRDDFunctions) class,
-which automatically wraps around an RDD of tuples.
+which automatically wraps around an RDD of tuples.

 For example, the following code uses the `reduceByKey` operation on key-value pairs to count how
 many times each line of text occurs in a file:
......
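The guide's elided example counts line occurrences with `reduceByKey`. As a hedged plain-Scala sketch of the same computation (no Spark dependency; the inlined `lines` value is an assumption standing in for lines read from a file):

```scala
object LineCountDemo {
  // Pair each line with 1, then reduce by key -- the same shape as
  // `pairs.reduceByKey((a, b) => a + b)` over an RDD in the guide.
  def countLines(lines: Seq[String]): Map[String, Int] =
    lines.map(line => (line, 1))
      .groupBy(_._1)
      .map { case (line, ones) => line -> ones.map(_._2).sum }

  def main(args: Array[String]): Unit = {
    // Stand-in for lines read from a text file.
    val counts = countLines(Seq("to be", "or not", "to be"))
    println(counts)
  }
}
```

With Spark 1.3.0 or later, the equivalent RDD version needs no extra import: `reduceByKey` is available on any RDD of pairs via `PairRDDFunctions`.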