From b458854977c437e85fd89056e5d40383c8fa962e Mon Sep 17 00:00:00 2001
From: Matei Zaharia <matei@eecs.berkeley.edu>
Date: Sun, 8 Sep 2013 21:25:49 -0700
Subject: [PATCH] Fix some review comments

---
 docs/cluster-overview.md | 2 +-
 docs/quick-start.md      | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/cluster-overview.md b/docs/cluster-overview.md
index cf6b48c05e..7025c23657 100644
--- a/docs/cluster-overview.md
+++ b/docs/cluster-overview.md
@@ -80,7 +80,7 @@ The following table summarizes terms you'll see used to refer to cluster concept
 <tbody>
 <tr>
   <td>Application</td>
-  <td>Any user program invoking Spark</td>
+  <td>User program built on Spark. Consists of a <em>driver program</em> and <em>executors</em> on the cluster.</td>
 </tr>
 <tr>
   <td>Driver program</td>
diff --git a/docs/quick-start.md b/docs/quick-start.md
index 1b069ce982..8f782db5b8 100644
--- a/docs/quick-start.md
+++ b/docs/quick-start.md
@@ -36,7 +36,7 @@ scala> textFile.count() // Number of items in this RDD
 res0: Long = 74
 
 scala> textFile.first() // First item in this RDD
-res1: String = Welcome to the Spark documentation!
+res1: String = # Apache Spark
 {% endhighlight %}
 
 Now let's use a transformation. We will use the [`filter`](scala-programming-guide.html#transformations) transformation to return a new RDD with a subset of the items in the file.
--
GitLab