diff --git a/docs/cluster-overview.md b/docs/cluster-overview.md
index cf6b48c05eb5fdf19ae078b085b817fde9928081..7025c236574b63d3c6179472c720463a8b4b609f 100644
--- a/docs/cluster-overview.md
+++ b/docs/cluster-overview.md
@@ -80,7 +80,7 @@ The following table summarizes terms you'll see used to refer to cluster concept
   <tbody>
     <tr>
       <td>Application</td>
-      <td>Any user program invoking Spark</td>
+      <td>User program built on Spark. It consists of a <em>driver program</em> and <em>executors</em> on the cluster.</td>
     </tr>
     <tr>
       <td>Driver program</td>
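(Aside on the revised glossary entry, not part of the patch: a minimal sketch of what such an application's driver program looks like, assuming a Spark version with the `org.apache.spark` package and the `SparkConf` API; the app name, master URL, and computation are illustrative.)

{% highlight scala %}
import org.apache.spark.{SparkConf, SparkContext}

// The driver program runs the application's main() and creates the SparkContext.
object ExampleApp {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("ExampleApp").setMaster("local[2]")
    val sc = new SparkContext(conf) // connects to the cluster and acquires executors

    // Tasks produced by this computation are sent to the executors.
    val data = sc.parallelize(1 to 1000)
    println(data.filter(_ % 2 == 0).count())

    sc.stop()
  }
}
{% endhighlight %}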
diff --git a/docs/quick-start.md b/docs/quick-start.md
index 1b069ce9829fdb9a65ba76ee0faa8750f03689b6..8f782db5b822b5a60a22c94be84b01cbb5bf9eeb 100644
--- a/docs/quick-start.md
+++ b/docs/quick-start.md
@@ -36,7 +36,7 @@ scala> textFile.count() // Number of items in this RDD
 res0: Long = 74
 
 scala> textFile.first() // First item in this RDD
-res1: String = Welcome to the Spark documentation!
+res1: String = # Apache Spark
 {% endhighlight %}
 
 Now let's use a transformation. We will use the [`filter`](scala-programming-guide.html#transformations) transformation to return a new RDD with a subset of the items in the file.
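(For reference, not part of the patch: the `filter` step that this context line leads into, as a minimal Spark-shell sketch; `textFile` is the RDD from the transcript above, the predicate is illustrative, and shell output is elided.)

{% highlight scala %}
scala> // filter is a transformation: it is lazy and returns a new RDD
scala> val linesWithSpark = textFile.filter(line => line.contains("Spark"))

scala> linesWithSpark.count() // count() is an action; it triggers evaluation
{% endhighlight %}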