diff --git a/docs/scala-programming-guide.md b/docs/scala-programming-guide.md
index a07cd2e0a32a265876e39dc28fc038f51450ae21..2b0a51e9dfc548f3faa921b681e1db18bde813bf 100644
--- a/docs/scala-programming-guide.md
+++ b/docs/scala-programming-guide.md
@@ -189,8 +189,8 @@ The following tables list the transformations and actions currently supported (s
 <tr>
 <td> <b>groupByKey</b>([<i>numTasks</i>]) </td>
 <td> When called on a dataset of (K, V) pairs, returns a dataset of (K, Seq[V]) pairs. <br />
-<b>Note:</b> By default, this uses only 8 parallel tasks to do the grouping. You can pass an optional <code>numTasks</code> argument to set a different number of tasks.
-</td>
+<b>Note:</b> By default, the number of parallel tasks is chosen as follows: if the RDD already has a partitioner, the number of partitions of that partitioner is used; otherwise, the value of <code>spark.default.parallelism</code> is used if that property is set; otherwise, the number of partitions of the RDD is used. You can pass an optional <code>numTasks</code> argument to set a different number of tasks.
+ </td>
 </tr>
 <tr>
 <td> <b>reduceByKey</b>(<i>func</i>, [<i>numTasks</i>]) </td>
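The default described in the added note can be sketched with a small Spark program. This is an illustrative assumption, not part of the patch: the `local[2]` master, the sample data, and the partition counts are all made up, and the snippet assumes a Spark version with the `SparkConf` API (0.9+).

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical sketch of the numTasks defaulting rule; not from the patch.
object GroupByKeyTasks {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local[2]").setAppName("groupByKey-tasks"))

    // 4 partitions, no partitioner attached to this RDD.
    val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)), numSlices = 4)

    // No numTasks argument: the fallback order from the note applies.
    // pairs has no partitioner, and spark.default.parallelism is unset here,
    // so the grouping reuses pairs' own partition count.
    val byDefault = pairs.groupByKey()
    println(byDefault.partitions.length)

    // An explicit numTasks argument overrides the default entirely.
    val byEight = pairs.groupByKey(8)
    println(byEight.partitions.length)

    sc.stop()
  }
}
```

Passing `numTasks` explicitly is the simplest way to avoid depending on the fallback chain when the upstream partitioning is not known.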