From ee5a07955c222dce16d0ffb9bde7f61033763c16 Mon Sep 17 00:00:00 2001
From: Matei Zaharia <matei@eecs.berkeley.edu>
Date: Sun, 20 Jan 2013 02:11:58 -0800
Subject: [PATCH] Fix Python guide to say accumulators are available

---
 docs/python-programming-guide.md | 1 -
 1 file changed, 1 deletion(-)

diff --git a/docs/python-programming-guide.md b/docs/python-programming-guide.md
index 78ef310a00..a840b9b34b 100644
--- a/docs/python-programming-guide.md
+++ b/docs/python-programming-guide.md
@@ -16,7 +16,6 @@ There are a few key differences between the Python and Scala APIs:
 
 * Python is dynamically typed, so RDDs can hold objects of different types.
 * PySpark does not currently support the following Spark features:
-  - Accumulators
   - Special functions on RDDs of doubles, such as `mean` and `stdev`
   - `lookup`
   - `persist` at storage levels other than `MEMORY_ONLY`
--
GitLab