- Jan 01, 2014
Matei Zaharia authored
Also replaced SparkConf.getOrElse with just a "get" that takes a default value, and added getInt, getLong, etc., to make code that uses them simpler later on.
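For illustration, a minimal sketch of how the new accessors read in user code; the config keys and default values here are placeholders, not part of the commit:

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()

// Old style: conf.getOrElse("spark.cores.max", "2")
// New style: "get" takes the default directly, plus typed helpers.
val cores   = conf.get("spark.cores.max", "2")          // String with a default
val retries = conf.getInt("spark.task.maxFailures", 4)  // Int with a default
val timeout = conf.getLong("spark.akka.timeout", 100L)  // Long with a default
```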
- Dec 31, 2013
Patrick Wendell authored
- Dec 30, 2013
Patrick Wendell authored
1. Adds a default log4j file that gets loaded if users haven't specified their own log4j file.
2. Isolates use of the tools assembly jar. I found that jar produced SLF4J warnings after building with SBT (and I've seen similar warnings on the mailing list).
- Dec 28, 2013
Matei Zaharia authored
- Got rid of global SparkContext.globalConf
- Pass SparkConf to serializers and compression codecs
- Made SparkConf public instead of private[spark]
- Improved API of SparkContext and SparkConf
- Switched executor environment vars to be passed through SparkConf
- Fixed some places that were still using system properties
- Fixed some tests, though others are still failing

This still fails several tests in core, repl and streaming, likely due to properties not being set or cleared correctly (some of the tests run fine in isolation).
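As a rough sketch of the resulting flow (the master, app name, and env var below are placeholders), configuration now travels in an explicit, public SparkConf:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Configuration is carried in a public SparkConf rather than in a global
// SparkContext.globalConf or raw system properties.
val conf = new SparkConf()
  .setMaster("local[2]")
  .setAppName("ConfExample")
  .setExecutorEnv("MY_ENV_VAR", "value") // executor env vars ride on SparkConf now

val sc = new SparkContext(conf)
```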
- Dec 24, 2013
Prashant Sharma authored
- Dec 15, 2013
Mark Hamstra authored
- Dec 13, 2013
Prashant Sharma authored
- Dec 10, 2013
Prashant Sharma authored
- Dec 07, 2013
Prashant Sharma authored
Incorporated Patrick's feedback comment on #211 and made the Maven build/dependency resolution at least a bit faster.
- Nov 26, 2013
Prashant Sharma authored
- Nov 15, 2013
Aaron Davidson authored
I've diffed this patch against my own; since they were both created independently, two sets of eyes have gone over all the merge conflicts that were created, so I'm feeling significantly more confident in the resulting PR. @rxin has looked at the changes to the repl and is resoundingly confident that they are correct.
- Nov 09, 2013
Reynold Xin authored
Propagate the SparkContext local property from the thread that calls the spark-repl to the actual execution thread.
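A small sketch of the behavior this fixes, using the public local-property API; the pool name and job are illustrative:

```scala
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setMaster("local").setAppName("repl-props"))

// Local properties are thread-local. The fix makes a property set on the
// thread that drives the REPL visible on the thread that actually runs the job.
sc.setLocalProperty("spark.scheduler.pool", "replPool")
sc.parallelize(1 to 10).count() // this job should now see "replPool"
```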
- Oct 24, 2013
Ali Ghodsi authored
- Oct 17, 2013
Aaron Davidson authored
Mainly, this occurs if you provide a malformed MASTER URL (one that doesn't match any of our regexes). Previously, we would default to Mesos, fail, and then start the shell anyway, except that any Spark command would fail.
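For reference, a simplified, illustrative sketch of the master URL shapes involved; the real matching lives inside Spark and uses more complete regexes:

```scala
// Simplified stand-ins for the master URL regexes; anything that matches none
// of them should now fail fast instead of silently falling back to Mesos.
val Local      = """local""".r
val LocalN     = """local\[(\d+)\]""".r
val Standalone = """spark://(.+)""".r
val MesosUrl   = """mesos://(.+)""".r

def describe(master: String): String = master match {
  case Local()          => "single-threaded local mode"
  case LocalN(threads)  => s"local mode with $threads threads"
  case Standalone(addr) => s"standalone cluster at $addr"
  case MesosUrl(addr)   => s"Mesos cluster at $addr"
  case other            => sys.error(s"Could not parse Master URL: '$other'")
}
```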
- Oct 06, 2013
Patrick Wendell authored
- Sep 26, 2013
Patrick Wendell authored
Prashant Sharma authored
- Sep 24, 2013
Patrick Wendell authored
- Sep 15, 2013
Prashant Sharma authored
Prashant Sharma authored
- Sep 10, 2013
Prashant Sharma authored
- Sep 06, 2013
Jey Kottalam authored
- Sep 02, 2013
Matei Zaharia authored
- Sep 01, 2013
Matei Zaharia authored
* RDD, *RDDFunctions -> org.apache.spark.rdd
* Utils, ClosureCleaner, SizeEstimator -> org.apache.spark.util
* JavaSerializer, KryoSerializer -> org.apache.spark.serializer
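In user code the move looks roughly like this; note that some of these classes are Spark-internal, so the imports only illustrate the new package locations:

```scala
// Before the reorganization these lived directly under the top-level spark
// package (e.g. import spark.RDD); afterwards they resolve like this.
import org.apache.spark.rdd.RDD
import org.apache.spark.util.{ClosureCleaner, SizeEstimator, Utils}
import org.apache.spark.serializer.{JavaSerializer, KryoSerializer}
```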
Matei Zaharia authored
Matei Zaharia authored
- Aug 21, 2013
Mark Hamstra authored
- Aug 18, 2013
Jey Kottalam authored
- Aug 16, 2013
Jey Kottalam authored
Jey Kottalam authored
Jey Kottalam authored
Jey Kottalam authored
- Aug 13, 2013
Shivaram Venkataraman authored
- Jul 30, 2013
Benjamin Hindman authored
Benjamin Hindman authored
Added property 'spark.executor.uri' for launching on Mesos without requiring Spark to be installed. Using 'make_distribution.sh' a user can put a Spark distribution at a URI supported by Mesos (e.g., 'hdfs://...') and then set that when launching their job. Also added SPARK_EXECUTOR_URI for the REPL.
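A sketch of using the new setting with the system-property style of this era; the host names and HDFS path are placeholders:

```scala
import org.apache.spark.SparkContext

// Point Mesos executors at a pre-built Spark distribution (produced with
// make_distribution.sh) so slaves don't need Spark installed locally.
// The HDFS path and host names below are placeholders.
System.setProperty("spark.executor.uri", "hdfs://namenode:9000/dist/spark.tgz")

val sc = new SparkContext("mesos://master-host:5050", "MesosUriExample")
// The REPL reads the same location from the SPARK_EXECUTOR_URI env var.
```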
- Jul 16, 2013
Matei Zaharia authored
- Jul 12, 2013
Prashant Sharma authored
- Jul 11, 2013
Prashant Sharma authored
- Jun 25, 2013
Matei Zaharia authored
Matei Zaharia authored