  1. Aug 26, 2014
    • [SPARK-2886] Use more specific actor system name than "spark" · b21ae5bb
      Andrew Or authored
      As of #1777 we log the name of the actor system when it binds to a port. The current name "spark" is super general and does not convey any meaning. For instance, the following lines are taken from my driver log after setting `spark.driver.port` to 5001.
      ```
      14/08/13 19:33:29 INFO Remoting: Remoting started; listening on addresses:
      [akka.tcp://sparkandrews-mbp:5001]
      14/08/13 19:33:29 INFO Remoting: Remoting now listens on addresses:
      [akka.tcp://sparkandrews-mbp:5001]
      14/08/06 13:40:05 INFO Utils: Successfully started service 'spark' on port 5001.
      ```
      This commit renames this to "sparkDriver" and "sparkExecutor". The goal of this unambitious PR is simply to make the logged information more explicit without introducing any change in functionality.
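      For illustration, a minimal sketch of the naming convention this introduces; the constant and object names below are assumptions for the example, not necessarily the identifiers used in the actual commit:
      ```scala
      // Hypothetical shared constants for the two actor system names mentioned above.
      object SparkActorSystemNames {
        val Driver   = "sparkDriver"
        val Executor = "sparkExecutor"
      }

      // The driver would then pass SparkActorSystemNames.Driver instead of "spark"
      // when creating its actor system, so the logged service name identifies the component.
      ```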
      
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #1810 from andrewor14/service-name and squashes the following commits:
      
      8c459ed [Andrew Or] Use a common variable for driver/executor actor system names
      3a92843 [Andrew Or] Change actor name to sparkDriver and sparkExecutor
      921363e [Andrew Or] Merge branch 'master' of github.com:apache/spark into service-name
      c8c6a62 [Andrew Or] Do not include hyphens in actor name
      1c1b42e [Andrew Or] Avoid spaces in akka system name
      f644b55 [Andrew Or] Use more specific service name
      b21ae5bb
    • [Spark-3222] [SQL] Cross join support in HiveQL · 52fbdc2d
      Daoyuan Wang authored
      We can simply treat a cross join as an inner join with no join condition.
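      As a toy illustration of that idea (plain Scala collections, not the actual HiveQL parser code):
      ```scala
      // A cross join is an inner join whose condition always holds.
      case class Row(key: Int, value: String)

      def innerJoin(left: Seq[Row], right: Seq[Row], cond: (Row, Row) => Boolean): Seq[(Row, Row)] =
        for (l <- left; r <- right if cond(l, r)) yield (l, r)

      def crossJoin(left: Seq[Row], right: Seq[Row]): Seq[(Row, Row)] =
        innerJoin(left, right, (_, _) => true)   // inner join with an always-true condition
      ```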
      
      Author: Daoyuan Wang <daoyuan.wang@intel.com>
      Author: adrian-wang <daoyuanwong@gmail.com>
      
      Closes #2124 from adrian-wang/crossjoin and squashes the following commits:
      
      8c9b7c5 [Daoyuan Wang] add a test
      7d47bbb [adrian-wang] add cross join support for hql
      52fbdc2d
  2. Aug 25, 2014
    • [SPARK-2976] Replace tabs with spaces · 62f5009f
      Kousuke Saruta authored
      Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
      
      Closes #1895 from sarutak/SPARK-2976 and squashes the following commits:
      
      1cf7e69 [Kousuke Saruta] Merge branch 'master' of git://git.apache.org/spark into SPARK-2976
      d1e0666 [Kousuke Saruta] Modified styles
      c5e80a4 [Kousuke Saruta] Remove tab from JavaPageRank.java and JavaKinesisWordCountASL.java
      c003b36 [Kousuke Saruta] Removed tab from sorttable.js
      62f5009f
    • SPARK-2481: The environment variable SPARK_HISTORY_OPTS is covered in spark-env.sh · 9f04db17
      witgo authored
      Author: witgo <witgo@qq.com>
      Author: GuoQiang Li <witgo@qq.com>
      
      Closes #1341 from witgo/history_env and squashes the following commits:
      
      b4fd9f8 [GuoQiang Li] review commit
      0ebe401 [witgo] *-history-server.sh load spark-config.sh
      9f04db17
    • [SPARK-3011][SQL] _temporary directory should be filtered out by sqlContext.parquetFile · 4243bb66
      Chia-Yung Su authored
      Fix a compile error on Hadoop 0.23 for pull request #1924.
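      For reference, a self-contained sketch of the filtering rule described in the squashed commits below ("remove .* and _* except _metadata"); it is illustrative only and not the actual parquetFile code:
      ```scala
      // Keep data files; drop hidden/temporary entries such as _temporary,
      // but keep the Parquet _metadata file.
      def keepEntry(name: String): Boolean =
        !name.startsWith(".") && (!name.startsWith("_") || name == "_metadata")

      assert(!keepEntry("_temporary"))
      assert(keepEntry("_metadata"))
      assert(keepEntry("part-r-00001.parquet"))
      ```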
      
      Author: Chia-Yung Su <chiayung@appier.com>
      
      Closes #1959 from joesu/bugfix-spark3011 and squashes the following commits:
      
      be30793 [Chia-Yung Su] remove .* and _* except _metadata
      8fe2398 [Chia-Yung Su] add note to explain
      40ea9bd [Chia-Yung Su] fix hadoop-0.23 compile error
      c7e44f2 [Chia-Yung Su] match syntax
      f8fc32a [Chia-Yung Su] filter out tmp dir
      4243bb66
    • [SQL] logWarning should be logInfo in getResultSetSchema · 507a1b52
      wangfei authored
      Author: wangfei <wangfei_hello@126.com>
      
      Closes #1939 from scwf/patch-5 and squashes the following commits:
      
      f952d10 [wangfei] [SQL] logWarning should be logInfo in getResultSetSchema
      507a1b52
    • [SPARK-3058] [SQL] Support EXTENDED for EXPLAIN · 156eb396
      Cheng Hao authored
      Provide `extended` keyword support for the `explain` command in SQL, e.g.
      ```
      explain extended select key as a1, value as a2 from src where key=1;
      == Parsed Logical Plan ==
      Project ['key AS a1#3,'value AS a2#4]
       Filter ('key = 1)
        UnresolvedRelation None, src, None
      
      == Analyzed Logical Plan ==
      Project [key#8 AS a1#3,value#9 AS a2#4]
       Filter (CAST(key#8, DoubleType) = CAST(1, DoubleType))
        MetastoreRelation default, src, None
      
      == Optimized Logical Plan ==
      Project [key#8 AS a1#3,value#9 AS a2#4]
       Filter (CAST(key#8, DoubleType) = 1.0)
        MetastoreRelation default, src, None
      
      == Physical Plan ==
      Project [key#8 AS a1#3,value#9 AS a2#4]
       Filter (CAST(key#8, DoubleType) = 1.0)
        HiveTableScan [key#8,value#9], (MetastoreRelation default, src, None), None
      
      Code Generation: false
      == RDD ==
      (2) MappedRDD[14] at map at HiveContext.scala:350
        MapPartitionsRDD[13] at mapPartitions at basicOperators.scala:42
        MapPartitionsRDD[12] at mapPartitions at basicOperators.scala:57
        MapPartitionsRDD[11] at mapPartitions at TableReader.scala:112
        MappedRDD[10] at map at TableReader.scala:240
        HadoopRDD[9] at HadoopRDD at TableReader.scala:230
      ```
      
      This is a subtask of #1847, but it can go in without any dependency.
      
      Author: Cheng Hao <hao.cheng@intel.com>
      
      Closes #1962 from chenghao-intel/explain_extended and squashes the following commits:
      
      295db74 [Cheng Hao] Fix bug in printing the simple execution plan
      48bc989 [Cheng Hao] Support EXTENDED for EXPLAIN
      156eb396
    • [SPARK-2929][SQL] Refactored Thrift server and CLI suites · cae9414d
      Cheng Lian authored
      Removed most hard-coded timeouts, timing assumptions, and all `Thread.sleep` calls. Simplified IPC and synchronization with `scala.sys.process` and futures/promises so that the test suites run more robustly and faster.
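      A minimal sketch of that pattern (waiting on a promise completed from process output instead of sleeping); the launched script and the readiness log line below are illustrative assumptions, not the real suite:
      ```scala
      import scala.concurrent.duration._
      import scala.concurrent.{Await, Promise}
      import scala.sys.process._

      val serverStarted = Promise[Unit]()
      val logger = ProcessLogger { line =>
        // Complete the promise the first time the (assumed) readiness line appears.
        if (line.contains("listening on")) serverStarted.trySuccess(())
      }
      val server = Seq("sbin/start-thriftserver.sh").run(logger)  // hypothetical command
      Await.result(serverStarted.future, 2.minutes)               // block until ready, no Thread.sleep
      ```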
      
      Author: Cheng Lian <lian.cs.zju@gmail.com>
      
      Closes #1856 from liancheng/thriftserver-tests and squashes the following commits:
      
      2d914ca [Cheng Lian] Minor refactoring
      0e12e71 [Cheng Lian] Cleaned up test output
      0ee921d [Cheng Lian] Refactored Thrift server and CLI suites
      cae9414d
    • [SPARK-3204][SQL] MaxOf would be foldable if both left and right are foldable. · d299e2bf
      Takuya UESHIN authored
      Author: Takuya UESHIN <ueshin@happy-camper.st>
      
      Closes #2116 from ueshin/issues/SPARK-3204 and squashes the following commits:
      
      7d9b107 [Takuya UESHIN] Make MaxOf foldable if both left and right are foldable.
      d299e2bf
    • Fixed a typo in docs/running-on-mesos.md · 805fec84
      Cheng Lian authored
      It should be `spark-env.sh` rather than `spark.env.sh`.
      
      Author: Cheng Lian <lian.cs.zju@gmail.com>
      
      Closes #2119 from liancheng/fix-mesos-doc and squashes the following commits:
      
      f360548 [Cheng Lian] Fixed a typo in docs/running-on-mesos.md
      805fec84
    • [FIX] fix error message in sendMessageReliably · fd8ace2d
      Xiangrui Meng authored
      rxin
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #2120 from mengxr/sendMessageReliably and squashes the following commits:
      
      b14400c [Xiangrui Meng] fix error message in sendMessageReliably
      fd8ace2d
    • SPARK-3180 - Better control of security groups · cc40a709
      Allan Douglas R. de Oliveira authored
      Adds the --authorized-address and --additional-security-group options as explained in the issue.
      
      Author: Allan Douglas R. de Oliveira <allan@chaordicsystems.com>
      
      Closes #2088 from douglaz/configurable_sg and squashes the following commits:
      
      e3e48ca [Allan Douglas R. de Oliveira] Adds the option to specify the address authorized to access the SG and another option to provide an additional existing SG
      cc40a709
    • SPARK-2798 [BUILD] Correct several small errors in Flume module pom.xml files · cd30db56
      Sean Owen authored
      (EDIT) Since the scalatest issue has since been resolved, this is now about a few small problems in the Flume Sink `pom.xml`:
      
      - `scalatest` is not declared as a test-scope dependency
      - Its Avro version doesn't match the rest of the build
      - Its Flume version is not synced with the other Flume module
      - The other Flume module declares its dependency on Flume Sink slightly incorrectly, hard-coding the Scala 2.10 version
      - It depends on Scala Lang directly, which it shouldn't
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #1726 from srowen/SPARK-2798 and squashes the following commits:
      
      a46e2c6 [Sean Owen] scalatest to test scope, harmonize Avro and Flume versions, remove direct Scala dependency, fix '2.10' in Flume dependency
      cd30db56
    • [SPARK-2495][MLLIB] make KMeans constructor public · 220f4136
      Xiangrui Meng authored
      To re-construct k-means models. cc freeman-lab
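      A hedged sketch of what a public constructor enables (re-building a model from known cluster centers); the class and signature below are an assumption about this PR's intent and may differ from the actual API:
      ```scala
      import org.apache.spark.mllib.clustering.KMeansModel
      import org.apache.spark.mllib.linalg.Vectors

      // Re-construct a k-means model directly from previously saved centers.
      val centers = Array(Vectors.dense(0.0, 0.0), Vectors.dense(5.0, 5.0))
      val model = new KMeansModel(centers)
      val cluster = model.predict(Vectors.dense(4.5, 5.2))   // index of the nearest center
      ```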
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #2112 from mengxr/public-constructors and squashes the following commits:
      
      18d53a9 [Xiangrui Meng] make KMeans constructor public
      220f4136
  3. Aug 24, 2014
    • [SPARK-2871] [PySpark] add zipWithIndex() and zipWithUniqueId() · fb0db772
      Davies Liu authored
      RDD.zipWithIndex()
      
              Zips this RDD with its element indices.
      
              The ordering is first based on the partition index and then the
              ordering of items within each partition. So the first item in
              the first partition gets index 0, and the last item in the last
              partition receives the largest index.
      
              This method needs to trigger a Spark job when this RDD contains
              more than one partition.
      
              >>> sc.parallelize(range(4), 2).zipWithIndex().collect()
              [(0, 0), (1, 1), (2, 2), (3, 3)]
      
      RDD.zipWithUniqueId()
      
              Zips this RDD with generated unique Long ids.
      
              Items in the kth partition will get ids k, n+k, 2*n+k, ..., where
              n is the number of partitions. So there may exist gaps, but this
              method won't trigger a spark job, which is different from
              L{zipWithIndex}
      
              >>> sc.parallelize(range(4), 2).zipWithUniqueId().collect()
              [(0, 0), (2, 1), (1, 2), (3, 3)]
      
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #2092 from davies/zipWith and squashes the following commits:
      
      cebe5bf [Davies Liu] improve test cases, reverse the order of index
      0d2a128 [Davies Liu] add zipWithIndex() and zipWithUniqueId()
      fb0db772
    • [MLlib][SPARK-2997] Update SVD documentation to reflect roughly square · b1b20301
      Reza Zadeh authored
      Update the documentation to reflect the fact we can handle roughly square matrices.
      
      Author: Reza Zadeh <rizlar@gmail.com>
      
      Closes #2070 from rezazadeh/svddocs and squashes the following commits:
      
      826b8fe [Reza Zadeh] left singular vectors
      3f34fc6 [Reza Zadeh] PCA is still TS
      7ffa2aa [Reza Zadeh] better title
      aeaf39d [Reza Zadeh] More docs
      788ed13 [Reza Zadeh] add computational cost explanation
      6429c59 [Reza Zadeh] Add link to rowmatrix docs
      1eeab8b [Reza Zadeh] Update SVD documentation to reflect roughly square
      b1b20301
    • [SPARK-2841][MLlib] Documentation for feature transformations · 572952ae
      DB Tsai authored
      Documentation for newly added feature transformations:
      1. TF-IDF
      2. StandardScaler
      3. Normalizer
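      As a quick usage sketch for one of these transformers (Normalizer, which L2-normalizes a vector by default); the names follow MLlib's `feature` package, but the exact signatures may vary by version:
      ```scala
      import org.apache.spark.mllib.feature.Normalizer
      import org.apache.spark.mllib.linalg.Vectors

      val v = Vectors.dense(1.0, 2.0, 2.0)
      val normalized = new Normalizer().transform(v)   // (1/3, 2/3, 2/3) under the L2 norm
      ```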
      
      Author: DB Tsai <dbtsai@alpinenow.com>
      
      Closes #2068 from dbtsai/transformer-documentation and squashes the following commits:
      
      109f324 [DB Tsai] address feedback
      572952ae
    • [SPARK-3192] Some scripts have 2 space indentation but other scripts have 4 space indentation. · ded6796b
      Kousuke Saruta authored
      Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
      
      Closes #2104 from sarutak/SPARK-3192 and squashes the following commits:
      
      db78419 [Kousuke Saruta] Modified indentation of spark-shell
      ded6796b
  4. Aug 23, 2014
    • Clean unused code in SortShuffleWriter · 8861cdf1
      Raymond Liu authored
      Just clean up unused code that has been moved into ExternalSorter.
      
      Author: Raymond Liu <raymond.liu@intel.com>
      
      Closes #1882 from colorant/sortShuffleWriter and squashes the following commits:
      
      e6337be [Raymond Liu] Clean unused code in SortShuffleWriter
      8861cdf1
    • [SPARK-2871] [PySpark] add approx API for RDD · 8df4dad4
      Davies Liu authored
      RDD.countApprox(self, timeout, confidence=0.95)
      
              :: Experimental ::
              Approximate version of count() that returns a potentially incomplete
              result within a timeout, even if not all tasks have finished.
      
              >>> rdd = sc.parallelize(range(1000), 10)
              >>> rdd.countApprox(1000, 1.0)
              1000
      
      RDD.sumApprox(self, timeout, confidence=0.95)
      
              Approximate operation to return the sum within a timeout
              or meet the confidence.
      
              >>> rdd = sc.parallelize(range(1000), 10)
              >>> r = sum(xrange(1000))
              >>> (rdd.sumApprox(1000) - r) / r < 0.05
      
      RDD.meanApprox(self, timeout, confidence=0.95)
      
              :: Experimental ::
              Approximate operation to return the mean within a timeout
              or meet the confidence.
      
              >>> rdd = sc.parallelize(range(1000), 10)
              >>> r = sum(xrange(1000)) / 1000.0
              >>> (rdd.meanApprox(1000) - r) / r < 0.05
              True
      
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #2095 from davies/approx and squashes the following commits:
      
      e8c252b [Davies Liu] add approx API for RDD
      8df4dad4
    • [SPARK-2871] [PySpark] add `key` argument for max(), min() and top(n) · db436e36
      Davies Liu authored
      RDD.max(key=None)
      
              param key: A function used to generate key for comparing
      
              >>> rdd = sc.parallelize([1.0, 5.0, 43.0, 10.0])
              >>> rdd.max()
              43.0
              >>> rdd.max(key=str)
              5.0
      
      RDD.min(key=None)
      
              Find the minimum item in this RDD.
      
              param key: A function used to generate key for comparing
      
              >>> rdd = sc.parallelize([2.0, 5.0, 43.0, 10.0])
              >>> rdd.min()
              2.0
              >>> rdd.min(key=str)
              10.0
      
      RDD.top(num, key=None)
      
              Get the top N elements from a RDD.
      
              Note: It returns the list sorted in descending order.
              >>> sc.parallelize([10, 4, 2, 12, 3]).top(1)
              [12]
              >>> sc.parallelize([2, 3, 4, 5, 6], 2).top(2)
              [6, 5]
              >>> sc.parallelize([10, 4, 2, 12, 3]).top(3, key=str)
              [4, 3, 2]
      
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #2094 from davies/cmp and squashes the following commits:
      
      ccbaf25 [Davies Liu] add `key` to top()
      ad7e374 [Davies Liu] fix tests
      2f63512 [Davies Liu] change `comp` to `key` in min/max
      dd91e08 [Davies Liu] add `comp` argument for RDD.max() and RDD.min()
      db436e36
    • [SPARK-2967][SQL] Follow-up: Also copy hash expressions in sort based shuffle fix. · 3519b5e8
      Michael Armbrust authored
      Follow-up to #2066
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #2072 from marmbrus/sortShuffle and squashes the following commits:
      
      2ff8114 [Michael Armbrust] Fix bug
      3519b5e8
    • [SPARK-2554][SQL] CountDistinct partial aggregation and object allocation improvements · 7e191fe2
      Michael Armbrust authored
      Author: Michael Armbrust <michael@databricks.com>
      Author: Gregory Owen <greowen@gmail.com>
      
      Closes #1935 from marmbrus/countDistinctPartial and squashes the following commits:
      
      5c7848d [Michael Armbrust] turn off caching in the constructor
      8074a80 [Michael Armbrust] fix tests
      32d216f [Michael Armbrust] reynolds comments
      c122cca [Michael Armbrust] Address comments, add tests
      b2e8ef3 [Michael Armbrust] Merge remote-tracking branch 'origin/master' into countDistinctPartial
      fae38f4 [Michael Armbrust] Fix style
      fdca896 [Michael Armbrust] cleanup
      93d0f64 [Michael Armbrust] metastore concurrency fix.
      db44a30 [Michael Armbrust] JIT hax.
      3868f6c [Michael Armbrust] Merge pull request #9 from GregOwen/countDistinctPartial
      c9e67de [Gregory Owen] Made SpecificRow and types serializable by Kryo
      2b46c4b [Michael Armbrust] Merge remote-tracking branch 'origin/master' into countDistinctPartial
      8ff6402 [Michael Armbrust] Add specific row.
      58d15f1 [Michael Armbrust] disable codegen logging
      87d101d [Michael Armbrust] Fix isNullAt bug
      abee26d [Michael Armbrust] WIP
      27984d0 [Michael Armbrust] Merge remote-tracking branch 'origin/master' into countDistinctPartial
      57ae3b1 [Michael Armbrust] Fix order dependent test
      b3d0f64 [Michael Armbrust] Add golden files.
      c1f7114 [Michael Armbrust] Improve tests / fix serialization.
      f31b8ad [Michael Armbrust] more fixes
      38c7449 [Michael Armbrust] comments and style
      9153652 [Michael Armbrust] better toString
      d494598 [Michael Armbrust] Fix tests now that the planner is better
      41fbd1d [Michael Armbrust] Never try and create an empty hash set.
      050bb97 [Michael Armbrust] Skip no-arg constructors for kryo,
      bd08239 [Michael Armbrust] WIP
      213ada8 [Michael Armbrust] First draft of partially aggregated and code generated count distinct / max
      7e191fe2
    • [SQL] Make functionRegistry in HiveContext transient. · 2fb1c72e
      Yin Huai authored
      Seems we missed `transient` for the `functionRegistry` in `HiveContext`.
      
      cc: marmbrus
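      For context, a self-contained illustration (not Spark code) of why a non-serializable member should be marked `@transient` on a serializable class:
      ```scala
      import java.io.{ByteArrayOutputStream, ObjectOutputStream}

      class Registry                      // stands in for a non-serializable function registry

      class Context extends Serializable {
        @transient lazy val functionRegistry: Registry = new Registry  // excluded from the serialized form
      }

      val ctx = new Context
      ctx.functionRegistry                // force initialization of the lazy val
      val out = new ObjectOutputStream(new ByteArrayOutputStream())
      out.writeObject(ctx)                // succeeds; without @transient this would throw NotSerializableException
      ```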
      
      Author: Yin Huai <huaiyin.thu@gmail.com>
      
      Closes #2074 from yhuai/makeFunctionRegistryTransient and squashes the following commits:
      
      6534e7d [Yin Huai] Make functionRegistry transient.
      2fb1c72e
    • [Minor] fix typo · 76bb044b
      Liang-Chi Hsieh authored
      Fix a typo in comment.
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #2105 from viirya/fix_typo and squashes the following commits:
      
      6596a80 [Liang-Chi Hsieh] fix typo.
      76bb044b
    • [SPARK-3068]remove MaxPermSize option for jvm 1.8 · f3d65cd0
      Daoyuan Wang authored
      In JVM 1.8.0, MaxPermSize is no longer supported.
      In Spark's `stderr` output, there would be a line like:
      
          Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
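      A small sketch of the runtime check this implies (only pass a PermGen option to pre-8 JVMs); the `java.version` property is standard JVM API, while the option handling itself is illustrative:
      ```scala
      // "java.version" looks like "1.7.0_67" on Java 7 and "1.8.0_20" on Java 8.
      val javaVersion = System.getProperty("java.version")
      val isJava8OrLater = !(javaVersion.startsWith("1.6") || javaVersion.startsWith("1.7"))

      // On Java 8+ the option would only trigger the warning shown above, so skip it.
      val permGenOpts = if (isJava8OrLater) Nil else Seq("-XX:MaxPermSize=128m")
      ```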
      
      Author: Daoyuan Wang <daoyuan.wang@intel.com>
      
      Closes #2011 from adrian-wang/maxpermsize and squashes the following commits:
      
      ef1d660 [Daoyuan Wang] direct get java version in runtime
      37db9c1 [Daoyuan Wang] code refine
      3c1d554 [Daoyuan Wang] remove MaxPermSize option for jvm 1.8
      f3d65cd0
    • [SPARK-2963] REGRESSION - The description about how to build for using CLI and... · 323cd92b
      Kousuke Saruta authored
      [SPARK-2963] REGRESSION - The description about how to build for using CLI and Thrift JDBC server is absent in the proper document.
      
      The most important things I mentioned in #1885 are as follows.

      * People who build Spark are not always programmers.
      * If a person who builds Spark is not a programmer, he/she won't read the programmer's guide before building.

      So, the instructions for building to use the CLI and JDBC server should not appear only in the programmer's guide.
      
      Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
      
      Closes #2080 from sarutak/SPARK-2963 and squashes the following commits:
      
      ee07c76 [Kousuke Saruta] Modified regression of the description about building for using Thrift JDBC server and CLI
      ed53329 [Kousuke Saruta] Modified description and notaton of proper noun
      07c59fc [Kousuke Saruta] Added a description about how to build to use HiveServer and CLI for SparkSQL to building-with-maven.md
      6e6645a [Kousuke Saruta] Merge branch 'master' of git://git.apache.org/spark into SPARK-2963
      c88fa93 [Kousuke Saruta] Added a description about building to use HiveServer and CLI for SparkSQL
      323cd92b
  5. Aug 22, 2014
    • [SPARK-3169] Removed dependency on spark streaming test from spark flume sink · 30040741
      Tathagata Das authored
      Due to maven bug https://jira.codehaus.org/browse/MNG-1378, maven could not resolve spark streaming classes required by the spark-streaming test-jar dependency of external/flume-sink. There is no particular reason that the external/flume-sink has to depend on Spark Streaming at all, so I am eliminating this dependency. Also I have removed the exclusions present in the Flume dependencies, as there is no reason to exclude them (they were excluded in the external/flume module to prevent dependency collisions with Spark).
      
      Since Jenkins will test the sbt build and the unit test, I only tested maven compilation locally.
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #2101 from tdas/spark-sink-pom-fix and squashes the following commits:
      
      8f42621 [Tathagata Das] Added Flume sink exclusions back, and added netty to test dependencies
      93b559f [Tathagata Das] Removed dependency on spark streaming test from spark flume sink
      30040741
    • Reynold Xin · a5219db1
    • [SPARK-2742][yarn] delete useless variables · 220c2d76
      XuTingjun authored
      Author: XuTingjun <1039320815@qq.com>
      
      Closes #1614 from XuTingjun/yarn-bug and squashes the following commits:
      
      f07096e [XuTingjun] Update ClientArguments.scala
      220c2d76
  6. Aug 21, 2014
    • [SPARK-2840] [mllib] DecisionTree doc update (Java, Python examples) · 050f8d01
      Joseph K. Bradley authored
      Updated DecisionTree documentation, with examples for Java, Python.
      Added same Java example to code as well.
      CC: @mengxr  @manishamde @atalwalkar
      
      Author: Joseph K. Bradley <joseph.kurata.bradley@gmail.com>
      
      Closes #2063 from jkbradley/dt-docs and squashes the following commits:
      
      2dd2c19 [Joseph K. Bradley] Last updates based on github review.
      9dd1b6b [Joseph K. Bradley] Updated decision tree doc.
      d802369 [Joseph K. Bradley] Updates based on comments: cache data, corrected doc text.
      b9bee04 [Joseph K. Bradley] Updated DT examples
      57eee9f [Joseph K. Bradley] Created JavaDecisionTree example from example in docs, and corrected doc example as needed.
      d939a92 [Joseph K. Bradley] Updated DecisionTree documentation.  Added Java, Python examples.
      050f8d01
  7. Aug 20, 2014
    • [SPARK-2843][MLLIB] add a section about regularization parameter in ALS · e0f94626
      Xiangrui Meng authored
      atalwalkar srowen
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #2064 from mengxr/als-doc and squashes the following commits:
      
      b2e20ab [Xiangrui Meng] introduced -> discussed
      98abdd7 [Xiangrui Meng] add reference
      339bd08 [Xiangrui Meng] add a section about regularization parameter in ALS
      e0f94626
    • [SPARK-3143][MLLIB] add tf-idf user guide · e1571874
      Xiangrui Meng authored
      Moved TF-IDF before Word2Vec because the former is more basic. I also added a link for Word2Vec. atalwalkar
      
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #2061 from mengxr/tfidf-doc and squashes the following commits:
      
      ca04c70 [Xiangrui Meng] address comments
      a5ea4b4 [Xiangrui Meng] add tf-idf user guide
      e1571874
    • [SPARK-3140] Clarify confusing PySpark exception message · ba3c730e
      Andrew Or authored
      We read the py4j port from the stdout of the `bin/spark-submit` subprocess. If there is interference in stdout (e.g. a random echo in `spark-submit`), we throw an exception with a warning message. We do not, however, distinguish this case from the case where no stdout is produced at all.
      
      I wasted a non-trivial amount of time being baffled by this exception in search of places where I print random whitespace (in vain, of course). A clearer exception message that distinguishes between these cases will prevent similar headaches that I have gone through.
      
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #2067 from andrewor14/python-exception and squashes the following commits:
      
      742f823 [Andrew Or] Further clarify warning messages
      e96a7a0 [Andrew Or] Distinguish between unexpected output and no output at all
      ba3c730e
    • [SPARK-2848] Shade Guava in uber-jars. · c9f74395
      Marcelo Vanzin authored
      For further discussion, please check the JIRA entry.
      
      This change moves Guava classes to a different package so that they don't conflict with the user-provided Guava (or the Hadoop-provided one). Since one class (Optional) was exposed through Spark's public API, that class was forked from Guava at the current dependency version (14.0.1) so that it can be kept going forward (until the API is cleaned).
      
      Note this change has a few implications:
      - *all* classes in the final jars will reference the relocated classes. If Hadoop classes are included (i.e. "-Phadoop-provided" is not activated), those will also reference the Guava 14 classes (instead of the Guava 11 classes from the Hadoop classpath).
      - if the Guava version in Spark is ever changed, the new Guava will still reference the forked Optional class; this may or may not be a problem, but in the long term it's better to think about removing Optional from the public API.
      
      For the end user, there are two visible implications:
      
      - Guava is not provided as a transitive dependency anymore (since it's "provided" in Spark)
      - At runtime, unless they provide their own, they'll either have no Guava or Hadoop's version of Guava (11), depending on how they set up their classpath.
      
      Note that this patch does not change the sbt deliverables; those will still contain guava in its original package, and provide guava as a compile-time dependency. This assumes that maven is the canonical build, and sbt-built artifacts are not (officially) published.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #1813 from vanzin/SPARK-2848 and squashes the following commits:
      
      9bdffb0 [Marcelo Vanzin] Undo sbt build changes.
      819b445 [Marcelo Vanzin] Review feedback.
      05e0a3d [Marcelo Vanzin] Merge branch 'master' into SPARK-2848
      fef4370 [Marcelo Vanzin] Unfork Optional.java.
      d3ea8e1 [Marcelo Vanzin] Exclude asm classes from final jar.
      637189b [Marcelo Vanzin] Add hacky filter to prefer Spark's copy of Optional.
      2fec990 [Marcelo Vanzin] Shade Guava in the sbt build.
      616998e [Marcelo Vanzin] Shade Guava in the maven build, fork Guava's Optional.java.
      c9f74395
    • [SPARK-2846][SQL] Add configureInputJobPropertiesForStorageHandler to initialization of job conf · d9e94146
      Alex Liu authored
      
      Author: Alex Liu <alex_liu68@yahoo.com>
      
      Closes #1927 from alexliu68/SPARK-SQL-2846 and squashes the following commits:
      
      e4bdc4c [Alex Liu] SPARK-SQL-2846 add configureInputJobPropertiesForStorageHandler to initial job conf
      d9e94146
    • SPARK_LOGFILE and SPARK_ROOT_LOGGER are no longer needed in spark-daemon.sh · a1e8b1bc
      wangfei authored
      Author: wangfei <wangfei_hello@126.com>
      
      Closes #2057 from scwf/patch-7 and squashes the following commits:
      
      1b7b9a5 [wangfei] SPARK_LOGFILE and SPARK_ROOT_LOGGER no longer need in spark-daemon.sh
      a1e8b1bc
    • [SPARK-2967][SQL] Fix sort based shuffle for spark sql. · a2e658dc
      Michael Armbrust authored
      Add explicit row copies when sort based shuffle is on.
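      A self-contained illustration (not Spark code) of why the copies matter: operators that reuse a single mutable row leave every buffered reference pointing at the same object unless each row is copied before buffering:
      ```scala
      final class MutableRow(var value: Int) {
        def copy(): MutableRow = new MutableRow(value)
      }

      val reused = new MutableRow(0)
      val withoutCopy = (1 to 3).map { i => reused.value = i; reused }        // three references to one object
      val withCopy    = (1 to 3).map { i => reused.value = i; reused.copy() } // three independent snapshots

      assert(withoutCopy.map(_.value) == Seq(3, 3, 3))
      assert(withCopy.map(_.value)    == Seq(1, 2, 3))
      ```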
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #2066 from marmbrus/sortShuffle and squashes the following commits:
      
      fcd7bb2 [Michael Armbrust] Fix sort based shuffle for spark sql.
      a2e658dc
    • [SPARK-2298] Encode stage attempt in SparkListener & UI. · fb60bec3
      Reynold Xin authored
      Simple way to reproduce this in the UI:
      
      ```scala
      val f = new java.io.File("/tmp/test")
      f.delete()
      sc.parallelize(1 to 2, 2).map(x => (x,x )).repartition(3).mapPartitionsWithContext { case (context, iter) =>
        if (context.partitionId == 0) {
          val f = new java.io.File("/tmp/test")
          if (!f.exists) {
            f.mkdir()
            System.exit(0);
          }
        }
        iter
      }.count()
      ```
      
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #1545 from rxin/stage-attempt and squashes the following commits:
      
      3ee1d2a [Reynold Xin] - Rename attempt to retry in UI. - Properly report stage failure in FetchFailed.
      40a6bd5 [Reynold Xin] Updated test suites.
      c414c36 [Reynold Xin] Fixed the hanging in JobCancellationSuite.
      b3e2eed [Reynold Xin] Oops previous code didn't compile.
      0f36075 [Reynold Xin] Mark unknown stage attempt with id -1 and drop that in JobProgressListener.
      6c08b07 [Reynold Xin] Addressed code review feedback.
      4e5faa2 [Reynold Xin] [SPARK-2298] Encode stage attempt in SparkListener & UI.
      fb60bec3
    • [SPARK-2849] Handle driver configs separately in client mode · b3ec51bf
      Andrew Or authored
      In client deploy mode, the driver is launched from within `SparkSubmit`'s JVM. This means by the time we parse Spark configs from `spark-defaults.conf`, it is already too late to control certain properties of the driver's JVM. We currently ignore these configs in client mode altogether.
      ```
      spark.driver.memory
      spark.driver.extraJavaOptions
      spark.driver.extraClassPath
      spark.driver.extraLibraryPath
      ```
      This PR handles these properties before launching the driver JVM. It achieves this by spawning a separate JVM that runs a new class called `SparkSubmitDriverBootstrapper`, which spawns `SparkSubmit` as a sub-process with the appropriate classpath, library paths, java opts and memory.
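      A minimal sketch of that bootstrapping idea (building the sub-process command line from the driver-side settings); the values and the launched class arguments below are illustrative assumptions, not the actual `SparkSubmitDriverBootstrapper`:
      ```scala
      import scala.sys.process._

      // Assumed values that would normally come from spark-defaults.conf.
      val driverMemory   = "2g"                 // spark.driver.memory
      val extraJavaOpts  = Seq("-XX:+UseG1GC")  // spark.driver.extraJavaOptions
      val extraClassPath = "/opt/extra/jars/*"  // spark.driver.extraClassPath

      val command = Seq("java", s"-Xmx$driverMemory") ++ extraJavaOpts ++
        Seq("-cp", extraClassPath, "org.apache.spark.deploy.SparkSubmit", "--master", "local[*]")

      // command.!   // would launch SparkSubmit in a JVM configured with the driver settings
      ```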
      
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #1845 from andrewor14/handle-configs-bash and squashes the following commits:
      
      bed4bdf [Andrew Or] Change a few comments / messages (minor)
      24dba60 [Andrew Or] Merge branch 'master' of github.com:apache/spark into handle-configs-bash
      08fd788 [Andrew Or] Warn against external usages of SparkSubmitDriverBootstrapper
      ff34728 [Andrew Or] Minor comments
      51aeb01 [Andrew Or] Filter out JVM memory in Scala rather than Bash (minor)
      9a778f6 [Andrew Or] Fix PySpark: actually kill driver on termination
      d0f20db [Andrew Or] Don't pass empty library paths, classpath, java opts etc.
      a78cb26 [Andrew Or] Revert a few changes in utils.sh (minor)
      9ba37e2 [Andrew Or] Don't barf when the properties file does not exist
      8867a09 [Andrew Or] A few more naming things (minor)
      19464ad [Andrew Or] SPARK_SUBMIT_JAVA_OPTS -> SPARK_SUBMIT_OPTS
      d6488f9 [Andrew Or] Merge branch 'master' of github.com:apache/spark into handle-configs-bash
      1ea6bbe [Andrew Or] SparkClassLauncher -> SparkSubmitDriverBootstrapper
      a91ea19 [Andrew Or] Fix precedence of library paths, classpath, java opts and memory
      158f813 [Andrew Or] Remove "client mode" boolean argument
      c84f5c8 [Andrew Or] Remove debug print statement (minor)
      b71f52b [Andrew Or] Revert a few more changes (minor)
      7d94a8d [Andrew Or] Merge branch 'master' of github.com:apache/spark into handle-configs-bash
      3a8235d [Andrew Or] Only parse the properties file if special configs exist
      c37e08d [Andrew Or] Revert a few more changes
      a396eda [Andrew Or] Nullify my own hard work to simplify bash
      0effa1e [Andrew Or] Add code in Scala that handles special configs
      c886568 [Andrew Or] Fix lines too long + a few comments / style (minor)
      7a4190a [Andrew Or] Merge branch 'master' of github.com:apache/spark into handle-configs-bash
      7396be2 [Andrew Or] Explicitly comment that multi-line properties are not supported
      fa11ef8 [Andrew Or] Parse the properties file only if the special configs exist
      371cac4 [Andrew Or] Add function prefix (minor)
      be99eb3 [Andrew Or] Fix tests to not include multi-line configs
      bd0d468 [Andrew Or] Simplify parsing config file by ignoring multi-line arguments
      56ac247 [Andrew Or] Use eval and set to simplify splitting
      8d4614c [Andrew Or] Merge branch 'master' of github.com:apache/spark into handle-configs-bash
      aeb79c7 [Andrew Or] Merge branch 'master' of github.com:apache/spark into handle-configs-bash
      2732ac0 [Andrew Or] Integrate BASH tests into dev/run-tests + log error properly
      8d26a5c [Andrew Or] Add tests for bash/utils.sh
      4ae24c3 [Andrew Or] Fix bug: escape properly in quote_java_property
      b3c4cd5 [Andrew Or] Fix bug: count the number of quotes instead of detecting presence
      c2273fc [Andrew Or] Fix typo (minor)
      e793e5f [Andrew Or] Handle multi-line arguments
      5d8f8c4 [Andrew Or] Merge branch 'master' of github.com:apache/spark into submit-driver-extra
      c7b9926 [Andrew Or] Minor changes to spark-defaults.conf.template
      a992ae2 [Andrew Or] Escape spark.*.extraJavaOptions correctly
      aabfc7e [Andrew Or] escape -> split (minor)
      45a1eb9 [Andrew Or] Fix bug: escape escaped backslashes and quotes properly...
      1cdc6b1 [Andrew Or] Fix bug: escape escaped double quotes properly
      c854859 [Andrew Or] Add small comment
      c13a2cb [Andrew Or] Merge branch 'master' of github.com:apache/spark into submit-driver-extra
      8e552b7 [Andrew Or] Include an example of spark.*.extraJavaOptions
      de765c9 [Andrew Or] Print spark-class command properly
      a4df3c4 [Andrew Or] Move parsing and escaping logic to utils.sh
      dec2343 [Andrew Or] Only export variables if they exist
      fa2136e [Andrew Or] Escape Java options + parse java properties files properly
      ef12f74 [Andrew Or] Minor formatting
      4ec22a1 [Andrew Or] Merge branch 'master' of github.com:apache/spark into submit-driver-extra
      e5cfb46 [Andrew Or] Collapse duplicate code + fix potential whitespace issues
      4edcaa8 [Andrew Or] Redirect stdout to stderr for python
      130f295 [Andrew Or] Handle spark.driver.memory too
      98dd8e3 [Andrew Or] Add warning if properties file does not exist
      8843562 [Andrew Or] Fix compilation issues...
      75ee6b4 [Andrew Or] Remove accidentally added file
      63ed2e9 [Andrew Or] Merge branch 'master' of github.com:apache/spark into submit-driver-extra
      0025474 [Andrew Or] Revert SparkSubmit handling of --driver-* options for only cluster mode
      a2ab1b0 [Andrew Or] Parse spark.driver.extra* in bash
      250cb95 [Andrew Or] Do not ignore spark.driver.extra* for client mode
      b3ec51bf