  1. Aug 30, 2014
    • Marcelo Vanzin's avatar
      [SPARK-2889] Create Hadoop config objects consistently. · b6cf1348
      Marcelo Vanzin authored
      Different places in the code were instantiating Configuration / YarnConfiguration objects in different ways. This could lead to confusion for people who actually expected "spark.hadoop.*" options to end up in the configs used by Spark code, since that would only happen for the SparkContext's config.
      
      This change modifies most places to use SparkHadoopUtil to initialize configs, and makes that method do the translation that was previously done only inside SparkContext.
      
      The places that were not changed fall in one of the following categories:
      - Test code where this doesn't really matter
      - Places deep in the code where plumbing SparkConf would be too difficult for very little gain
      - Default values for arguments - since the caller can provide their own config in that case
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #1843 from vanzin/SPARK-2889 and squashes the following commits:
      
      52daf35 [Marcelo Vanzin] Merge branch 'master' into SPARK-2889
      f179013 [Marcelo Vanzin] Merge branch 'master' into SPARK-2889
      51e71cf [Marcelo Vanzin] Add test to ensure that overriding Yarn configs works.
      53f9506 [Marcelo Vanzin] Add DeveloperApi annotation.
      3d345cb [Marcelo Vanzin] Restore old method for backwards compat.
      fc45067 [Marcelo Vanzin] Merge branch 'master' into SPARK-2889
      0ac3fdf [Marcelo Vanzin] Merge branch 'master' into SPARK-2889
      3f26760 [Marcelo Vanzin] Compilation fix.
      f16cadd [Marcelo Vanzin] Initialize config in SparkHadoopUtil.
      b8ab173 [Marcelo Vanzin] Update Utils API to take a Configuration argument.
      1e7003f [Marcelo Vanzin] Replace explicit Configuration instantiation with SparkHadoopUtil.
      b6cf1348
    • Reynold Xin's avatar
      Manually close old pull requests · d90434c0
      Reynold Xin authored
      Closes #1824
      d90434c0
    • Raymond Liu's avatar
      [SPARK-2288] Hide ShuffleBlockManager behind ShuffleManager · acea9280
      Raymond Liu authored
      By hiding ShuffleBlockManager behind ShuffleManager, we decouple the shuffle data's block mapping management from DiskBlockManager. This gives a clearer interface and makes it easier for other shuffle managers to implement their own block management logic. The JIRA ticket has more details.
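      As a rough sketch of the decoupling (interface and method names here are illustrative, not the actual Spark API), each shuffle implementation owns its own block-resolution component instead of relying on DiskBlockManager:

      ```scala
      import java.nio.ByteBuffer

      // Illustrative sketch: the shuffle implementation maps shuffle blocks to bytes
      // behind an interface the rest of the system asks the ShuffleManager for.
      trait ShuffleBlockResolver {
        def getBlockData(shuffleId: Int, mapId: Int, reduceId: Int): Option[ByteBuffer]
      }

      trait ShuffleManager {
        // Callers go through the active ShuffleManager rather than DiskBlockManager directly.
        def shuffleBlockResolver: ShuffleBlockResolver
      }
      ```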
      
      Author: Raymond Liu <raymond.liu@intel.com>
      
      Closes #1241 from colorant/shuffle and squashes the following commits:
      
      0e01ae3 [Raymond Liu] Move ShuffleBlockmanager behind shuffleManager
      acea9280
    • Kousuke Saruta's avatar
      [SPARK-3305] Remove unused import from UI classes. · 7e662af3
      Kousuke Saruta authored
      Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
      
      Closes #2200 from sarutak/SPARK-3305 and squashes the following commits:
      
      3cbd6ee [Kousuke Saruta] Removed unused import from classes related to UI
      7e662af3
    • Patrick Wendell's avatar
      a004a8d8
  2. Aug 29, 2014
    • Cheng Lian's avatar
      [SPARK-3320][SQL] Made batched in-memory column buffer building work for... · 32b18dd5
      Cheng Lian authored
      [SPARK-3320][SQL] Made batched in-memory column buffer building work for SchemaRDDs with empty partitions
      
      Author: Cheng Lian <lian.cs.zju@gmail.com>
      
      Closes #2213 from liancheng/spark-3320 and squashes the following commits:
      
      45a0139 [Cheng Lian] Fixed typo in InMemoryColumnarQuerySuite
      f67067d [Cheng Lian] Fixed SPARK-3320
      32b18dd5
    • wangfei's avatar
      [SPARK-3296][mllib] spark-example should be run-example in head notation of... · 13901764
      wangfei authored
      [SPARK-3296][mllib] spark-example should be run-example in head notation of DenseKMeans and SparseNaiveBayes
      
      `./bin/spark-example`  should be `./bin/run-example` in DenseKMeans and SparseNaiveBayes
      
      Author: wangfei <wangfei_hello@126.com>
      
      Closes #2193 from scwf/run-example and squashes the following commits:
      
      207eb3a [wangfei] spark-example should be run-example
      27a8999 [wangfei] ./bin/spark-example should be ./bin/run-example
      13901764
    • Zdenek Farana's avatar
      [SPARK-3173][SQL] Timestamp support in the parser · 98ddbe6c
      Zdenek Farana authored
      If you have a table with a TIMESTAMP column, that column can't be used properly in a WHERE clause - it is not evaluated correctly. [More](https://issues.apache.org/jira/browse/SPARK-3173)
      
      Motivation: http://www.aproint.com/aggregation-with-spark-sql/
      
      - [x] modify SqlParser so it supports casting to TIMESTAMP (workaround for item 2)
      - [x] the string literal should be converted into Timestamp if the column is Timestamp.
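      A hypothetical query that should now work (table and column names are made up, and an existing SQLContext named sqlContext is assumed):

      ```scala
      // Illustrative usage only; assumes a registered table "events" with a TIMESTAMP column "ts".
      sqlContext.sql("SELECT * FROM events WHERE ts = '2014-08-29 00:00:00'")
      sqlContext.sql("SELECT * FROM events WHERE ts = CAST('2014-08-29 00:00:00' AS TIMESTAMP)")
      ```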
      
      Author: Zdenek Farana <zdenek.farana@gmail.com>
      Author: Zdenek Farana <zdenek.farana@aproint.com>
      
      Closes #2084 from byF/SPARK-3173 and squashes the following commits:
      
      442b59d [Zdenek Farana] Fixed test merge conflict
      2dbf4f6 [Zdenek Farana] Merge remote-tracking branch 'origin/SPARK-3173' into SPARK-3173
      65b6215 [Zdenek Farana] Fixed timezone sensitivity in the test
      47b27b4 [Zdenek Farana] Now works in the case of "StringLiteral=TimestampColumn"
      96a661b [Zdenek Farana] Code style change
      491dfcf [Zdenek Farana] Added test cases for SPARK-3173
      4446b1e [Zdenek Farana] A string literal is casted into Timestamp when the column is Timestamp.
      59af397 [Zdenek Farana] Added a new TIMESTAMP keyword; CAST to TIMESTAMP now can be used in SQL expression.
      98ddbe6c
    • qiping.lqp's avatar
      [SPARK-3291][SQL]TestcaseName in createQueryTest should not contain ":" · 634d04b8
      qiping.lqp authored
      ":" is not allowed to appear in a file name of Windows system. If file name contains ":", this file can't be checked out in a Windows system and developers using Windows must be careful to not commit the deletion of such files, Which is very inconvenient.
      
      Author: qiping.lqp <qiping.lqp@alibaba-inc.com>
      
      Closes #2191 from chouqin/querytest and squashes the following commits:
      
      0e943a1 [qiping.lqp] rename golden file
      60a863f [qiping.lqp] TestcaseName in createQueryTest should not contain ":"
      634d04b8
    • Cheng Lian's avatar
      [SPARK-3269][SQL] Decreases initial buffer size for row set to prevent OOM · d94a44d7
      Cheng Lian authored
      When a large batch size is specified, `SparkSQLOperationManager` OOMs even if the whole result set is much smaller than the batch size.
      
      Author: Cheng Lian <lian.cs.zju@gmail.com>
      
      Closes #2171 from liancheng/jdbc-fetch-size and squashes the following commits:
      
      5e1623b [Cheng Lian] Decreases initial buffer size for row set to prevent OOM
      d94a44d7
    • Cheng Lian's avatar
      [SQL] Turns on in-memory columnar compression in HiveCompatibilitySuite · b1eccfc8
      Cheng Lian authored
      `HiveCompatibilitySuite` already turns on in-memory columnar caching; it would be good to also enable compression to improve test coverage.
      
      Author: Cheng Lian <lian.cs.zju@gmail.com>
      
      Closes #2190 from liancheng/compression-on and squashes the following commits:
      
      88b536c [Cheng Lian] Code cleanup, narrowed field visibility
      d13efd2 [Cheng Lian] Turns on in-memory columnar compression in HiveCompatibilitySuite
      b1eccfc8
    • Cheng Hao's avatar
      [SPARK-3198] [SQL] Remove the TreeNode.id · dc4d577c
      Cheng Hao authored
      Although the id property of the TreeNode API provides a faster way to compare two TreeNodes, it is a performance bottleneck during expression object creation in a multi-threaded environment (because of the memory barrier).
      Fortunately, tree node comparison only happens once in master, so overall performance will not be affected even if we remove it.
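      A hedged sketch of the replacement mentioned in the commits below (a TreeNodeRef wrapper; the real class may differ): comparisons that used to rely on the id can instead use reference identity.

      ```scala
      // Illustrative sketch: wrap a node so that equality means "the same object",
      // removing the need for a globally allocated id (and its memory barrier).
      class TreeNodeRef(val obj: AnyRef) {
        override def equals(o: Any): Boolean = o match {
          case that: TreeNodeRef => that.obj eq obj
          case _                 => false
        }
        override def hashCode: Int = if (obj == null) 0 else System.identityHashCode(obj)
      }
      ```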
      
      Author: Cheng Hao <hao.cheng@intel.com>
      
      Closes #2155 from chenghao-intel/treenode and squashes the following commits:
      
      7cf2cd2 [Cheng Hao] Remove the implicit keyword for TreeNodeRef and some other small issues
      5873415 [Cheng Hao] Remove the TreeNode.id
      dc4d577c
    • Cheng Lian's avatar
      [SPARK-3234][Build] Fixed environment variables that rely on deprecated... · 287c0ac7
      Cheng Lian authored
      [SPARK-3234][Build] Fixed environment variables that rely on deprecated command line options in make-distribution.sh
      
      Please refer to [SPARK-3234](https://issues.apache.org/jira/browse/SPARK-3234) for details.
      
      Author: Cheng Lian <lian.cs.zju@gmail.com>
      
      Closes #2208 from liancheng/spark-3234 and squashes the following commits:
      
      fb26de8 [Cheng Lian] Fixed SPARK-3234
      287c0ac7
    • William Benton's avatar
      SPARK-2813: [SQL] Implement SQRT() directly in Spark SQL · 2f1519de
      William Benton authored
      This PR adds a native implementation for SQL SQRT() and thus avoids delegating this function to Hive.
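      A minimal usage sketch (assumes an existing SQLContext named sqlContext and a made-up registered table "records"):

      ```scala
      // Illustrative only: SQRT is now evaluated natively by Spark SQL instead of being
      // delegated to a Hive UDF, and per the commits below it returns NULL for NULL inputs.
      sqlContext.sql("SELECT SQRT(4.0) FROM records").collect()
      ```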
      
      Author: William Benton <willb@redhat.com>
      
      Closes #1750 from willb/spark-2813 and squashes the following commits:
      
      22c8a79 [William Benton] Fixed missed newline from rebase
      d673861 [William Benton] Added string coercions for SQRT and associated test case
      e125df4 [William Benton] Added ExpressionEvaluationSuite test cases for SQRT
      7b84bcd [William Benton] SQL SQRT now properly returns NULL for NULL inputs
      8256971 [William Benton] added SQRT test to SqlQuerySuite
      504d2e5 [William Benton] Added native SQRT implementation
      2f1519de
    • Nicholas Chammas's avatar
      [Docs] SQL doc formatting and typo fixes · 53aa8316
      Nicholas Chammas authored
      As [reported on the dev list](http://apache-spark-developers-list.1001551.n3.nabble.com/VOTE-Release-Apache-Spark-1-1-0-RC2-tp8107p8131.html):
      * Code fencing with triple-backticks doesn’t seem to work like it does on GitHub. Newlines are lost. Instead, use 4-space indent to format small code blocks.
      * Nested bullets need 2 leading spaces, not 1.
      * Spellcheck!
      
      Author: Nicholas Chammas <nicholas.chammas@gmail.com>
      Author: nchammas <nicholas.chammas@gmail.com>
      
      Closes #2201 from nchammas/sql-doc-fixes and squashes the following commits:
      
      873f889 [Nicholas Chammas] [Docs] fix skip-api flag
      5195e0c [Nicholas Chammas] [Docs] SQL doc formatting and typo fixes
      3b26c8d [nchammas] [Spark QA] Link to console output on test time out
      53aa8316
    • Davies Liu's avatar
      [SPARK-3307] [PySpark] Fix doc string of SparkContext.broadcast() · e248328b
      Davies Liu authored
       remove invalid docs
      
      Author: Davies Liu <davies.liu@gmail.com>
      
      Closes #2202 from davies/keep and squashes the following commits:
      
      aa3b44f [Davies Liu] remove invalid docs
      e248328b
    • Kousuke Saruta's avatar
      [SPARK-3279] Remove useless field variable in ApplicationMaster · 27df6ce6
      Kousuke Saruta authored
      Author: Kousuke Saruta <sarutak@oss.nttdata.co.jp>
      
      Closes #2177 from sarutak/SPARK-3279 and squashes the following commits:
      
      2955edc [Kousuke Saruta] Removed useless field variable from ApplicationMaster
      27df6ce6
  3. Aug 28, 2014
    • Reynold Xin's avatar
      [SPARK-1912] Lazily initialize buffers for local shuffle blocks. · 665e71d1
      Reynold Xin authored
      This is a simplified fix for SPARK-1912.
      
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #2179 from rxin/SPARK-1912 and squashes the following commits:
      
      b2f0e9e [Reynold Xin] Fix unit tests.
      a8eddfe [Reynold Xin] [SPARK-1912] Lazily initialize buffers for local shuffle blocks.
      665e71d1
    • nchammas's avatar
      [Spark QA] Link to console output on test time out · 3c517a81
      nchammas authored
      When tests time out we should link to the Jenkins console output for easy review. We already do this for when tests start or complete normally.
      
      Here's [a recent example](https://github.com/apache/spark/pull/2109#issuecomment-53374032) of where this would be helpful.
      
      Author: nchammas <nicholas.chammas@gmail.com>
      
      Closes #2140 from nchammas/patch-1 and squashes the following commits:
      
      3b26c8d [nchammas] [Spark QA] Link to console output on test time out
      3c517a81
    • Andrew Or's avatar
      [SPARK-3277] Fix external spilling with LZ4 assertion error · a46b8f2d
      Andrew Or authored
      **Summary of the changes**
      
      The bulk of this PR is comprised of tests and documentation; the actual fix is really just adding 1 line of code (see `BlockObjectWriter.scala`). We currently do not run the `External*` test suites with different compression codecs, and this would have caught the bug reported in [SPARK-3277](https://issues.apache.org/jira/browse/SPARK-3277). This PR extends the existing code to test spilling using all compression codecs known to Spark, including `LZ4`.
      
      **The bug itself**
      
      In `DiskBlockObjectWriter`, we only report the shuffle bytes written before we close the streams. With `LZ4`, the bytes written reported by our metrics were all 0 because `flush()` was not taking effect for some reason. In general, compression codecs may write additional bytes to the file after we call `close()`, so we must also capture those bytes in our shuffle write metrics.
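      The effect is easy to reproduce outside Spark. A small, self-contained demo (using the JDK's GZIP stream as a stand-in codec, since any codec that buffers data or writes a trailer behaves this way):

      ```scala
      import java.io.{File, FileOutputStream}
      import java.util.zip.GZIPOutputStream

      // Demo of the underlying issue: bytes measured before close() undercount the file,
      // because the codec writes buffered data and its trailer only when closed.
      object CompressedCloseDemo {
        def main(args: Array[String]): Unit = {
          val file = File.createTempFile("spill", ".gz")
          val out = new GZIPOutputStream(new FileOutputStream(file))
          out.write(Array.fill[Byte](1024)(42.toByte))
          out.flush()
          val before = file.length()  // what naive metrics would report
          out.close()                 // trailer (and any buffered bytes) land here
          val after = file.length()
          println(s"before close: $before bytes, after close: $after bytes")
          file.delete()
        }
      }
      ```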
      
      Thanks mridulm and pwendell for help with debugging.
      
      Author: Andrew Or <andrewor14@gmail.com>
      Author: Patrick Wendell <pwendell@gmail.com>
      
      Closes #2187 from andrewor14/fix-lz4-spilling and squashes the following commits:
      
      1b54bdc [Andrew Or] Speed up tests by not compressing everything
      1c4624e [Andrew Or] Merge branch 'master' of github.com:apache/spark into fix-lz4-spilling
      6b2e7d1 [Andrew Or] Fix compilation error
      92e251b [Patrick Wendell] Better documentation for BlockObjectWriter.
      a1ad536 [Andrew Or] Fix tests
      089593f [Andrew Or] Actually fix SPARK-3277 (tests still fail)
      4bbcf68 [Andrew Or] Update tests to actually test all compression codecs
      b264a84 [Andrew Or] ExternalAppendOnlyMapSuite code style fixes (minor)
      1bfa743 [Andrew Or] Add more information to assert for better debugging
      a46b8f2d
    • Sandy Ryza's avatar
      SPARK-3082. yarn.Client.logClusterResourceDetails throws NPE if requeste... · 92af2314
      Sandy Ryza authored
      SPARK-3082. yarn.Client.logClusterResourceDetails throws NPE if requested queue doesn't exist
      
      Author: Sandy Ryza <sandy@cloudera.com>
      
      Closes #1984 from sryza/sandy-spark-3082 and squashes the following commits:
      
      fe08c37 [Sandy Ryza] Remove log message entirely
      85253ad [Sandy Ryza] SPARK-3082. yarn.Client.logClusterResourceDetails throws NPE if requested queue doesn't exist
      92af2314
    • Ankur Dave's avatar
      [SPARK-3190] Avoid overflow in VertexRDD.count() · 96df9290
      Ankur Dave authored
      VertexRDDs with more than 4 billion elements are counted incorrectly due to integer overflow when summing partition sizes. This PR fixes the issue by converting partition sizes to Longs before summing them.
      
      The following code previously returned -10000000. After applying this PR, it returns the correct answer of 5000000000 (5 billion).
      
      ```scala
      val pairs = sc.parallelize(0L until 500L).map(_ * 10000000)
        .flatMap(start => start until (start + 10000000)).map(x => (x, x))
      VertexRDD(pairs).count()
      ```
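      A sketch of the fix direction (simplified; the actual change lives in VertexRDD.count()): widen each partition size to Long before summing, rather than summing Ints.

      ```scala
      // Illustrative only: summing Int partition sizes can overflow; widen to Long first.
      val partitionSizes: Array[Int] = Array(2000000000, 2000000000)
      partitionSizes.sum                 // overflows to -294967296
      partitionSizes.map(_.toLong).sum   // 4000000000L, the correct total
      ```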
      
      Author: Ankur Dave <ankurdave@gmail.com>
      
      Closes #2106 from ankurdave/SPARK-3190 and squashes the following commits:
      
      641f468 [Ankur Dave] Avoid overflow in VertexRDD.count()
      96df9290
    • Yadong Qi's avatar
      [SPARK-3285] [examples] Using values.sum is easier to understand than using... · 39012452
      Yadong Qi authored
      [SPARK-3285] [examples] Using values.sum is easier to understand than using values.foldLeft(0)(_ + _)
      
      def sum[B >: A](implicit num: Numeric[B]): B = foldLeft(num.zero)(num.plus)
      Using values.sum is easier to understand than values.foldLeft(0)(_ + _), so we should use values.sum instead.
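      For instance:

      ```scala
      val values = Seq(1, 2, 3)
      values.foldLeft(0)(_ + _)  // 6, but the intent takes a moment to see
      values.sum                 // 6, reads as what it is
      ```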
      
      Author: Yadong Qi <qiyadong2010@gmail.com>
      
      Closes #2182 from watermen/bug-fix3 and squashes the following commits:
      
      17be9fb [Yadong Qi] Update CheckpointSuite.scala
      714bda5 [Yadong Qi] Update BasicOperationsSuite.scala
      57e704c [Yadong Qi] Update StatefulNetworkWordCount.scala
      39012452
    • Reynold Xin's avatar
      [SPARK-3281] Remove Netty specific code in BlockManager / shuffle · be53c54b
      Reynold Xin authored
      Netty functionality will be added back in subsequent PRs by using the BlockTransferService interface.
      
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #2181 from rxin/SPARK-3281 and squashes the following commits:
      
      5494b0e [Reynold Xin] Fix extra port.
      ff6d1e1 [Reynold Xin] [SPARK-3281] Remove Netty specific code in BlockManager.
      be53c54b
    • Andrew Or's avatar
      [SPARK-3264] Allow users to set executor Spark home in Mesos · 41dc5987
      Andrew Or authored
      The executors and the driver may not share the same Spark home. There is currently one way to set the executor side Spark home in Mesos, through setting `spark.home`. However, this is neither documented nor intuitive. This PR adds a more specific config `spark.mesos.executor.home` and exposes this to the user.
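      A hedged usage sketch (master URL and path are illustrative placeholders):

      ```scala
      import org.apache.spark.SparkConf

      // Point Mesos executors at their own Spark installation, which may differ from the driver's.
      val conf = new SparkConf()
        .setMaster("mesos://zk://...")                       // placeholder master URL
        .set("spark.mesos.executor.home", "/opt/spark")      // the more specific config added here
      ```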
      
      liancheng tnachen
      
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #2166 from andrewor14/mesos-spark-home and squashes the following commits:
      
      b87965e [Andrew Or] Merge branch 'master' of github.com:apache/spark into mesos-spark-home
      f6abb2e [Andrew Or] Document spark.mesos.executor.home
      ca7846d [Andrew Or] Add more specific configuration for executor Spark home in Mesos
      41dc5987
    • Cheng Lian's avatar
      [SPARK-2608][Core] Fixed command line option passing issue over Mesos via SPARK_EXECUTOR_OPTS · 6d392b36
      Cheng Lian authored
      This is another try after #2145 to fix [SPARK-2608](https://issues.apache.org/jira/browse/SPARK-2608).
      
      The basic idea is to pass `extraJavaOpts` and `extraLibraryPath` together via the environment variable `SPARK_EXECUTOR_OPTS`. This variable is recognized by `spark-class` and not used anywhere else. In this way, we still launch Mesos executors with `spark-class`/`spark-executor`, but avoid the executor-side Spark home issue.
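      A rough sketch of the combination (not the actual scheduler backend code; the property names are the standard executor options):

      ```scala
      import org.apache.spark.SparkConf

      // Illustrative only: fold both settings into the single environment variable
      // that spark-class already knows how to interpret.
      def executorOpts(conf: SparkConf): (String, String) = {
        val extraJavaOpts = conf.get("spark.executor.extraJavaOptions", "")
        val extraLibraryPath = conf.getOption("spark.executor.extraLibraryPath")
          .map(p => s"-Djava.library.path=$p")
          .getOrElse("")
        "SPARK_EXECUTOR_OPTS" -> Seq(extraJavaOpts, extraLibraryPath).filter(_.nonEmpty).mkString(" ")
      }
      ```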
      
      Quoted strings with spaces are not allowed in either `extraJavaOpts` or `extraLibraryPath` when using Spark over Mesos. The reason is that Mesos passes the whole command line as a single string argument to `sh -c` to start the executor, which makes shell string escaping non-trivial to handle. This should be fixed in a later release.
      
      Classes in package `org.apache.spark.deploy` shouldn't be used as they assume Spark is deployed in standalone mode, and give wrong executor side Spark home directory. Please refer to comments in #2145 for more details.
      
      Author: Cheng Lian <lian.cs.zju@gmail.com>
      
      Closes #2161 from liancheng/mesos-fix-with-env-var and squashes the following commits:
      
      ba59190 [Cheng Lian] Added fine grained Mesos executor support
      1174076 [Cheng Lian] Draft fix for CoarseMesosSchedulerBackend
      
      (cherry picked from commit 935bffe3)
      Signed-off-by: Reynold Xin <rxin@apache.org>
      6d392b36
    • Tatiana Borisova's avatar
      [SPARK-3150] Fix NullPointerException in Spark recovery: Add initializing... · 70d81466
      Tatiana Borisova authored
      [SPARK-3150] Fix NullPointerException in Spark recovery: Add initializing default values in DriverInfo.init()
      
      The issue happens when Spark is run standalone on a cluster.
      When the master and driver fail simultaneously on one node in the cluster, the master tries to recover its state and restart the Spark driver.
      While restarting the driver, it fails with an NPE (stack trace is below).
      After failing, it restarts, tries to recover its state, and restarts the Spark driver again. This happens over and over in an infinite cycle.
      Namely, Spark tries to read the DriverInfo state from ZooKeeper, but after reading, DriverInfo.worker happens to be null.
      
      https://issues.apache.org/jira/browse/SPARK-3150
      
      Author: Tatiana Borisova <tanyatik@yandex.ru>
      
      Closes #2062 from tanyatik/spark-3150 and squashes the following commits:
      
      9936043 [Tatiana Borisova] Add initializing default values in DriverInfo.init()
      70d81466
    • Michael Armbrust's avatar
      [SPARK-3230][SQL] Fix udfs that return structs · 76e3ba42
      Michael Armbrust authored
      We need to convert the case classes into Rows.
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #2133 from marmbrus/structUdfs and squashes the following commits:
      
      189722f [Michael Armbrust] Merge remote-tracking branch 'origin/master' into structUdfs
      8e29b1c [Michael Armbrust] Use existing function
      d8d0b76 [Michael Armbrust] Fix udfs that return structs
      76e3ba42
    • Cheng Lian's avatar
      [SQL] Fixed 2 comment typos in SQLConf · 68f75dcd
      Cheng Lian authored
      Author: Cheng Lian <lian.cs.zju@gmail.com>
      
      Closes #2172 from liancheng/sqlconf-typo and squashes the following commits:
      
      115cc71 [Cheng Lian] Fixed 2 comment typos in SQLConf
      68f75dcd
    • Michael Armbrust's avatar
      [HOTFIX][SQL] Remove cleaning of UDFs · 024178c5
      Michael Armbrust authored
      It is not safe to run the closure cleaner on slaves. #2153 introduced this, which broke all UDF execution on slaves. Will re-add cleaning of UDF closures in a follow-up PR.
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #2174 from marmbrus/fixUdfs and squashes the following commits:
      
      55406de [Michael Armbrust] [HOTFIX] Remove cleaning of UDFs
      024178c5
    • Andrew Or's avatar
      [HOTFIX] Wait for EOF only for the PySpark shell · dafe3434
      Andrew Or authored
      In `SparkSubmitDriverBootstrapper`, we wait for the parent process to send us an `EOF` before finishing the application. This is applicable for the PySpark shell because we terminate the application the same way. However if we run a python application, for instance, the JVM actually never exits unless it receives a manual EOF from the user. This is causing a few tests to timeout.
      
      We only need to do this for the PySpark shell because Spark submit runs as a python subprocess only in this case. Thus, the normal Spark shell doesn't need to go through this case even though it is also a REPL.
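      A minimal sketch of the guard described here (the environment-variable name and structure are assumptions for illustration, not the exact bootstrapper code):

      ```scala
      // Illustrative only: wait for stdin EOF from the parent python process,
      // but only when we were launched by the PySpark shell.
      val isPySparkShell = sys.env.get("PYSPARK_SHELL").exists(_ == "1")  // variable name is an assumption
      if (isPySparkShell) {
        while (System.in.read() != -1) {}  // block until the parent closes our stdin
      }
      ```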
      
      Thanks davies for reporting this.
      
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #2170 from andrewor14/bootstrap-hotfix and squashes the following commits:
      
      42963f5 [Andrew Or] Do not wait for EOF unless this is the pyspark shell
      dafe3434
  4. Aug 27, 2014
    • Rob O'Dwyer's avatar
      SPARK-3265 Allow using custom ipython executable with pyspark · f38fab97
      Rob O'Dwyer authored
      Although you can make pyspark use ipython with `IPYTHON=1`, and also change the python executable with `PYSPARK_PYTHON=...`, you can't use both at the same time because it hardcodes the default ipython script.
      
      This makes it use the `PYSPARK_PYTHON` variable if present and fall back to default python, similarly to how the default python executable is handled.
      
      So you can use a custom ipython like so:
      `PYSPARK_PYTHON=./anaconda/bin/ipython IPYTHON_OPTS="notebook" pyspark`
      
      Author: Rob O'Dwyer <odwyerrob@gmail.com>
      
      Closes #2167 from robbles/patch-1 and squashes the following commits:
      
      d98e8a9 [Rob O'Dwyer] Allow using custom ipython executable with pyspark
      f38fab97
    • scwf's avatar
      [SPARK-3271] delete unused methods in Utils · b86277c1
      scwf authored
      Delete unused methods in Utils.
      
      Author: scwf <wangfei1@huawei.com>
      
      Closes #2160 from scwf/delete-no-use-method and squashes the following commits:
      
      d8f6b0d [scwf] delete no use method in Utils
      b86277c1
    • Matthew Farrellee's avatar
      Add line continuation for script to work w/ py2.7.5 · 64d8ecbb
      Matthew Farrellee authored
      Error was -
      
      $ SPARK_HOME=$PWD/dist ./dev/create-release/generate-changelist.py
        File "./dev/create-release/generate-changelist.py", line 128
          if day < SPARK_REPO_CHANGE_DATE1 or
                                            ^
      SyntaxError: invalid syntax
      
      Author: Matthew Farrellee <matt@redhat.com>
      
      Closes #2139 from mattf/master-fix-generate-changelist.py-0 and squashes the following commits:
      
      6b3a900 [Matthew Farrellee] Add line continuation for script to work w/ py2.7.5
      64d8ecbb
    • Patrick Wendell's avatar
      8712653f
    • Michael Armbrust's avatar
      [SPARK-3235][SQL] Ensure in-memory tables don't always broadcast. · 7d2a7a91
      Michael Armbrust authored
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #2147 from marmbrus/inMemDefaultSize and squashes the following commits:
      
      5390360 [Michael Armbrust] Merge remote-tracking branch 'origin/master' into inMemDefaultSize
      14204d3 [Michael Armbrust] Set the context before creating SparkLogicalPlans.
      8da4414 [Michael Armbrust] Make sure we throw errors when leaf nodes fail to provide statistcs
      18ce029 [Michael Armbrust] Ensure in-memory tables don't always broadcast.
      7d2a7a91
    • luogankun's avatar
      [SPARK-3065][SQL] Add locale setting to fix results do not match for... · 65253502
      luogankun authored
      [SPARK-3065][SQL] Add locale setting to fix mismatched results for udf_unix_timestamp with format "yyyy MMM dd h:mm:ss a" when run in a TimeZone other than "America/Los_Angeles" in HiveCompatibilitySuite
      
      The udf_unix_timestamp test case of org.apache.spark.sql.hive.execution.HiveCompatibilitySuite throws an error
      when run with a TimeZone other than "America/Los_Angeles". [https://issues.apache.org/jira/browse/SPARK-3065]
      This change adds a locale setting in the beforeAll and afterAll methods to fix the HiveCompatibilitySuite test case.
      
      Author: luogankun <luogankun@gmail.com>
      
      Closes #1968 from luogankun/SPARK-3065 and squashes the following commits:
      
      c167832 [luogankun] [SPARK-3065][SQL] Add Locale setting to HiveCompatibilitySuite
      0a25e3a [luogankun] [SPARK-3065][SQL] Add Locale setting to HiveCompatibilitySuite
      65253502
    • Aaron Davidson's avatar
      [SQL] [SPARK-3236] Reading Parquet tables from Metastore mangles location · cc275f4b
      Aaron Davidson authored
      Currently we do `relation.hiveQlTable.getDataLocation.getPath`, which returns the path-part of the URI (e.g., "s3n://my-bucket/my-path" => "/my-path"). We should do `relation.hiveQlTable.getDataLocation.toString` instead, as a URI's toString returns a faithful representation of the full URI, which can later be passed into a Hadoop Path.
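      The difference is easy to see with a plain java.net.URI (used here just to illustrate; the actual code deals with the Hive table's data location):

      ```scala
      import java.net.URI

      val location = new URI("s3n://my-bucket/my-path")
      location.getPath    // "/my-path"                -- scheme and bucket are lost
      location.toString   // "s3n://my-bucket/my-path" -- full URI, safe to turn into a Hadoop Path
      ```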
      
      Author: Aaron Davidson <aaron@databricks.com>
      
      Closes #2150 from aarondav/parquet-location and squashes the following commits:
      
      459f72c [Aaron Davidson] [SQL] [SPARK-3236] Reading Parquet tables from Metastore mangles location
      cc275f4b
    • viirya's avatar
      [SPARK-3252][SQL] Add missing condition for test · 28d41d62
      viirya authored
      According to the text message, both relations should be tested. So add the missing condition.
      
      Author: viirya <viirya@gmail.com>
      
      Closes #2159 from viirya/fix_test and squashes the following commits:
      
      b1c0f52 [viirya] add missing condition.
      28d41d62
    • Andrew Or's avatar
      [SPARK-3243] Don't use stale spark-driver.* system properties · 63a053ab
      Andrew Or authored
      If we set both `spark.driver.extraClassPath` and `--driver-class-path`, then the latter correctly overrides the former. However, the value of the system property `spark.driver.extraClassPath` still uses the former, which is actually not added to the class path. This may cause some confusion...
      
      Of course, this also affects other options (i.e. java options, library path, memory...).
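      A hedged sketch of the intent (values and names below are made up, not the actual SparkSubmit code): the system property should reflect whichever value actually wins.

      ```scala
      // Illustrative only: prefer the command-line setting, fall back to the conf entry,
      // and keep the system property in sync with the value that is really on the class path.
      val cmdLineClassPath: Option[String] = Some("/opt/jars/*")  // e.g. from --driver-class-path (made up)
      val confClassPath: Option[String]    = Some("/old/jars/*")  // e.g. spark.driver.extraClassPath (made up)

      val effectiveClassPath = cmdLineClassPath.orElse(confClassPath)
      effectiveClassPath.foreach(cp => sys.props("spark.driver.extraClassPath") = cp)
      ```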
      
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #2154 from andrewor14/driver-submit-configs-fix and squashes the following commits:
      
      17ec6fc [Andrew Or] Fix tests
      0140836 [Andrew Or] Don't forget spark.driver.memory
      e39d20f [Andrew Or] Also set spark.driver.extra* configs in client mode
      63a053ab