  1. Jul 06, 2015
    • [SPARK-8837][SPARK-7114][SQL] support using keyword in column name · 0e194645
      Wenchen Fan authored
      Author: Wenchen Fan <cloud0fan@outlook.com>
      
      Closes #7237 from cloud-fan/parser and squashes the following commits:
      
      e7b49bb [Wenchen Fan] support using keyword in column name
      0e194645
    • [SPARK-8124] [SPARKR] Created more examples on SparkR DataFrames · 293225e0
      Daniel Emaasit (PhD Student) authored
      Here are more examples of SparkR DataFrames, including creating a Spark Context and a SQL context, loading data, and simple data manipulation.
      
      Author: Daniel Emaasit (PhD Student) <daniel.emaasit@gmail.com>
      
      Closes #6668 from Emaasit/dan-dev and squashes the following commits:
      
      3a97867 [Daniel Emaasit (PhD Student)] Used fewer rows for createDataFrame
      f7227f9 [Daniel Emaasit (PhD Student)] Using command line arguments
      a550f70 [Daniel Emaasit (PhD Student)] Used base R functions
      33f9882 [Daniel Emaasit (PhD Student)] Renamed file
      b6603e3 [Daniel Emaasit (PhD Student)] changed "Describe" function to "describe"
      90565dd [Daniel Emaasit (PhD Student)] Deleted the getting-started file
      b95a103 [Daniel Emaasit (PhD Student)] Deleted this file
      cc55cd8 [Daniel Emaasit (PhD Student)] combined all the code into one .R file
      c6933af [Daniel Emaasit (PhD Student)] changed variable name to SQLContext
      8e0fe14 [Daniel Emaasit (PhD Student)] provided two options for creating DataFrames
      2653573 [Daniel Emaasit (PhD Student)] Updates to a comment and variable name
      275b787 [Daniel Emaasit (PhD Student)] Added the Apache License at the top of the file
      2e8f724 [Daniel Emaasit (PhD Student)] Added the Apache License at the top of the file
      486f44e [Daniel Emaasit (PhD Student)] Added the Apache License at the file
      d705112 [Daniel Emaasit (PhD Student)] Created more examples on SparkR DataFrames
      293225e0
    • [SPARK-8841] [SQL] Fix partition pruning percentage log message · 39e4e7e4
      Steve Lindemann authored
      When pruning partitions for a query plan, a message is logged indicating how many partitions were selected based on predicate criteria, and what percent were pruned.
      
      The current release erroneously uses `1 - total/selected` to compute this quantity, leading to nonsense messages like "pruned -1000% partitions". The fix is simple and obvious.
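      For reference, a minimal sketch of the corrected arithmetic (plain Scala with illustrative numbers, not the actual table-scan code):

          // Percent pruned should be computed against the total number of partitions,
          // not against the number of selected partitions.
          val total = 20
          val selected = 5
          val wrong = (1.0 - total.toDouble / selected) * 100   // -300.0 -> "pruned -300% partitions"
          val fixed = (1.0 - selected.toDouble / total) * 100   // 75.0   -> "pruned 75% partitions"
          println(f"Selected $selected partitions out of $total, pruned $fixed%.2f%% partitions.")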
      
      Author: Steve Lindemann <steve.lindemann@engineersgatelp.com>
      
      Closes #7227 from srlindemann/master and squashes the following commits:
      
      c788061 [Steve Lindemann] fix percentPruned log message
      39e4e7e4
    • [SPARK-8831][SQL] Support AbstractDataType in TypeCollection. · 86768b7b
      Reynold Xin authored
      Otherwise it is impossible to declare an expression supporting DecimalType.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #7232 from rxin/typecollection-adt and squashes the following commits:
      
      934d3d1 [Reynold Xin] [SPARK-8831][SQL] Support AbstractDataType in TypeCollection.
      86768b7b
  2. Jul 05, 2015
    • [SQL][Minor] Update the DataFrame API for encode/decode · 6d0411b4
      Cheng Hao authored
      This is a follow-up of #6843.
      
      Author: Cheng Hao <hao.cheng@intel.com>
      
      Closes #7230 from chenghao-intel/str_funcs2_followup and squashes the following commits:
      
      52cc553 [Cheng Hao] update the code as comment
      6d0411b4
    • [SPARK-8549] [SPARKR] Fix the line length of SparkR · a0cb111b
      Yu ISHIKAWA authored
      [[SPARK-8549] Fix the line length of SparkR - ASF JIRA](https://issues.apache.org/jira/browse/SPARK-8549)
      
      Author: Yu ISHIKAWA <yuu.ishikawa@gmail.com>
      
      Closes #7204 from yu-iskw/SPARK-8549 and squashes the following commits:
      
      6fb131a [Yu ISHIKAWA] Fix the typo
      1737598 [Yu ISHIKAWA] [SPARK-8549][SparkR] Fix the line length of SparkR
      a0cb111b
    • [SPARK-7137] [ML] Update SchemaUtils checkInputColumn to print more info if needed · f9c448dc
      Joshi authored
      Author: Joshi <rekhajoshm@gmail.com>
      Author: Rekha Joshi <rekhajoshm@gmail.com>
      
      Closes #5992 from rekhajoshm/fix/SPARK-7137 and squashes the following commits:
      
      8c42b57 [Joshi] update checkInputColumn to print more info if needed
      33ddd2e [Joshi] update checkInputColumn to print more info if needed
      acf3e17 [Joshi] update checkInputColumn to print more info if needed
      8993c0e [Joshi] SPARK-7137: Add checkInputColumn back to Params and print more info
      e3677c9 [Rekha Joshi] Merge pull request #1 from apache/master
      f9c448dc
    • [MINOR] [SQL] Minor fix for CatalystSchemaConverter · 2b820f2a
      Liang-Chi Hsieh authored
      ping liancheng
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      
      Closes #7224 from viirya/few_fix_catalystschema and squashes the following commits:
      
      d994330 [Liang-Chi Hsieh] Minor fix for CatalystSchemaConverter.
      2b820f2a
  3. Jul 04, 2015
    • [SPARK-8822][SQL] clean up type checking in math.scala. · c991ef5a
      Reynold Xin authored
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #7220 from rxin/SPARK-8822 and squashes the following commits:
      
      0cda076 [Reynold Xin] Test cases.
      22d0463 [Reynold Xin] Fixed type precedence.
      beb2a97 [Reynold Xin] [SPARK-8822][SQL] clean up type checking in math.scala.
      c991ef5a
    • [SQL] More unit tests for implicit type cast & add simpleString to AbstractDataType. · 347cab85
      Reynold Xin authored
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #7221 from rxin/implicit-cast-tests and squashes the following commits:
      
      64b13bd [Reynold Xin] Fixed a bug ..
      489b732 [Reynold Xin] [SQL] More unit tests for implicit type cast & add simpleString to AbstractDataType.
      347cab85
    • 48f7aed6
      Reynold Xin authored
    • [SPARK-8270][SQL] levenshtein distance · 6b3574e6
      Tarek Auel authored
      Jira: https://issues.apache.org/jira/browse/SPARK-8270
      
      Info: I cannot build the latest master; it gets stuck during the build process: `[INFO] Dependency-reduced POM written at: /Users/tarek/test/spark/bagel/dependency-reduced-pom.xml`
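      A quick usage sketch of the new expression (assuming it is registered under the SQL name `levenshtein` and that an existing `SQLContext` named `sqlContext` is available):

          // Levenshtein edit distance between two string literals.
          val df = sqlContext.sql("SELECT levenshtein('kitten', 'sitting') AS dist")
          df.show()  // expected distance: 3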
      
      Author: Tarek Auel <tarek.auel@googlemail.com>
      
      Closes #7214 from tarekauel/SPARK-8270 and squashes the following commits:
      
      ab348b9 [Tarek Auel] Merge branch 'master' into SPARK-8270
      a2ad318 [Tarek Auel] [SPARK-8270] changed order of fields
      d91b12c [Tarek Auel] [SPARK-8270] python fix
      adbd075 [Tarek Auel] [SPARK-8270] fixed typo
      23185c9 [Tarek Auel] [SPARK-8270] levenshtein distance
      6b3574e6
    • [SPARK-8238][SPARK-8239][SPARK-8242][SPARK-8243][SPARK-8268][SQL]Add... · f35b0c34
      Cheng Hao authored
      [SPARK-8238][SPARK-8239][SPARK-8242][SPARK-8243][SPARK-8268][SQL]Add ascii/base64/unbase64/encode/decode functions
      
      Add `ascii`,`base64`,`unbase64`,`encode` and `decode` expressions.
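      A hedged usage sketch of the new expressions (assumes an existing `SQLContext` named `sqlContext`; the expected results noted in the comment are the standard ASCII/Base64 values, not output copied from this patch):

          // Expected results: 65, 'U3Bhcms=', 'Spark'.
          sqlContext.sql(
            "SELECT ascii('A'), base64(encode('Spark', 'UTF-8')), decode(unbase64('U3Bhcms='), 'UTF-8')"
          ).show()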
      
      Author: Cheng Hao <hao.cheng@intel.com>
      
      Closes #6843 from chenghao-intel/str_funcs2 and squashes the following commits:
      
      78dee7d [Cheng Hao] base 64 -> base64
      9d6f9f4 [Cheng Hao] remove the toString method for expressions
      ed5c19c [Cheng Hao] update code as comments
      96170fc [Cheng Hao] scalastyle issues
      e2df768 [Cheng Hao] remove the unused import
      491ce7b [Cheng Hao] add ascii/base64/unbase64/encode/decode functions
      f35b0c34
    • [SPARK-8777] [SQL] Add random data generator test utilities to Spark SQL · f32487b7
      Josh Rosen authored
      This commit adds a set of random data generation utilities to Spark SQL, for use in its own unit tests.
      
      - `RandomDataGenerator.forType(DataType)` returns an `Option[() => Any]` that, if defined, contains a function for generating random values for the given DataType.  The random values use the external representations for the given DataType (for example, for DateType we return `java.sql.Date` instances instead of longs).
      - `DateTypeTestUtilities` defines some convenience fields for looping over instances of data types.  For example, `numericTypes` holds `DataType` instances for all supported numeric types.  These constants will help us to raise the level of abstraction in our tests.  For example, it's now very easy to write a test which is parameterized by all common data types.
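      A rough sketch of how the generator described above might be used in a test (based only on the signature given in this message; the real `forType` may take additional arguments such as nullability or a seed):

          import org.apache.spark.sql.types.IntegerType

          // RandomDataGenerator is a test-only utility in Spark SQL's test sources.
          // forType returns Option[() => Any]; when defined, each call to the function
          // produces a random value in the type's external representation.
          val maybeGen: Option[() => Any] = RandomDataGenerator.forType(IntegerType)
          maybeGen.foreach { gen =>
            Seq.fill(5)(gen()).foreach(println)
          }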
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #7176 from JoshRosen/sql-random-data-generators and squashes the following commits:
      
      f71634d [Josh Rosen] Roll back ScalaCheck usage
      e0d7d49 [Josh Rosen] Bump ScalaCheck version in LICENSE
      89d86b1 [Josh Rosen] Bump ScalaCheck version.
      0c20905 [Josh Rosen] Initial attempt at using ScalaCheck.
      b55875a [Josh Rosen] Generate doubles and floats over entire possible range.
      5acdd5c [Josh Rosen] Infinity and NaN are interesting.
      ab76cbd [Josh Rosen] Move code to Catalyst package.
      d2b4a4a [Josh Rosen] Add random data generator test utilities to Spark SQL.
      f32487b7
    • [SPARK-8192] [SPARK-8193] [SQL] udf current_date, current_timestamp · 9fb6b832
      Daoyuan Wang authored
      Author: Daoyuan Wang <daoyuan.wang@intel.com>
      
      Closes #6985 from adrian-wang/udfcurrent and squashes the following commits:
      
      6a20b64 [Daoyuan Wang] remove codegen and add lazy in testsuite
      27c9f95 [Daoyuan Wang] refine tests..
      e11ae75 [Daoyuan Wang] refine tests
      61ed3d5 [Daoyuan Wang] add in functions
      98e8550 [Daoyuan Wang] fix sytle
      427d9dc [Daoyuan Wang] add tests and codegen
      0b69a1f [Daoyuan Wang] udf current
      9fb6b832
    • [SPARK-8572] [SQL] Type coercion for ScalaUDFs · 4a22bce8
      Cheolsoo Park authored
      Implemented type coercion for UDF arguments in Scala. The changes include:
      * Add `with ExpectsInputTypes` to the `ScalaUDF` class.
      * Pass down argument type info from `UDFRegistration` and `functions`.
      
      With this patch, the example query in [SPARK-8572](https://issues.apache.org/jira/browse/SPARK-8572) no longer throws a type cast error at runtime.
      
      Also added a unit test to `UDFSuite` in which a decimal type is passed to a udf that expects an int.
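      A minimal sketch of the scenario this fixes (table, column, and UDF names are made up; before the patch, passing a decimal column to an Int-typed Scala UDF could fail with a cast error at runtime):

          // Assumes an existing SQLContext named sqlContext.
          import sqlContext.implicits._

          // A Scala UDF that expects an Int argument.
          sqlContext.udf.register("plusOne", (x: Int) => x + 1)

          // A decimal column; with type coercion for ScalaUDFs the argument is cast
          // to Int by the analyzer instead of blowing up during evaluation.
          val df = Seq(Tuple1(BigDecimal(1)), Tuple1(BigDecimal(2))).toDF("d")
          df.registerTempTable("t")
          sqlContext.sql("SELECT plusOne(d) FROM t").show()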
      
      Author: Cheolsoo Park <cheolsoop@netflix.com>
      
      Closes #7203 from piaozhexiu/SPARK-8572 and squashes the following commits:
      
      2d0ed15 [Cheolsoo Park] Incorporate comments
      dce1efd [Cheolsoo Park] Fix unit tests and update the codegen script
      066deed [Cheolsoo Park] Type coercion for udf inputs
      4a22bce8
  4. Jul 03, 2015
    • [SPARK-8810] [SQL] Added several UDF unit tests for Spark SQL · e92c24d3
      Spiro Michaylov authored
      One test for each of the GROUP BY, WHERE and HAVING clauses, and one that combines all three with an additional UDF in the SELECT.
      
      (Since this is my first attempt at contributing to SPARK, meta-level guidance on anything I've screwed up would be greatly appreciated, whether important or minor.)
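      A condensed sketch of the kind of query being exercised (table name, columns, and UDFs are made up for illustration):

          // Assumes an existing SQLContext named sqlContext with a registered table
          // "orders" that has columns id and amount.
          sqlContext.udf.register("bucket", (amount: Double) => (amount / 100).toInt)
          sqlContext.udf.register("isBig", (amount: Double) => amount > 500)
          sqlContext.udf.register("label", (b: Int) => s"bucket-$b")

          sqlContext.sql("""
            SELECT label(bucket(amount)) AS lbl, SUM(amount) AS total
            FROM orders
            WHERE isBig(amount)
            GROUP BY bucket(amount), label(bucket(amount))
            HAVING SUM(amount) > 1000
          """).show()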
      
      Author: Spiro Michaylov <spiro@michaylov.com>
      
      Closes #7207 from spirom/udf-test-branch and squashes the following commits:
      
      6bbba9e [Spiro Michaylov] Responded to review comments on UDF unit tests
      1a3c5ff [Spiro Michaylov] Added several UDF unit tests for Spark SQL
      e92c24d3
    • [SPARK-7401] [MLLIB] [PYSPARK] Vectorize dot product and sq_dist between... · f0fac2aa
      MechCoder authored
      [SPARK-7401] [MLLIB] [PYSPARK] Vectorize dot product and sq_dist between SparseVector and DenseVector
      
      Currently we iterate over the indices, which can instead be vectorized.
      
      Author: MechCoder <manojkumarsivaraj334@gmail.com>
      
      Closes #5946 from MechCoder/spark-7203 and squashes the following commits:
      
      034d086 [MechCoder] Vectorize dot calculation for numpy arrays for ndim=2
      bce2b07 [MechCoder] fix doctest
      fcad0a3 [MechCoder] Remove type checks for list, pyarray etc
      0ee5dd4 [MechCoder] Add tests and other isinstance changes
      e5f1de0 [MechCoder] [SPARK-7401] Vectorize dot product and sq_dist
      f0fac2aa
    • [SPARK-8226] [SQL] Add function shiftrightunsigned · ab535b9a
      zhichao.li authored
      Author: zhichao.li <zhichao.li@intel.com>
      
      Closes #7035 from zhichao-li/shiftRightUnsigned and squashes the following commits:
      
      6bcca5a [zhichao.li] change coding style
      3e9f5ae [zhichao.li] python style
      d85ae0b [zhichao.li] add shiftrightunsigned
      ab535b9a
    • [SPARK-8809][SQL] Remove ConvertNaNs analyzer rule. · 2848f4da
      Reynold Xin authored
      "NaN" from string to double is already handled by Cast expression itself.
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #7206 from rxin/convertnans and squashes the following commits:
      
      3d99c33 [Reynold Xin] [SPARK-8809][SQL] Remove ConvertNaNs analyzer rule.
      2848f4da
    • [SPARK-8803] handle special characters in elements in crosstab · 9b23e92c
      Burak Yavuz authored
      cc rxin
      
      Having backticks or null as elements causes problems.
      Since elements become column names, we have to drop the backticks from the elements, as backticks are special characters.
      Having null throws exceptions, so we replace nulls with empty strings.

      Handling of backticks should be improved for 1.5.
      
      Author: Burak Yavuz <brkyvz@gmail.com>
      
      Closes #7201 from brkyvz/weird-ct-elements and squashes the following commits:
      
      e06b840 [Burak Yavuz] fix scalastyle
      93a0d3f [Burak Yavuz] added tests for NaN and Infinity
      9dba6ce [Burak Yavuz] address cr1
      db71dbd [Burak Yavuz] handle special characters in elements in crosstab
      9b23e92c
    • [SPARK-8776] Increase the default MaxPermSize · f743c79a
      Yin Huai authored
      I am increasing the perm gen size to 256m.
      
      https://issues.apache.org/jira/browse/SPARK-8776
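      If you need a different value, a hedged illustration of raising the limit yourself for executors (`spark.executor.extraJavaOptions` is a standard Spark setting, not something introduced by this patch; PermGen only exists on Java 7 and earlier):

          import org.apache.spark.SparkConf

          // Pass an explicit PermGen size to executor JVMs.
          val conf = new SparkConf()
            .setAppName("permgen-example")
            .set("spark.executor.extraJavaOptions", "-XX:MaxPermSize=512m")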
      
      Author: Yin Huai <yhuai@databricks.com>
      
      Closes #7196 from yhuai/SPARK-8776 and squashes the following commits:
      
      60901b4 [Yin Huai] Fix test.
      d44b713 [Yin Huai] Make sparkShell and hiveConsole use 256m PermGen size.
      30aaf8e [Yin Huai] Increase the default PermGen size to 256m.
      f743c79a
  5. Jul 02, 2015
    • [SPARK-8801][SQL] Support TypeCollection in ExpectsInputTypes · a59d14f6
      Reynold Xin authored
      This patch adds a new TypeCollection AbstractDataType that can be used by expressions to specify more than one expected input type.
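      To make the idea concrete, here is a self-contained toy model of the concept in plain Scala (Spark's real `AbstractDataType`/`TypeCollection` live in `org.apache.spark.sql.types` and are internal, so this is an illustration rather than the actual API):

          // A type specification that an expression could use to accept any one of several input types.
          sealed trait ToyAbstractType { def acceptsType(t: String): Boolean }

          case class ToyConcreteType(name: String) extends ToyAbstractType {
            def acceptsType(t: String): Boolean = t == name
          }

          // A collection accepts an input type if any of its member types does.
          case class ToyTypeCollection(types: ToyAbstractType*) extends ToyAbstractType {
            def acceptsType(t: String): Boolean = types.exists(_.acceptsType(t))
          }

          val stringOrBinary = ToyTypeCollection(ToyConcreteType("string"), ToyConcreteType("binary"))
          println(stringOrBinary.acceptsType("string"))  // true
          println(stringOrBinary.acceptsType("int"))     // false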
      
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #7202 from rxin/type-collection and squashes the following commits:
      
      c714ca1 [Reynold Xin] Fixed style.
      a0c0d12 [Reynold Xin] Fixed bugs and unit tests.
      d8b8ae7 [Reynold Xin] Added TypeCollection.
      a59d14f6
    • [SPARK-8501] [SQL] Avoids reading schema from empty ORC files · 20a4d7db
      Cheng Lian authored
      ORC writes an empty schema (`struct<>`) to ORC files containing zero rows.  This is OK for Hive since the table schema is managed by the metastore. But it causes trouble when reading raw ORC files via Spark SQL, since we have to discover the schema from the files.
      
      Notice that the ORC data source always avoids writing empty ORC files, but it's still problematic when reading Hive tables which contain empty part-files.
      
      Author: Cheng Lian <lian@databricks.com>
      
      Closes #7199 from liancheng/spark-8501 and squashes the following commits:
      
      bb8cd95 [Cheng Lian] Addresses comments
      a290221 [Cheng Lian] Avoids reading schema from empty ORC files
      20a4d7db
    • Minor style fix for the previous commit. · dfd8bac8
      Reynold Xin authored
      dfd8bac8
    • [SPARK-8213][SQL]Add function factorial · 1a7a7d7d
      zhichao.li authored
      Author: zhichao.li <zhichao.li@intel.com>
      
      Closes #6822 from zhichao-li/factorial and squashes the following commits:
      
      26edf4f [zhichao.li] add factorial
      1a7a7d7d
    • [SPARK-6980] [CORE] Akka timeout exceptions indicate which conf controls them (RPC Layer) · aa7bbc14
      Bryan Cutler authored
      Latest changes after refactoring to the RPC layer.  I rebased against trunk to make sure to get any recent changes since it had been a while.  I wasn't crazy about the name `ConfigureTimeout`, and `RpcTimeout` seemed to fit better, but I'm open to suggestions!

      I ran most of the tests and they pass, but others would get stuck with "WARN TaskSchedulerImpl: Initial job has not accepted any resources".  I think it's just my machine, so I thought I would push what I have anyway.
      
      Still left to do:
      * I only added a couple unit tests so far, there are probably some more cases to test
      * Make sure all uses require a `RpcTimeout`
      * Right now, both the `ask` and `Await.result` use the same timeout, should we differentiate between these in the TimeoutException message?
      * I wrapped `Await.result` in `RpcTimeout`, should we also wrap `Await.ready`?
      * Proper scoping of classes and methods
      
      hardmettle, feel free to help out with any of these!
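      A hedged sketch of the resulting usage pattern (names taken from this message; `RpcTimeout` is internal to Spark, so this is illustrative rather than a user-facing API):

          import scala.concurrent.Future
          import org.apache.spark.SparkConf
          import org.apache.spark.rpc.RpcTimeout

          // Build a timeout from a configuration property so that a TimeoutException
          // can report which property controls it.
          val conf = new SparkConf()
          val askTimeout = RpcTimeout(conf, "spark.rpc.askTimeout", "120s")

          // Stand-in for an RPC ask; in Spark this would be endpointRef.ask(...).
          def ask(): Future[Boolean] = Future.successful(true)
          val reply: Boolean = askTimeout.awaitResult(ask())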
      
      Author: Bryan Cutler <bjcutler@us.ibm.com>
      Author: Harsh Gupta <harsh@Harshs-MacBook-Pro.local>
      Author: BryanCutler <cutlerb@gmail.com>
      
      Closes #6205 from BryanCutler/configTimeout-6980 and squashes the following commits:
      
      46c8d48 [Bryan Cutler] [SPARK-6980] Changed RpcEnvSuite test to never reply instead of just sleeping, to avoid possible sync issues
      06afa53 [Bryan Cutler] [SPARK-6980] RpcTimeout class extends Serializable, was causing error in MasterSuite
      7bb70f1 [Bryan Cutler] Merge branch 'master' into configTimeout-6980
      dbd5f73 [Bryan Cutler] [SPARK-6980] Changed RpcUtils askRpcTimeout and lookupRpcTimeout scope to private[spark] and improved deprecation warning msg
      4e89c75 [Bryan Cutler] [SPARK-6980] Missed one usage of deprecated RpcUtils.askTimeout in YarnSchedulerBackend although it is not being used, and fixed SparkConfSuite UT to not use deprecated RpcUtils functions
      6a1c50d [Bryan Cutler] [SPARK-6980] Minor cleanup of test case
      7f4d78e [Bryan Cutler] [SPARK-6980] Fixed scala style checks
      287059a [Bryan Cutler] [SPARK-6980] Removed extra import in AkkaRpcEnvSuite
      3d8b1ff [Bryan Cutler] [SPARK-6980] Cleaned up imports in AkkaRpcEnvSuite
      3a168c7 [Bryan Cutler] [SPARK-6980] Rewrote Akka RpcTimeout UTs in RpcEnvSuite
      7636189 [Bryan Cutler] [SPARK-6980] Fixed call to askWithReply in DAGScheduler to use RpcTimeout - this was being compiled by auto-tupling and changing the message type of BlockManagerHeartbeat
      be11c4e [Bryan Cutler] Merge branch 'master' into configTimeout-6980
      039afed [Bryan Cutler] [SPARK-6980] Corrected import organization
      218aa50 [Bryan Cutler] [SPARK-6980] Corrected issues from feedback
      fadaf6f [Bryan Cutler] [SPARK-6980] Put back in deprecated RpcUtils askTimeout and lookupTimout to fix MiMa errors
      fa6ed82 [Bryan Cutler] [SPARK-6980] Had to increase timeout on positive test case because a processor slowdown could trigger an Future TimeoutException
      b05d449 [Bryan Cutler] [SPARK-6980] Changed constructor to use val duration instead of getter function, changed name of string property from conf to timeoutProp for consistency
      c6cfd33 [Bryan Cutler] [SPARK-6980] Changed UT ask message timeout to explicitly intercept a SparkException
      1394de6 [Bryan Cutler] [SPARK-6980] Moved MessagePrefix to createRpcTimeoutException directly
      1517721 [Bryan Cutler] [SPARK-6980] RpcTimeout object scope should be private[spark]
      2206b4d [Bryan Cutler] [SPARK-6980] Added unit test for ask then immediat awaitReply
      1b9beab [Bryan Cutler] [SPARK-6980] Cleaned up import ordering
      08f5afc [Bryan Cutler] [SPARK-6980] Added UT for constructing RpcTimeout with default value
      d3754d1 [Bryan Cutler] [SPARK-6980] Added akkaConf to prevent dead letter logging
      995d196 [Bryan Cutler] [SPARK-6980] Cleaned up import ordering, comments, spacing from PR feedback
      7774d56 [Bryan Cutler] [SPARK-6980] Cleaned up UT imports
      4351c48 [Bryan Cutler] [SPARK-6980] Added UT for addMessageIfTimeout, cleaned up UTs
      1607a5f [Bryan Cutler] [SPARK-6980] Changed addMessageIfTimeout to PartialFunction, cleanup from PR comments
      2f94095 [Bryan Cutler] [SPARK-6980] Added addMessageIfTimeout for when a Future is completed with TimeoutException
      235919b [Bryan Cutler] [SPARK-6980] Resolved conflicts after master merge
      c07d05c [Bryan Cutler] Merge branch 'master' into configTimeout-6980-tmp
      b7fb99f [BryanCutler] Merge pull request #2 from hardmettle/configTimeoutUpdates_6980
      4be3a8d [Harsh Gupta] Modifying loop condition to find property match
      0ee5642 [Harsh Gupta] Changing the loop condition to halt at the first match in the property list for RpcEnv exception catch
      f74064d [Harsh Gupta] Retrieving properties from property list using iterator and while loop instead of chained functions
      a294569 [Bryan Cutler] [SPARK-6980] Added creation of RpcTimeout with Seq of property keys
      23d2f26 [Bryan Cutler] [SPARK-6980] Fixed await result not being handled by RpcTimeout
      49f9f04 [Bryan Cutler] [SPARK-6980] Minor cleanup and scala style fix
      5b59a44 [Bryan Cutler] [SPARK-6980] Added some RpcTimeout unit tests
      78a2c0a [Bryan Cutler] [SPARK-6980] Using RpcTimeout.awaitResult for future in AppClient now
      97523e0 [Bryan Cutler] [SPARK-6980] Akka ask timeout description refactored to RPC layer
      aa7bbc14
    • [SPARK-8782] [SQL] Fix code generation for ORDER BY NULL · d9838196
      Josh Rosen authored
      This fixes code generation for queries containing `ORDER BY NULL`.  Previously, the generated code would fail to compile.
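      The failing query shape is easy to reproduce (a hedged example; assumes an existing `SQLContext` named `sqlContext` with a registered table `t`):

          // Before this fix, the code generated for the constant NULL sort key did not compile.
          sqlContext.sql("SELECT * FROM t ORDER BY NULL").show()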
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #7179 from JoshRosen/generate-order-fixes and squashes the following commits:
      
      6ef49a6 [Josh Rosen] Fix ORDER BY NULL
      0036696 [Josh Rosen] Add regression test for SPARK-8782 (ORDER BY NULL)
      d9838196
    • Revert "[SPARK-8784] [SQL] Add Python API for hex and unhex" · e589e71a
      Reynold Xin authored
      This reverts commit fc7aebd9.
      e589e71a
    • [SPARK-7104] [MLLIB] Support model save/load in Python's Word2Vec · 488bad31
      Yu ISHIKAWA authored
      Author: Yu ISHIKAWA <yuu.ishikawa@gmail.com>
      
      Closes #6821 from yu-iskw/SPARK-7104 and squashes the following commits:
      
      975136b [Yu ISHIKAWA] Organize import
      0ef58b6 [Yu ISHIKAWA] Use rmtree, instead of removedirs
      cb21653 [Yu ISHIKAWA] Add an explicit type for `Word2VecModelWrapper.save`
      1d468ef [Yu ISHIKAWA] [SPARK-7104][MLlib] Support model save/load in Python's Word2Vec
      488bad31
    • [SPARK-8784] [SQL] Add Python API for hex and unhex · fc7aebd9
      Davies Liu authored
      Also improve the performance of hex/unhex
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #7181 from davies/hex and squashes the following commits:
      
      f032fbb [Davies Liu] Merge branch 'hex' of github.com:davies/spark into hex
      49e325f [Davies Liu] Merge branch 'master' of github.com:apache/spark into hex
      b31fc9a [Davies Liu] Update math.scala
      25156b7 [Davies Liu] address comments and fix test
      c3af78c [Davies Liu] address commments
      1a24082 [Davies Liu] Add Python API for hex and unhex
      fc7aebd9
    • [SPARK-3382] [MLLIB] GradientDescent convergence tolerance · 7d9cc967
      lewuathe authored
      GradientDescent can receive a convergence tolerance value. The default value is 0.0.
      When the loss value becomes less than the tolerance set by the user, iteration is terminated.
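      A sketch of setting the new tolerance through an algorithm's optimizer (the setter name `setConvergenceTol` is assumed from the JIRA; the default of 0.0 effectively disables the check):

          import org.apache.spark.mllib.regression.LinearRegressionWithSGD

          // Stop iterating once the change tracked by GradientDescent drops below 1e-3.
          val lr = new LinearRegressionWithSGD()
          lr.optimizer
            .setNumIterations(100)
            .setStepSize(0.1)
            .setConvergenceTol(1e-3)
          // val model = lr.run(trainingData)  // trainingData: RDD[LabeledPoint]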
      
      Author: lewuathe <lewuathe@me.com>
      
      Closes #3636 from Lewuathe/gd-convergence-tolerance and squashes the following commits:
      
      0b8a9a8 [lewuathe] Update doc
      ce91b15 [lewuathe] Merge branch 'master' into gd-convergence-tolerance
      4f22c2b [lewuathe] Modify based on SPARK-1503
      5e47b82 [lewuathe] Merge branch 'master' into gd-convergence-tolerance
      abadb7e [lewuathe] Fix LassoSuite
      8fadebd [lewuathe] Fix failed unit tests
      ee5de46 [lewuathe] Merge branch 'master' into gd-convergence-tolerance
      8313ba2 [lewuathe] Fix styles
      0ead94c [lewuathe] Merge branch 'master' into gd-convergence-tolerance
      a94cfd5 [lewuathe] Modify some styles
      3aef0a2 [lewuathe] Modify converged logic to do relative comparison
      f7b19d5 [lewuathe] [SPARK-3382] Clarify comparison logic
      e6c9cd2 [lewuathe] [SPARK-3382] Compare with the diff of solution vector
      4b125d2 [lewuathe] [SPARK3382] Fix scala style
      e7c10dd [lewuathe] [SPARK-3382] format improvements
      f867eea [lewuathe] [SPARK-3382] Modify warning message statements
      b9d5e61 [lewuathe] [SPARK-3382] should compare diff inside loss history and convergence tolerance
      5433f71 [lewuathe] [SPARK-3382] GradientDescent convergence tolerance
      7d9cc967
    • [SPARK-8772][SQL] Implement implicit type cast for expressions that define input types. · 52508beb
      Reynold Xin authored
      Author: Reynold Xin <rxin@databricks.com>
      
      Closes #7175 from rxin/implicitCast and squashes the following commits:
      
      88080a2 [Reynold Xin] Clearer definition of implicit type cast.
      f0ff97f [Reynold Xin] Added missing file.
      c65e532 [Reynold Xin] [SPARK-8772][SQL] Implement implicit type cast for expressions that defines input types.
      52508beb
    • [SPARK-7835] Refactor HeartbeatReceiverSuite for coverage + cleanup · cd203550
      Andrew Or authored
      The existing test suite has a lot of duplicate code and doesn't even cover the most fundamental feature of the HeartbeatReceiver, which is expiring hosts that have not responded in a while.
      
      This introduces manual clocks in `HeartbeatReceiver` and makes it respond to heartbeats only for registered executors. A few internal messages are moved to `receiveAndReply` to increase determinism of the tests so we don't have to rely on flaky constructs like `eventually`.
      
      Author: Andrew Or <andrew@databricks.com>
      
      Closes #7173 from andrewor14/heartbeat-receiver-tests and squashes the following commits:
      
      4a903d6 [Andrew Or] Increase HeartReceiverSuite coverage and clean up
      cd203550
    • [SPARK-1564] [DOCS] Added Javascript to Javadocs to create badges for tags like :: Experimental :: · fcbcba66
      Deron Eriksson authored
      Modified copy_api_dirs.rb and created api-javadocs.js and api-javadocs.css files in order to add badges to javadoc files for :: Experimental ::, :: DeveloperApi ::, and :: AlphaComponent :: tags
      
      Author: Deron Eriksson <deron@us.ibm.com>
      
      Closes #7169 from deroneriksson/SPARK-1564_JavaDocs_badges and squashes the following commits:
      
      a8353db [Deron Eriksson] added license headers to api-docs.css and api-javadocs.css
      07feb07 [Deron Eriksson] added linebreaks to make jquery more readable when adding html badge tags
      65b4930 [Deron Eriksson] Modified copy_api_dirs.rb and created api-javadocs.js and api-javadocs.css files in order to add badges to javadoc files for :: Experimental ::, :: DeveloperApi ::, and :: AlphaComponent :: tags
      fcbcba66
    • [SPARK-8781] Fix variables in published pom.xml are not resolved · 82cf3315
      Andrew Or authored
      The issue is summarized in the JIRA and is caused by this commit: 984ad601.
      
      This patch reverts that commit and fixes the maven build in a different way. We limit the dependencies of `KinesisReceiverSuite` to avoid having to deal with the complexities in how maven deals with transitive test dependencies.
      
      Author: Andrew Or <andrew@databricks.com>
      
      Closes #7193 from andrewor14/fix-kinesis-pom and squashes the following commits:
      
      ca3d5d4 [Andrew Or] Limit kinesis test dependencies
      f24e09c [Andrew Or] Revert "[BUILD] Fix Maven build for Kinesis"
      82cf3315
    • [SPARK-8479] [MLLIB] Add numNonzeros and numActives to linalg.Matrices · 34d448db
      MechCoder authored
      Matrices allow zeros to be stored in values. A method is handy to check whether numNonzeros is the same as the number of active values.
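      A small usage sketch of the new methods (assuming they mirror the existing Vector API names `numActives` and `numNonzeros`):

          import org.apache.spark.mllib.linalg.Matrices

          // A 2x2 sparse matrix (CSC layout) that explicitly stores a zero.
          val m = Matrices.sparse(2, 2, Array(0, 1, 2), Array(0, 1), Array(0.0, 3.0))
          println(m.numActives)   // 2: explicitly stored entries
          println(m.numNonzeros)  // 1: entries that are actually non-zero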
      
      Author: MechCoder <manojkumarsivaraj334@gmail.com>
      
      Closes #6904 from MechCoder/nnz_matrix and squashes the following commits:
      
      252c6b7 [MechCoder] Add to MiMa excludes
      e2390f5 [MechCoder] Use count instead of foreach
      2f62b2f [MechCoder] Add to MiMa excludes
      d6e96ef [MechCoder] [SPARK-8479] Add numNonzeros and numActives to linalg.Matrices
      34d448db
    • [SPARK-8581] [SPARK-8584] Simplify checkpointing code + better error message · 2e2f3260
      Andrew Or authored
      This patch rewrites the old checkpointing code in a way that is easier to understand. It also adds a guard against an invalid specification of checkpoint directory to provide a clearer error message. Most of the changes here are relatively minor.
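      For context, the user-facing flow this code serves (standard RDD checkpointing API; the clearer error message targets a missing or invalid checkpoint directory):

          // sc is an existing SparkContext.
          sc.setCheckpointDir("/tmp/spark-checkpoints")
          val rdd = sc.parallelize(1 to 1000).map(_ * 2)
          rdd.checkpoint()  // mark the RDD for checkpointing
          rdd.count()       // the first action materializes the checkpoint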
      
      Author: Andrew Or <andrew@databricks.com>
      
      Closes #6968 from andrewor14/checkpoint-cleanup and squashes the following commits:
      
      4ef8263 [Andrew Or] Use global synchronized instead
      6f6fd84 [Andrew Or] Merge branch 'master' of github.com:apache/spark into checkpoint-cleanup
      b1437ad [Andrew Or] Warn instead of throw
      5484293 [Andrew Or] Merge branch 'master' of github.com:apache/spark into checkpoint-cleanup
      7fb4af5 [Andrew Or] Guard against bad settings of checkpoint directory
      691da98 [Andrew Or] Simplify checkpoint code / code style / comments
      2e2f3260
    • [SPARK-8708] [MLLIB] Paritition ALS ratings based on both users and products · 0e553a3e
      Liang-Chi Hsieh authored
      JIRA: https://issues.apache.org/jira/browse/SPARK-8708
      
      Previously the partitioning of ratings was based only on the given products. So if the `usersProducts` given for prediction contains only a few products, or even one product, the generated ratings are pushed into a few partitions (or a single one) and can't use high parallelism.

      The following code is the example reported in the JIRA. Because it asks for predictions for users on product 2 only, all results end up in a single partition.
      
          >>> r1 = (1, 1, 1.0)
          >>> r2 = (1, 2, 2.0)
          >>> r3 = (2, 1, 2.0)
          >>> r4 = (2, 2, 2.0)
          >>> r5 = (3, 1, 1.0)
          >>> ratings = sc.parallelize([r1, r2, r3, r4, r5], 5)
          >>> users = ratings.map(itemgetter(0)).distinct()
          >>> model = ALS.trainImplicit(ratings, 1, seed=10)
          >>> predictions_for_2 = model.predictAll(users.map(lambda u: (u, 2)))
          >>> predictions_for_2.glom().map(len).collect()
          [0, 0, 3, 0, 0]
      
      This PR uses user and product instead of only product to partition the ratings.
      
      Author: Liang-Chi Hsieh <viirya@gmail.com>
      Author: Liang-Chi Hsieh <viirya@appier.com>
      
      Closes #7121 from viirya/mfm_fix_partition and squashes the following commits:
      
      779946d [Liang-Chi Hsieh] Calculate approximate numbers of users and products in one pass.
      4336dc2 [Liang-Chi Hsieh] Merge remote-tracking branch 'upstream/master' into mfm_fix_partition
      83e56c1 [Liang-Chi Hsieh] Instead of additional join, use the numbers of users and products to decide how to perform join.
      b534dc8 [Liang-Chi Hsieh] Paritition ratings based on both users and products.
      0e553a3e
    • [SPARK-8407] [SQL] complex type constructors: struct and named_struct · 52302a80
      Yijie Shen authored
      This is a follow-up of [SPARK-8283](https://issues.apache.org/jira/browse/SPARK-8283) ([PR-6828](https://github.com/apache/spark/pull/6828)), to support both `struct` and `named_struct` in Spark SQL.

      After [#6725](https://github.com/apache/spark/pull/6828), the semantics of the [`CreateStruct`](https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypes.scala#L56) method have changed a little: it is no longer limited to columns of `NamedExpressions`, and it names non-NamedExpression fields following the Hive convention (col1, col2, ...).

      This PR both loosens [`struct`](https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/functions.scala#L723) to take children of `Expression` type and adds `named_struct` support.
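      A short usage sketch of both constructors (assumes an existing `SQLContext` named `sqlContext` with a registered table `people` having columns `name` and `age`):

          // struct() names non-NamedExpression fields col1, col2, ... per the Hive convention;
          // named_struct() takes alternating field names and values.
          sqlContext.sql(
            "SELECT struct(name, age) AS s1, struct(age + 1) AS s2, named_struct('n', name, 'a', age) AS s3 FROM people"
          ).show()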
      
      Author: Yijie Shen <henry.yijieshen@gmail.com>
      
      Closes #6874 from yijieshen/SPARK-8283 and squashes the following commits:
      
      4cd3375ac [Yijie Shen] change struct documentation
      d599d0b [Yijie Shen] rebase code
      9a7039e [Yijie Shen] fix reviews and regenerate golden answers
      b487354 [Yijie Shen] replace assert using checkAnswer
      f07e114 [Yijie Shen] tiny fix
      9613be9 [Yijie Shen] review fix
      7fef712 [Yijie Shen] Fix checkInputTypes' implementation using foldable and nullable
      60812a7 [Yijie Shen] Fix type check
      828d694 [Yijie Shen] remove unnecessary resolved assertion inside dataType method
      fd3cd8e [Yijie Shen] remove type check from eval
      7a71255 [Yijie Shen] tiny fix
      ccbbd86 [Yijie Shen] Fix reviews
      47da332 [Yijie Shen] remove nameStruct API from DataFrame
      917e680 [Yijie Shen] Fix reviews
      4bd75ad [Yijie Shen] loosen struct method in functions.scala to take Expression children
      0acb7be [Yijie Shen] Add CreateNamedStruct in both DataFrame function API and FunctionRegistery
      52302a80