  1. Sep 15, 2014
    • [SPARK-3433][BUILD] Fix for Mima false-positives with @DeveloperAPI and @Experimental annotations. · ecf0c029
      Prashant Sharma authored
      The false positives reported were actually due to the MiMa generator not picking up the new jars in the presence of old jars (theoretically this should not have happened). So as a workaround, we run them both separately and just append the output together.
      
      Author: Prashant Sharma <prashant@apache.org>
      Author: Prashant Sharma <prashant.s@imaginea.com>
      
      Closes #2285 from ScrapCodes/mima-fix and squashes the following commits:
      
      093c76f [Prashant Sharma] Update mima
      59012a8 [Prashant Sharma] Update mima
      35b6c71 [Prashant Sharma] SPARK-3433 Fix for Mima false-positives with @DeveloperAPI and @Experimental annotations.
      ecf0c029
  2. Sep 11, 2014
    • [Spark-3490] Disable SparkUI for tests · 6324eb7b
      Andrew Or authored
      We currently open many ephemeral ports during the tests, and as a result we occasionally can't bind to new ones. This has caused the `DriverSuite` and the `SparkSubmitSuite` to fail intermittently.
      
      By disabling the `SparkUI` when it's not needed, we already cut down on the number of ports opened significantly, on the order of the number of `SparkContexts` ever created. We must keep it enabled for a few tests for the UI itself, however.
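
      A minimal sketch of what such a test might do, assuming the switch introduced here is a boolean config key named `spark.ui.enabled` (the exact key is not spelled out in this message):
      ```scala
      import org.apache.spark.{SparkConf, SparkContext}

      // Assumption: the UI can be switched off per context via "spark.ui.enabled".
      // With it set to false, this test's SparkContext should not bind a UI port.
      val conf = new SparkConf()
        .setMaster("local[2]")
        .setAppName("ui-disabled-test")
        .set("spark.ui.enabled", "false")
      val sc = new SparkContext(conf)
      try {
        assert(sc.parallelize(1 to 100).count() == 100)
      } finally {
        sc.stop()
      }
      ```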
      
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #2363 from andrewor14/disable-ui-for-tests and squashes the following commits:
      
      332a7d5 [Andrew Or] No need to set spark.ui.port to 0 anymore
      30c93a2 [Andrew Or] Simplify streaming UISuite
      a431b84 [Andrew Or] Fix streaming test failures
      8f5ae53 [Andrew Or] Fix no new line at the end
      29c9b5b [Andrew Or] Disable SparkUI for tests
      6324eb7b
  4. Sep 02, 2014
    • [SPARK-1981][Streaming][Hotfix] Fixed docs related to kinesis · e9bb12be
      Tathagata Das authored
      - Include kinesis in the unidocs
      - Hide non-public classes from docs
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #2239 from tdas/kinesis-doc-fix and squashes the following commits:
      
      156e20c [Tathagata Das] More fixes, based on PR comments.
      e9a6c01 [Tathagata Das] Fixed docs related to kinesis
      e9bb12be
  5. Aug 20, 2014
    • [SPARK-2848] Shade Guava in uber-jars. · c9f74395
      Marcelo Vanzin authored
      For further discussion, please check the JIRA entry.
      
      This change moves Guava classes to a different package so that they don't conflict with the user-provided Guava (or the Hadoop-provided one). Since one class (Optional) was exposed through Spark's public API, that class was forked from Guava at the current dependency version (14.0.1) so that it can be kept going forward (until the API is cleaned).
      
      Note this change has a few implications:
      - *all* classes in the final jars will reference the relocated classes. If Hadoop classes are included (i.e. "-Phadoop-provided" is not activated), those will also reference the Guava 14 classes (instead of the Guava 11 classes from the Hadoop classpath).
      - if the Guava version in Spark is ever changed, the new Guava will still reference the forked Optional class; this may or may not be a problem, but in the long term it's better to think about removing Optional from the public API.
      
      For the end user, there are two visible implications:
      
      - Guava is not provided as a transitive dependency anymore (since it's "provided" in Spark)
      - At runtime, unless they provide their own, they'll either have no Guava or Hadoop's version of Guava (11), depending on how they set up their classpath.
      
      Note that this patch does not change the sbt deliverables; those will still contain guava in its original package, and provide guava as a compile-time dependency. This assumes that maven is the canonical build, and sbt-built artifacts are not (officially) published.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #1813 from vanzin/SPARK-2848 and squashes the following commits:
      
      9bdffb0 [Marcelo Vanzin] Undo sbt build changes.
      819b445 [Marcelo Vanzin] Review feedback.
      05e0a3d [Marcelo Vanzin] Merge branch 'master' into SPARK-2848
      fef4370 [Marcelo Vanzin] Unfork Optional.java.
      d3ea8e1 [Marcelo Vanzin] Exclude asm classes from final jar.
      637189b [Marcelo Vanzin] Add hacky filter to prefer Spark's copy of Optional.
      2fec990 [Marcelo Vanzin] Shade Guava in the sbt build.
      616998e [Marcelo Vanzin] Shade Guava in the maven build, fork Guava's Optional.java.
      c9f74395
  6. Aug 18, 2014
    • [SPARK-2406][SQL] Initial support for using ParquetTableScan to read HiveMetaStore tables. · 3abd0c1c
      Michael Armbrust authored
      This PR adds an experimental flag `spark.sql.hive.convertMetastoreParquet` that, when true, causes the planner to detect tables that use Hive's Parquet SerDe and plan them instead using Spark SQL's native `ParquetTableScan`.
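
      For illustration only, a hedged sketch of flipping the flag on from a `HiveContext` (the table name below is hypothetical, and setting the option through `setConf` on the context is an assumption):
      ```scala
      import org.apache.spark.sql.hive.HiveContext

      // Assumes sc is an existing SparkContext and "logs_parquet" is a Hive
      // metastore table stored with the Parquet SerDe (hypothetical name).
      val hiveContext = new HiveContext(sc)
      hiveContext.setConf("spark.sql.hive.convertMetastoreParquet", "true")
      // With the flag on, this scan should be planned as a native ParquetTableScan.
      hiveContext.sql("SELECT COUNT(*) FROM logs_parquet").collect().foreach(println)
      ```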
      
      Author: Michael Armbrust <michael@databricks.com>
      Author: Yin Huai <huai@cse.ohio-state.edu>
      
      Closes #1819 from marmbrus/parquetMetastore and squashes the following commits:
      
      1620079 [Michael Armbrust] Revert "remove hive parquet bundle"
      cc30430 [Michael Armbrust] Merge remote-tracking branch 'origin/master' into parquetMetastore
      4f3d54f [Michael Armbrust] fix style
      41ebc5f [Michael Armbrust] remove hive parquet bundle
      a43e0da [Michael Armbrust] Merge remote-tracking branch 'origin/master' into parquetMetastore
      4c4dc19 [Michael Armbrust] Fix bug with tree splicing.
      ebb267e [Michael Armbrust] include parquet hive to tests pass (Remove this later).
      c0d9b72 [Michael Armbrust] Avoid creating a HadoopRDD per partition.  Add dirty hacks to retrieve partition values from the InputSplit.
      8cdc93c [Michael Armbrust] Merge pull request #8 from yhuai/parquetMetastore
      a0baec7 [Yin Huai] Partitioning columns can be resolved.
      1161338 [Michael Armbrust] Add a test to make sure conversion is actually happening
      212d5cd [Michael Armbrust] Initial support for using ParquetTableScan to read HiveMetaStore tables.
      3abd0c1c
  7. Aug 07, 2014
    • SPARK-2899 Doc generation is back to working in new SBT Build. · 32096c2a
      Prashant Sharma authored
      The reason for this bug was the introduction of the OldDeps project. It had to be excluded to prevent unidoc from trying to put it on the "docs compile" classpath.
      
      Author: Prashant Sharma <prashant.s@imaginea.com>
      
      Closes #1830 from ScrapCodes/doc-fix and squashes the following commits:
      
      e5d52e6 [Prashant Sharma] SPARK-2899 Doc generation is back to working in new SBT Build.
      32096c2a
  8. Aug 06, 2014
    • SPARK-2882: Spark build now checks local maven cache for dependencies · 4e008334
      Gregory Owen authored
      Fixes [SPARK-2882](https://issues.apache.org/jira/browse/SPARK-2882)
      
      Author: Gregory Owen <greowen@gmail.com>
      
      Closes #1818 from GregOwen/spark-2882 and squashes the following commits:
      
      294446d [Gregory Owen] SPARK-2882: Spark build now checks local maven cache for dependencies
      4e008334
    • [SPARK-2157] Enable tight firewall rules for Spark · 09f7e458
      Andrew Or authored
      The goal of this PR is to allow users of Spark to write tight firewall rules for their clusters. This is currently not possible because Spark uses random ports in many places, notably the communication between executors and drivers. The changes in this PR are based on top of ash211's changes in #1107.
      
      The list covered here may or may not be the complete set of ports needed for Spark to operate perfectly. However, as of the latest commit there are no known sources of random ports (except in tests). I have not documented a few of the more obscure configs.
      
      My spark-env.sh looks like this:
      ```
      export SPARK_MASTER_PORT=6060
      export SPARK_WORKER_PORT=7070
      export SPARK_MASTER_WEBUI_PORT=9090
      export SPARK_WORKER_WEBUI_PORT=9091
      ```
      and my spark-defaults.conf looks like this:
      ```
      spark.master spark://andrews-mbp:6060
      spark.driver.port 5001
      spark.fileserver.port 5011
      spark.broadcast.port 5021
      spark.replClassServer.port 5031
      spark.blockManager.port 5041
      spark.executor.port 5051
      ```
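
      The same fixed-port settings can also be sketched programmatically; this simply mirrors the spark-defaults.conf above (the port numbers are arbitrary examples):
      ```scala
      import org.apache.spark.SparkConf

      // Mirrors the spark-defaults.conf shown above; every port is pinned so that
      // firewall rules only need to allow a known, fixed set of ports.
      val conf = new SparkConf()
        .setMaster("spark://andrews-mbp:6060")
        .setAppName("fixed-ports-example")
        .set("spark.driver.port", "5001")
        .set("spark.fileserver.port", "5011")
        .set("spark.broadcast.port", "5021")
        .set("spark.replClassServer.port", "5031")
        .set("spark.blockManager.port", "5041")
        .set("spark.executor.port", "5051")
      ```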
      
      Author: Andrew Or <andrewor14@gmail.com>
      Author: Andrew Ash <andrew@andrewash.com>
      
      Closes #1777 from andrewor14/configure-ports and squashes the following commits:
      
      621267b [Andrew Or] Merge branch 'master' of github.com:apache/spark into configure-ports
      8a6b820 [Andrew Or] Use a random UI port during tests
      7da0493 [Andrew Or] Fix tests
      523c30e [Andrew Or] Add test for isBindCollision
      b97b02a [Andrew Or] Minor fixes
      c22ad00 [Andrew Or] Merge branch 'master' of github.com:apache/spark into configure-ports
      93d359f [Andrew Or] Executors connect to wrong port when collision occurs
      d502e5f [Andrew Or] Handle port collisions when creating Akka systems
      a2dd05c [Andrew Or] Patrick's comment nit
      86461e2 [Andrew Or] Remove spark.executor.env.port and spark.standalone.client.port
      1d2d5c6 [Andrew Or] Fix ports for standalone cluster mode
      cb3be88 [Andrew Or] Various doc fixes (broken link, format etc.)
      e837cde [Andrew Or] Remove outdated TODOs
      bfbab28 [Andrew Or] Merge branch 'master' of github.com:apache/spark into configure-ports
      de1b207 [Andrew Or] Update docs to reflect new ports
      b565079 [Andrew Or] Add spark.ports.maxRetries
      2551eb2 [Andrew Or] Remove spark.worker.watcher.port
      151327a [Andrew Or] Merge branch 'master' of github.com:apache/spark into configure-ports
      9868358 [Andrew Or] Add a few miscellaneous ports
      6016e77 [Andrew Or] Add spark.executor.port
      8d836e6 [Andrew Or] Also document SPARK_{MASTER/WORKER}_WEBUI_PORT
      4d9e6f3 [Andrew Or] Fix super subtle bug
      3f8e51b [Andrew Or] Correct erroneous docs...
      e111d08 [Andrew Or] Add names for UI services
      470f38c [Andrew Or] Special case non-"Address already in use" exceptions
      1d7e408 [Andrew Or] Treat 0 ports specially + return correct ConnectionManager port
      ba32280 [Andrew Or] Minor fixes
      6b550b0 [Andrew Or] Assorted fixes
      73fbe89 [Andrew Or] Move start service logic to Utils
      ec676f4 [Andrew Or] Merge branch 'SPARK-2157' of github.com:ash211/spark into configure-ports
      038a579 [Andrew Ash] Trust the server start function to report the port the service started on
      7c5bdc4 [Andrew Ash] Fix style issue
      0347aef [Andrew Ash] Unify port fallback logic to a single place
      24a4c32 [Andrew Ash] Remove type on val to match surrounding style
      9e4ad96 [Andrew Ash] Reformat for style checker
      5d84e0e [Andrew Ash] Document new port configuration options
      066dc7a [Andrew Ash] Fix up HttpServer port increments
      cad16da [Andrew Ash] Add fallover increment logic for HttpServer
      c5a0568 [Andrew Ash] Fix ConnectionManager to retry with increment
      b80d2fd [Andrew Ash] Make Spark's block manager port configurable
      17c79bb [Andrew Ash] Add a configuration option for spark-shell's class server
      f34115d [Andrew Ash] SPARK-1176 Add port configuration for HttpBroadcast
      49ee29b [Andrew Ash] SPARK-1174 Add port configuration for HttpFileServer
      1c0981a [Andrew Ash] Make port in HttpServer configurable
      09f7e458
  9. Aug 02, 2014
    • [SPARK-1981] Add AWS Kinesis streaming support · 91f9504e
      Chris Fregly authored
      Author: Chris Fregly <chris@fregly.com>
      
      Closes #1434 from cfregly/master and squashes the following commits:
      
      4774581 [Chris Fregly] updated docs, renamed retry to retryRandom to be more clear, removed retries around store() method
      0393795 [Chris Fregly] moved Kinesis examples out of examples/ and back into extras/kinesis-asl
      691a6be [Chris Fregly] fixed tests and formatting, fixed a bug with JavaKinesisWordCount during union of streams
      0e1c67b [Chris Fregly] Merge remote-tracking branch 'upstream/master'
      74e5c7c [Chris Fregly] updated per TD's feedback.  simplified examples, updated docs
      e33cbeb [Chris Fregly] Merge remote-tracking branch 'upstream/master'
      bf614e9 [Chris Fregly] per matei's feedback:  moved the kinesis examples into the examples/ dir
      d17ca6d [Chris Fregly] per TD's feedback:  updated docs, simplified the KinesisUtils api
      912640c [Chris Fregly] changed the foundKinesis class to be a publically-avail class
      db3eefd [Chris Fregly] Merge remote-tracking branch 'upstream/master'
      21de67f [Chris Fregly] Merge remote-tracking branch 'upstream/master'
      6c39561 [Chris Fregly] parameterized the versions of the aws java sdk and kinesis client
      338997e [Chris Fregly] improve build docs for kinesis
      828f8ae [Chris Fregly] more cleanup
      e7c8978 [Chris Fregly] Merge remote-tracking branch 'upstream/master'
      cd68c0d [Chris Fregly] fixed typos and backward compatibility
      d18e680 [Chris Fregly] Merge remote-tracking branch 'upstream/master'
      b3b0ff1 [Chris Fregly] [SPARK-1981] Add AWS Kinesis streaming support
      91f9504e
    • [SPARK-2454] Do not ship spark home to Workers · 148af608
      Andrew Or authored
      When standalone Workers launch executors, they inherit the Spark home set by the driver. This means if the worker machines do not share the same directory structure as the driver node, the Workers will attempt to run scripts (e.g. bin/compute-classpath.sh) that do not exist locally and fail. This is a common scenario if the driver is launched from outside of the cluster.
      
      The solution is to simply not pass the driver's Spark home to the Workers. This PR further makes an attempt to avoid overloading the usages of `spark.home`, which is now only used for setting executor Spark home on Mesos and in python.
      
      This is based on top of #1392 and originally reported by YanTangZhai. Tested on standalone cluster.
      
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #1734 from andrewor14/spark-home-reprise and squashes the following commits:
      
      f71f391 [Andrew Or] Revert changes in python
      1c2532c [Andrew Or] Merge branch 'master' of github.com:apache/spark into spark-home-reprise
      188fc5d [Andrew Or] Avoid using spark.home where possible
      09272b7 [Andrew Or] Always use Worker's working directory as spark home
      148af608
  10. Jul 30, 2014
    • SPARK-2045 Sort-based shuffle · e9662844
      Matei Zaharia authored
      This adds a new ShuffleManager based on sorting, as described in https://issues.apache.org/jira/browse/SPARK-2045. The bulk of the code is in an ExternalSorter class that is similar to ExternalAppendOnlyMap, but sorts key-value pairs by partition ID and can be used to create a single sorted file with a map task's output. (Longer-term I think this can take on the remaining functionality in ExternalAppendOnlyMap and replace it so we don't have code duplication.)
      
      The main TODOs still left are:
      - [x] enabling ExternalSorter to merge across spilled files
        - [x] with an Ordering
        - [x] without an Ordering, using the keys' hash codes
      - [x] adding more tests (e.g. a version of our shuffle suite that runs on this)
      - [x] rebasing on top of the size-tracking refactoring in #1165 when that is merged
      - [x] disabling spilling if spark.shuffle.spill is set to false
      
      Despite these TODOs, this seems to work pretty well (running successfully in cases where the hash shuffle would OOM, such as 1000 reduce tasks on executors with only 1G memory), and it seems to be comparable in speed to, or faster than, hash-based shuffle (it creates far fewer files for the OS to keep track of). So I'm posting it to get some early feedback.
      
      After these TODOs are done, I'd also like to enable ExternalSorter to sort data within each partition by a key as well, which will allow us to use it to implement external spilling in reduce tasks in `sortByKey`.
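
      A hedged sketch of trying the new code path: the `spark.shuffle.manager` key below is an assumption (it is not named in this message), while `spark.shuffle.spill` is the setting the message says ExternalSorter obeys.
      ```scala
      import org.apache.spark.{SparkConf, SparkContext}
      import org.apache.spark.SparkContext._

      // Assumption: the sort-based ShuffleManager is selected via "spark.shuffle.manager".
      // "spark.shuffle.spill" controls whether ExternalSorter is allowed to spill to disk.
      val conf = new SparkConf()
        .setAppName("sort-shuffle-example")
        .set("spark.shuffle.manager", "sort")
        .set("spark.shuffle.spill", "true")
      val sc = new SparkContext(conf)

      // A simple shuffle: key-value pairs are written sorted by partition ID
      // into a single map output file, then reduced on the other side.
      sc.parallelize(1 to 1000000)
        .map(i => (i % 1000, 1))
        .reduceByKey(_ + _)
        .count()
      ```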
      
      Author: Matei Zaharia <matei@databricks.com>
      
      Closes #1499 from mateiz/sort-based-shuffle and squashes the following commits:
      
      bd841f9 [Matei Zaharia] Various review comments
      d1c137fd [Matei Zaharia] Various review comments
      a611159 [Matei Zaharia] Compile fixes due to rebase
      62c56c8 [Matei Zaharia] Fix ShuffledRDD sometimes not returning Tuple2s.
      f617432 [Matei Zaharia] Fix a failing test (seems to be due to change in SizeTracker logic)
      9464d5f [Matei Zaharia] Simplify code and fix conflicts after latest rebase
      0174149 [Matei Zaharia] Add cleanup behavior and cleanup tests for sort-based shuffle
      eb4ee0d [Matei Zaharia] Remove customizable element type in ShuffledRDD
      fa2e8db [Matei Zaharia] Allow nextBatchStream to be called after we're done looking at all streams
      a34b352 [Matei Zaharia] Fix tracking of indices within a partition in SpillReader, and add test
      03e1006 [Matei Zaharia] Add a SortShuffleSuite that runs ShuffleSuite with sort-based shuffle
      3c7ff1f [Matei Zaharia] Obey the spark.shuffle.spill setting in ExternalSorter
      ad65fbd [Matei Zaharia] Rebase on top of Aaron's Sorter change, and use Sorter in our buffer
      44d2a93 [Matei Zaharia] Use estimateSize instead of atGrowThreshold to test collection sizes
      5686f71 [Matei Zaharia] Optimize merging phase for in-memory only data:
      5461cbb [Matei Zaharia] Review comments and more tests (e.g. tests with 1 element per partition)
      e9ad356 [Matei Zaharia] Update ContextCleanerSuite to make sure shuffle cleanup tests use hash shuffle (since they were written for it)
      c72362a [Matei Zaharia] Added bug fix and test for when iterators are empty
      de1fb40 [Matei Zaharia] Make trait SizeTrackingCollection private[spark]
      4988d16 [Matei Zaharia] tweak
      c1b7572 [Matei Zaharia] Small optimization
      ba7db7f [Matei Zaharia] Handle null keys in hash-based comparator, and add tests for collisions
      ef4e397 [Matei Zaharia] Support for partial aggregation even without an Ordering
      4b7a5ce [Matei Zaharia] More tests, and ability to sort data if a total ordering is given
      e1f84be [Matei Zaharia] Fix disk block manager test
      5a40a1c [Matei Zaharia] More tests
      614f1b4 [Matei Zaharia] Add spill metrics to map tasks
      cc52caf [Matei Zaharia] Add more error handling and tests for error cases
      bbf359d [Matei Zaharia] More work
      3a56341 [Matei Zaharia] More partial work towards sort-based shuffle
      7a0895d [Matei Zaharia] Some more partial work towards sort-based shuffle
      b615476 [Matei Zaharia] Scaffolding for sort-based shuffle
      e9662844
    • [SQL] Fix compiling of catalyst docs. · 2248891a
      Michael Armbrust authored
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #1653 from marmbrus/fixDocs and squashes the following commits:
      
      0aa1feb [Michael Armbrust] Fix compiling of catalyst docs.
      2248891a
    • [SPARK-2179][SQL] Public API for DataTypes and Schema · 7003c163
      Yin Huai authored
      The current PR contains the following changes:
      * Expose `DataType`s in the sql package (internal details are private to sql).
      * Users can create Rows.
      * Introduce `applySchema` to create a `SchemaRDD` by applying a `schema: StructType` to an `RDD[Row]`.
      * Add a function `simpleString` to every `DataType`. Also, the schema represented by a `StructType` can be visualized by `printSchema`.
      * `ScalaReflection.typeOfObject` provides a way to infer the Catalyst data type based on an object. Also, we can compose `typeOfObject` with some custom logics to form a new function to infer the data type (for different use cases).
      * `JsonRDD` has been refactored to use changes introduced by this PR.
      * Add a field `containsNull` to `ArrayType`. So, we can explicitly mark if an `ArrayType` can contain null values. The default value of `containsNull` is `false`.
      
      New APIs are introduced in the sql package object and SQLContext. You can find the scaladoc at
      [sql package object](http://yhuai.github.io/site/api/scala/index.html#org.apache.spark.sql.package) and [SQLContext](http://yhuai.github.io/site/api/scala/index.html#org.apache.spark.sql.SQLContext).
      
      An example of using `applySchema` is shown below.
      ```scala
      import org.apache.spark.sql._
      val sqlContext = new org.apache.spark.sql.SQLContext(sc)
      
      val schema =
        StructType(
          StructField("name", StringType, false) ::
          StructField("age", IntegerType, true) :: Nil)
      
      val people = sc.textFile("examples/src/main/resources/people.txt").map(_.split(",")).map(p => Row(p(0), p(1).trim.toInt))
      val peopleSchemaRDD = sqlContext.applySchema(people, schema)
      peopleSchemaRDD.printSchema
      // root
      // |-- name: string (nullable = false)
      // |-- age: integer (nullable = true)
      
      peopleSchemaRDD.registerAsTable("people")
      sqlContext.sql("select name from people").collect.foreach(println)
      ```
      
      I will add new content to the SQL programming guide later.
      
      JIRA: https://issues.apache.org/jira/browse/SPARK-2179
      
      Author: Yin Huai <huai@cse.ohio-state.edu>
      
      Closes #1346 from yhuai/dataTypeAndSchema and squashes the following commits:
      
      1d45977 [Yin Huai] Clean up.
      a6e08b4 [Yin Huai] Merge remote-tracking branch 'upstream/master' into dataTypeAndSchema
      c712fbf [Yin Huai] Converts types of values based on defined schema.
      4ceeb66 [Yin Huai] Merge remote-tracking branch 'upstream/master' into dataTypeAndSchema
      e5f8df5 [Yin Huai] Scaladoc.
      122d1e7 [Yin Huai] Address comments.
      03bfd95 [Yin Huai] Merge remote-tracking branch 'upstream/master' into dataTypeAndSchema
      2476ed0 [Yin Huai] Minor updates.
      ab71f21 [Yin Huai] Format.
      fc2bed1 [Yin Huai] Merge remote-tracking branch 'upstream/master' into dataTypeAndSchema
      bd40a33 [Yin Huai] Address comments.
      991f860 [Yin Huai] Move "asJavaDataType" and "asScalaDataType" to DataTypeConversions.scala.
      1cb35fe [Yin Huai] Add "valueContainsNull" to MapType.
      3edb3ae [Yin Huai] Python doc.
      692c0b9 [Yin Huai] Merge remote-tracking branch 'upstream/master' into dataTypeAndSchema
      1d93395 [Yin Huai] Python APIs.
      246da96 [Yin Huai] Add java data type APIs to javadoc index.
      1db9531 [Yin Huai] Merge remote-tracking branch 'upstream/master' into dataTypeAndSchema
      d48fc7b [Yin Huai] Minor updates.
      33c4fec [Yin Huai] Merge remote-tracking branch 'upstream/master' into dataTypeAndSchema
      b9f3071 [Yin Huai] Java API for applySchema.
      1c9f33c [Yin Huai] Java APIs for DataTypes and Row.
      624765c [Yin Huai] Tests for applySchema.
      aa92e84 [Yin Huai] Update data type tests.
      8da1a17 [Yin Huai] Add Row.fromSeq.
      9c99bc0 [Yin Huai] Several minor updates.
      1d9c13a [Yin Huai] Update applySchema API.
      85e9b51 [Yin Huai] Merge remote-tracking branch 'upstream/master' into dataTypeAndSchema
      e495e4e [Yin Huai] More comments.
      42d47a3 [Yin Huai] Merge remote-tracking branch 'upstream/master' into dataTypeAndSchema
      c3f4a02 [Yin Huai] Merge remote-tracking branch 'upstream/master' into dataTypeAndSchema
      2e58dbd [Yin Huai] Merge remote-tracking branch 'upstream/master' into dataTypeAndSchema
      b8b7db4 [Yin Huai] 1. Move sql package object and package-info to sql-core. 2. Minor updates on APIs. 3. Update scala doc.
      68525a2 [Yin Huai] Update JSON unit test.
      3209108 [Yin Huai] Add unit tests.
      dcaf22f [Yin Huai] Add a field containsNull to ArrayType to indicate if an array can contain null values or not. If an ArrayType is constructed by "ArrayType(elementType)" (the existing constructor), the value of containsNull is false.
      9168b83 [Yin Huai] Update comments.
      fc649d7 [Yin Huai] Merge remote-tracking branch 'upstream/master' into dataTypeAndSchema
      eca7d04 [Yin Huai] Add two apply methods which will be used to extract StructField(s) from a StructType.
      949d6bb [Yin Huai] When creating a SchemaRDD for a JSON dataset, users can apply an existing schema.
      7a6a7e5 [Yin Huai] Fix bug introduced by the change made on SQLContext.inferSchema.
      43a45e1 [Yin Huai] Remove sql.util.package introduced in a previous commit.
      0266761 [Yin Huai] Format
      03eec4c [Yin Huai] Merge remote-tracking branch 'upstream/master' into dataTypeAndSchema
      90460ac [Yin Huai] Infer the Catalyst data type from an object and cast a data value to the expected type.
      3fa0df5 [Yin Huai] Provide easier ways to construct a StructType.
      16be3e5 [Yin Huai] This commit contains three changes: * Expose `DataType`s in the sql package (internal details are private to sql). * Introduce `createSchemaRDD` to create a `SchemaRDD` from an `RDD` with a provided schema (represented by a `StructType`) and a provided function to construct `Row`, * Add a function `simpleString` to every `DataType`. Also, the schema represented by a `StructType` can be visualized by `printSchema`.
      7003c163
  11. Jul 29, 2014
    • [SPARK-2054][SQL] Code Generation for Expression Evaluation · 84467468
      Michael Armbrust authored
      Adds a new method for evaluating expressions using code that is generated through Scala reflection.  This functionality is configured by the SQLConf option `spark.sql.codegen` and is currently turned off by default.
      
      Evaluation can be done in several specialized ways:
       - *Projection* - Given an input row, produce a new row from a set of expressions that define each column in terms of the input row.  This can either produce a new Row object or perform the projection in-place on an existing Row (MutableProjection).
       - *Ordering* - Compares two rows based on a list of `SortOrder` expressions
       - *Condition* - Returns `true` or `false` given an input row.
      
      For each of the above operations there is both a Generated and Interpreted version.  When generation for a given expression type is undefined, the code generator falls back on calling the `eval` function of the expression class.  Even without custom code, there is still a potential speed up, as loops are unrolled and code can still be inlined by JIT.
      
      This PR also contains a new type of Aggregation operator, `GeneratedAggregate`, that performs aggregation by using generated `Projection` code.  Currently the required expression rewriting only works for simple aggregations like `SUM` and `COUNT`.  This functionality will be extended in a future PR.
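
      A small sketch of turning the new code path on; per the description above, `spark.sql.codegen` is off by default. The table name below is hypothetical, and using `setConf` on the context is an assumption.
      ```scala
      import org.apache.spark.sql.SQLContext

      // Assumes sc is an existing SparkContext and "records" is a table that was
      // registered earlier (hypothetical); the aggregation below is simple enough
      // (SUM with GROUP BY) for GeneratedAggregate to handle.
      val sqlContext = new SQLContext(sc)
      sqlContext.setConf("spark.sql.codegen", "true")
      sqlContext.sql("SELECT key, SUM(value) FROM records GROUP BY key").collect()
      ```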
      
      This PR also performs several clean ups that simplified the implementation:
       - The notion of `Binding` all expressions in a tree automatically before query execution has been removed.  Instead it is the responsibility of an operator to provide the input schema when creating one of the specialized evaluators defined above.  In cases where the standard eval method is going to be called, binding can still be done manually using `BindReferences`.  There are a few reasons for this change: First, there were many operators where it just didn't work before, for example operators with more than one child, and operators like aggregation that do significant rewriting of the expression. Second, the semantics of equality with `BoundReferences` are broken.  Specifically, we have had a few bugs where partitioning breaks because of the binding.
       - A copy of the current `SQLContext` is automatically propagated to all `SparkPlan` nodes by the query planner.  Before this was done ad-hoc for the nodes that needed this.  However, this required a lot of boilerplate as one had to always remember to make it `transient` and also had to modify the `otherCopyArgs`.
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #993 from marmbrus/newCodeGen and squashes the following commits:
      
      96ef82c [Michael Armbrust] Merge remote-tracking branch 'apache/master' into newCodeGen
      f34122d [Michael Armbrust] Merge remote-tracking branch 'apache/master' into newCodeGen
      67b1c48 [Michael Armbrust] Use conf variable in SQLConf object
      4bdc42c [Michael Armbrust] Merge remote-tracking branch 'origin/master' into newCodeGen
      41a40c9 [Michael Armbrust] Merge remote-tracking branch 'origin/master' into newCodeGen
      de22aac [Michael Armbrust] Merge remote-tracking branch 'origin/master' into newCodeGen
      fed3634 [Michael Armbrust] Inspectors are not serializable.
      ef8d42b [Michael Armbrust] comments
      533fdfd [Michael Armbrust] More logging of expression rewriting for GeneratedAggregate.
      3cd773e [Michael Armbrust] Allow codegen for Generate.
      64b2ee1 [Michael Armbrust] Implement copy
      3587460 [Michael Armbrust] Drop unused string builder function.
      9cce346 [Michael Armbrust] Merge remote-tracking branch 'origin/master' into newCodeGen
      1a61293 [Michael Armbrust] Address review comments.
      0672e8a [Michael Armbrust] Address comments.
      1ec2d6e [Michael Armbrust] Address comments
      033abc6 [Michael Armbrust] off by default
      4771fab [Michael Armbrust] Docs, more test coverage.
      d30fee2 [Michael Armbrust] Merge remote-tracking branch 'origin/master' into newCodeGen
      d2ad5c5 [Michael Armbrust] Refactor putting SQLContext into SparkPlan. Fix ordering, other test cases.
      be2cd6b [Michael Armbrust] WIP: Remove old method for reference binding, more work on configuration.
      bc88ecd [Michael Armbrust] Style
      6cc97ca [Michael Armbrust] Merge remote-tracking branch 'origin/master' into newCodeGen
      4220f1e [Michael Armbrust] Better config, docs, etc.
      ca6cc6b [Michael Armbrust] WIP
      9d67d85 [Michael Armbrust] Fix hive planner
      fc522d5 [Michael Armbrust] Hook generated aggregation in to the planner.
      e742640 [Michael Armbrust] Remove unneeded changes and code.
      675e679 [Michael Armbrust] Upgrade paradise.
      0093376 [Michael Armbrust] Comment / indenting cleanup.
      d81f998 [Michael Armbrust] include schema for binding.
      0e889e8 [Michael Armbrust] Use typeOf instead tq
      f623ffd [Michael Armbrust] Quiet logging from test suite.
      efad14f [Michael Armbrust] Remove some half finished functions.
      92e74a4 [Michael Armbrust] add overrides
      a2b5408 [Michael Armbrust] WIP: Code generation with scala reflection.
      84467468
    • [STREAMING] SPARK-1729. Make Flume pull data from source, rather than the current push model · 800ecff4
      Hari Shreedharan authored
      
      Currently Spark uses Flume's internal Avro Protocol to ingest data from Flume. If the executor running the
      receiver fails, it currently has to be restarted on the same node to be able to receive data.
      
      This commit adds a new Sink which can be deployed to a Flume agent. This sink can be polled by a new
      DStream that is also included in this commit. This model ensures that data can be pulled into Spark from
      Flume even if the receiver is restarted on a new node. This also allows the receiver to receive data on
      multiple threads for better performance.
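
      A hedged sketch of consuming from the new sink; the factory name `FlumeUtils.createPollingStream` and its arguments are assumptions (they are not spelled out in this message), and the host/port are placeholders for the Flume agent running the Spark sink.
      ```scala
      import org.apache.spark.streaming.{Seconds, StreamingContext}
      import org.apache.spark.streaming.flume.FlumeUtils

      // Assumes sc is an existing SparkContext. The DStream below polls the
      // Spark sink deployed on the Flume agent, so the receiver can be moved
      // to another node without losing the ability to pull data.
      val ssc = new StreamingContext(sc, Seconds(10))
      val polled = FlumeUtils.createPollingStream(ssc, "flume-agent-host", 9988)
      polled.count().print()
      ssc.start()
      ssc.awaitTermination()
      ```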
      
      Author: Hari Shreedharan <harishreedharan@gmail.com>
      Author: Hari Shreedharan <hshreedharan@apache.org>
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      Author: harishreedharan <hshreedharan@cloudera.com>
      
      Closes #807 from harishreedharan/master and squashes the following commits:
      
      e7f70a3 [Hari Shreedharan] Merge remote-tracking branch 'asf-git/master'
      96cfb6f [Hari Shreedharan] Merge remote-tracking branch 'asf/master'
      e48d785 [Hari Shreedharan] Documenting flume-sink being ignored for Mima checks.
      5f212ce [Hari Shreedharan] Ignore Spark Sink from mima.
      981bf62 [Hari Shreedharan] Merge remote-tracking branch 'asf/master'
      7a1bc6e [Hari Shreedharan] Fix SparkBuild.scala
      a082eb3 [Hari Shreedharan] Merge remote-tracking branch 'asf/master'
      1f47364 [Hari Shreedharan] Minor fixes.
      73d6f6d [Hari Shreedharan] Cleaned up tests a bit. Added some docs in multiple places.
      65b76b4 [Hari Shreedharan] Fixing the unit test.
      e59cc20 [Hari Shreedharan] Use SparkFlumeEvent instead of the new type. Also, Flume Polling Receiver now uses the store(ArrayBuffer) method.
      f3c99d1 [Hari Shreedharan] Merge remote-tracking branch 'asf/master'
      3572180 [Hari Shreedharan] Adding a license header, making Jenkins happy.
      799509f [Hari Shreedharan] Fix a compile issue.
      3c5194c [Hari Shreedharan] Merge remote-tracking branch 'asf/master'
      d248d22 [harishreedharan] Merge pull request #1 from tdas/flume-polling
      10b6214 [Tathagata Das] Changed public API, changed sink package, and added java unit test to make sure Java API is callable from Java.
      1edc806 [Hari Shreedharan] SPARK-1729. Update logging in Spark Sink.
      8c00289 [Hari Shreedharan] More debug messages
      393bd94 [Hari Shreedharan] SPARK-1729. Use LinkedBlockingQueue instead of ArrayBuffer to keep track of connections.
      120e2a1 [Hari Shreedharan] SPARK-1729. Some test changes and changes to utils classes.
      9fd0da7 [Hari Shreedharan] SPARK-1729. Use foreach instead of map for all Options.
      8136aa6 [Hari Shreedharan] Adding TransactionProcessor to map on returning batch of data
      86aa274 [Hari Shreedharan] Merge remote-tracking branch 'asf/master'
      205034d [Hari Shreedharan] Merging master in
      4b0c7fc [Hari Shreedharan] FLUME-1729. New Flume-Spark integration.
      bda01fc [Hari Shreedharan] FLUME-1729. Flume-Spark integration.
      0d69604 [Hari Shreedharan] FLUME-1729. Better Flume-Spark integration.
      3c23c18 [Hari Shreedharan] SPARK-1729. New Spark-Flume integration.
      70bcc2a [Hari Shreedharan] SPARK-1729. New Flume-Spark integration.
      d6fa3aa [Hari Shreedharan] SPARK-1729. New Flume-Spark integration.
      e7da512 [Hari Shreedharan] SPARK-1729. Fixing import order
      9741683 [Hari Shreedharan] SPARK-1729. Fixes based on review.
      c604a3c [Hari Shreedharan] SPARK-1729. Optimize imports.
      0f10788 [Hari Shreedharan] SPARK-1729. Make Flume pull data from source, rather than the current push model
      87775aa [Hari Shreedharan] SPARK-1729. Make Flume pull data from source, rather than the current push model
      8df37e4 [Hari Shreedharan] SPARK-1729. Make Flume pull data from source, rather than the current push model
      03d6c1c [Hari Shreedharan] SPARK-1729. Make Flume pull data from source, rather than the current push model
      08176ad [Hari Shreedharan] SPARK-1729. Make Flume pull data from source, rather than the current push model
      d24d9d4 [Hari Shreedharan] SPARK-1729. Make Flume pull data from source, rather than the current push model
      6d6776a [Hari Shreedharan] SPARK-1729. Make Flume pull data from source, rather than the current push model
      800ecff4
  12. Jul 28, 2014
    • [SPARK-2410][SQL] Merging Hive Thrift/JDBC server (with Maven profile fix) · a7a9d144
      Cheng Lian authored
      JIRA issue: [SPARK-2410](https://issues.apache.org/jira/browse/SPARK-2410)
      
      Another try for #1399 & #1600. Those two PRs break Jenkins builds because we made a separate profile `hive-thriftserver` in sub-project `assembly`, but the `hive-thriftserver` module is defined outside the `hive-thriftserver` profile. Thus every pull request, even one that doesn't touch SQL code, also executes the test suites defined in `hive-thriftserver`, and the tests fail because the related .class files are not included in the assembly jar.
      
      In the most recent commit, module `hive-thriftserver` is moved into its own profile to fix this problem. All previous commits are squashed for clarity.
      
      Author: Cheng Lian <lian.cs.zju@gmail.com>
      
      Closes #1620 from liancheng/jdbc-with-maven-fix and squashes the following commits:
      
      629988e [Cheng Lian] Moved hive-thriftserver module definition into its own profile
      ec3c7a7 [Cheng Lian] Cherry picked the Hive Thrift server
      a7a9d144
  13. Jul 27, 2014
    • Revert "[SPARK-2410][SQL] Merging Hive Thrift/JDBC server" · e5bbce9a
      Patrick Wendell authored
      This reverts commit f6ff2a61.
      e5bbce9a
    • [SPARK-2410][SQL] Merging Hive Thrift/JDBC server · f6ff2a61
      Cheng Lian authored
      (This is a replacement of #1399, trying to fix potential `HiveThriftServer2` port collision between parallel builds. Please refer to [these comments](https://github.com/apache/spark/pull/1399#issuecomment-50212572) for details.)
      
      JIRA issue: [SPARK-2410](https://issues.apache.org/jira/browse/SPARK-2410)
      
      Merging the Hive Thrift/JDBC server from [branch-1.0-jdbc](https://github.com/apache/spark/tree/branch-1.0-jdbc).
      
      Thanks chenghao-intel for his initial contribution of the Spark SQL CLI.
      
      Author: Cheng Lian <lian.cs.zju@gmail.com>
      
      Closes #1600 from liancheng/jdbc and squashes the following commits:
      
      ac4618b [Cheng Lian] Uses random port for HiveThriftServer2 to avoid collision with parallel builds
      090beea [Cheng Lian] Revert changes related to SPARK-2678, decided to move them to another PR
      21c6cf4 [Cheng Lian] Updated Spark SQL programming guide docs
      fe0af31 [Cheng Lian] Reordered spark-submit options in spark-shell[.cmd]
      199e3fb [Cheng Lian] Disabled MIMA for hive-thriftserver
      1083e9d [Cheng Lian] Fixed failed test suites
      7db82a1 [Cheng Lian] Fixed spark-submit application options handling logic
      9cc0f06 [Cheng Lian] Starts beeline with spark-submit
      cfcf461 [Cheng Lian] Updated documents and build scripts for the newly added hive-thriftserver profile
      061880f [Cheng Lian] Addressed all comments by @pwendell
      7755062 [Cheng Lian] Adapts test suites to spark-submit settings
      40bafef [Cheng Lian] Fixed more license header issues
      e214aab [Cheng Lian] Added missing license headers
      b8905ba [Cheng Lian] Fixed minor issues in spark-sql and start-thriftserver.sh
      f975d22 [Cheng Lian] Updated docs for Hive compatibility and Shark migration guide draft
      3ad4e75 [Cheng Lian] Starts spark-sql shell with spark-submit
      a5310d1 [Cheng Lian] Make HiveThriftServer2 play well with spark-submit
      61f39f4 [Cheng Lian] Starts Hive Thrift server via spark-submit
      2c4c539 [Cheng Lian] Cherry picked the Hive Thrift server
      f6ff2a61
  14. Jul 25, 2014
    • Revert "[SPARK-2410][SQL] Merging Hive Thrift/JDBC server" · afd757a2
      Michael Armbrust authored
      This reverts commit 06dc0d2c.
      
      #1399 is making Jenkins fail.  We should investigate and put this back once it's passing tests.
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #1594 from marmbrus/revertJDBC and squashes the following commits:
      
      59748da [Michael Armbrust] Revert "[SPARK-2410][SQL] Merging Hive Thrift/JDBC server"
      afd757a2
    • [SPARK-2682] Javadoc generated from Scala source code is not in javadoc's index · a19d8c89
      Yin Huai authored
      Add genjavadocSettings back to SparkBuild. It requires #1585.
      
      https://issues.apache.org/jira/browse/SPARK-2682
      
      Author: Yin Huai <huai@cse.ohio-state.edu>
      
      Closes #1584 from yhuai/SPARK-2682 and squashes the following commits:
      
      2e89461 [Yin Huai] Merge remote-tracking branch 'upstream/master' into SPARK-2682
      54e3b66 [Yin Huai] Add genjavadocSettings back.
      a19d8c89
    • [SPARK-2410][SQL] Merging Hive Thrift/JDBC server · 06dc0d2c
      Cheng Lian authored
      JIRA issue:
      
      - Main: [SPARK-2410](https://issues.apache.org/jira/browse/SPARK-2410)
      - Related: [SPARK-2678](https://issues.apache.org/jira/browse/SPARK-2678)
      
      Cherry picked the Hive Thrift/JDBC server from [branch-1.0-jdbc](https://github.com/apache/spark/tree/branch-1.0-jdbc).
      
      (Thanks chenghao-intel for his initial contribution of the Spark SQL CLI.)
      
      TODO
      
      - [x] Use `spark-submit` to launch the server, the CLI and beeline
      - [x] Migration guideline draft for Shark users
      
      ----
      
      Hit by a bug in `SparkSubmitArguments` while working on this PR: all application options that are recognized by `SparkSubmitArguments` are stolen as `SparkSubmit` options. For example:
      
      ```bash
      $ spark-submit --class org.apache.hive.beeline.BeeLine spark-internal --help
      ```
      
      This actually shows usage information of `SparkSubmit` rather than `BeeLine`.
      
      ~~Fixed this bug here since the `spark-internal` related stuff also touches `SparkSubmitArguments` and I'd like to avoid conflict.~~
      
      **UPDATE** The bug mentioned above is now tracked by [SPARK-2678](https://issues.apache.org/jira/browse/SPARK-2678). Decided to revert the changes for this bug since it involves more subtle considerations and is worth a separate PR.
      
      Author: Cheng Lian <lian.cs.zju@gmail.com>
      
      Closes #1399 from liancheng/thriftserver and squashes the following commits:
      
      090beea [Cheng Lian] Revert changes related to SPARK-2678, decided to move them to another PR
      21c6cf4 [Cheng Lian] Updated Spark SQL programming guide docs
      fe0af31 [Cheng Lian] Reordered spark-submit options in spark-shell[.cmd]
      199e3fb [Cheng Lian] Disabled MIMA for hive-thriftserver
      1083e9d [Cheng Lian] Fixed failed test suites
      7db82a1 [Cheng Lian] Fixed spark-submit application options handling logic
      9cc0f06 [Cheng Lian] Starts beeline with spark-submit
      cfcf461 [Cheng Lian] Updated documents and build scripts for the newly added hive-thriftserver profile
      061880f [Cheng Lian] Addressed all comments by @pwendell
      7755062 [Cheng Lian] Adapts test suites to spark-submit settings
      40bafef [Cheng Lian] Fixed more license header issues
      e214aab [Cheng Lian] Added missing license headers
      b8905ba [Cheng Lian] Fixed minor issues in spark-sql and start-thriftserver.sh
      f975d22 [Cheng Lian] Updated docs for Hive compatibility and Shark migration guide draft
      3ad4e75 [Cheng Lian] Starts spark-sql shell with spark-submit
      a5310d1 [Cheng Lian] Make HiveThriftServer2 play well with spark-submit
      61f39f4 [Cheng Lian] Starts Hive Thrift server via spark-submit
      2c4c539 [Cheng Lian] Cherry picked the Hive Thrift server
      06dc0d2c
  15. Jul 15, 2014
    • [SPARK-2474][SQL] For a registered table in OverrideCatalog, the Analyzer failed to resolve references in the format of "tableName.fieldName" · 8af46d58
      Yin Huai authored
      
      Please refer to JIRA (https://issues.apache.org/jira/browse/SPARK-2474) for how to reproduce the problem and my understanding of the root cause.
      
      Author: Yin Huai <huai@cse.ohio-state.edu>
      
      Closes #1406 from yhuai/SPARK-2474 and squashes the following commits:
      
      96b1627 [Yin Huai] Merge remote-tracking branch 'upstream/master' into SPARK-2474
      af36d65 [Yin Huai] Fix comment.
      be86ba9 [Yin Huai] Correct SQL console settings.
      c43ad00 [Yin Huai] Wrap the relation in a Subquery named by the table name in OverrideCatalog.lookupRelation.
      a5c2145 [Yin Huai] Support sql/console.
      8af46d58
    • [SPARK-2467] Revert SparkBuild to publish-local to both .m2 and .ivy2. · e2255e4b
      Takuya UESHIN authored
      Author: Takuya UESHIN <ueshin@happy-camper.st>
      
      Closes #1398 from ueshin/issues/SPARK-2467 and squashes the following commits:
      
      7f01d58 [Takuya UESHIN] Revert SparkBuild to publish-local to both .m2 and .ivy2.
      e2255e4b
  16. Jul 11, 2014
    • [SPARK-2437] Rename MAVEN_PROFILES to SBT_MAVEN_PROFILES and add SBT_MAVEN_PROPERTIES · b23e9c3e
      Prashant Sharma authored
      NOTE: It is not possible to use both the env variable `SBT_MAVEN_PROFILES` and the `-P` flag at the same time. If specified, `-P` takes precedence.
      
      Author: Prashant Sharma <prashant.s@imaginea.com>
      
      Closes #1374 from ScrapCodes/SPARK-2437/rename-MAVEN_PROFILES and squashes the following commits:
      
      8694bde [Prashant Sharma] [SPARK-2437] Rename MAVEN_PROFILES to SBT_MAVEN_PROFILES and add SBT_MAVEN_PROPERTIES
      b23e9c3e
  17. Jul 10, 2014
    • [SPARK-1776] Have Spark's SBT build read dependencies from Maven. · 628932b8
      Prashant Sharma authored
      This patch introduces the new way of working while also retaining the existing ways of doing things.
      
      For example, the build instruction for YARN in Maven is
      `mvn -Pyarn -Phadoop-2.2 clean package -DskipTests`
      and in sbt it becomes
      `MAVEN_PROFILES="yarn, hadoop-2.2" sbt/sbt clean assembly`
      It also supports
      `sbt/sbt -Pyarn -Phadoop-2.2 -Dhadoop.version=2.2.0 clean assembly`
      
      Author: Prashant Sharma <prashant.s@imaginea.com>
      Author: Patrick Wendell <pwendell@gmail.com>
      
      Closes #772 from ScrapCodes/sbt-maven and squashes the following commits:
      
      a8ac951 [Prashant Sharma] Updated sbt version.
      62b09bb [Prashant Sharma] Improvements.
      fa6221d [Prashant Sharma] Excluding sql from mima
      4b8875e [Prashant Sharma] Sbt assembly no longer builds tools by default.
      72651ca [Prashant Sharma] Addresses code reivew comments.
      acab73d [Prashant Sharma] Revert "Small fix to run-examples script."
      ac4312c [Prashant Sharma] Revert "minor fix"
      6af91ac [Prashant Sharma] Ported oldDeps back. + fixes issues with prev commit.
      65cf06c [Prashant Sharma] Servelet API jars mess up with the other servlet jars on the class path.
      446768e [Prashant Sharma] minor fix
      89b9777 [Prashant Sharma] Merge conflicts
      d0a02f2 [Prashant Sharma] Bumped up pom versions, Since the build now depends on pom it is better updated there. + general cleanups.
      dccc8ac [Prashant Sharma] updated mima to check against 1.0
      a49c61b [Prashant Sharma] Fix for tools jar
      a2f5ae1 [Prashant Sharma] Fixes a bug in dependencies.
      cf88758 [Prashant Sharma] cleanup
      9439ea3 [Prashant Sharma] Small fix to run-examples script.
      96cea1f [Prashant Sharma] SPARK-1776 Have Spark's SBT build read dependencies from Maven.
      36efa62 [Patrick Wendell] Set project name in pom files and added eclipse/intellij plugins.
      4973dbd [Patrick Wendell] Example build using pom reader.
      628932b8
  19. Jun 20, 2014
    • Fix some tests. · 648553d4
      Marcelo Vanzin authored
      - JavaAPISuite was trying to compare a bare path with a URI. Fix by
        extracting the path from the URI, since we know it should be a
        local path anyway.
      
      - b9be1609 excluded the ASM dependency everywhere, but easymock needs
        it (because cglib needs it). So re-add the dependency, with test
        scope this time.
      
      The second one above actually uncovered a weird situation: the maven
      test target works, even though I can't find the class sbt complains
      about in its classpath. sbt complains with:
      
        [error] Uncaught exception when running org.apache.spark.util
        .random.RandomSamplerSuite: java.lang.NoClassDefFoundError:
        org/objectweb/asm/Type
      
      To avoid more weirdness caused by that, I explicitly added the asm
      dependency to both maven and sbt (for tests only), and verified
      the classes don't end up in the final assembly.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #917 from vanzin/flaky-tests and squashes the following commits:
      
      d022320 [Marcelo Vanzin] Fix some tests.
      648553d4
  20. Jun 17, 2014
    • [SPARK-2060][SQL] Querying JSON Datasets with SQL and DSL in Spark SQL · d2f4f30b
      Yin Huai authored
      JIRA: https://issues.apache.org/jira/browse/SPARK-2060
      
      Programming guide: http://yhuai.github.io/site/sql-programming-guide.html
      
      Scala doc of SQLContext: http://yhuai.github.io/site/api/scala/index.html#org.apache.spark.sql.SQLContext
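
      A brief sketch of the intended usage, assuming the new entry point is `SQLContext.jsonFile` (one JSON object per line); the file path and the `name`/`age` fields below are hypothetical.
      ```scala
      import org.apache.spark.sql.SQLContext

      // Assumes sc is an existing SparkContext; the schema of the resulting
      // SchemaRDD is inferred from the JSON records themselves.
      val sqlContext = new SQLContext(sc)
      val people = sqlContext.jsonFile("examples/src/main/resources/people.json")
      people.printSchema()
      people.registerAsTable("people")
      sqlContext.sql("SELECT name FROM people").collect().foreach(println)
      ```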
      
      Author: Yin Huai <huai@cse.ohio-state.edu>
      
      Closes #999 from yhuai/newJson and squashes the following commits:
      
      227e89e [Yin Huai] Merge remote-tracking branch 'upstream/master' into newJson
      ce8eedd [Yin Huai] rxin's comments.
      bc9ac51 [Yin Huai] Merge remote-tracking branch 'upstream/master' into newJson
      94ffdaa [Yin Huai] Remove "get" from method names.
      ce31c81 [Yin Huai] Merge remote-tracking branch 'upstream/master' into newJson
      e2773a6 [Yin Huai] Merge remote-tracking branch 'upstream/master' into newJson
      79ea9ba [Yin Huai] Fix typos.
      5428451 [Yin Huai] Newline
      1f908ce [Yin Huai] Remove extra line.
      d7a005c [Yin Huai] Merge remote-tracking branch 'upstream/master' into newJson
      7ea750e [Yin Huai] marmbrus's comments.
      6a5f5ef [Yin Huai] Merge remote-tracking branch 'upstream/master' into newJson
      83013fb [Yin Huai] Update Java Example.
      e7a6c19 [Yin Huai] SchemaRDD.javaToPython should convert a field with the StructType to a Map.
      6d20b85 [Yin Huai] Merge remote-tracking branch 'upstream/master' into newJson
      4fbddf0 [Yin Huai] Programming guide.
      9df8c5a [Yin Huai] Python API.
      7027634 [Yin Huai] Java API.
      cff84cc [Yin Huai] Use a SchemaRDD for a JSON dataset.
      d0bd412 [Yin Huai] Merge remote-tracking branch 'upstream/master' into newJson
      ab810b0 [Yin Huai] Make JsonRDD private.
      6df0891 [Yin Huai] Apache header.
      8347f2e [Yin Huai] Merge remote-tracking branch 'upstream/master' into newJson
      66f9e76 [Yin Huai] Update docs and use the entire dataset to infer the schema.
      8ffed79 [Yin Huai] Update the example.
      a5a4b52 [Yin Huai] Merge remote-tracking branch 'upstream/master' into newJson
      4325475 [Yin Huai] If a sampled dataset is used for schema inferring, update the schema of the JsonTable after first execution.
      65b87f0 [Yin Huai] Fix sampling...
      8846af5 [Yin Huai] API doc.
      52a2275 [Yin Huai] Merge remote-tracking branch 'upstream/master' into newJson
      0387523 [Yin Huai] Address PR comments.
      666b957 [Yin Huai] Merge remote-tracking branch 'upstream/master' into newJson
      a2313a6 [Yin Huai] Address PR comments.
      f3ce176 [Yin Huai] After type conflict resolution, if a NullType is found, StringType is used.
      0576406 [Yin Huai] Add Apache license header.
      af91b23 [Yin Huai] Merge remote-tracking branch 'upstream/master' into newJson
      f45583b [Yin Huai] Infer the schema of a JSON dataset (a text file with one JSON object per line or a RDD[String] with one JSON object per string) and returns a SchemaRDD.
      f31065f [Yin Huai] A query plan or a SchemaRDD can print out its schema.
      d2f4f30b
  21. Jun 12, 2014
    • SPARK-1939 Refactor takeSample method in RDD to use ScaSRS · 1de1d703
      Doris Xin authored
      Modified the takeSample method in RDD to use the ScaSRS sampling technique to improve performance. Added a private method that computes a sampling rate greater than sample_size/total to ensure a sufficient sample size with success rate >= 0.9999. Added a unit test for the private method to validate the choice of sampling rate.
      
      Author: Doris Xin <doris.s.xin@gmail.com>
      Author: dorx <doris.s.xin@gmail.com>
      Author: Xiangrui Meng <meng@databricks.com>
      
      Closes #916 from dorx/takeSample and squashes the following commits:
      
      5b061ae [Doris Xin] merge master
      444e750 [Doris Xin] edge cases
      3de882b [dorx] Merge pull request #2 from mengxr/SPARK-1939
      82dde31 [Xiangrui Meng] update pyspark's takeSample
      48d954d [Doris Xin] remove unused imports from RDDSuite
      fb1452f [Doris Xin] allowing num to be greater than count in all cases
      1481b01 [Doris Xin] washing test tubes and making coffee
      dc699f3 [Doris Xin] give back imports removed by accident in rdd.py
      64e445b [Doris Xin] logwarnning as soon as it enters the while loop
      55518ed [Doris Xin] added TODO for logging in rdd.py
      eff89e2 [Doris Xin] addressed reviewer comments.
      ecab508 [Doris Xin] "fixed checkstyle violation
      0a9b3e3 [Doris Xin] "reviewer comment addressed"
      f80f270 [Doris Xin] Merge branch 'master' into takeSample
      ae3ad04 [Doris Xin] fixed edge cases to prevent overflow
      065ebcd [Doris Xin] Merge branch 'master' into takeSample
      9bdd36e [Doris Xin] Check sample size and move computeFraction
      e3fd6a6 [Doris Xin] Merge branch 'master' into takeSample
      7cab53a [Doris Xin] fixed import bug in rdd.py
      ffea61a [Doris Xin] SPARK-1939: Refactor takeSample method in RDD
      1441977 [Doris Xin] SPARK-1939 Refactor takeSample method in RDD to use ScaSRS
      1de1d703
    • SPARK-1843: Replace assemble-deps with env variable. · 1c04652c
      Patrick Wendell authored
      (This change is actually small; I moved some logic into compute-classpath that was previously in spark-class.)
      
      Assemble deps has existed for a while to allow developers to
      run local code with new changes quickly. When I'm developing I
      typically use a simpler approach which just prepends the Spark
      classes to the classpath before the assembly jar. This is well
      defined in the JVM and the Spark classes take precedence over those
      in the assembly.
      
      This approach is portable across both builds, which is the main reason I'd
      like to switch to it. It's also a bit easier to toggle on and off quickly.
      
      The way you use this is the following:
      ```
      $ ./bin/spark-shell # Use spark with the normal assembly
      $ export SPARK_PREPEND_CLASSES=true
      $ ./bin/spark-shell # Now it's using compiled classes
      $ unset SPARK_PREPEND_CLASSES
      $ ./bin/spark-shell # Back to normal
      ```
      
      Author: Patrick Wendell <pwendell@gmail.com>
      
      Closes #877 from pwendell/assemble-deps and squashes the following commits:
      
      8a11345 [Patrick Wendell] Merge remote-tracking branch 'apache/master' into assemble-deps
      faa3168 [Patrick Wendell] Adding a warning for compatibility
      3f151a7 [Patrick Wendell] Small fix
      bbfb73c [Patrick Wendell] Review feedback
      328e9f8 [Patrick Wendell] SPARK-1843: Replace assemble-deps with env variable.
      1c04652c
  22. Jun 11, 2014
    • [SPARK-2069] MIMA false positives · 5b754b45
      Prashant Sharma authored
      Fixes SPARK-2070 and SPARK-2071
      
      Author: Prashant Sharma <prashant.s@imaginea.com>
      
      Closes #1021 from ScrapCodes/SPARK-2070/package-private-methods and squashes the following commits:
      
      7979a57 [Prashant Sharma] addressed code review comments
      558546d [Prashant Sharma] A little fancy error message.
      59275ab [Prashant Sharma] SPARK-2071 Mima ignores classes and its members from previous versions too.
      0c4ff2b [Prashant Sharma] SPARK-2070 Ignore methods along with annotated classes.
      5b754b45
  23. Jun 06, 2014
    • [SPARK-1841]: update scalatest to version 2.1.5 · 41c4a331
      witgo authored
      Author: witgo <witgo@qq.com>
      
      Closes #713 from witgo/scalatest and squashes the following commits:
      
      b627a6a [witgo] merge master
      51fb3d6 [witgo] merge master
      3771474 [witgo] fix RDDSuite
      996d6f9 [witgo] fix TimeStampedWeakValueHashMap test
      9dfa4e7 [witgo] merge bug
      1479b22 [witgo] merge master
      29b9194 [witgo] fix code style
      022a7a2 [witgo] fix test dependency
      a52c0fa [witgo] fix test dependency
      cd8f59d [witgo] Merge branch 'master' of https://github.com/apache/spark into scalatest
      046540d [witgo] fix RDDSuite.scala
      2c543b9 [witgo] fix ReplSuite.scala
      c458928 [witgo] update scalatest to version 2.1.5
      41c4a331
  24. Jun 05, 2014
    • Remove compile-scoped junit dependency. · 668cb1de
      Marcelo Vanzin authored
      This avoids having junit classes show up in the assembly jar.
      I verified that only test classes in the jtransforms package
      use junit.
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #794 from vanzin/junit-dep-exclusion and squashes the following commits:
      
      274e1c2 [Marcelo Vanzin] Remove junit from assembly in sbt build also.
      ad950be [Marcelo Vanzin] Remove compile-scoped junit dependency.
      668cb1de
  25. Jun 03, 2014
    • SPARK-1941: Update streamlib to 2.7.0 and use HyperLogLogPlus instead of HyperLogLog. · 1faef149
      Reynold Xin authored
      I also corrected some errors made in the previous approximate HLL count API, including that relativeSD wasn't really a measure of error (and we used it to test error bounds in test results).
      
      Author: Reynold Xin <rxin@apache.org>
      
      Closes #897 from rxin/hll and squashes the following commits:
      
      4d83f41 [Reynold Xin] New error bound and non-randomness.
      f154ea0 [Reynold Xin] Added a comment on the value bound for testing.
      e367527 [Reynold Xin] One more round of code review.
      41e649a [Reynold Xin] Update final mima list.
      9e320c8 [Reynold Xin] Incorporate code review feedback.
      e110d70 [Reynold Xin] Merge branch 'master' into hll
      354deb8 [Reynold Xin] Added comment on the Mima exclude rules.
      acaa524 [Reynold Xin] Added the right exclude rules in MimaExcludes.
      6555bfe [Reynold Xin] Added a default method and re-arranged MimaExcludes.
      1db1522 [Reynold Xin] Excluded util.SerializableHyperLogLog from MIMA check.
      9221b27 [Reynold Xin] Merge branch 'master' into hll
      88cfe77 [Reynold Xin] Updated documentation and restored the old incorrect API to maintain API compatibility.
      1294be6 [Reynold Xin] Updated HLL+.
      e7786cb [Reynold Xin] Merge branch 'master' into hll
      c0ef0c2 [Reynold Xin] SPARK-1941: Update streamlib to 2.7.0 and use HyperLogLogPlus instead of HyperLogLog.
      1faef149
    • Add support for Pivotal HD in the Maven build: SPARK-1992 · b1f28535
      tzolov authored
      Allow Spark to build against particular Pivotal HD distributions. For example, to build Spark against Pivotal HD 2.0.1 one can run:
      ```
      mvn -Pyarn -Phadoop-2.2 -Dhadoop.version=2.2.0-gphd-3.0.1.0 -DskipTests clean package
      ```
      
      Author: tzolov <christian.tzolov@gmail.com>
      
      Closes #942 from tzolov/master and squashes the following commits:
      
      bc3e05a [tzolov] Add support for Pivotal HD in the Maven build and SBT build: [SPARK-1992]
      b1f28535
  26. May 31, 2014
    • Optionally include Hive as a dependency of the REPL. · 7463cd24
      Michael Armbrust authored
      Due to the way spark-shell launches from an assembly jar, I don't think this change will affect anyone who isn't trying to launch the shell directly from sbt.  That said, it is kinda nice to be able to launch all things directly from SBT when developing.
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #801 from marmbrus/hiveRepl and squashes the following commits:
      
      9570571 [Michael Armbrust] Optionally include Hive as a dependency of the REPL.
      7463cd24
  27. May 30, 2014
    • [SPARK-1971] Update MIMA to compare against Spark 1.0.0 · 79fa8fd4
      Prashant Sharma authored
      Author: Prashant Sharma <prashant.s@imaginea.com>
      
      Closes #910 from ScrapCodes/enable-mima/spark-core and squashes the following commits:
      
      79f3687 [Prashant Sharma] updated Mima to check against version 1.0
      1e8969c [Prashant Sharma] Spark core missed out on Mima settings. So in effect we never tested spark core for mima related errors.
      79fa8fd4