  1. Sep 14, 2014
    • [SPARK-3452] Maven build should skip publishing artifacts people shouldn't depend on · f493f798
      Prashant Sharma authored
      
      In Maven terms, publishing locally is `install` and publishing otherwise is `deploy`, so both are now disabled for the following projects.
      
      Author: Prashant Sharma <prashant.s@imaginea.com>
      
      Closes #2329 from ScrapCodes/SPARK-3452/maven-skip-install and squashes the following commits:
      
      257b79a [Prashant Sharma] [SPARK-3452] Maven build should skip publishing artifacts people shouldn't depend on
  2. Sep 11, 2014
    • SPARK-2482: Resolve sbt warnings during build · 33c7a738
      witgo authored
      At the same time, importing both `scala.language.postfixOps` and `org.scalatest.time.SpanSugar._` causes `scala.language.postfixOps` to stop working.
      
      Author: witgo <witgo@qq.com>
      
      Closes #1330 from witgo/sbt_warnings3 and squashes the following commits:
      
      179ba61 [witgo] Resolve sbt warnings during build
  4. Sep 02, 2014
    • [SPARK-1919] Fix Windows spark-shell --jars · 8f1f9aaf
      Andrew Or authored
      We were trying to add `file:/C:/path/to/my.jar` to the class path. We should add `C:/path/to/my.jar` instead. Tested on Windows 8.1.
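
      A minimal sketch of the fix as described (an assumption about its shape, not the exact patch):

      ```scala
      // Strip the "file:" URI scheme before adding a jar to the Windows
      // shell classpath, so "file:/C:/path/to/my.jar" becomes
      // "C:/path/to/my.jar".
      def formatPathForShell(path: String): String =
        path.stripPrefix("file:")
      ```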
      
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #2211 from andrewor14/windows-shell-jars and squashes the following commits:
      
      262c6a2 [Andrew Or] Oops... Add the new code to the correct place
      0d5a0c1 [Andrew Or] Format jar path only for adding to shell classpath
      42bd626 [Andrew Or] Remove unnecessary code
      0049f1b [Andrew Or] Remove embarrassing log messages
      b1755a0 [Andrew Or] Format jar paths properly before adding them to the classpath
  5. Aug 30, 2014
    • [SPARK-2889] Create Hadoop config objects consistently. · b6cf1348
      Marcelo Vanzin authored
      Different places in the code were instantiating Configuration / YarnConfiguration objects in different ways. This could lead to confusion for people who actually expected "spark.hadoop.*" options to end up in the configs used by Spark code, since that would only happen for the SparkContext's config.
      
      This change modifies most places to use SparkHadoopUtil to initialize configs, and make that method do the translation that previously was only done inside SparkContext.
      
      The places that were not changed fall in one of the following categories:
      - Test code where this doesn't really matter
      - Places deep in the code where plumbing SparkConf would be too difficult for very little gain
      - Default values for arguments - since the caller can provide their own config in that case
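
      A minimal sketch of the "spark.hadoop.*" translation described above (the method name is illustrative, not necessarily Spark's exact API):

      ```scala
      import org.apache.hadoop.conf.Configuration
      import org.apache.spark.SparkConf

      // Copy every "spark.hadoop.*" entry from the SparkConf into a fresh
      // Hadoop Configuration, with the "spark.hadoop." prefix stripped.
      def newHadoopConfiguration(sparkConf: SparkConf): Configuration = {
        val hadoopConf = new Configuration()
        for ((key, value) <- sparkConf.getAll if key.startsWith("spark.hadoop.")) {
          hadoopConf.set(key.stripPrefix("spark.hadoop."), value)
        }
        hadoopConf
      }
      ```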
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #1843 from vanzin/SPARK-2889 and squashes the following commits:
      
      52daf35 [Marcelo Vanzin] Merge branch 'master' into SPARK-2889
      f179013 [Marcelo Vanzin] Merge branch 'master' into SPARK-2889
      51e71cf [Marcelo Vanzin] Add test to ensure that overriding Yarn configs works.
      53f9506 [Marcelo Vanzin] Add DeveloperApi annotation.
      3d345cb [Marcelo Vanzin] Restore old method for backwards compat.
      fc45067 [Marcelo Vanzin] Merge branch 'master' into SPARK-2889
      0ac3fdf [Marcelo Vanzin] Merge branch 'master' into SPARK-2889
      3f26760 [Marcelo Vanzin] Compilation fix.
      f16cadd [Marcelo Vanzin] Initialize config in SparkHadoopUtil.
      b8ab173 [Marcelo Vanzin] Update Utils API to take a Configuration argument.
      1e7003f [Marcelo Vanzin] Replace explicit Configuration instantiation with SparkHadoopUtil.
  6. Aug 27, 2014
    • [SPARK-3256] Added support for :cp <jar> that was broken in Scala 2.10.x for REPL · 191d7cf2
      Chip Senkbeil authored
      As seen with [SI-6502](https://issues.scala-lang.org/browse/SI-6502) of Scala, the _:cp_ command was broken in Scala 2.10.x. As the Spark shell is a friendly wrapper on top of the Scala REPL, it is also affected by this problem.
      
      My solution was to alter the internal classpath and invalidate any new entries. I also had to add the ability to add new entries to the parent classloader of the interpreter (SparkIMain's global).
      
      The advantage of this versus wiping the interpreter and replaying all of the commands is that you don't have to worry about rerunning heavy Spark-related commands (going to the cluster) or potentially reloading data that might have changed. Instead, you get to work from where you left off.
      
      Until this is fixed upstream for 2.10.x, I had to use reflection to alter the internal compiler classpath.
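
      For illustration, the classic reflection trick for adding an entry to a live `URLClassLoader` looks roughly like this (the general technique only; the patch itself also manipulates the compiler's internal classpath):

      ```scala
      import java.net.{URL, URLClassLoader}

      // URLClassLoader.addURL is protected, so reflection is required to
      // invoke it on an existing loader at runtime.
      def addUrlToClassLoader(loader: URLClassLoader, jar: URL): Unit = {
        val addURL = classOf[URLClassLoader].getDeclaredMethod("addURL", classOf[URL])
        addURL.setAccessible(true)
        addURL.invoke(loader, jar)
      }
      ```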
      
      The solution now looks like this:
      ![screen shot 2014-08-13 at 3 46 02 pm](https://cloud.githubusercontent.com/assets/2481802/3912625/f02b1440-232c-11e4-9bf6-bafb3e352d14.png)
      
      Author: Chip Senkbeil <rcsenkbe@us.ibm.com>
      
      Closes #1929 from rcsenkbeil/FixReplClasspathSupport and squashes the following commits:
      
      f420cbf [Chip Senkbeil] Added SparkContext.addJar calls to support executing code on remote clusters
      a826795 [Chip Senkbeil] Updated AddUrlsToClasspath to use 'new Run' suggestion over hackish compiler error
      2ff1d86 [Chip Senkbeil] Added compilation failure on symbols hack to get Scala classes to load correctly
      a220639 [Chip Senkbeil] Added support for :cp <jar> that was broken in Scala 2.10.x for REPL
  7. Aug 06, 2014
    • [SPARK-2157] Enable tight firewall rules for Spark · 09f7e458
      Andrew Or authored
      The goal of this PR is to allow users of Spark to write tight firewall rules for their clusters. This is currently not possible because Spark uses random ports in many places, notably the communication between executors and drivers. The changes in this PR are based on top of ash211's changes in #1107.
      
      The list covered here may or may not be the complete set of ports needed for Spark to operate perfectly. However, as of the latest commit there are no known sources of random ports (except in tests). I have not documented a few of the more obscure configs.
      
      My spark-env.sh looks like this:
      ```
      export SPARK_MASTER_PORT=6060
      export SPARK_WORKER_PORT=7070
      export SPARK_MASTER_WEBUI_PORT=9090
      export SPARK_WORKER_WEBUI_PORT=9091
      ```
      and my spark-defaults.conf looks like this:
      ```
      spark.master spark://andrews-mbp:6060
      spark.driver.port 5001
      spark.fileserver.port 5011
      spark.broadcast.port 5021
      spark.replClassServer.port 5031
      spark.blockManager.port 5041
      spark.executor.port 5051
      ```
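
      On a bind collision, services fall back to successive ports. A minimal sketch of that retry loop (illustrative, not Spark's exact `Utils` code):

      ```scala
      import java.net.BindException

      // Try the configured port; on an "address already in use" failure,
      // retry on the next port, up to maxRetries attempts. Port 0 means
      // "pick any free port" and is never incremented.
      def startServiceOnPort[T](startPort: Int, maxRetries: Int)
                               (startService: Int => T): (T, Int) = {
        for (offset <- 0 to maxRetries) {
          val tryPort = if (startPort == 0) 0 else startPort + offset
          try {
            return (startService(tryPort), tryPort)
          } catch {
            case _: BindException if offset < maxRetries => // retry on next port
          }
        }
        throw new BindException(s"Failed to bind after $maxRetries retries")
      }
      ```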
      
      Author: Andrew Or <andrewor14@gmail.com>
      Author: Andrew Ash <andrew@andrewash.com>
      
      Closes #1777 from andrewor14/configure-ports and squashes the following commits:
      
      621267b [Andrew Or] Merge branch 'master' of github.com:apache/spark into configure-ports
      8a6b820 [Andrew Or] Use a random UI port during tests
      7da0493 [Andrew Or] Fix tests
      523c30e [Andrew Or] Add test for isBindCollision
      b97b02a [Andrew Or] Minor fixes
      c22ad00 [Andrew Or] Merge branch 'master' of github.com:apache/spark into configure-ports
      93d359f [Andrew Or] Executors connect to wrong port when collision occurs
      d502e5f [Andrew Or] Handle port collisions when creating Akka systems
      a2dd05c [Andrew Or] Patrick's comment nit
      86461e2 [Andrew Or] Remove spark.executor.env.port and spark.standalone.client.port
      1d2d5c6 [Andrew Or] Fix ports for standalone cluster mode
      cb3be88 [Andrew Or] Various doc fixes (broken link, format etc.)
      e837cde [Andrew Or] Remove outdated TODOs
      bfbab28 [Andrew Or] Merge branch 'master' of github.com:apache/spark into configure-ports
      de1b207 [Andrew Or] Update docs to reflect new ports
      b565079 [Andrew Or] Add spark.ports.maxRetries
      2551eb2 [Andrew Or] Remove spark.worker.watcher.port
      151327a [Andrew Or] Merge branch 'master' of github.com:apache/spark into configure-ports
      9868358 [Andrew Or] Add a few miscellaneous ports
      6016e77 [Andrew Or] Add spark.executor.port
      8d836e6 [Andrew Or] Also document SPARK_{MASTER/WORKER}_WEBUI_PORT
      4d9e6f3 [Andrew Or] Fix super subtle bug
      3f8e51b [Andrew Or] Correct erroneous docs...
      e111d08 [Andrew Or] Add names for UI services
      470f38c [Andrew Or] Special case non-"Address already in use" exceptions
      1d7e408 [Andrew Or] Treat 0 ports specially + return correct ConnectionManager port
      ba32280 [Andrew Or] Minor fixes
      6b550b0 [Andrew Or] Assorted fixes
      73fbe89 [Andrew Or] Move start service logic to Utils
      ec676f4 [Andrew Or] Merge branch 'SPARK-2157' of github.com:ash211/spark into configure-ports
      038a579 [Andrew Ash] Trust the server start function to report the port the service started on
      7c5bdc4 [Andrew Ash] Fix style issue
      0347aef [Andrew Ash] Unify port fallback logic to a single place
      24a4c32 [Andrew Ash] Remove type on val to match surrounding style
      9e4ad96 [Andrew Ash] Reformat for style checker
      5d84e0e [Andrew Ash] Document new port configuration options
      066dc7a [Andrew Ash] Fix up HttpServer port increments
      cad16da [Andrew Ash] Add fallover increment logic for HttpServer
      c5a0568 [Andrew Ash] Fix ConnectionManager to retry with increment
      b80d2fd [Andrew Ash] Make Spark's block manager port configurable
      17c79bb [Andrew Ash] Add a configuration option for spark-shell's class server
      f34115d [Andrew Ash] SPARK-1176 Add port configuration for HttpBroadcast
      49ee29b [Andrew Ash] SPARK-1174 Add port configuration for HttpFileServer
      1c0981a [Andrew Ash] Make port in HttpServer configurable
  8. Aug 02, 2014
    • [SPARK-2454] Do not ship spark home to Workers · 148af608
      Andrew Or authored
      When standalone Workers launch executors, they inherit the Spark home set by the driver. This means if the worker machines do not share the same directory structure as the driver node, the Workers will attempt to run scripts (e.g. bin/compute-classpath.sh) that do not exist locally and fail. This is a common scenario if the driver is launched from outside of the cluster.
      
      The solution is to simply not pass the driver's Spark home to the Workers. This PR further makes an attempt to avoid overloading the usages of `spark.home`, which is now only used for setting executor Spark home on Mesos and in Python.
      
      This is based on top of #1392 and originally reported by YanTangZhai. Tested on standalone cluster.
      
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #1734 from andrewor14/spark-home-reprise and squashes the following commits:
      
      f71f391 [Andrew Or] Revert changes in python
      1c2532c [Andrew Or] Merge branch 'master' of github.com:apache/spark into spark-home-reprise
      188fc5d [Andrew Or] Avoid using spark.home where possible
      09272b7 [Andrew Or] Always use Worker's working directory as spark home
  9. Aug 01, 2014
    • SPARK-2632, SPARK-2576. Fixed by only importing what is necessary during class definition. · 14991011
      Prashant Sharma authored
      Without this patch, it imports everything available in scope.
      
      ```scala
      
      scala> val a = 10l
      val a = 10l
      a: Long = 10
      
      scala> import a._
      import a._
      import a._
      
      scala> case class A(a: Int) // show
      case class A(a: Int) // show
      class $read extends Serializable {
        def <init>() = {
          super.<init>;
          ()
        };
        class $iwC extends Serializable {
          def <init>() = {
            super.<init>;
            ()
          };
          class $iwC extends Serializable {
            def <init>() = {
              super.<init>;
              ()
            };
            import org.apache.spark.SparkContext._;
            class $iwC extends Serializable {
              def <init>() = {
                super.<init>;
                ()
              };
              val $VAL5 = $line5.$read.INSTANCE;
              import $VAL5.$iw.$iw.$iw.$iw.a;
              class $iwC extends Serializable {
                def <init>() = {
                  super.<init>;
                  ()
                };
                import a._;
                class $iwC extends Serializable {
                  def <init>() = {
                    super.<init>;
                    ()
                  };
                  class $iwC extends Serializable {
                    def <init>() = {
                      super.<init>;
                      ()
                    };
                    case class A extends scala.Product with scala.Serializable {
                      <caseaccessor> <paramaccessor> val a: Int = _;
                      def <init>(a: Int) = {
                        super.<init>;
                        ()
                      }
                    }
                  };
                  val $iw = new $iwC.<init>
                };
                val $iw = new $iwC.<init>
              };
              val $iw = new $iwC.<init>
            };
            val $iw = new $iwC.<init>
          };
          val $iw = new $iwC.<init>
        };
        val $iw = new $iwC.<init>
      }
      object $read extends scala.AnyRef {
        def <init>() = {
          super.<init>;
          ()
        };
        val INSTANCE = new $read.<init>
      }
      defined class A
      ```
      
      With this patch, it imports only what is necessary.
      
      ```scala
      
      scala> val a = 10l
      val a = 10l
      a: Long = 10
      
      scala> import a._
      import a._
      import a._
      
      scala> case class A(a: Int) // show
      case class A(a: Int) // show
      class $read extends Serializable {
        def <init>() = {
          super.<init>;
          ()
        };
        class $iwC extends Serializable {
          def <init>() = {
            super.<init>;
            ()
          };
          class $iwC extends Serializable {
            def <init>() = {
              super.<init>;
              ()
            };
            case class A extends scala.Product with scala.Serializable {
              <caseaccessor> <paramaccessor> val a: Int = _;
              def <init>(a: Int) = {
                super.<init>;
                ()
              }
            }
          };
          val $iw = new $iwC.<init>
        };
        val $iw = new $iwC.<init>
      }
      object $read extends scala.AnyRef {
        def <init>() = {
          super.<init>;
          ()
        };
        val INSTANCE = new $read.<init>
      }
      defined class A
      
      scala>
      
      ```
      
      This patch also adds a `:fallback` mode; when enabled, it restores the spark-shell's 1.0.0 behaviour.
      
      Author: Prashant Sharma <scrapcodes@gmail.com>
      Author: Yin Huai <huai@cse.ohio-state.edu>
      Author: Prashant Sharma <prashant.s@imaginea.com>
      
      Closes #1635 from ScrapCodes/repl-fix-necessary-imports and squashes the following commits:
      
      b1968d2 [Prashant Sharma] Added toschemaRDD to test case.
      0b712bb [Yin Huai] Add a REPL test to test importing a method.
      02ad8ff [Yin Huai] Add a REPL test for importing SQLContext.createSchemaRDD.
      ed6d0c7 [Prashant Sharma] Added a fallback mode, incase users run into issues while using repl.
      b63d3b2 [Prashant Sharma] SPARK-2632, SPARK-2576. Fixed by only importing what is necessary during class definition.
  10. Jul 31, 2014
    • [SPARK-2762] SparkILoop leaks memory in multi-repl configurations · 92ca910e
      Timothy Hunter authored
      This pull request is a small refactor so that a partial function (hence a closure) is not created. Instead, a regular function is used. The behavior of the code is not changed.
      
      Author: Timothy Hunter <timhunter@databricks.com>
      
      Closes #1674 from thunterdb/closure_issue and squashes the following commits:
      
      e1e664d [Timothy Hunter] simplify closure
  11. Jul 22, 2014
    • [SPARK-2452] Create a new valid for each instead of using lineId. · 81fec992
      Prashant Sharma authored
      Author: Prashant Sharma <prashant@apache.org>
      
      Closes #1441 from ScrapCodes/SPARK-2452/multi-statement and squashes the following commits:
      
      26c5c72 [Prashant Sharma] Added a test case.
      7e8d28d [Prashant Sharma] SPARK-2452, create a new valid for each  instead of using lineId, because Line ids can be same sometimes.
  12. Jul 10, 2014
    • [SPARK-1776] Have Spark's SBT build read dependencies from Maven. · 628932b8
      Prashant Sharma authored
      This patch introduces the new way of working while also retaining the existing ways of doing things.
      
      For example, the Maven build instruction for YARN is
      `mvn -Pyarn -Phadoop-2.2 clean package -DskipTests`
      in sbt it can become
      `MAVEN_PROFILES="yarn, hadoop-2.2" sbt/sbt clean assembly`
      It also supports
      `sbt/sbt -Pyarn -Phadoop-2.2 -Dhadoop.version=2.2.0 clean assembly`
      
      Author: Prashant Sharma <prashant.s@imaginea.com>
      Author: Patrick Wendell <pwendell@gmail.com>
      
      Closes #772 from ScrapCodes/sbt-maven and squashes the following commits:
      
      a8ac951 [Prashant Sharma] Updated sbt version.
      62b09bb [Prashant Sharma] Improvements.
      fa6221d [Prashant Sharma] Excluding sql from mima
      4b8875e [Prashant Sharma] Sbt assembly no longer builds tools by default.
      72651ca [Prashant Sharma] Addresses code reivew comments.
      acab73d [Prashant Sharma] Revert "Small fix to run-examples script."
      ac4312c [Prashant Sharma] Revert "minor fix"
      6af91ac [Prashant Sharma] Ported oldDeps back. + fixes issues with prev commit.
      65cf06c [Prashant Sharma] Servelet API jars mess up with the other servlet jars on the class path.
      446768e [Prashant Sharma] minor fix
      89b9777 [Prashant Sharma] Merge conflicts
      d0a02f2 [Prashant Sharma] Bumped up pom versions, Since the build now depends on pom it is better updated there. + general cleanups.
      dccc8ac [Prashant Sharma] updated mima to check against 1.0
      a49c61b [Prashant Sharma] Fix for tools jar
      a2f5ae1 [Prashant Sharma] Fixes a bug in dependencies.
      cf88758 [Prashant Sharma] cleanup
      9439ea3 [Prashant Sharma] Small fix to run-examples script.
      96cea1f [Prashant Sharma] SPARK-1776 Have Spark's SBT build read dependencies from Maven.
      36efa62 [Patrick Wendell] Set project name in pom files and added eclipse/intellij plugins.
      4973dbd [Patrick Wendell] Example build using pom reader.
  14. Jun 06, 2014
    • [SPARK-1841]: update scalatest to version 2.1.5 · 41c4a331
      witgo authored
      Author: witgo <witgo@qq.com>
      
      Closes #713 from witgo/scalatest and squashes the following commits:
      
      b627a6a [witgo] merge master
      51fb3d6 [witgo] merge master
      3771474 [witgo] fix RDDSuite
      996d6f9 [witgo] fix TimeStampedWeakValueHashMap test
      9dfa4e7 [witgo] merge bug
      1479b22 [witgo] merge master
      29b9194 [witgo] fix code style
      022a7a2 [witgo] fix test dependency
      a52c0fa [witgo] fix test dependency
      cd8f59d [witgo] Merge branch 'master' of https://github.com/apache/spark into scalatest
      046540d [witgo] fix RDDSuite.scala
      2c543b9 [witgo] fix ReplSuite.scala
      c458928 [witgo] update scalatest to version 2.1.5
  15. Jun 05, 2014
    • [SPARK-2029] Bump pom.xml version number of master branch to 1.1.0-SNAPSHOT. · 7c160293
      Takuya UESHIN authored
      Author: Takuya UESHIN <ueshin@happy-camper.st>
      
      Closes #974 from ueshin/issues/SPARK-2029 and squashes the following commits:
      
      e19e8f4 [Takuya UESHIN] Bump version number to 1.1.0-SNAPSHOT.
    • Fix issue in ReplSuite with hadoop-provided profile. · b77c19be
      Marcelo Vanzin authored
      When building the assembly with the maven "hadoop-provided"
      profile, the executors were failing to come up because Hadoop classes
      were not found in the classpath anymore; so add them explicitly to
      the classpath using spark.executor.extraClassPath. This is only
      needed for the local-cluster mode, but doesn't affect other tests,
      so it's added for all of them to keep the code simpler.
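
      A hedged sketch of the resulting test configuration (the exact classpath value used in the real suite may differ):

      ```scala
      import org.apache.spark.SparkConf

      // With a hadoop-provided assembly, executors in local-cluster mode
      // cannot find the Hadoop classes, so put the test JVM's own
      // classpath on the executors' classpath explicitly.
      val conf = new SparkConf()
        .setMaster("local-cluster[2,1,512]")
        .setAppName("ReplSuite")
        .set("spark.executor.extraClassPath", sys.props("java.class.path"))
      ```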
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #781 from vanzin/repl-test-fix and squashes the following commits:
      
      4f0a3b0 [Marcelo Vanzin] Fix issue in ReplSuite with hadoop-provided profile.
  16. Jun 03, 2014
    • [SPARK-1942] Stop clearing spark.driver.port in unit tests · 7782a304
      Syed Hashmi authored
      Stop resetting spark.driver.port in unit tests (Scala, Java and Python).
      
      Author: Syed Hashmi <shashmi@cloudera.com>
      Author: CodingCat <zhunansjtu@gmail.com>
      
      Closes #943 from syedhashmi/master and squashes the following commits:
      
      885f210 [Syed Hashmi] Removing unnecessary file (created by mergetool)
      b8bd4b5 [Syed Hashmi] Merge remote-tracking branch 'upstream/master'
      b895e59 [Syed Hashmi] Revert "[SPARK-1784] Add a new partitioner"
      57b6587 [Syed Hashmi] Revert "[SPARK-1784] Add a balanced partitioner"
      1574769 [Syed Hashmi] [SPARK-1942] Stop clearing spark.driver.port in unit tests
      4354836 [Syed Hashmi] Revert "SPARK-1686: keep schedule() calling in the main thread"
      fd36542 [Syed Hashmi] [SPARK-1784] Add a balanced partitioner
      6668015 [CodingCat] SPARK-1686: keep schedule() calling in the main thread
      4ca94cc [Syed Hashmi] [SPARK-1784] Add a new partitioner
  18. May 24, 2014
    • [SPARK-1900 / 1918] PySpark on YARN is broken · 5081a0a9
      Andrew Or authored
      If I run the following on a YARN cluster
      ```
      bin/spark-submit sheep.py --master yarn-client
      ```
      it fails because of a mismatch in paths: `spark-submit` thinks that `sheep.py` resides on HDFS, and balks when it can't find the file there. A natural workaround is to add the `file:` prefix to the file:
      ```
      bin/spark-submit file:/path/to/sheep.py --master yarn-client
      ```
      However, this also fails. This time it is because Python does not understand URI schemes.
      
      This PR fixes this by automatically resolving all paths passed as command line argument to `spark-submit` properly. This has the added benefit of keeping file and jar paths consistent across different cluster modes. For python, we strip the URI scheme before we actually try to run it.
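
      A minimal sketch of the resolution rule (illustrative, not the exact `Utils` implementation):

      ```scala
      import java.io.File
      import java.net.URI

      // Paths that already carry a URI scheme are left alone; anything
      // else is assumed to be a local file and gets a "file:" scheme.
      def resolveURI(rawPath: String): URI = {
        val uri = new URI(rawPath)
        if (uri.getScheme != null) uri
        else new File(rawPath).getAbsoluteFile.toURI
      }
      ```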
      
      Much of the code is originally written by @mengxr. Tested on YARN cluster. More tests pending.
      
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #853 from andrewor14/submit-paths and squashes the following commits:
      
      0bb097a [Andrew Or] Format path correctly before adding it to PYTHONPATH
      323b45c [Andrew Or] Include --py-files on PYTHONPATH for pyspark shell
      3c36587 [Andrew Or] Improve error messages (minor)
      854aa6a [Andrew Or] Guard against NPE if user gives pathological paths
      6638a6b [Andrew Or] Fix spark-shell jar paths after #849 went in
      3bb0359 [Andrew Or] Update more comments (minor)
      2a1f8a0 [Andrew Or] Update comments (minor)
      6af2c77 [Andrew Or] Merge branch 'master' of github.com:apache/spark into submit-paths
      a68c4d1 [Andrew Or] Handle Windows python file path correctly
      427a250 [Andrew Or] Resolve paths properly for Windows
      a591a4a [Andrew Or] Update tests for resolving URIs
      6c8621c [Andrew Or] Move resolveURIs to Utils
      db8255e [Andrew Or] Merge branch 'master' of github.com:apache/spark into submit-paths
      f542dce [Andrew Or] Fix outdated tests
      691c4ce [Andrew Or] Ignore special primary resource names
      5342ac7 [Andrew Or] Add missing space in error message
      02f77f3 [Andrew Or] Resolve command line arguments to spark-submit properly
  20. May 12, 2014
    • SPARK-1798. Tests should clean up temp files · 7120a297
      Sean Owen authored
      Three issues related to temp files that tests generate – these should be touched up for hygiene but are not urgent.
      
      Modules have a log4j.properties which directs the unit-test.log output file to a directory like `[module]/target/unit-test.log`. But this ends up creating `[module]/[module]/target/unit-test.log` instead of the former.
      
      The `work/` directory is not deleted by `mvn clean`, either in the parent or in the modules; neither is the `checkpoint/` directory created under the various external modules.
      
      Many tests create a temp directory, which is not usually deleted. This can be largely resolved by calling `deleteOnExit()` at creation and trying to call `Utils.deleteRecursively` consistently to clean up, sometimes in an `@After` method.
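
      The pattern, sketched with assumed JUnit-style hook names (and a stand-in for `Utils.deleteRecursively`):

      ```scala
      import java.io.File
      import java.nio.file.Files

      var tempDir: File = _

      // At creation: register deleteOnExit() as a safety net.
      def setUp(): Unit = {
        tempDir = Files.createTempDirectory("spark-test").toFile
        tempDir.deleteOnExit()
      }

      // In an @After hook: delete the directory tree eagerly.
      def tearDown(): Unit = {
        def deleteRecursively(f: File): Unit = {
          Option(f.listFiles).toSeq.flatten.foreach(deleteRecursively)
          f.delete()
        }
        deleteRecursively(tempDir)
      }
      ```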
      
      _If anyone seconds the motion, I can create a more significant change that introduces a new test trait along the lines of `LocalSparkContext`, which provides management of temp directories for subclasses to take advantage of._
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #732 from srowen/SPARK-1798 and squashes the following commits:
      
      5af578e [Sean Owen] Try to consistently delete test temp dirs and files, and set deleteOnExit() for each
      b21b356 [Sean Owen] Remove work/ and checkpoint/ dirs with mvn clean
      bdd0f41 [Sean Owen] Remove duplicate module dir in log4j.properties output path for tests
  21. May 06, 2014
    • [SPARK-1549] Add Python support to spark-submit · 951a5d93
      Matei Zaharia authored
      This PR updates spark-submit to allow submitting Python scripts (currently only with deploy-mode=client, but that's all that was supported before) and updates the PySpark code to properly find various paths, etc. One significant change is that we assume we can always find the Python files either from the Spark assembly JAR (which will happen with the Maven assembly build in make-distribution.sh) or from SPARK_HOME (which will exist in local mode even if you use sbt assembly, and should be enough for testing). This means we no longer need a weird hack to modify the environment for YARN.
      
      This patch also updates the Python worker manager to run python with -u, which means unbuffered output (send it to our logs right away instead of waiting a while after stuff was written); this should simplify debugging.
      
      In addition, it fixes https://issues.apache.org/jira/browse/SPARK-1709, setting the main class from a JAR's Main-Class attribute if not specified by the user, and fixes a few help strings and style issues in spark-submit.
      
      In the future we may want to make the `pyspark` shell use spark-submit as well, but it seems unnecessary for 1.0.
      
      Author: Matei Zaharia <matei@databricks.com>
      
      Closes #664 from mateiz/py-submit and squashes the following commits:
      
      15e9669 [Matei Zaharia] Fix some uses of path.separator property
      051278c [Matei Zaharia] Small style fixes
      0afe886 [Matei Zaharia] Add license headers
      4650412 [Matei Zaharia] Add pyFiles to PYTHONPATH in executors, remove old YARN stuff, add tests
      15f8e1e [Matei Zaharia] Set PYTHONPATH in PythonWorkerFactory in case it wasn't set from outside
      47c0655 [Matei Zaharia] More work to make spark-submit work with Python:
      d4375bd [Matei Zaharia] Clean up description of spark-submit args a bit and add Python ones
  22. Apr 29, 2014
    • Improved build configuration · 030f2c21
      witgo authored
      1. Fix SPARK-1441: compile spark core error with hadoop 0.23.x
      2. Fix SPARK-1491: maven hadoop-provided profile fails to build
      3. Fix inconsistent version dependencies on org.scala-lang:* and org.apache.avro:*
      4. Reformatted sql/catalyst/pom.xml, sql/hive/pom.xml and sql/core/pom.xml (four-space indentation changed to two spaces)
      
      Author: witgo <witgo@qq.com>
      
      Closes #480 from witgo/format_pom and squashes the following commits:
      
      03f652f [witgo] review commit
      b452680 [witgo] Merge branch 'master' of https://github.com/apache/spark into format_pom
      bee920d [witgo] revert fix SPARK-1629: Spark Core missing commons-lang dependence
      7382a07 [witgo] Merge branch 'master' of https://github.com/apache/spark into format_pom
      6902c91 [witgo] fix SPARK-1629: Spark Core missing commons-lang dependence
      0da4bc3 [witgo] merge master
      d1718ed [witgo] Merge branch 'master' of https://github.com/apache/spark into format_pom
      e345919 [witgo] add avro dependency to yarn-alpha
      77fad08 [witgo] Merge branch 'master' of https://github.com/apache/spark into format_pom
      62d0862 [witgo] Fix org.scala-lang: * inconsistent versions dependency
      1a162d7 [witgo] Merge branch 'master' of https://github.com/apache/spark into format_pom
      934f24d [witgo] review commit
      cf46edc [witgo] exclude jruby
      06e7328 [witgo] Merge branch 'SparkBuild' into format_pom
      99464d2 [witgo] fix maven hadoop-provided profile fails to build
      0c6c1fc [witgo] Fix compile spark core error with hadoop 0.23.x
      6851bec [witgo] Maintain consistent SparkBuild.scala, pom.xml
  23. Apr 25, 2014
    • SPARK-1619 Launch spark-shell with spark-submit · dc3b640a
      Patrick Wendell authored
      This simplifies the shell a bunch and passes all arguments through to spark-submit.
      
      There is a tiny incompatibility from 0.9.1, which is that you can no longer pass `-c`, only `--cores`. However, spark-submit will give a good error message in this case; I don't think many people used this, and it's a trivial change for users.
      
      Author: Patrick Wendell <pwendell@gmail.com>
      
      Closes #542 from pwendell/spark-shell and squashes the following commits:
      
      9eb3e6f [Patrick Wendell] Updating Spark docs
      b552459 [Patrick Wendell] Andrew's feedback
      97720fa [Patrick Wendell] Review feedback
      aa2900b [Patrick Wendell] SPARK-1619 Launch spark-shell with spark-submit
  24. Apr 24, 2014
    • SPARK-1586 Windows build fixes · 968c0187
      Mridul Muralidharan authored
      Unfortunately, this is not exhaustive; in particular, Hive tests still fail due to path issues.
      
      Author: Mridul Muralidharan <mridulm80@apache.org>
      
      This patch had conflicts when merged, resolved by
      Committer: Matei Zaharia <matei@databricks.com>
      
      Closes #505 from mridulm/windows_fixes and squashes the following commits:
      
      ef12283 [Mridul Muralidharan] Move to org.apache.commons.lang3 for StringEscapeUtils. Earlier version was buggy appparently
      cdae406 [Mridul Muralidharan] Remove leaked changes from > 2G fix branch
      3267f4b [Mridul Muralidharan] Fix build failures
      35b277a [Mridul Muralidharan] Fix Scalastyle failures
      bc69d14 [Mridul Muralidharan] Change from hardcoded path separator
      10c4d78 [Mridul Muralidharan] Use explicit encoding while using getBytes
      1337abd [Mridul Muralidharan] fix classpath while running in windows
    • Fix Scala Style · a03ac222
      Sandeep authored
      Any comments are welcome
      
      Author: Sandeep <sandeep@techaddict.me>
      
      Closes #531 from techaddict/stylefix-1 and squashes the following commits:
      
      7492730 [Sandeep] Pass 4
      98b2428 [Sandeep] fix rxin suggestions
      b5e2e6f [Sandeep] Pass 3
      05932d7 [Sandeep] fix if else styling 2
      08690e5 [Sandeep] fix if else styling
  25. Apr 19, 2014
    • REPL cleanup. · 3a390bfd
      Michael Armbrust authored
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #451 from marmbrus/replCleanup and squashes the following commits:
      
      088526a [Michael Armbrust] REPL cleanup.
  26. Apr 13, 2014
    • SPARK-1480: Clean up use of classloaders · 4bc07eeb
      Patrick Wendell authored
      The Spark codebase is a bit fast-and-loose when accessing classloaders and this has caused a few bugs to surface in master.
      
      This patch defines some utility methods for accessing classloaders. This makes the intention when accessing a classloader much more explicit in the code and fixes a few cases where the wrong one was chosen.
      
      case (a) -> We want the classloader that loaded Spark
      case (b) -> We want the context class loader, or if not present, we want (a)
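
      A minimal sketch of the two accessors (the method names here are assumptions):

      ```scala
      // (a) the classloader that loaded Spark itself
      def getSparkClassLoader: ClassLoader = getClass.getClassLoader

      // (b) the thread's context classloader, falling back to (a)
      def getContextOrSparkClassLoader: ClassLoader =
        Option(Thread.currentThread().getContextClassLoader)
          .getOrElse(getSparkClassLoader)
      ```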
      
      This patch provides a better fix for SPARK-1403 (https://issues.apache.org/jira/browse/SPARK-1403) than the current work around, which it reverts. It also fixes a previously unreported bug that the `./spark-submit` script did not work for running with `local` master. It didn't work because the executor classloader did not properly delegate to the context class loader (if it is defined) and in local mode the context class loader is set by the `./spark-submit` script. A unit test is added for that case.
      
      Author: Patrick Wendell <pwendell@gmail.com>
      
      Closes #398 from pwendell/class-loaders and squashes the following commits:
      
      b4a1a58 [Patrick Wendell] Minor clean up
      14f1272 [Patrick Wendell] SPARK-1480: Clean up use of classloaders
  27. Apr 10, 2014
    • Remove Unnecessary Whitespace's · 930b70f0
      Sandeep authored
      These are stacked together in one commit; otherwise they show up chunk by chunk in different commits.
      
      Author: Sandeep <sandeep@techaddict.me>
      
      Closes #380 from techaddict/white_space and squashes the following commits:
      
      b58f294 [Sandeep] Remove Unnecessary Whitespace's
    • [SPARK-1276] Add a HistoryServer to render persisted UI · 79820fe8
      Andrew Or authored
      The new feature of event logging, introduced in #42, allows the user to persist the details of his/her Spark application to storage, and later replay these events to reconstruct an after-the-fact SparkUI.
      Currently, however, a persisted UI can only be rendered through the standalone Master. This greatly limits the use case of this new feature as many people also run Spark on Yarn / Mesos.
      
      This PR introduces a new entity called the HistoryServer, which, given a log directory, keeps track of all completed applications independently of a Spark Master. Unlike the Master, the HistoryServer need not be running while the application is still running. It is relatively lightweight in that it only maintains static information about applications and performs no scheduling.
      
      To quickly test it out, generate event logs with ```spark.eventLog.enabled=true``` and run ```sbin/start-history-server.sh <log-dir-path>```. Your HistoryServer awaits on port 18080.
      
      Comments and feedback are most welcome.
      
      ---
      
      A few other changes introduced in this PR include refactoring the WebUI interface, which is beginning to have a lot of duplicate code now that we have added more functionality to it. Two new SparkListenerEvents have been introduced (SparkListenerApplicationStart/End) to keep track of application name and start/finish times. This PR also clarifies the semantics of the ReplayListenerBus introduced in #42.
      
      A potential TODO in the future (not part of this PR) is to render live applications in addition to just completed applications. This is useful when applications fail, a condition that our current HistoryServer does not handle unless the user manually signals application completion (by creating the APPLICATION_COMPLETION file). Handling live applications becomes significantly more challenging, however, because it is now necessary to render the same SparkUI multiple times. To avoid reading the entire log every time, which is inefficient, we must handle reading the log from where we previously left off, but this becomes fairly complicated because we must deal with the arbitrary behavior of each input stream.
      
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #204 from andrewor14/master and squashes the following commits:
      
      7b7234c [Andrew Or] Finished -> Completed
      b158d98 [Andrew Or] Address Patrick's comments
      69d1b41 [Andrew Or] Do not block on posting SparkListenerApplicationEnd
      19d5dd0 [Andrew Or] Merge github.com:apache/spark
      f7f5bf0 [Andrew Or] Make history server's web UI port a Spark configuration
      2dfb494 [Andrew Or] Decouple checking for application completion from replaying
      d02dbaa [Andrew Or] Expose Spark version and include it in event logs
      2282300 [Andrew Or] Add documentation for the HistoryServer
      567474a [Andrew Or] Merge github.com:apache/spark
      6edf052 [Andrew Or] Merge github.com:apache/spark
      19e1fb4 [Andrew Or] Address Thomas' comments
      248cb3d [Andrew Or] Limit number of live applications + add configurability
      a3598de [Andrew Or] Do not close file system with ReplayBus + fix bind address
      bc46fc8 [Andrew Or] Merge github.com:apache/spark
      e2f4ff9 [Andrew Or] Merge github.com:apache/spark
      050419e [Andrew Or] Merge github.com:apache/spark
      81b568b [Andrew Or] Fix strange error messages...
      0670743 [Andrew Or] Decouple page rendering from loading files from disk
      1b2f391 [Andrew Or] Minor changes
      a9eae7e [Andrew Or] Merge branch 'master' of github.com:apache/spark
      d5154da [Andrew Or] Styling and comments
      5dbfbb4 [Andrew Or] Merge branch 'master' of github.com:apache/spark
      60bc6d5 [Andrew Or] First complete implementation of HistoryServer (only for finished apps)
      7584418 [Andrew Or] Report application start/end times to HistoryServer
      8aac163 [Andrew Or] Add basic application table
      c086bd5 [Andrew Or] Add HistoryServer and scripts ++ Refactor WebUI interface
  28. Apr 09, 2014
    • Spark-939: allow user jars to take precedence over spark jars · fa0524fd
      Holden Karau authored
      I still need to do a small bit of refactoring (mostly the one Java file, which I'll switch back to a Scala file and use in both class loaders), but comments on other things I should do would be great.
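
      For context, a child-first classloader is the general technique for letting user jars take precedence; a minimal sketch (Spark's actual loader differs in detail):

      ```scala
      import java.net.{URL, URLClassLoader}

      // Search the user jars first (a null parent limits super.loadClass
      // to the bootstrap loader plus `urls`), then fall back to the
      // regular Spark/application loader.
      class ChildFirstURLClassLoader(urls: Array[URL], fallback: ClassLoader)
          extends URLClassLoader(urls, null) {
        override def loadClass(name: String): Class[_] =
          try super.loadClass(name)
          catch { case _: ClassNotFoundException => fallback.loadClass(name) }
      }
      ```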
      
      Author: Holden Karau <holden@pigscanfly.ca>
      
      Closes #217 from holdenk/spark-939-allow-user-jars-to-take-precedence-over-spark-jars and squashes the following commits:
      
      cf0cac9 [Holden Karau] Fix the executorclassloader
      1955232 [Holden Karau] Fix long line in TestUtils
      8f89965 [Holden Karau] Fix tests for new class name
      7546549 [Holden Karau] CR feedback, merge some of the testutils methods down, rename the classloader
      644719f [Holden Karau] User the class generator for the repl class loader tests too
      f0b7114 [Holden Karau] Fix the core/src/test/scala/org/apache/spark/executor/ExecutorURLClassLoaderSuite.scala tests
      204b199 [Holden Karau] Fix the generated classes
      9f68f10 [Holden Karau] Start rewriting the ExecutorURLClassLoaderSuite to not use the hard coded classes
      858aba2 [Holden Karau] Remove a bunch of test junk
      261aaee [Holden Karau] simplify executorurlclassloader a bit
      7a7bf5f [Holden Karau] CR feedback
      d4ae848 [Holden Karau] rewrite component into scala
      aa95083 [Holden Karau] CR feedback
      7752594 [Holden Karau] re-add https comment
      a0ef85a [Holden Karau] Fix style issues
      125ea7f [Holden Karau] Easier to just remove those files, we don't need them
      bb8d179 [Holden Karau] Fix issues with the repl class loader
      241b03d [Holden Karau] fix my rat excludes
      a343350 [Holden Karau] Update rat-excludes and remove a useless file
      d90d217 [Holden Karau] Fix fall back with custom class loader and add a test for it
      4919bf9 [Holden Karau] Fix parent calling class loader issue
      8a67302 [Holden Karau] Test are good
      9e2d236 [Holden Karau] It works comrade
      691ee00 [Holden Karau] It works ish
      dc4fe44 [Holden Karau] Does not depend on being in my home directory
      47046ff [Holden Karau] Remove bad import'
      22d83cb [Holden Karau] Add a test suite for the executor url class loader suite
      7ef4628 [Holden Karau] Clean up
      792d961 [Holden Karau] Almost works
      16aecd1 [Holden Karau] Doesn't quite work
      8d2241e [Holden Karau] Adda FakeClass for testing ClassLoader precedence options
      648b559 [Holden Karau] Both class loaders compile. Now for testing
      e1d9f71 [Holden Karau] One loader workers.
  29. Apr 07, 2014
    • SPARK-1099: Introduce local[*] mode to infer number of cores · 0307db0f
      Aaron Davidson authored
      This is the default mode for running spark-shell and pyspark, intended to allow users running Spark for the first time to see the performance benefits of using multiple cores, while not breaking backwards compatibility for users who use "local" mode and expect exactly 1 core.
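
      A sketch of the master-string handling this implies (illustrative, not the exact SparkContext parsing code):

      ```scala
      // "local" keeps the backwards-compatible single core; "local[*]"
      // infers the machine's core count; "local[N]" uses exactly N.
      def localCores(master: String): Int = master match {
        case "local"    => 1
        case "local[*]" => Runtime.getRuntime.availableProcessors()
        case s if s.startsWith("local[") && s.endsWith("]") =>
          s.stripPrefix("local[").stripSuffix("]").toInt
        case _ => throw new IllegalArgumentException(s"Not a local master: $master")
      }
      ```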
      
      Author: Aaron Davidson <aaron@databricks.com>
      
      Closes #182 from aarondav/110 and squashes the following commits:
      
      a88294c [Aaron Davidson] Rebased changes for new spark-shell
      a9f393e [Aaron Davidson] SPARK-1099: Introduce local[*] mode to infer number of cores
  31. Mar 28, 2014
    • SPARK-1096, a space after comment start style checker. · 60abc252
      Prashant Sharma authored
      Author: Prashant Sharma <prashant.s@imaginea.com>
      
      Closes #124 from ScrapCodes/SPARK-1096/scalastyle-comment-check and squashes the following commits:
      
      214135a [Prashant Sharma] Review feedback.
      5eba88c [Prashant Sharma] Fixed style checks for ///+ comments.
      e54b2f8 [Prashant Sharma] improved message, work around.
      83e7144 [Prashant Sharma] removed dependency on scalastyle in plugin, since scalastyle sbt plugin already depends on the right version. Incase we update the plugin we will have to adjust our spark-style project to depend on right scalastyle version.
      810a1d6 [Prashant Sharma] SPARK-1096, a space after comment style checker.
      ba33193 [Prashant Sharma] scala style as a project
    • [SPARK-1210] Prevent ContextClassLoader of Actor from becoming ClassLoader of Executor. · 3d89043b
      Takuya UESHIN authored
      
      Constructor of `org.apache.spark.executor.Executor` should not set the context class loader of the current thread, which is the backend Actor's thread.
      
      Run the following code in local-mode REPL.
      
      ```
      scala> case class Foo(i: Int)
      scala> val ret = sc.parallelize((1 to 100).map(Foo), 10).collect
      ```
      
      This causes errors as follows:
      
      ```
      ERROR actor.OneForOneStrategy: [L$line5.$read$$iwC$$iwC$$iwC$$iwC$Foo;
      java.lang.ArrayStoreException: [L$line5.$read$$iwC$$iwC$$iwC$$iwC$Foo;
           at scala.runtime.ScalaRunTime$.array_update(ScalaRunTime.scala:88)
           at org.apache.spark.SparkContext$$anonfun$runJob$3.apply(SparkContext.scala:870)
           at org.apache.spark.SparkContext$$anonfun$runJob$3.apply(SparkContext.scala:870)
           at org.apache.spark.scheduler.JobWaiter.taskSucceeded(JobWaiter.scala:56)
           at org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:859)
           at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:616)
           at org.apache.spark.scheduler.DAGScheduler$$anonfun$start$1$$anon$2$$anonfun$receive$1.applyOrElse(DAGScheduler.scala:207)
           at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
           at akka.actor.ActorCell.invoke(ActorCell.scala:456)
           at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
           at akka.dispatch.Mailbox.run(Mailbox.scala:219)
           at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
           at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
           at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
           at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
           at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
      ```
      
      This is because the class loaders used to deserialize the resulting `Foo` instances might be different from the backend Actor's, and the Actor's class loader should be the same as the Driver's.
      
      Author: Takuya UESHIN <ueshin@happy-camper.st>
      
      Closes #15 from ueshin/wip/wrongcontextclassloader and squashes the following commits:
      
      d79e8c0 [Takuya UESHIN] Change a parent class loader of ExecutorURLClassLoader.
      c6c09b6 [Takuya UESHIN] Add a test to collect objects of class defined in repl.
      43e0feb [Takuya UESHIN] Prevent ContextClassLoader of Actor from becoming ClassLoader of Executor.
  32. Mar 26, 2014
    • SPARK-1325. The maven build error for Spark Tools · 1fa48d94
      Sean Owen authored
      This is just a slight variation on https://github.com/apache/spark/pull/234 and alternative suggestion for SPARK-1325. `scala-actors` is not necessary. `SparkBuild.scala` should be updated to reflect the direct dependency on `scala-reflect` and `scala-compiler`. And the `repl` build, which has the same dependencies, should also be consistent between Maven / SBT.
      
      Author: Sean Owen <sowen@cloudera.com>
      Author: witgo <witgo@qq.com>
      
      Closes #240 from srowen/SPARK-1325 and squashes the following commits:
      
      25bd7db [Sean Owen] Add necessary dependencies scala-reflect and scala-compiler to tools. Update repl dependencies, which are similar, to be consistent between Maven / SBT in this regard too.
  33. Mar 09, 2014
    • SPARK-782 Clean up for ASM dependency. · b9be1609
      Patrick Wendell authored
      This makes two changes.
      
      1) Spark uses the shaded version of asm that is (conveniently) published
         with Kryo.
      2) Existing exclude rules around asm are updated to reflect the new groupId
         of `org.ow2.asm`; the groupId change had made all of the old rules fail to
         match the new asm versions pulled in by newer Hadoop versions (see the
         sketch below).
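
      A hedged sbt-style sketch of the updated exclusion in (2) (the module names are illustrative):

      ```scala
      // asm's groupId moved from "asm" to "org.ow2.asm", so both spellings
      // must be excluded to cover old and new Hadoop dependency trees.
      libraryDependencies += ("org.apache.hadoop" % "hadoop-client" % "2.2.0")
        .exclude("asm", "asm")
        .exclude("org.ow2.asm", "asm")
      ```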
      
      Author: Patrick Wendell <pwendell@gmail.com>
      
      Closes #100 from pwendell/asm and squashes the following commits:
      
      9235f3f [Patrick Wendell] SPARK-782 Clean up for ASM dependency.
  34. Mar 08, 2014
    • SPARK-1193. Fix indentation in pom.xmls · a99fb374
      Sandy Ryza authored
      Author: Sandy Ryza <sandy@cloudera.com>
      
      Closes #91 from sryza/sandy-spark-1193 and squashes the following commits:
      
      a878124 [Sandy Ryza] SPARK-1193. Fix indentation in pom.xmls