  1. Aug 02, 2014
• [SPARK-1981] Add AWS Kinesis streaming support · 91f9504e
      Chris Fregly authored
      Author: Chris Fregly <chris@fregly.com>
      
      Closes #1434 from cfregly/master and squashes the following commits:
      
      4774581 [Chris Fregly] updated docs, renamed retry to retryRandom to be more clear, removed retries around store() method
      0393795 [Chris Fregly] moved Kinesis examples out of examples/ and back into extras/kinesis-asl
      691a6be [Chris Fregly] fixed tests and formatting, fixed a bug with JavaKinesisWordCount during union of streams
      0e1c67b [Chris Fregly] Merge remote-tracking branch 'upstream/master'
      74e5c7c [Chris Fregly] updated per TD's feedback.  simplified examples, updated docs
      e33cbeb [Chris Fregly] Merge remote-tracking branch 'upstream/master'
      bf614e9 [Chris Fregly] per matei's feedback:  moved the kinesis examples into the examples/ dir
      d17ca6d [Chris Fregly] per TD's feedback:  updated docs, simplified the KinesisUtils api
      912640c [Chris Fregly] changed the foundKinesis class to be a publically-avail class
      db3eefd [Chris Fregly] Merge remote-tracking branch 'upstream/master'
      21de67f [Chris Fregly] Merge remote-tracking branch 'upstream/master'
      6c39561 [Chris Fregly] parameterized the versions of the aws java sdk and kinesis client
      338997e [Chris Fregly] improve build docs for kinesis
      828f8ae [Chris Fregly] more cleanup
      e7c8978 [Chris Fregly] Merge remote-tracking branch 'upstream/master'
      cd68c0d [Chris Fregly] fixed typos and backward compatibility
      d18e680 [Chris Fregly] Merge remote-tracking branch 'upstream/master'
      b3b0ff1 [Chris Fregly] [SPARK-1981] Add AWS Kinesis streaming support
      91f9504e
  2. Jul 31, 2014
• Docs: monitoring, streaming programming guide · cc820502
      kballou authored
      Fix several awkward wordings and grammatical issues in the following
      documents:
      
      *   docs/monitoring.md
      
      *   docs/streaming-programming-guide.md
      
      Author: kballou <kballou@devnulllabs.io>
      
      Closes #1662 from kennyballou/grammar_fixes and squashes the following commits:
      
      e1b8ad6 [kballou] Docs: monitoring, streaming programming guide
      cc820502
  3. Jul 03, 2014
• Streaming programming guide typos · fdc4c112
      Clément MATHIEU authored
      Fix a bad Java code sample and a broken link in the streaming programming guide.
      
      Author: Clément MATHIEU <clement@unportant.info>
      
      Closes #1286 from cykl/streaming-programming-guide-typos and squashes the following commits:
      
      b0908cb [Clément MATHIEU] Fix broken URL
      9d3c535 [Clément MATHIEU] Spark streaming requires at least two working threads (scala version was OK)
      fdc4c112
  4. Jun 13, 2014
• Small correction in Streaming Programming Guide doc · edb1f0e3
      akkomar authored
      Corrected description of `repartition` function under 'Level of Parallelism in Data Receiving'.
      
      Author: akkomar <ak.komar@gmail.com>
      
      Closes #1079 from akkomar/streaming-guide-doc and squashes the following commits:
      
      32dfc62 [akkomar] Corrected description of `repartition` function under 'Level of Parallelism in Data Receiving'.
      edb1f0e3
  5. May 31, 2014
• SPARK-1976: fix the misleading part in streaming docs · 41bfdda3
      CodingCat authored
      Spark streaming requires at least two working threads, but the document gives the example like
      
      import org.apache.spark.api.java.function._
      import org.apache.spark.streaming._
      import org.apache.spark.streaming.api._
      // Create a StreamingContext with a local master
      val ssc = new StreamingContext("local", "NetworkWordCount", Seconds(1))
      http://spark.apache.org/docs/latest/streaming-programming-guide.html
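A sketch of the corrected setup this fix points toward (assuming the Spark 1.x `StreamingContext(master, appName, batchDuration)` constructor quoted above):

```scala
import org.apache.spark.streaming.{Seconds, StreamingContext}

// "local[2]" instead of "local": one thread is dedicated to the receiver,
// so at least one more thread is needed to actually process the received data.
val ssc = new StreamingContext("local[2]", "NetworkWordCount", Seconds(1))
```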
      
      Author: CodingCat <zhunansjtu@gmail.com>
      
      Closes #924 from CodingCat/master and squashes the following commits:
      
      bb89f20 [CodingCat] update streaming docs
      41bfdda3
  6. May 30, 2014
• [SPARK-1566] consolidate programming guide, and general doc updates · c8bf4131
      Matei Zaharia authored
      This is a fairly large PR to clean up and update the docs for 1.0. The major changes are:
      
      * A unified programming guide for all languages replaces language-specific ones and shows language-specific info in tabs
      * New programming guide sections on key-value pairs, unit testing, input formats beyond text, migrating from 0.9, and passing functions to Spark
      * Spark-submit guide moved to a separate page and expanded slightly
      * Various cleanups of the menu system, security docs, and others
      * Updated look of title bar to differentiate the docs from previous Spark versions
      
      You can find the updated docs at http://people.apache.org/~matei/1.0-docs/_site/ and in particular http://people.apache.org/~matei/1.0-docs/_site/programming-guide.html.
      
      Author: Matei Zaharia <matei@databricks.com>
      
      Closes #896 from mateiz/1.0-docs and squashes the following commits:
      
      03e6853 [Matei Zaharia] Some tweaks to configuration and YARN docs
      0779508 [Matei Zaharia] tweak
      ef671d4 [Matei Zaharia] Keep frames in JavaDoc links, and other small tweaks
      1bf4112 [Matei Zaharia] Review comments
      4414f88 [Matei Zaharia] tweaks
      d04e979 [Matei Zaharia] Fix some old links to Java guide
      a34ed33 [Matei Zaharia] tweak
      541bb3b [Matei Zaharia] miscellaneous changes
      fcefdec [Matei Zaharia] Moved submitting apps to separate doc
      61d72b4 [Matei Zaharia] stuff
      181f217 [Matei Zaharia] migration guide, remove old language guides
      e11a0da [Matei Zaharia] Add more API functions
      6a030a9 [Matei Zaharia] tweaks
      8db0ae3 [Matei Zaharia] Added key-value pairs section
      318d2c9 [Matei Zaharia] tweaks
      1c81477 [Matei Zaharia] New section on basics and function syntax
      e38f559 [Matei Zaharia] Actually added programming guide to Git
      a33d6fe [Matei Zaharia] First pass at updating programming guide to support all languages, plus other tweaks throughout
      3b6a876 [Matei Zaharia] More CSS tweaks
      01ec8bf [Matei Zaharia] More CSS tweaks
      e6d252e [Matei Zaharia] Change color of doc title bar to differentiate from 0.9.0
      c8bf4131
  7. May 28, 2014
• Fix doc about NetworkWordCount/JavaNetworkWordCount usage of spark streaming · 82eadc3b
      jmu authored
      Usage: NetworkWordCount <master> <hostname> <port>
      -->
      Usage: NetworkWordCount <hostname> <port>
      
      Usage: JavaNetworkWordCount <master> <hostname> <port>
      -->
      Usage: JavaNetworkWordCount <hostname> <port>
      
      Author: jmu <jmujmu@gmail.com>
      
      Closes #826 from jmu/master and squashes the following commits:
      
      9fb7980 [jmu] Merge branch 'master' of https://github.com/jmu/spark
      b9a6b02 [jmu] Fix doc for NetworkWordCount/JavaNetworkWordCount Usage: NetworkWordCount <master> <hostname> <port> --> Usage: NetworkWordCount <hostname> <port>
      82eadc3b
  8. May 14, 2014
• default task number misleading in several places · 2f639957
      Chen Chao authored
  private[streaming] def defaultPartitioner(numPartitions: Int = self.ssc.sc.defaultParallelism) = {
    new HashPartitioner(numPartitions)
  }

This shows that the default task number in Spark Streaming relies on the variable defaultParallelism in SparkContext, which is decided by the config property spark.default.parallelism.

For the property "spark.default.parallelism", see https://github.com/apache/spark/pull/389
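As a sketch of how the property feeds through (assuming the standard `SparkConf` API; the app name and values below are illustrative):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// spark.default.parallelism determines SparkContext.defaultParallelism,
// which defaultPartitioner uses when no partition count is given.
val conf = new SparkConf()
  .setMaster("local[4]")
  .setAppName("ParallelismDemo")
  .set("spark.default.parallelism", "8")
val sc = new SparkContext(conf)
// sc.defaultParallelism is now 8, so DStream operations that fall back to
// defaultPartitioner will create a HashPartitioner(8).
```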
      
      Author: Chen Chao <crazyjvm@gmail.com>
      
      Closes #766 from CrazyJvm/patch-7 and squashes the following commits:
      
      0b7efba [Chen Chao] Update streaming-programming-guide.md
      cc5b66c [Chen Chao] default task number misleading in several places
      2f639957
  9. May 06, 2014
• SPARK-1637: Clean up examples for 1.0 · a000b5c3
      Sandeep authored
      - [x] Move all of them into subpackages of org.apache.spark.examples (right now some are in org.apache.spark.streaming.examples, for instance, and others are in org.apache.spark.examples.mllib)
      - [x] Move Python examples into examples/src/main/python
      - [x] Update docs to reflect these changes
      
      Author: Sandeep <sandeep@techaddict.me>
      
      This patch had conflicts when merged, resolved by
      Committer: Matei Zaharia <matei@databricks.com>
      
      Closes #571 from techaddict/SPARK-1637 and squashes the following commits:
      
      47ef86c [Sandeep] Changes based on Discussions on PR, removing use of RawTextHelper from examples
      8ed2d3f [Sandeep] Docs Updated for changes, Change for java examples
      5f96121 [Sandeep] Move Python examples into examples/src/main/python
      0a8dd77 [Sandeep] Move all Scala Examples to org.apache.spark.examples (some are in org.apache.spark.streaming.examples, for instance, and others are in org.apache.spark.examples.mllib)
      a000b5c3
  10. May 05, 2014
• [SPARK-1504], [SPARK-1505], [SPARK-1558] Updated Spark Streaming guide · a975a19f
      Tathagata Das authored
      - SPARK-1558: Updated custom receiver guide to match it with the new API
      - SPARK-1504: Added deployment and monitoring subsection to streaming
      - SPARK-1505: Added migration guide for migrating from 0.9.x and below to Spark 1.0
      - Updated various Java streaming examples to use JavaReceiverInputDStream to highlight the API change.
      - Removed the requirement for cleaner ttl from streaming guide
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      Closes #652 from tdas/doc-fix and squashes the following commits:
      
      cb4f4b7 [Tathagata Das] Possible fix for flaky graceful shutdown test.
      ab71f7f [Tathagata Das] Merge remote-tracking branch 'apache-github/master' into doc-fix
      8d6ff9b [Tathagata Das] Addded migration guide to Spark Streaming.
      7d171df [Tathagata Das] Added reference to JavaReceiverInputStream in examples and streaming guide.
      49edd7c [Tathagata Das] Change java doc links to use Java docs.
      11528d7 [Tathagata Das] Updated links on index page.
      ff80970 [Tathagata Das] More updates to streaming guide.
      4dc42e9 [Tathagata Das] Added monitoring and other documentation in the streaming guide.
      14c6564 [Tathagata Das] Updated custom receiver guide.
      a975a19f
  11. May 03, 2014
• SPARK-1663. Corrections for several compile errors in streaming code examples,... · 11d54941
      Sean Owen authored
      SPARK-1663. Corrections for several compile errors in streaming code examples, and updates to follow API changes
      
      I gave the Streaming code examples, both Scala and Java, a test run today. I turned up a number of small errors, mostly compile errors in the Java examples. There were a few typos in the Scala too.
      
      I also took the liberty of adding things like imports, since in several cases they are not obvious. Feel free to push back on some changes.
      
      There's one thing I haven't quite addressed in the changes. `JavaPairDStream` uses the Java API version of `Function2` in almost all cases, as `JFunction2`. However it uses `scala.Function2` in:
      
      ```
        def reduceByKeyAndWindow(reduceFunc: Function2[V, V, V], windowDuration: Duration)
        :JavaPairDStream[K, V] = {
          dstream.reduceByKeyAndWindow(reduceFunc, windowDuration)
        }
      ```
      
      Is that a typo?
      
      Also, in Scala, I could not get this to compile:
      ```
      val windowedWordCounts = pairs.reduceByKeyAndWindow(_ + _, Seconds(30), Seconds(10))
      error: missing parameter type for expanded function ((x$1, x$2) => x$1.$plus(x$2))
      ```
      
      You can see my fix below but am I missing something?
      
      Otherwise I can say these all worked for me!
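The usual workaround is to give the reduce function explicit parameter types so overload resolution succeeds (a sketch, assuming `pairs` is a `DStream[(String, Int)]`):

```scala
import org.apache.spark.streaming.Seconds

// The (_ + _) shorthand leaves the parameter types for the compiler to infer,
// which fails against the overloaded reduceByKeyAndWindow; writing them out compiles.
val windowedWordCounts =
  pairs.reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(30), Seconds(10))
```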
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #589 from srowen/SPARK-1663 and squashes the following commits:
      
      65a906b [Sean Owen] Corrections for several compile errors in streaming code examples, and updates to follow API changes
      11d54941
  12. Apr 25, 2014
• SPARK-1619 Launch spark-shell with spark-submit · dc3b640a
      Patrick Wendell authored
      This simplifies the shell a bunch and passes all arguments through to spark-submit.
      
There is a tiny incompatibility from 0.9.1, which is that you can no longer pass `-c`, only `--cores`. However, spark-submit will give a good error message in this case, I don't think many people used this, and it's a trivial change for users.
      
      Author: Patrick Wendell <pwendell@gmail.com>
      
      Closes #542 from pwendell/spark-shell and squashes the following commits:
      
      9eb3e6f [Patrick Wendell] Updating Spark docs
      b552459 [Patrick Wendell] Andrew's feedback
      97720fa [Patrick Wendell] Review feedback
      aa2900b [Patrick Wendell] SPARK-1619 Launch spark-shell with spark-submit
      dc3b640a
  13. Apr 21, 2014
• [SPARK-1439, SPARK-1440] Generate unified Scaladoc across projects and Javadocs · fc783847
      Matei Zaharia authored
      I used the sbt-unidoc plugin (https://github.com/sbt/sbt-unidoc) to create a unified Scaladoc of our public packages, and generate Javadocs as well. One limitation is that I haven't found an easy way to exclude packages in the Javadoc; there is a SBT task that identifies Java sources to run javadoc on, but it's been very difficult to modify it from outside to change what is set in the unidoc package. Some SBT-savvy people should help with this. The Javadoc site also lacks package-level descriptions and things like that, so we may want to look into that. We may decide not to post these right now if it's too limited compared to the Scala one.
      
      Example of the built doc site: http://people.csail.mit.edu/matei/spark-unified-docs/
      
      Author: Matei Zaharia <matei@databricks.com>
      
      This patch had conflicts when merged, resolved by
      Committer: Patrick Wendell <pwendell@gmail.com>
      
      Closes #457 from mateiz/better-docs and squashes the following commits:
      
      a63d4a3 [Matei Zaharia] Skip Java/Scala API docs for Python package
      5ea1f43 [Matei Zaharia] Fix links to Java classes in Java guide, fix some JS for scrolling to anchors on page load
      f05abc0 [Matei Zaharia] Don't include java.lang package names
      995e992 [Matei Zaharia] Skip internal packages and class names with $ in JavaDoc
      a14a93c [Matei Zaharia] typo
      76ce64d [Matei Zaharia] Add groups to Javadoc index page, and a first package-info.java
      ed6f994 [Matei Zaharia] Generate JavaDoc as well, add titles, update doc site to use unified docs
      acb993d [Matei Zaharia] Add Unidoc plugin for the projects we want Unidoced
      fc783847
  14. Mar 10, 2014
• maintain arbitrary state data for each key · 5d98cfc1
      Chen Chao authored
      RT
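The guide text this commit touches describes `updateStateByKey`; a minimal sketch of the pattern (assuming `wordDstream` is a `DStream[(String, Int)]` and a checkpoint directory has been set):

```scala
// Keep a running count per word across batches as the per-key state.
val updateFunction = (newValues: Seq[Int], runningCount: Option[Int]) =>
  Some(runningCount.getOrElse(0) + newValues.sum)

val runningCounts = wordDstream.updateStateByKey[Int](updateFunction)
```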
      
      Author: Chen Chao <crazyjvm@gmail.com>
      
      Closes #114 from CrazyJvm/patch-1 and squashes the following commits:
      
      dcb0df5 [Chen Chao] maintain arbitrary state data for each key
      5d98cfc1
  15. Mar 03, 2014
• SPARK-1173. (#2) Fix typo in Java streaming example. · f65c1f38
      Aaron Kimball authored
      Companion commit to pull request #64, fix the typo on the Java side of the docs.
      
      Author: Aaron Kimball <aaron@magnify.io>
      
      Closes #65 from kimballa/spark-1173-java-doc-update and squashes the following commits:
      
      8ce11d3 [Aaron Kimball] SPARK-1173. (#2) Fix typo in Java streaming example.
      f65c1f38
• SPARK-1173. Improve scala streaming docs. · 2b53447f
      Aaron Kimball authored
      Clarify imports to add implicit conversions to DStream and
      fix other small typos in the streaming intro documentation.
      
      Tested by inspecting output via a local jekyll server, c&p'ing the scala commands into a spark terminal.
      
      Author: Aaron Kimball <aaron@magnify.io>
      
      Closes #64 from kimballa/spark-1173-streaming-docs and squashes the following commits:
      
      6fbff0e [Aaron Kimball] SPARK-1173. Improve scala streaming docs.
      2b53447f
  16. Feb 19, 2014
  17. Feb 17, 2014
• Fix typos in Spark Streaming programming guide · 767e3ae1
      Andrew Or authored
      Author: Andrew Or <andrewor14@gmail.com>
      
      Closes #536 from andrewor14/streaming-typos and squashes the following commits:
      
      a05faa6 [Andrew Or] Fix broken link and wording
      bc2e4bc [Andrew Or] Merge github.com:apache/incubator-spark into streaming-typos
      d5515b4 [Andrew Or] TD's comments
      767ef12 [Andrew Or] Fix broken links
      8f4c731 [Andrew Or] Fix typos in programming guide
      767e3ae1
  18. Feb 11, 2014
• Merge pull request #579 from CrazyJvm/patch-1. · 4afe6ccf
      Chen Chao authored
      "in the source DStream" rather than "int the source DStream"
      
      "flatMap is a one-to-many DStream operation that creates a new DStream by generating multiple new records from each record int the source DStream."
      
      Author: Chen Chao <crazyjvm@gmail.com>
      
      Closes #579 and squashes the following commits:
      
      4abcae3 [Chen Chao] in the source DStream
      4afe6ccf
  19. Jan 28, 2014
• Merge pull request #497 from tdas/docs-update · 79302096
      Tathagata Das authored
      Updated Spark Streaming Programming Guide
      
      Here is the updated version of the Spark Streaming Programming Guide. This is still a work in progress, but the major changes are in place. So feedback is most welcome.
      
In general, I have tried to make the guide easier to understand even if the reader does not know much about Spark. The updated website is hosted here -
      
      http://www.eecs.berkeley.edu/~tdas/spark_docs/streaming-programming-guide.html
      
      The major changes are:
- Overview illustrates the use cases of Spark Streaming - various input sources and various output sources
- An example right after the overview to quickly give an idea of what a Spark Streaming program looks like
- Made the Java API and examples first-class citizens like Scala by using tabs to show both Scala and Java examples (similar to the AMPCamp tutorial's code tabs)
      - Highlighted the DStream operations updateStateByKey and transform because of their powerful nature
      - Updated driver node failure recovery text to highlight automatic recovery in Spark standalone mode
      - Added information about linking and using the external input sources like Kafka and Flume
      - In general, reorganized the sections to better show the Basic section and the more advanced sections like Tuning and Recovery.
      
      Todos:
      - Links to the docs of external Kafka, Flume, etc
      - Illustrate window operation with figure as well as example.
      
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      
      == Merge branch commits ==
      
      commit 18ff10556570b39d672beeb0a32075215cfcc944
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      Date:   Tue Jan 28 21:49:30 2014 -0800
      
          Fixed a lot of broken links.
      
      commit 34a5a6008dac2e107624c7ff0db0824ee5bae45f
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      Date:   Tue Jan 28 18:02:28 2014 -0800
      
          Updated github url to use SPARK_GITHUB_URL variable.
      
      commit f338a60ae8069e0a382d2cb170227e5757cc0b7a
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      Date:   Mon Jan 27 22:42:42 2014 -0800
      
          More updates based on Patrick and Harvey's comments.
      
      commit 89a81ff25726bf6d26163e0dd938290a79582c0f
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      Date:   Mon Jan 27 13:08:34 2014 -0800
      
          Updated docs based on Patricks PR comments.
      
      commit d5b6196b532b5746e019b959a79ea0cc013a8fc3
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      Date:   Sun Jan 26 20:15:58 2014 -0800
      
          Added spark.streaming.unpersist config and info on StreamingListener interface.
      
      commit e3dcb46ab83d7071f611d9b5008ba6bc16c9f951
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      Date:   Sun Jan 26 18:41:12 2014 -0800
      
          Fixed docs on StreamingContext.getOrCreate.
      
      commit 6c29524639463f11eec721e4d17a9d7159f2944b
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      Date:   Thu Jan 23 18:49:39 2014 -0800
      
          Added example and figure for window operations, and links to Kafka and Flume API docs.
      
      commit f06b964a51bb3b21cde2ff8bdea7d9785f6ce3a9
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      Date:   Wed Jan 22 22:49:12 2014 -0800
      
          Fixed missing endhighlight tag in the MLlib guide.
      
      commit 036a7d46187ea3f2a0fb8349ef78f10d6c0b43a9
      Merge: eab351d a1cd1851
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      Date:   Wed Jan 22 22:17:42 2014 -0800
      
          Merge remote-tracking branch 'apache/master' into docs-update
      
      commit eab351d05c0baef1d4b549e1581310087158d78d
      Author: Tathagata Das <tathagata.das1565@gmail.com>
      Date:   Wed Jan 22 22:17:15 2014 -0800
      
          Update Spark Streaming Programming Guide.
      79302096
  20. Jan 14, 2014
  21. Jan 12, 2014
  22. Jan 02, 2014
  23. Dec 30, 2013
  24. Dec 08, 2013
  25. Oct 24, 2013
• Add a `repartition` operator. · 08c1a42d
      Patrick Wendell authored
      This patch adds an operator called repartition with more straightforward
      semantics than the current `coalesce` operator. There are a few use cases
      where this operator is useful:
      
      1. If a user wants to increase the number of partitions in the RDD. This
      is more common now with streaming. E.g. a user is ingesting data on one
      node but they want to add more partitions to ensure parallelism of
      subsequent operations across threads or the cluster.
      
      Right now they have to call rdd.coalesce(numSplits, shuffle=true) - that's
      super confusing.
      
      2. If a user has input data where the number of partitions is not known. E.g.
      
      > sc.textFile("some file").coalesce(50)....
      
This is semantically vague (am I growing or shrinking this RDD?) and it also
may not work correctly if the base RDD has fewer than 50 partitions.
      
The new operator forces a shuffle every time, so it always produces exactly
the requested number of partitions. It also throws an exception rather than
silently not working if a bad input is passed.
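Put differently (a sketch reusing the hypothetical fifty-partition example above):

```scala
// repartition always shuffles and yields exactly the requested partition count.
val fifty = sc.textFile("some file").repartition(50)

// Equivalent to the older, more confusing spelling:
val alsoFifty = sc.textFile("some file").coalesce(50, shuffle = true)
```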
      
      I am currently adding streaming tests (requires refactoring some of the test
      suite to allow testing at partition granularity), so this is not ready for
      merge yet. But feedback is welcome.
      08c1a42d
  26. Oct 08, 2013
  27. Sep 01, 2013
  28. Aug 29, 2013
• Change build and run instructions to use assemblies · 53cd50c0
      Matei Zaharia authored
      This commit makes Spark invocation saner by using an assembly JAR to
      find all of Spark's dependencies instead of adding all the JARs in
      lib_managed. It also packages the examples into an assembly and uses
      that as SPARK_EXAMPLES_JAR. Finally, it replaces the old "run" script
      with two better-named scripts: "run-examples" for examples, and
      "spark-class" for Spark internal classes (e.g. REPL, master, etc). This
      is also designed to minimize the confusion people have in trying to use
      "run" to run their own classes; it's not meant to do that, but now at
      least if they look at it, they can modify run-examples to do a decent
      job for them.
      
      As part of this, Bagel's examples are also now properly moved to the
      examples package instead of bagel.
      53cd50c0
  29. Aug 22, 2013
  30. Jul 12, 2013
  31. Apr 10, 2013
  32. Feb 27, 2013
  33. Feb 25, 2013