Merge pull request #20 from harveyfeng/hadoop-config-cache
Allow users to pass broadcast Configurations and cache InputFormats across Hadoop file reads.

Note: originally from https://github.com/mesos/spark/pull/942

Currently motivated by Shark queries on Hive-partitioned tables, where a JobConf is broadcast for every Hive partition (i.e., every subdirectory read). The only difference between those JobConfs is the input path; the Hadoop Configuration they are constructed from remains the same. This PR only modifies the old Hadoop API RDDs, but similar additions to the new API might slightly reduce computation latencies for high-frequency FileInputDStreams (which currently only use the new API). As a small bonus, InputFormat caching was added to avoid reflection calls on every RDD#compute().

A few other notes:
- Added a general soft-reference hashmap in SparkHadoopUtil to avoid adding another class to SparkEnv.
- SparkContext's default hadoopConfiguration isn't cached. Configuration has no equals() method, so there is no good way to detect when configuration properties have changed.
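The soft-reference hashmap mentioned above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: the class and method names (`SoftReferenceCache`, `getOrElseUpdate`) are assumptions, and the real implementation in SparkHadoopUtil may differ.

```scala
import java.lang.ref.SoftReference
import scala.collection.mutable

// Hypothetical sketch: a cache whose values the JVM may reclaim under
// memory pressure, similar in spirit to the soft-reference hashmap the
// PR adds to SparkHadoopUtil for caching Configurations/InputFormats.
class SoftReferenceCache[K, V <: AnyRef] {
  private val underlying = mutable.HashMap.empty[K, SoftReference[V]]

  // Return the cached value if its soft reference is still live;
  // otherwise recompute it, cache it, and return it.
  def getOrElseUpdate(key: K, compute: => V): V = synchronized {
    underlying.get(key).flatMap(ref => Option(ref.get)) match {
      case Some(v) => v
      case None =>
        val v = compute
        underlying.put(key, new SoftReference(v))
        v
    }
  }
}
```

Using soft references (rather than strong ones) means a cached JobConf or InputFormat can be garbage-collected when memory is tight and transparently recomputed on the next read, so the cache never pins large objects for the lifetime of the executor.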
Showing 4 changed files:
- core/src/main/scala/org/apache/spark/CacheManager.scala (3 additions, 1 deletion)
- core/src/main/scala/org/apache/spark/SparkContext.scala (31 additions, 8 deletions)
- core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala (10 additions, 2 deletions)
- core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala (117 additions, 23 deletions)