Commit 9dd635eb authored by witgo, committed by Patrick Wendell

SPARK-2480: Resolve sbt warnings "NOTE: SPARK_YARN is deprecated, please use -Pyarn flag"

Author: witgo <witgo@qq.com>

Closes #1404 from witgo/run-tests and squashes the following commits:

f703aee [witgo] fix Note: implicit method fromPairDStream is not applicable here because it comes after the application point and it lacks an explicit result type
2944f51 [witgo] Remove "NOTE: SPARK_YARN is deprecated, please use -Pyarn flag"
ef59c70 [witgo] fix Note: implicit method fromPairDStream is not applicable here because it comes after the application point and it lacks an explicit result type
6cefee5 [witgo] Remove "NOTE: SPARK_YARN is deprecated, please use -Pyarn flag"
parent cb09e93c
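
In short, the change swaps the deprecated SPARK_HADOOP_VERSION/SPARK_YARN environment variables for Maven-style profile flags passed directly to sbt. A minimal before/after sketch, using the versions that appear in the diff below (the build target is illustrative):

    # Deprecated: selecting Hadoop/YARN through environment variables
    # (this is what triggers the "SPARK_YARN is deprecated" warning)
    SPARK_HADOOP_VERSION=2.3.0 SPARK_YARN=true sbt/sbt assembly

    # Preferred: Maven-style profiles and properties on the sbt command line
    sbt/sbt -Pyarn -Phadoop-2.3 -Dhadoop.version=2.3.0 assembly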
@@ -21,8 +21,7 @@
 FWDIR="$(cd `dirname $0`/..; pwd)"
 cd $FWDIR

-export SPARK_HADOOP_VERSION=2.3.0
-export SPARK_YARN=true
+export SBT_MAVEN_PROFILES="-Pyarn -Phadoop-2.3 -Dhadoop.version=2.3.0"

 # Remove work directory
 rm -rf ./work
@@ -66,8 +65,8 @@ echo "========================================================================="
 # (either resolution or compilation) prompts the user for input either q, r,
 # etc to quit or retry. This echo is there to make it not block.
 if [ -n "$_RUN_SQL_TESTS" ]; then
-  echo -e "q\n" | SPARK_HIVE=true sbt/sbt clean package assembly/assembly test | \
-    grep -v -e "info.*Resolving" -e "warn.*Merging" -e "info.*Including"
+  echo -e "q\n" | SBT_MAVEN_PROFILES="$SBT_MAVEN_PROFILES -Phive" sbt/sbt clean package \
+    assembly/assembly test | grep -v -e "info.*Resolving" -e "warn.*Merging" -e "info.*Including"
 else
   echo -e "q\n" | sbt/sbt clean package assembly/assembly test | \
     grep -v -e "info.*Resolving" -e "warn.*Merging" -e "info.*Including"
...
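
The run-tests change above consolidates profile selection into a single SBT_MAVEN_PROFILES variable, which the sbt/sbt launcher script is expected to pick up, so individual test phases can append profiles instead of exporting separate variables. A sketch of the pattern (whether your sbt wrapper honors the variable depends on the sbt/sbt script shipped with the tree):

    # Base profiles for the whole test run
    export SBT_MAVEN_PROFILES="-Pyarn -Phadoop-2.3 -Dhadoop.version=2.3.0"

    # Add -Phive only for the SQL test phase, leaving the base set untouched
    SBT_MAVEN_PROFILES="$SBT_MAVEN_PROFILES -Phive" sbt/sbt clean package assembly/assembly test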
@@ -17,12 +17,12 @@
 # limitations under the License.
 #

-echo -e "q\n" | SPARK_HIVE=true sbt/sbt scalastyle > scalastyle.txt
+echo -e "q\n" | sbt/sbt -Phive scalastyle > scalastyle.txt
 # Check style with YARN alpha built too
-echo -e "q\n" | SPARK_HADOOP_VERSION=0.23.9 SPARK_YARN=true sbt/sbt yarn-alpha/scalastyle \
+echo -e "q\n" | sbt/sbt -Pyarn -Phadoop-0.23 -Dhadoop.version=0.23.9 yarn-alpha/scalastyle \
   >> scalastyle.txt
 # Check style with YARN built too
-echo -e "q\n" | SPARK_HADOOP_VERSION=2.2.0 SPARK_YARN=true sbt/sbt yarn/scalastyle \
+echo -e "q\n" | sbt/sbt -Pyarn -Phadoop-2.2 -Dhadoop.version=2.2.0 yarn/scalastyle \
   >> scalastyle.txt

 ERRORS=$(cat scalastyle.txt | grep -e "\<error\>")
...
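
After the style runs above, the script decides pass/fail by grepping the accumulated scalastyle.txt for error markers. The ERRORS line is in the diff context; the exit handling sketched here is assumed from typical dev/scalastyle behavior, not shown in the diff:

    ERRORS=$(cat scalastyle.txt | grep -e "\<error\>")
    if test ! -z "$ERRORS"; then
        # Fail the build and show every scalastyle error that was collected
        echo -e "Scalastyle checks failed at following occurrences:\n$ERRORS"
        exit 1
    else
        echo -e "Scalastyle checks passed.\n"
    fi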
@@ -48,9 +48,9 @@ the _exact_ Hadoop version you are running to avoid any compatibility errors.
 </tr>
 </table>

-In SBT, the equivalent can be achieved by setting the SPARK_HADOOP_VERSION flag:
+In SBT, the equivalent can be achieved by setting the `hadoop.version` property:

-    SPARK_HADOOP_VERSION=1.0.4 sbt/sbt assembly
+    sbt/sbt -Dhadoop.version=1.0.4 assembly

 # Linking Applications to the Hadoop Version
...
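
The documented invocation combines naturally with the profile flags used elsewhere in this commit; to build against a YARN-enabled Hadoop, the profile and the matching version property are passed together (versions here mirror the scalastyle changes above):

    sbt/sbt -Pyarn -Phadoop-2.2 -Dhadoop.version=2.2.0 assembly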
@@ -474,7 +474,7 @@ anotherPeople = sqlContext.jsonRDD(anotherPeopleRDD)

 Spark SQL also supports reading and writing data stored in [Apache Hive](http://hive.apache.org/).
 However, since Hive has a large number of dependencies, it is not included in the default Spark assembly.
-In order to use Hive you must first run '`SPARK_HIVE=true sbt/sbt assembly/assembly`' (or use `-Phive` for maven).
+In order to use Hive you must first run '`sbt/sbt -Phive assembly/assembly`' (or use `-Phive` for maven).
 This command builds a new assembly jar that includes Hive. Note that this Hive assembly jar must also be present
 on all of the worker nodes, as they will need access to the Hive serialization and deserialization libraries
 (SerDes) in order to access data stored in Hive.
...
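
As a usage note for the Hive instructions above, the sbt and Maven forms of the Hive-enabled build would look roughly like this (the -DskipTests flag is an assumption to keep the packaging step fast, not part of the diff):

    # sbt: build the assembly with Hive support
    sbt/sbt -Phive assembly/assembly

    # Maven equivalent, skipping tests during packaging
    mvn -Phive -DskipTests clean package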