## What changes were proposed in this pull request?

There is no way to use the Hive catalog in `pyspark-shell`. This is because we used to create a `SparkContext` before calling `SparkSession.enableHiveSupport().getOrCreate()`, which just gets the existing `SparkContext` instead of creating a new one. As a result, `spark.sql.catalogImplementation` was never propagated.

## How was this patch tested?

Manual.

Author: Andrew Or <andrew@databricks.com>

Closes #13203 from andrewor14/fix-pyspark-shell.
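For context, here is a minimal sketch of the ordering issue described above. This is an illustration, not the exact `shell.py` startup code; it assumes a standard PySpark install where Hive support is available.

```python
from pyspark.sql import SparkSession

# Broken ordering (sketch of the bug described above): if a
# SparkContext already exists, getOrCreate() reuses it, so the
# spark.sql.catalogImplementation=hive setting requested by
# enableHiveSupport() never reaches the underlying context.
#
#   sc = SparkContext()
#   spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Working ordering: build the SparkSession first, then derive the
# SparkContext from it, so the Hive catalog setting propagates.
spark = SparkSession.builder.enableHiveSupport().getOrCreate()
sc = spark.sparkContext
```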