Install
=======

Dependencies
------------
The following components must be installed on your machine to build HPVM:

- GCC (>=5.1)

  - In addition, each version of CUDA-nvcc requires GCC to be no newer than a certain version. See here for the support matrix.

- CMake (>=3.17)
- GNU Make (>=3.79)
- OpenCL (>=1.0.0)
- CUDA (>=9.0, <=10.2) with CUDNN 7

  - CUDNN 7 is unsupported beyond CUDA 10.2 (starting from CUDA 11).

- Python (==3.6) with pip (>=20)

  - Python must be strictly 3.6 (any subversion from 3.6.0 to 3.6.13).
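To quickly verify these prerequisites, you can ask each tool for its version from a shell (a minimal sketch; it assumes ``nvcc`` and a ``python3.6`` interpreter are already on your ``$PATH``):

.. code-block:: shell

   # Print the versions of the required build tools.
   gcc --version | head -n1
   cmake --version | head -n1
   make --version | head -n1
   nvcc --version              # should report CUDA 9.0-10.2
   python3.6 --version         # must be 3.6.x
   python3.6 -m pip --version  # pip >= 20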
Python Environment
------------------
It is strongly recommended to use a Python virtual environment, as HPVM will install a few Python packages during this installation process.

- Some HPVM Python packages contain executables. If you don't use a virtual environment, these executables are installed to your local ``bin`` directory, usually ``$HOME/.local/bin``. Please ensure this directory is in your ``$PATH`` variable; below, it is assumed that these executables are visible through ``$PATH``.
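If you skip the virtual environment, one way to make that directory visible is to prepend it to ``$PATH`` in your shell startup file (a sketch, assuming bash):

.. code-block:: shell

   # Make user-local executables (including HPVM's) findable.
   echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
   source ~/.bashrc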
If you use Anaconda for package management, we provide a conda environment file that covers all Python and package requirements (``hpvm/env.yaml`` can be found in the repository):

.. code-block:: shell

   conda env create -n hpvm -f hpvm/env.yaml

This creates the conda environment ``hpvm``. If you use this method, remember to activate the environment each time you enter a bash shell:

.. code-block:: shell

   conda activate hpvm
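If you prefer not to use Anaconda, a plain virtual environment also satisfies the recommendation above (a sketch, assuming a ``python3.6`` interpreter is installed; the environment name ``hpvm-venv`` is arbitrary):

.. code-block:: shell

   # Create and activate a Python 3.6 virtual environment.
   python3.6 -m venv hpvm-venv
   source hpvm-venv/bin/activate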
Supported Architectures
-----------------------
Supported/tested CPU architectures:
- Intel Xeon E5-2640
- Intel Xeon W-2135
- ARM Cortex A-57
Supported/tested GPU architectures for OpenCL backend:
- Nvidia Quadro P1000
- Nvidia GeForce GTX 1080
Supported/tested GPU architectures for Tensor Backend:
- Nvidia Jetson TX2
- Nvidia GeForce GTX 1080
HPVM has not been tested on, but may work with, other CPUs supported by the LLVM backend and other GPUs supported by OpenCL (Intel, AMD, etc.).

NOTE: Approximations are tuned for the Jetson TX2, and the same speedups may not exist for other architectures.
Installing from Source
----------------------
Check out HPVM and go to the directory ``./hpvm`` under the project root:

.. code-block:: shell

   git clone --recursive -b <current_branch_name> --single-branch https://gitlab.engr.illinois.edu/llvm/hpvm.git
   cd hpvm/

If you have already cloned the repository without using ``--recursive``, the directory ``hpvm/projects/predtuner`` should be empty, which can be fixed with ``git submodule update --recursive --init``.
HPVM needs to be able to find CUDA. If the CUDA binaries are on your system's ``$PATH`` (e.g., if CUDA was installed at the default location), HPVM can find CUDA automatically.
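A quick way to check is to look for ``nvcc`` (the path in the comment is only illustrative):

.. code-block:: shell

   # If this prints a path such as /usr/local/cuda/bin/nvcc, CUDA is on $PATH.
   which nvcc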
Use the HPVM installer script to download, configure, and build HPVM along with LLVM and Clang:

.. code-block:: shell

   ./install.sh
- Without arguments, this script will interactively prompt you for some parameters. Alternatively, use ``./install.sh -h`` for a list of available arguments and pass arguments as required.
- ``./install.sh`` supports Ninja, a substitute for Make that is considered to build faster on many IO-bottlenecked devices. Passing ``--ninja`` to the installer tells it to use Ninja instead of Make.
- ``./install.sh`` can relay additional arguments to CMake, but the dash must be dropped, regardless of whether you use the prompt or CLI arguments. For example, ``./install.sh -j32 DCMAKE_BUILD_TYPE=Release`` will compile HPVM with 32 threads in Release mode; similarly, entering ``DCMAKE_BUILD_TYPE=Release`` at the prompt will also send ``-DCMAKE_BUILD_TYPE=Release`` to CMake, which gives a build in Release mode (see the combined example after this list).
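Putting these options together, a non-interactive invocation might look like the following (a sketch; the thread count and build type are illustrative):

.. code-block:: shell

   # Build with Ninja, 32 parallel jobs, in Release mode.
   ./install.sh --ninja -j32 DCMAKE_BUILD_TYPE=Release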
After configuring HPVM, the installer will also compile HPVM by default, which you can opt out of. If you do so, follow the next section, "Manually Build HPVM", to manually compile HPVM, and "Benchmarks and Tests" to manually run test cases if you wish. Otherwise, you can skip the next two sections.
- Specifically, the HPVM installer downloads LLVM and Clang, copies the HPVM source into ``llvm/tools``, and builds the entire tree. It also builds a modified LLVM C-Backend (based on the one maintained by Julia Computing) as a part of HPVM; it is currently used to generate OpenCL kernels for GPUs.
Troubleshooting
---------------
If CMake did not find your CUDA, setting these environment variables will help it:

- ``CUDA_TOOLKIT_PATH`` --- path to the CUDA toolkit
- ``CUDA_INCLUDE_PATH`` --- path to the CUDA headers
- ``CUDA_LIB_PATH`` --- path to the CUDA libraries
You can use ``set_paths.sh`` for this purpose: modify the values of these variables in ``set_paths.sh`` according to your system, and source the script:

.. code-block:: shell

   source set_paths.sh
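For reference, after editing, the variables in ``set_paths.sh`` might look like the following (a sketch only; the paths are illustrative and depend on where CUDA lives on your system):

.. code-block:: shell

   # Illustrative CUDA locations; adjust to your installation.
   export CUDA_TOOLKIT_PATH=/usr/local/cuda-10.2
   export CUDA_INCLUDE_PATH=$CUDA_TOOLKIT_PATH/include
   export CUDA_LIB_PATH=$CUDA_TOOLKIT_PATH/lib64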
Manually Build HPVM
-------------------
Alternatively, you can manually build HPVM with CMake. Please note that in this case, the installer script must still be executed to obtain some required components, but without the build step.
In the current directory (``hpvm/``), do:

.. code-block:: shell

   mkdir build
   cd build
   cmake ../llvm [options]
Some common options that can be used with CMake are listed below; a combined example follows the list:

- ``-DCMAKE_INSTALL_PREFIX=directory`` --- specify for directory the full pathname of where you want the HPVM tools and libraries to be installed.
- ``-DCMAKE_BUILD_TYPE=type`` --- valid options for type are Debug, Release, RelWithDebInfo, and MinSizeRel. Default is Debug.
- ``-DLLVM_ENABLE_ASSERTIONS=On`` --- compile with assertion checks enabled (default is Yes for Debug builds, No for all other build types).
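For instance, a Release build with assertions enabled and a custom install prefix could be configured as follows (the prefix path is illustrative):

.. code-block:: shell

   cmake ../llvm \
     -DCMAKE_BUILD_TYPE=Release \
     -DLLVM_ENABLE_ASSERTIONS=On \
     -DCMAKE_INSTALL_PREFIX=$HOME/opt/hpvm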
Now, compile the HPVM Compilation Tool ``hpvm-clang`` using:

.. code-block:: shell

   make -j<number of threads> hpvm-clang
With all the aforementioned steps, HPVM should be built, installed, tested, and ready to use. In particular, ``hpvm-clang`` should be an executable command from your command line.
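As a quick sanity check, you can confirm the tool is visible from your shell (assuming the relevant ``bin`` directory is on your ``$PATH``):

.. code-block:: shell

   # Should print the path to the hpvm-clang executable.
   which hpvm-clang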
Benchmarks and Tests
--------------------
We provide a number of general benchmarks, DNN benchmarks, and test cases written in HPVM. The ``make`` targets ``check-hpvm-pass``, ``check-hpvm-dnn``, and ``check-hpvm-profiler`` test various components of HPVM and are increasingly time-consuming. You can run these tests similarly to how ``hpvm-clang`` is compiled: for example, ``make -j<number of threads> check-hpvm-pass`` runs the ``check-hpvm-pass`` tests. See :doc:`/tests` for details on benchmarks and test cases.
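For example, to run all three suites in order of increasing cost (the job count is illustrative):

.. code-block:: shell

   # Run the test suites from fastest to slowest.
   make -j8 check-hpvm-pass
   make -j8 check-hpvm-dnn
   make -j8 check-hpvm-profiler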