# The HPVM Compiler Infrastructure

This repository contains the source code and documentation for the HPVM Compiler Infrastructure.

This README briefly describes how to get started with building and installing HPVM. The repository also provides a benchmark suite to test the compiler infrastructure.

HPVM is currently at **version 1.0**. For more about what HPVM is, see [our website](https://publish.illinois.edu/hpvm-project/).

## Papers

[PPoPP'18 paper](https://dl.acm.org/doi/pdf/10.1145/3200691.3178493)

[OOPSLA'19 paper](https://dl.acm.org/doi/10.1145/3360612)

[PPoPP'21 paper](https://dl.acm.org/doi/10.1145/3437801.3446108)

## Resources

[HPVM IR Specification](/hpvm/docs/hpvm-specification.md)

[HPVM-C Language Specification](/hpvm/docs/hpvm-c.md)

[HPVM Compilation Process](/hpvm/docs/compilation.md)

## Dependencies

The following components are required to be installed on your machine to build HPVM:

* GCC (>=5.1)
  * In addition, each version of CUDA-nvcc requires GCC to be no newer than a certain version. See [here](https://gist.github.com/ax3l/9489132) for the support matrix.
* CMake (>=3.17)
* Python (>=3.7) with Pip
* GNU Make (>=3.79)
* OpenCL (>=1.0.0)
* CUDA (>=9.1)

## Supported Targets

Supported/tested CPU architectures:

* Intel Xeon E5-2640
* Intel Xeon W-2135
* ARM Cortex A-57

Supported/tested GPU architectures for the OpenCL backend:

* Nvidia Quadro P1000
* Nvidia GeForce GTX 1080

Supported/tested GPU architectures for the Tensor backend:

* Nvidia Jetson TX2
* Nvidia GeForce GTX 1080

HPVM has not been tested on, but might work with, other CPUs supported by the LLVM backend and other GPUs supported by OpenCL (e.g., Intel, AMD).

**NOTE**: Approximations are tuned for the Jetson TX2, and the same speedups may not be achievable on other architectures.

## Getting Started

### Getting source code and setting up the environment

Check out HPVM and go to the directory `./hpvm` under the project root:

```shell
git clone --recursive https://gitlab.engr.illinois.edu/llvm/hpvm.git
cd hpvm/
git checkout approx_hpvm_reorg_keras
cd hpvm/
```

HPVM needs to be able to find CUDA. If CUDA is installed in your system's `$PATH` (e.g., if it was installed at the default location), HPVM can find CUDA automatically. Otherwise, some environment variables are required:

* `CUDA_TOOLKIT_PATH` --- path to the CUDA toolkit
* `CUDA_INCLUDE_PATH` --- path to the CUDA headers
* `CUDA_LIB_PATH` --- path to the CUDA libraries

`set_paths.sh` can be used for this. Modify the values of these variables in `set_paths.sh` according to your system, and source the script:

```shell
source set_paths.sh
```

The HPVM installer script can be used to download, configure, and build HPVM along with LLVM and Clang:

```shell
bash install.sh
```

On launch, the installer asks whether it should also build HPVM. If HPVM is to be built, the installer asks for the number of threads to use; the default is two (2). If you use this automatic build, skip the next section.

* Specifically, the HPVM installer downloads LLVM and Clang, copies the HPVM source into `llvm/tools`, and builds the entire tree. It also builds a modified LLVM C-Backend, based on the one maintained by [Julia Computing](https://github.com/JuliaComputing/llvm-cbe), as part of HPVM; this backend is currently used to generate OpenCL kernels for GPUs.

### Manually Build HPVM

Alternatively, you can manually build HPVM with CMake. Please note that in this case, the installer script still *must* be executed to obtain some required components, but without the build step.
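For reference, a minimal sketch of that step (the exact prompt wording may differ between installer versions):

```shell
# Run the installer once so it downloads LLVM/Clang and sets up the source tree,
# but answer "no" when it asks whether it should also build HPVM;
# the build is then done manually with CMake as shown below.
bash install.sh
```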
In the current directory (`hpvm/`), do:

```shell
mkdir build
cd build
cmake ../llvm [options]
export PATH=$(realpath ./bin):$PATH
```

Some common options that can be used with CMake are:

* `-DCMAKE_INSTALL_PREFIX=directory` --- Specify for *directory* the full pathname of where you want the HPVM tools and libraries to be installed.
* `-DCMAKE_BUILD_TYPE=type` --- Valid options for *type* are Debug, Release, RelWithDebInfo, and MinSizeRel. The default is Debug.
* `-DLLVM_ENABLE_ASSERTIONS=On` --- Compile with assertion checks enabled (the default is on for Debug builds, off for all other build types).

**Note** that if the installer script was not used, you must _manually add the `build/bin` directory to your `$PATH` variable_ as an absolute path (as shown above).

Now, compile the HPVM Compilation Tool `approxhpvm.py` using:

```shell
make -j<number of threads> approxhpvm.py
```

With all the aforementioned steps, HPVM should be built, installed, and ready to use. In particular, `approxhpvm.py` should be an executable command from your command line.

When not using the installer, you may want to run the regression tests using this script (outside of the build directory):

```shell
cd ..
bash scripts/automate_tests.sh
```

## Benchmarks and Tests

We are providing the following [HPVM benchmarks](/hpvm/test/benchmarks):

* Select benchmarks from the [Parboil](http://impact.crhc.illinois.edu/parboil/parboil.aspx) benchmark suite, located under [test/benchmarks/parboil](/hpvm/test/benchmarks/parboil).
* An edge detection pipeline benchmark, located under [test/benchmarks/pipeline](/hpvm/test/benchmarks/pipeline).
* A camera ISP pipeline, located under [test/benchmarks/hpvm-cava](/hpvm/test/benchmarks/hpvm-cava), adapted from C code provided by our collaborators at [Harvard](http://vlsiarch.eecs.harvard.edu).

Benchmark descriptions and instructions on how to compile and run them are [here](/hpvm/test/benchmarks).

We are also providing [unit tests](/hpvm/test/unitTests) and [regression tests](/hpvm/test/regressionTests).

## Support

All questions can be directed to [hpvm-dev@lists.cs.illinois.edu](mailto:hpvm-dev@lists.cs.illinois.edu).