Install
=======

Dependencies
------------
The following components must be installed on your machine to build HPVM:

* GCC (>=5.1)

  * In addition, each version of CUDA's nvcc supports GCC only up to a certain version;
    see `here <https://gist.github.com/ax3l/9489132>`_ for the support matrix.

* CMake (>=3.17)
* GNU Make (>=3.79)
* OpenCL (>=1.0.0)
* CUDA (>=9.1)
* Python (==3.6) with pip (>=20)

  Python must be strictly 3.6 (any subversion from 3.6.0 to 3.6.13);
  a quick version check is sketched below the list.
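A minimal sanity check of the Python requirement (the command names ``python3.6`` and ``pip3``
are assumptions and may differ on your system):

.. code-block:: shell

   python3.6 --version   # should print Python 3.6.x, with x between 0 and 13
   pip3 --version        # should report pip >= 20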
Alternatively, if you use Anaconda for package management,
we provide a conda environment file that covers all Python and package requirements:

.. code-block:: bash

   conda env create -n hpvm -f hpvm/env.yaml
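Then activate the environment (named ``hpvm`` above) before building or using HPVM:

.. code-block:: bash

   conda activate hpvm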
Supported Architectures
-----------------------

Supported/tested CPU architectures:

* Intel Xeon E5-2640
* Intel Xeon W-2135
* ARM Cortex A-57

Supported/tested GPU architectures for the OpenCL backend:

* Nvidia Quadro P1000
* Nvidia GeForce GTX 1080

Supported/tested GPU architectures for the Tensor backend:

* Nvidia Jetson TX2
* Nvidia GeForce GTX 1080
HPVM has not been tested on, but may work with, other CPUs supported by the LLVM backend
and other GPUs that support OpenCL (e.g., Intel, AMD).

**NOTE**: Approximations are tuned for the Jetson TX2, so the same speedups may not be
attainable on other architectures.
Installing from Source
----------------------
Check out HPVM and go to the directory ``./hpvm`` under the project root:

.. code-block:: shell

   git clone --recursive -b approx_hpvm_reorg --single-branch https://gitlab.engr.illinois.edu/llvm/hpvm.git
   cd hpvm/
HPVM needs to be able to find CUDA.
If CUDA is on your system's ``$PATH`` (e.g., if it was installed at the default location),
HPVM can find it automatically.
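A quick way to check that CUDA is discoverable (the path shown is illustrative and depends
on your installation):

.. code-block:: shell

   which nvcc       # e.g., /usr/local/cuda/bin/nvcc
   nvcc --version   # should report release 9.1 or newer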
Use the HPVM installer script to download, configure, and build HPVM along with LLVM and Clang:

.. code-block:: shell

   ./install.sh
* Without arguments, this script will interactively prompt you for some parameters.
  Alternatively, use ``./install.sh -h`` for a list of available arguments
  and pass arguments as required.
* ``./install.sh`` can relay additional arguments to CMake, but the leading dash must be dropped,
  whether you use the prompt or CLI arguments.
  For example,

  .. code-block:: shell

     ./install.sh -j32 DCMAKE_BUILD_TYPE=Release

  will compile HPVM with 32 threads in Release mode; similarly, entering
  ``DCMAKE_BUILD_TYPE=Release`` at the prompt will also send ``-DCMAKE_BUILD_TYPE=Release``
  to CMake, which gives a Release-mode build.
After configuring HPVM,
the installer will by default also compile HPVM, which you can opt out of.
If you opt out, follow the next section, "Manually Build HPVM", to compile HPVM manually,
and "Benchmarks and Tests" to run the test cases if you wish.
Otherwise, you can skip the next two sections.

* Specifically, the HPVM installer downloads LLVM and Clang, copies the HPVM source into
  ``llvm/tools``, and builds the entire tree. It also builds a modified LLVM C-Backend,
  based on the one maintained by `Julia Computing <https://github.com/JuliaComputing/llvm-cbe>`_,
  as part of HPVM; this backend is currently used to generate OpenCL kernels for GPUs.
Troubleshooting
^^^^^^^^^^^^^^^
If CMake cannot find your CUDA installation, the following environment variables can help:

* ``CUDA_TOOLKIT_PATH`` --- path to the CUDA toolkit
* ``CUDA_INCLUDE_PATH`` --- path to the CUDA headers
* ``CUDA_LIB_PATH`` --- path to the CUDA libraries
You can use ``set_paths.sh`` for this purpose: modify the values of these variables
in ``set_paths.sh`` according to your system, and source the script:

.. code-block:: shell

   source set_paths.sh
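As a sketch, assuming CUDA 9.1 at the default install location (the paths below are
assumptions; adjust them to your system), the variables in ``set_paths.sh`` might look like:

.. code-block:: shell

   export CUDA_TOOLKIT_PATH=/usr/local/cuda-9.1           # CUDA toolkit root
   export CUDA_INCLUDE_PATH=/usr/local/cuda-9.1/include   # CUDA headers
   export CUDA_LIB_PATH=/usr/local/cuda-9.1/lib64         # CUDA libraries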
Manually Build HPVM
-------------------
Alternatively, you can manually build HPVM with CMake.
Please note that in this case,
the installer script still *must* be executed to obtain some required components,
but without the build step.
In the current directory (``hpvm/``), do

.. code-block:: shell

   mkdir build
   cd build
   cmake ../llvm [options]
   export PATH=$(realpath ./bin):$PATH

**Note**: you must manually add the ``build/bin`` directory to your ``$PATH``
as an absolute path (as shown above).
Some common options that can be used with CMake are listed below; an example invocation
follows the list.

* ``-DCMAKE_INSTALL_PREFIX=directory`` --- Replace *directory* with the full pathname of where you want the HPVM tools and libraries to be installed.
* ``-DCMAKE_BUILD_TYPE=type`` --- Valid options for *type* are Debug, Release, RelWithDebInfo, and MinSizeRel. The default is Debug.
* ``-DLLVM_ENABLE_ASSERTIONS=On`` --- Compile with assertion checks enabled (the default is Yes for Debug builds, No for all other build types).
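For example, a Release build with assertions enabled, installing into a hypothetical
``~/hpvm-install`` prefix:

.. code-block:: shell

   cmake ../llvm -DCMAKE_BUILD_TYPE=Release -DLLVM_ENABLE_ASSERTIONS=On -DCMAKE_INSTALL_PREFIX=~/hpvm-install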
Now, compile the HPVM Compilation Tool ``approxhpvm.py`` using:

.. code-block:: shell

   make -j<number of threads> approxhpvm.py
With all the aforementioned steps, HPVM should be built, installed, tested, and ready to use.
In particular, ``approxhpvm.py`` should be available as an executable command on your command line.
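To verify, assuming your shell has picked up the updated ``$PATH``:

.. code-block:: shell

   which approxhpvm.py   # should print a path under your build/bin directory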
Benchmarks and Tests
--------------------
We provide a number of general benchmarks, DNN benchmarks, and test cases written in HPVM.
The ``make`` targets ``check-hpvm-pass``, ``check-hpvm-dnn``, and ``check-hpvm-profiler``
test various components of HPVM and are increasingly time-consuming.
You can run these tests the same way ``approxhpvm.py`` is compiled; for example,

.. code-block:: shell

   make -j<number of threads> check-hpvm-pass

runs the ``check-hpvm-pass`` tests. See :doc:`/tests` for details on benchmarks and test cases.