The ``hpvm/test`` directory holds all tests and benchmarks in HPVM and is organized as follows:

* ``hpvm_pass/``: unit and regression tests for HPVM passes, written in LLVM bitcode.
* ``benchmarks/``: includes a few applications written in HPVM-C, a template, and directions for compiling and running these benchmarks.
* ``benchmarks/parboil``: Selected benchmarks from the `Parboil <http://impact.crhc.illinois.edu/parboil/parboil.aspx>`_ benchmark suite.
* ``benchmarks/pipeline``: An edge detection pipeline benchmark.
* ``benchmarks/hpvm-cava``: A camera ISP pipeline, adapted from C code provided by our collaborators at `Harvard <http://vlsiarch.eecs.harvard.edu>`_.
* ``dnn_benchmarks/``: ten (10) DNN benchmarks in HPVM-C, Keras, and PyTorch, supported by ApproxHPVM.
  This tests HPVM as well as the Keras and PyTorch frontends.
Their organization and usage are similar to the benchmarks under ``benchmarks/``.
Each subfolder contains a DNN with 2 versions (2 ``.cpp`` files):
the ``tensor``-targeted version, which compiles to ``tensor_runtime``,
and the ``cudnn``-targeted version, which compiles to operators in ``cuDNN``
(and has ``_cudnn`` in its name).
* ``dnn_benchmarks/keras`` contains these DNNs implemented in Keras,
  and code for generating them down to HPVM-C (testing the Keras frontend).
* ``dnn_benchmarks/pytorch`` contains these DNNs in PyTorch,
  and code for generating them down to HPVM-C (testing the PyTorch/ONNX frontend).
The code generated by the Keras and PyTorch frontends should be largely similar and functionally equivalent.
* ``./dnn`` is a local package with these 10 DNNs implemented in PyTorch as examples.
  This package is not installed with HPVM.
* ``./test_frontend`` contains tests on inference accuracy of code generated by the PyTorch frontend.
* ``./test_{profiling|tuning}`` contains tests on performing profiling/tuning
  on frontend-generated binaries.
* ``dnn_benchmarks/tensor-rt-src`` contains these DNNs directly implemented in ``tensor_runtime``
  functions. These are for reference purposes only and are not actively used in the HPVM system or testing.

Running Test Cases and Benchmarks
---------------------------------
The easiest way to run tests is to use ``make`` targets,
which will also take care of all compilation of test cases and test fixtures.
The following targets run these tests respectively:

* ``make -j check-hpvm-pass`` runs tests in ``hpvm_pass``: ``hpvm_pass/**/*.ll``.
  These are regression and unit tests for HPVM passes.
* ``make -j check-hpvm-dnn`` runs all 20 DNN benchmarks under ``dnn_benchmarks/hpvm-c``
  (10 DNNs x 2 versions) and validates their accuracy.
  *Note* that this can take quite a long time due to the size of the DNNs and datasets.
  Depending on your hardware capability, this test can take 5-30 minutes.
  Also, this is set to run sequentially due to GPU memory concerns.
* ``make -j check-hpvm-torch-acc`` generates all 10 DNNs with the torch frontend,
  runs them, and checks their accuracy. This tests the torch frontend in isolation.
* ``make -j check-hpvm-profiler`` runs ``hpvm-profiler`` on some smaller networks
  (as it is extremely time-consuming) and presents the tradeoff curve with profiled speedup.
  *Note* that if you're on an NVIDIA Jetson TX2, you may want to run
* ``make -j check-hpvm-torch-tuning`` runs ``predtuner`` with binaries from the torch frontend
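
Putting the targets above together, a typical session might look like the following sketch. The build directory path (``hpvm/build``) is an assumption here; adjust it to wherever you configured your HPVM build.

```shell
# Sketch: invoking the test targets from an HPVM build directory.
# Assumes HPVM has already been built; the path below is an assumption.
cd hpvm/build

# Fast pass-level unit/regression tests (parallel is fine here).
make -j check-hpvm-pass

# Full DNN accuracy tests; these run sequentially by design and
# can take 5-30 minutes depending on hardware.
make check-hpvm-dnn
```

Running the pass tests first is a reasonable habit, since they are quick and catch compiler-level regressions before the long-running DNN accuracy tests.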