Commit b1225ff2 authored by Yifan Zhao

Updated test readme

parent a3217925
Test and Benchmarks
===================
Directory Organization
----------------------
The ``hpvm/test`` directory holds all tests and benchmarks in HPVM and is organized as follows:

* ``hpvm_pass/``: unit and regression tests for HPVM passes, written in LLVM bitcode.
* ``benchmarks/``: includes a few applications written in HPVM-C, a template, and directions for compiling and running these benchmarks.
* ``benchmarks/parboil``: Selected benchmarks from the `Parboil <http://impact.crhc.illinois.edu/parboil/parboil.aspx>`_ benchmark suite.
* ``benchmarks/pipeline``: An edge detection pipeline benchmark.
* ``benchmarks/hpvm-cava``: A Camera ISP pipeline, adapted from C code provided from our collaborators at `Harvard <http://vlsiarch.eecs.harvard.edu>`_.
* ``dnn_benchmarks/``: ten (10) DNN benchmarks in HPVM-C, Keras and PyTorch, supported by ApproxHPVM.
This tests HPVM as well as the Keras and PyTorch frontends.
* ``dnn_benchmarks/hpvm-c`` contains these DNNs implemented in HPVM-C.
Their organization and usage are similar to the benchmarks under ``benchmarks/``.
Each subfolder contains a DNN with 2 versions (2 ``.cpp`` files):
the ``tensor``-targeted version which compiles to ``tensor_runtime``,
and the ``cudnn``-targeted version which compiles to operators in ``cuDNN``
(with ``_cudnn`` in its name).
* ``dnn_benchmarks/keras`` contains these DNNs implemented in Keras,
and code for generating them down to HPVM-C (testing Keras frontend).
* ``dnn_benchmarks/pytorch`` contains these DNNs in PyTorch
and code for generating them down to HPVM-C (testing PyTorch/ONNX frontend).
The code generated by the Keras and PyTorch frontends should be largely similar and functionally equivalent.
* ``./dnn`` is a local package with these 10 DNNs implemented in PyTorch as examples.
This package is not installed with HPVM.
* ``./test_frontend`` contains tests of the inference accuracy of code generated by the PyTorch frontend.
* ``./test_{profiling|tuning}`` contains tests that perform profiling/tuning
on frontend-generated binaries.
* ``dnn_benchmarks/tensor-rt-src`` contains these DNNs directly implemented in ``tensor_runtime``
functions. These are for reference purposes only and are not actively used in the HPVM system or testing.
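For orientation, each DNN subfolder under ``dnn_benchmarks/hpvm-c`` pairs the two versions described above; a minimal sketch of the layout, where ``<dnn>`` is a placeholder rather than an actual folder name:

.. code-block:: bash

   # Layout sketch; <dnn> stands for one of the 10 DNN names.
   ls dnn_benchmarks/hpvm-c/<dnn>
   # <dnn>.cpp        -> tensor-targeted version, compiles to tensor_runtime
   # <dnn>_cudnn.cpp  -> cudnn-targeted version, compiles to cuDNN operators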
Running Test Cases and Benchmarks
---------------------------------
The easiest way to run tests is to use ``make`` targets,
which will also take care of all compilation of test cases and test fixtures.
The following targets run these tests:
* ``make -j check-hpvm-pass`` runs tests in ``hpvm_pass``: ``hpvm_pass/**/*.ll``.
These are regression and unit tests for HPVM passes.
* ``make -j check-hpvm-dnn`` runs all 20 DNN benchmarks under ``dnn_benchmarks/hpvm-c``
(10 DNNs x 2 versions) and validates their accuracy.
*Note* that this can take quite a while due to the size of the DNNs and datasets.
Depending on your hardware capability, this test can take 5-30 minutes.
Also, this is set to run sequentially out of GPU memory concerns.
* ``make -j check-hpvm-torch-acc`` generates all 10 DNNs with the torch frontend,
runs them and checks their accuracy. This tests the torch frontend in isolation.
*Note* that if you're on an NVIDIA Jetson TX2, you may want to run
``bash dnn_benchmarks/profiling/jetson_clocks.sh``
to ensure that the clocks are running at the maximum frequency.
* ``make -j check-hpvm-torch-tuning`` runs ``predtuner`` with binaries from the torch frontend
to exercise both empirical and predictive autotuning.
This is only done for a few smaller networks for 5 iterations,
as it is extremely time-consuming.
* ``make -j check-hpvm-torch-profiling`` runs ``hpvm-profiler`` with binaries from the torch frontend,
and presents the tradeoff curve with profiled speedup.
This is only done for a few smaller networks.
``benchmarks/`` can only be compiled in-source with ``make``.
We are working to migrate it into the ``cmake`` system.
Underneath, ``llvm-lit`` is used to discover and run the tests.
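For example, a suite or a single test can also be driven through ``llvm-lit`` directly; a minimal sketch, assuming a build directory named ``build`` (paths are illustrative, adjust to your checkout):

.. code-block:: bash

   cd build
   # Run a full suite via its make target:
   make -j check-hpvm-pass
   # Or point llvm-lit at a test directory (or a single .ll test) for verbose output:
   ./bin/llvm-lit -v tools/hpvm/test/hpvm_pass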
Compiling Benchmarks
--------------------
Currently, there are 20 HPVM-C DNN benchmark targets (10 DNNs x 2 versions).
The ``_cudnn`` suffix indicates the code is generated onto cuDNN functions;
otherwise, the code is generated to ``tensor_runtime`` DNN functions, which are hand-written in CUDA.
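Assuming each benchmark is built by a ``make`` target of the same name (a hedged sketch; ``<dnn>`` is a placeholder for one of the 10 DNN names, not an actual target):

.. code-block:: bash

   # Build both versions of one DNN benchmark; <dnn> is a placeholder.
   make -j <dnn>         # compiles to tensor_runtime functions
   make -j <dnn>_cudnn   # compiles to cuDNN functions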
Other HPVM-C Benchmarks
^^^^^^^^^^^^^^^^^^^^^^^
There are 6 benchmarks under ``benchmarks/``:
``hpvm-cava`` and ``pipeline`` are single benchmarks, while ``parboil/`` is a collection of 4 benchmarks.
To build ``hpvm-cava`` or ``pipeline``,
use ``make -j hpvm_cava_{cpu|gpu}`` or ``make -j pipeline_{cpu|gpu}``.
The ``cpu`` or ``gpu`` suffix indicates which device the kernels in the benchmark run on.
For ``hpvm-cava``, the binary is generated under
``${build_dir}/tools/hpvm/test/benchmarks/hpvm-cava``,
while pipeline binaries are under ``${build_dir}/tools/hpvm/test/benchmarks/pipeline``.
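For instance, building the GPU version of ``hpvm-cava`` and locating its binary (a sketch; ``build`` stands for your CMake build directory):

.. code-block:: bash

   cd build
   make -j hpvm_cava_gpu
   # The binary is placed under the benchmark's directory in the build tree:
   ls tools/hpvm/test/benchmarks/hpvm-cava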
The parboil benchmarks are only available through their Makefiles.
We will move them into CMake in the next release.
The torch-frontend suites above (``check-hpvm-torch-acc``, ``check-hpvm-torch-profiling``,
``check-hpvm-torch-tuning``) are defined as ``llvm-lit`` test suites in the CMake files under
``dnn_benchmarks``. The relevant excerpts (with elided arguments marked):

.. code-block:: cmake

   configure_lit_site_cfg(
     # ... (leading arguments elided)
     MAIN_CONFIG
     ${CMAKE_CURRENT_SOURCE_DIR}/test_frontend/lit.cfg.py
   )
   add_lit_testsuite(check-hpvm-torch-acc "Run accuracy tests for HPVM PyTorch frontend"
     ${CMAKE_CURRENT_BINARY_DIR}/test_frontend
     # We depend on check_dnn_acc.py defined in ../hpvm-c/
     # to compare the inference accuracy of our frontend-generated binary
     # ... (remaining arguments elided)
   )

   configure_lit_site_cfg(
     # ... (leading arguments elided)
     MAIN_CONFIG
     ${CMAKE_CURRENT_SOURCE_DIR}/test_profiling/lit.cfg.py
   )
   add_lit_testsuite(check-hpvm-torch-profiling "Run tests for torch frontend + profiling"
     ${CMAKE_CURRENT_BINARY_DIR}/test_profiling
     ARGS "-j1" # Run DNN benchmarks sequentially
   )

   configure_lit_site_cfg(
     # ... (leading arguments elided)
     MAIN_CONFIG
     ${CMAKE_CURRENT_SOURCE_DIR}/test_tuning/lit.cfg.py
   )
   add_lit_testsuite(check-hpvm-torch-tuning "Run tests for torch frontend + autotuning"
     ${CMAKE_CURRENT_BINARY_DIR}/test_tuning
     ARGS "-j1" # Run tuning tests sequentially
   )