
HPVM Test and Benchmarks

Directory Organization

This directory is organized as follows:

  • unitTests/ and regressionTests/: unit and regression tests for HPVM. These are LLVM-bitcode test cases for HPVM passes.

  • benchmarks/: includes a few applications written in HPVM-C, a template, and directions for compiling and running these benchmarks.

  • dnn_benchmarks/: ten (10) DNN benchmarks written in HPVM-C, Keras, and PyTorch, supported by ApproxHPVM. These test HPVM itself as well as the Keras and PyTorch frontends.

    • dnn_benchmarks/hpvm-c contains the HPVM-C versions of these DNNs. Their organization and usage are similar to those of the benchmarks under benchmarks/.

      Each subfolder contains one DNN in 2 versions (2 .cpp files): a tensor-targeted version that compiles to tensor_runtime, and a cudnn-targeted version that compiles to cuDNN operators (with _cudnn in its name); see the example listing after this list.

    • dnn_benchmarks/keras contains these DNNs implemented in Keras, along with code for lowering them to HPVM-C (testing the Keras frontend).

    • dnn_benchmarks/pytorch contains these DNNs implemented in PyTorch, along with code for lowering them to HPVM-C (testing the PyTorch/ONNX frontend).

    The code generated by the Keras and PyTorch frontends should be largely similar and functionally equivalent.
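
For instance, following the naming scheme described above, a benchmark directory under dnn_benchmarks/hpvm-c might look like this (an illustrative sketch; the exact file names and layout may vary between HPVM versions):

  $ ls dnn_benchmarks/hpvm-c/alexnet_cifar10
  alexnet_cifar10.cpp        # tensor-targeted version, compiles to tensor_runtime
  alexnet_cifar10_cudnn.cpp  # cudnn-targeted version, compiles to cuDNN operators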

Running Test Cases and Benchmarks

The easiest way to run the tests is through make targets, which also take care of compiling all test cases and test fixtures. The following targets run these tests (an example session follows the list):

  • make -j check-hpvm-pass runs the tests in hpvm_pass (hpvm_pass/**/*.ll). These are regression and unit tests for HPVM passes.

  • make -j check-hpvm-dnn runs all 20 DNN benchmarks under dnn_benchmarks/hpvm-c (10 DNNs x 2 versions) and validates their accuracy.

    Note that this is quite time-consuming due to the size of the DNNs and datasets; depending on your hardware, it can take 5-30 minutes. The benchmarks are also set to run sequentially to avoid running out of GPU memory.
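
For example, a typical session looks like this (a sketch, assuming HPVM was configured with CMake into ${build_dir}):

  cd ${build_dir}
  make -j check-hpvm-pass   # fast: pass-level unit and regression tests
  make -j check-hpvm-dnn    # slow: builds, runs, and validates all 20 DNN benchmarks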

Under the hood, llvm-lit is used to discover and run the tests.
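
If needed, the tests can also be invoked through lit directly. A hedged sketch, assuming the standard LLVM build layout in which the lit driver lands in the build tree's bin/ directory:

  ${build_dir}/bin/llvm-lit -v <path/to/test/directory>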

benchmarks/ can currently only be compiled in-source with make. We are working on migrating it to the CMake system.

Compiling Benchmarks

This section explains how to compile the benchmarks without running them as tests.

HPVM-C DNN Benchmarks

To build (but not run) all benchmarks under dnn_benchmarks/hpvm-c, use make -j dnn_benchmarks. For each benchmark ${bench_name}, the binary is generated at ${build_dir}/tools/hpvm/test/dnn_benchmarks/hpvm-c/${bench_name}.
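
For example, for lenet_mnist (instantiating the path pattern above; ${build_dir} is your CMake build directory):

  make -j dnn_benchmarks
  ls ${build_dir}/tools/hpvm/test/dnn_benchmarks/hpvm-c/lenet_mnist   # the generated binary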

Alternatively, it is possible to build a single DNN benchmark, as shown after the list below. The CMake output shows these benchmarks as a list of target names, starting with

List of test dnn benchmarks: alexnet2_cifar10;alexnet2_cifar10_cudnn...

Currently, there are 20 of them. These are:

lenet_mnist lenet_mnist_cudnn
alexnet_cifar10 alexnet_cifar10_cudnn
alexnet2_cifar10 alexnet2_cifar10_cudnn
vgg16_cifar10 vgg16_cifar10_cudnn
vgg16_cifar100 vgg16_cifar100_cudnn
mobilenet_cifar10 mobilenet_cifar10_cudnn
resnet18_cifar10 resnet18_cifar10_cudnn
alexnet_imagenet alexnet_imagenet_cudnn
vgg16_imagenet vgg16_imagenet_cudnn
resnet50_imagenet resnet50_imagenet_cudnn
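
Each of these names is a make target, so a single benchmark (and, if desired, its cuDNN variant) can be built on its own; for example:

  make -j lenet_mnist lenet_mnist_cudnn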

The _cudnn suffix indicates that the code is generated to call cuDNN functions; otherwise, the code is generated to call tensor_runtime DNN functions, which are hand-written in CUDA.

TODO: figure out how to

  1. Automatically run the Keras and PyTorch tests (generating, compiling, and running all DNNs)