diff --git a/hpvm/projects/hpvm-tensor-rt/README.md b/hpvm/projects/hpvm-tensor-rt/README.md
index d3aad77a43a957148a6bf76d18ee4be15cab066e..df92f0c1995e627bbe20f6672bf1e7c6b4d92e18 100644
--- a/hpvm/projects/hpvm-tensor-rt/README.md
+++ b/hpvm/projects/hpvm-tensor-rt/README.md
@@ -1,63 +1,51 @@
-# ApproxHPVM Tensor Runtime
+# HPVM Tensor Runtime
 
-## Getting Started
 
-### Dependencies
+## Dependencies
 
-- CUDA-9.1 or above
-  - Your device must have a CUDA-enabled nVidia GPU
-  - CUBLAS-9.1 or above - included with CUDA by default
+- CUDA >= 9.1
 
-- cuDNN-7.0 or above
+- cuDNN >= 7
 
-- `cmake >= 3.18`
+## Building Tensor Runtime
 
-- `make >= 4`
+The tensor runtime and the DNN sources that use it are built with the unified HPVM build system.
+They can also be built separately. From the `build` directory, build the HPVM Tensor Runtime as:
 
-- `gcc < 8` or `3.2 <= clang < 9`
-  - We have an upperbound for compiler version because CUDA doesn't support too recent compilers
+```shell
+make -j ${NUM_THREADS} tensor_runtime
+```
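+
+If you are starting from a fresh checkout, the build tree must first be configured with
+CMake. A minimal sketch, assuming you run from the directory containing the top-level
+`CMakeLists.txt` and that CUDA and cuDNN are discoverable on your system:
+
+```shell
+# Configure the build tree, then build only the tensor runtime target.
+# NUM_THREADS is a placeholder for your desired parallelism (e.g., $(nproc)).
+mkdir -p build && cd build
+cmake ../
+make -j ${NUM_THREADS} tensor_runtime
+```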
 
-### Building the Tensor Runtime
+The tensor runtime is built as a static library at `build/lib/libtensor_runtime.a`.
 
-The following commands will compile the tensor runtime library (`build/libtensor_runtime.a`)
-as well as a number of exemplary benchmarks (DNN models):
+### TensorRT DNN Benchmarks
 
-```shell
-mkdir build && cd build
-cmake ../
-make -j
-```
+To assist development of tensor-based programs using only the tensor runtime, we include
+sources under `dnn_sources` that directly invoke HPVM Tensor Runtime API calls for tensor
+operations such as convolution, matrix multiplication, add, and ReLU.
 
-### Tensor Runtime APIs
+Each benchmark can be built from the `build` directory as:
 
-- `tensor_runtime/include/tensor_runtime.h` declares all the functions available in the runtime.
-
-  TODO: the tensor runtime is generally under-documented at the time.
-  More documentation will be added in the first public release.
+```shell
+make -j ${NUM_THREADS} ${BENCHMARK}
+```
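+
+For example, to build and run a single benchmark from the list below (the binary location
+is an assumption; adjust it to where your build tree places executables):
+
+```shell
+# Build one benchmark target, then run the produced binary.
+make -j ${NUM_THREADS} lenet_mnist_fp32
+./bin/lenet_mnist_fp32
+```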
 
-- For examples of using `tensor_runtime` functions, see `dnn_sources/src/alexnet_cifar10.cc`.
-  - Also, try running `build/alexnet_cifar10` which is compiled from that file and runnable out of the box.
+Currently, 17 benchmarks are included:
 
-## Developer Notes
+|                        |                        |
+|------------------------|------------------------|
+| lenet_mnist_fp32       | lenet_mnist_fp16       |
+| alexnet_cifar10_fp32   | alexnet_cifar10_fp16   |
+| alexnet2_cifar10_fp32  | alexnet2_cifar10_fp16  |
+| vgg16_cifar10_fp32     | vgg16_cifar10_fp16     |
+| vgg16_cifar100_fp32    | vgg16_cifar100_fp16    |
+| mobilenet_cifar10_fp32 | mobilenet_cifar10_fp16 |
+| resnet18_cifar10_fp32  | resnet18_cifar10_fp16  |
+| alexnet_imagenet_fp32  |                        |
+| vgg16_imagenet_fp32    |                        |
+| resnet50_imagenet_fp32 |                        |
 
-### Directory Structure
+The `_fp32` suffix denotes FP32 binaries; these use the FP32 API calls.
 
-- ./tensor_runtime:
-  - ./tensor_runtime/include/: Include files for Tensor Runtime
-  - ./tensor_runtime/include/tensor_signatures.cc: Include file with Tensor RT signatures
-    - NOTE: UPDATE this with updated API
-  - ./tensor_runtime/src/: HPVM TensorRT sources
-  
-- ./dnn_sources:
-  - ./dnn_sources/src/${BENCH}.cc: Per Bench FULL-precision source
-  - ./dnn_sources/src/half/${BENCH}.cc: Per Bench HALF-precision source
-  - ./dnn_sources/src/promise/${BENCH}.cc: Per Bench Layer-API source
+The `_fp16` suffix denotes FP16 (half-precision) binaries; these use the FP16 API calls.
 
-- ./bin:
-  - ./bin/install_runtime.sh: Script for moving Tensor RT files to ./lib
-  - ./bin/run_autotuner.py: Python script for running Autotuner experiments
-  - ./bin/setup_tyler_paths.sh: Tyler-specific path setup for Tensor RT
-  - ./bin/setup_jetson.sh: Jetson board specific path setup for Tensor RT
-  - ./bin/setup_cuda_paths.sh: Place-holder script for setting CUDA paths
-  - ./bin/swing_selection.py: Script for hardware mapping
-    - NOTE: Includes the L2,L1 norm mapping to hardware knobs
diff --git a/hpvm/projects/keras/README.md b/hpvm/projects/keras/README.md
index 2ce981403d2550442a7e8fefd8df887916496731..4d1085595b9ec1bda23ec13fbf94d390470c3b40 100644
--- a/hpvm/projects/keras/README.md
+++ b/hpvm/projects/keras/README.md
@@ -25,7 +25,7 @@ conda activate keras_python36
 At the root of this project (`/projects/keras/`) install the Keras frontend pip package as:
 
 ```
-pip install -e .
+pip install -e ./
 ```
 
 **NOTE:** If you are using the conda environment, activate it prior to this step.