Compare revisions

Commits on Source (27)
Showing 776 additions and 387 deletions
# .readthedocs.yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

sphinx:
  configuration: doc/conf.py

python:
  version: 3.8
  install:
    - requirements: doc/requirements.txt
The MIT License text was replaced with the University of Illinois/NCSA Open Source License:

University of Illinois/NCSA Open Source License

Copyright (c) 2020 Illinois LLVM Group. All rights reserved.

Developed by: The Illinois LLVM Group
              University of Illinois at Urbana Champaign
              https://hpvm.cs.illinois.edu

Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation files
(the "Software"), to deal with the Software without restriction,
including without limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of the Software,
and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:

* Redistributions of source code must retain the above copyright notice,
  this list of conditions and the following disclaimers.
* Redistributions in binary form must reproduce the above copyright
  notice, this list of conditions and the following disclaimers in the
  documentation and/or other materials provided with the distribution.
* Neither the names of [fullname], [project] nor the names of its
  contributors may be used to endorse or promote products derived from
  this Software without specific prior written permission.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
CONTRIBUTORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS WITH
THE SOFTWARE.
# Autotuning and Predictive Autotuning
`predtuner` performs autotuning on program approximation knobs, using an error-predictive proxy
in place of the original program to greatly speed up autotuning while obtaining results
of comparable quality. `current_version == 0.3`.
## Requirements
`predtuner` requires `python >= 3.7` and `pip`, preferably `pip >= 20`.
To install from PyPI (currently TestPyPI), use
```bash
python -m pip install -i https://test.pypi.org/simple/ predtuner
```
### Install from Source
Alternatively, you can install this package from source.
At the root directory of this repository, do:
```bash
python -m pip install -e ./
```
With the flag `-e`, any changes to the code in this repo are automatically reflected in the installed version.
The flag can be omitted if you don't intend to modify the code in this package.
## Getting Started
The documentation page contains a full tutorial.
Build the documentation by:
```bash
pip install sphinx sphinx_rtd_theme sphinx_autodoc_typehints
cd doc
make html
```
The documentation page will be created as `doc/build/html/index.html`.
You can open this in the browser and browse to "Getting Started" section.
### Model Data for Example / Testing
`predtuner` contains 10 demo models which are also used in tests.
- Download and extract [this](https://drive.google.com/file/d/1V_yd9sKcZQ7zhnO5YhRpOsaBPLEEvM9u/view?usp=sharing) file containing all 10 models, for testing purposes.
- The "Getting Started" example on the documentation page only uses VGG16-CIFAR10.
If you don't need the other models, get the data for VGG16-CIFAR10
[here](https://drive.google.com/file/d/1Z84z-nsv_nbrr8t9i28UoxSJg-Sd_Ddu/view?usp=sharing).
In either case, there should be a `model_params/` folder at the root of the repo after extraction.
## Tuning with HPVM Binary
This branch (`hpvm`) contains beta support for HPVM binaries.
Please refer to `examples/tune_hpvm_bin.py` for an example with explanations.
Autotuning and Predictive Autotuning
====================================
PredTuner performs autotuning on program approximation knobs, using an error-predictive proxy
in place of the original program to greatly speed up autotuning while obtaining results
of comparable quality. ``current_version == 0.3``.
Read our `documentation here <https://predtuner.readthedocs.io/en/latest/index.html>`_
for how to install and use PredTuner.
Tuning with HPVM Binary
-----------------------
This branch (``hpvm``) contains beta support for HPVM binaries.
Please refer to ``examples/tune_hpvm_bin.py`` for an example with explanations.
......@@ -6,7 +6,7 @@ We use Sphinx for generating the API and reference documentation.
Install the following Python packages needed to build the documentation by entering:
```bash
pip install sphinx sphinx-autodoc-typehints sphinx-rtd-theme
pip install -r requirements.txt
```
To build the HTML documentation, enter::
......
doc/_static/result_no_model.png

216 KiB

doc/_static/result_with_model.png

253 KiB

from datetime import date
import sphinx_rtd_theme
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
sys.path.insert(0, os.path.abspath(".."))
from pathlib import Path
this_folder = Path(__file__).parent
sys.path.insert(0, (this_folder / "..").absolute().as_posix())
# General configuration
# ---------------------
......@@ -18,16 +17,15 @@ sys.path.insert(0, os.path.abspath(".."))
extensions = [
"sphinx.ext.autosummary",
"sphinx.ext.autodoc",
"sphinx_autodoc_typehints",
"sphinx.ext.coverage",
"sphinx.ext.doctest",
"sphinx.ext.intersphinx",
"sphinx.ext.mathjax",
"sphinx.ext.todo",
"sphinx.ext.viewcode",
"numpydoc",
]
always_document_param_types = True
autodoc_typehints = "description"
# generate autosummary pages
autosummary_generate = True
......@@ -48,48 +46,27 @@ master_doc = "index"
project = "PredTuner"
copyright = f"2020-{date.today().year}, University of Illinois"
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of documents that shouldn't be included in the build.
# unused_docs = ['']
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = False
# show_authors = True
# The name of the Pygments (syntax highlighting) style to use.
# pygments_style = 'friendly'
pygments_style = "sphinx"
# A list of prefixes that are ignored when creating the module index. (new in Sphinx 0.6)
# modindex_common_prefix = ["networkx."]
# doctest_global_setup = "import networkx as nx"
# modindex_common_prefix = []
# Options for HTML output
# -----------------------
html_theme = "sphinx_rtd_theme"
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
html_theme = "pydata_sphinx_theme"
html_theme_options = {
"canonical_url": "https://networkx.org/documentation/stable/",
"navigation_depth": 3,
"logo_only": True,
# "github_url": "https://gitlab.engr.illinois.edu/llvm/hpvm-beta",
"show_prev_next": False,
"search_bar_position": "sidebar",
}
# html_logo = "_static/networkx_logo.svg"
# The style sheet to use for HTML and HTML Help pages. A file of that name
# must exist either in Sphinx' static/ path, or in one of the custom paths
# given in html_static_path.
......@@ -104,20 +81,6 @@ html_static_path = ["_static"]
# using the given strftime format.
html_last_updated_fmt = "%b %d, %Y"
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Content template for the index page.
# html_index = 'index.html'
# Custom sidebar templates, maps page names to templates.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# templates.
# html_additional_pages = {'': ''}
# If true, the reST sources are included in the HTML build as _sources/<name>.
html_copy_source = False
......@@ -129,9 +92,6 @@ latex_engine = "xelatex"
# The paper size ('letter' or 'a4').
latex_paper_size = "letter"
# The font size ('10pt', '11pt' or '12pt').
# latex_font_size = '10pt'
latex_appendices = ["tutorial"]
# Intersphinx mapping
......@@ -147,10 +107,3 @@ intersphinx_mapping = {
# The reST default role (used for this markup: `text`) to use for all
# documents.
default_role = "obj"
numpydoc_show_class_members = False
def setup(app):
app.add_css_file("custom.css")
app.add_js_file("copybutton.js")
......@@ -6,24 +6,43 @@ This guide can help you start working with PredTuner.
Installation
------------
Install PredTuner from source using `pip`:
* PredTuner requires ``python >= 3.6`` and ``pip``, preferably ``pip >= 20``.
To install this package from source, at the root directory of this repository, do:
.. code-block:: shell
pip install -e .
python3 -m pip install -e ./
PredTuner will also be available on PyPI in the future after we publish the first release.
* With the flag ``-e``, any changes to the code in this repo are automatically reflected in the installed version.
  The flag can be omitted if you don't intend to modify the code in this package.
Model Data for Example / Testing
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PredTuner contains 10 demo models which are also used in tests.
* Download and extract `this <https://drive.google.com/file/d/1V_yd9sKcZQ7zhnO5YhRpOsaBPLEEvM9u/view?usp=sharing>`_ file containing all 10 models, for testing purposes.
* In the tutorial below, we will only use VGG16-CIFAR10.
If you don't need the other models, get the data for VGG16-CIFAR10
`here <https://drive.google.com/file/d/1Z84z-nsv_nbrr8t9i28UoxSJg-Sd_Ddu/view?usp=sharing>`_.
In either case, there should be a ``model_params/`` folder at the root of the repo after extraction.
Tuning a PyTorch DNN
--------------------
* The code used in the following example can be found at ``examples/tune_vgg16_cifar10.py``.
PredTuner can tune any user-defined application,
but it is optimized for tuning DNN applications defined in PyTorch.
We will use models predefined in PredTuner for demonstration purposes.
Download pretrained VGG16 model parameters and CIFAR10 dataset from `here
<https://drive.google.com/file/d/1Z84z-nsv_nbrr8t9i28UoxSJg-Sd_Ddu/view?usp=sharing>`_.
After extraction, there should be a `model_params/` folder in current directory.
After extraction, there should be a ``model_params/`` folder in the current directory.
Load the tuning and test subsets of CIFAR10 dataset, and create a pretrained VGG16 model:
......@@ -55,7 +74,7 @@ while the test dataset is used to evaluate configurations found in autotuning.
This is similar to the split between training and validation set in machine learning tasks.
In this case, both tuning and test datasets contain 5000 images.
Create an instance of `TorchApp` for tuning PyTorch DNN:
Create an instance of `~predtuner.torchapp.TorchApp` for tuning a PyTorch DNN:
.. code-block:: python
......@@ -69,31 +88,33 @@ Create an instance of `TorchApp` for tuning PyTorch DNN:
model_storage_folder="vgg16_cifar10/",
)
PredTuner provides `TorchApp`, which is specialized for the use scenario of tuning PyTorch DNNs.
PredTuner provides `~predtuner.torchapp.TorchApp`,
which is specialized for the use scenario of tuning PyTorch DNNs.
In addition, two more functions from PredTuner are used:
`pt.accuracy` is the *classification accuracy* metric,
:py:meth:`pt.accuracy <predtuner.torchutil.accuracy>`
is the *classification accuracy* metric,
which receives the probability distribution output from the VGG16 model,
compares it to the ground truth in the dataset,
and returns a scalar between 0 and 100 for the classification accuracy
and returns a scalar between 0 and 100 for the classification accuracy.
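As a rough illustration of what such an accuracy metric computes, here is a plain-Python sketch of top-1 classification accuracy; the function name and signature are illustrative, not PredTuner's actual implementation (which operates on PyTorch tensors, though the quantity computed is analogous):

```python
def top1_accuracy(output_probs, labels):
    """Percentage of samples whose argmax prediction matches the label.

    output_probs: one list of per-class probabilities per sample.
    labels: ground-truth class indices.
    Returns a float between 0 and 100 (illustrative sketch only).
    """
    correct = 0
    for probs, label in zip(output_probs, labels):
        # The predicted class is the index with the highest probability
        predicted = max(range(len(probs)), key=lambda i: probs[i])
        if predicted == label:
            correct += 1
    return 100.0 * correct / len(labels)
```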
`pt.get_knobs_from_file()` returns a set of approximations preloaded in PredTuner,
:py:meth:`pt.get_knobs_from_file <predtuner.approxes.get_knobs_from_file>`
returns a set of approximations preloaded in PredTuner,
which are applied to `torch.nn.Conv2d` layers.
See ??? for these approximations and how to define custom approximations.
Now we can obtain a tuner object from the application and start tuning.
We will keep configurations that don't exceed 3% loss of accuracy,
but encourage the tuner to find configurations with loss of accuracy below 2.1%.
but encourage the tuner to find configurations with loss of accuracy below 2.0%.
.. code-block:: python
tuner = app.get_tuner()
tuner.tune(
max_iter=500,
qos_tuner_threshold=2.1, # QoS threshold to guide tuner into
max_iter=1000,
qos_tuner_threshold=2.0, # QoS threshold to guide tuner into
qos_keep_threshold=3.0, # QoS threshold for which we actually keep the configurations
is_threshold_relative=True, # Thresholds are relative to baseline -- baseline_acc - 2.1
take_best_n=50,
is_threshold_relative=True, # Thresholds are relative to baseline -- baseline_acc - 2.0
take_best_n=20,
cost_model="cost_linear", # Use linear cost predictor
)
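The effect of ``is_threshold_relative=True`` can be sketched with a hypothetical helper (not part of the PredTuner API): relative thresholds are QoS *losses* subtracted from the baseline QoS to obtain absolute floors.

```python
def to_absolute_thresholds(baseline_qos, tuner_threshold, keep_threshold):
    # With relative thresholds, the tuner is steered toward configurations
    # with QoS >= baseline_qos - tuner_threshold, and a configuration is
    # kept when its QoS >= baseline_qos - keep_threshold.
    return baseline_qos - tuner_threshold, baseline_qos - keep_threshold
```

For a baseline accuracy of 93.0, relative thresholds of 2.0 and 3.0 translate to absolute floors of 91.0 and 90.0 respectively.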
......@@ -101,8 +122,11 @@ but encourage the tuner to find configurations with loss of accuracy below 2.1%.
e.g., here it refers to the accuracy of DNN over given datasets.
We will be using the term QoS throughout the tutorials.
:py:meth:`tuner.tune <predtuner.modeledapp.ApproxModeledTuner.tune>`
is the main method for running a tuning session.
It accepts a few parameters that control the behavior of tuning.
`max_iter` defines the number of iterations to use in autotuning.
Within 500 iterations, PredTuner should find about 200 valid configurations.
Within 1000 iterations, PredTuner should find about 200 valid configurations.
PredTuner will also automatically mark out `Pareto-optimal
<https://en.wikipedia.org/wiki/Pareto_efficiency>`_
configurations.
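Here, a configuration is Pareto-optimal when no other valid configuration is at least as good in both QoS and speedup and strictly better in one. A self-contained sketch of extracting such a frontier from (QoS, speedup) pairs (illustrative only; PredTuner computes this internally):

```python
def pareto_front(configs):
    """Return the configs not dominated by any other config.

    Each config is a (qos, speedup) pair; higher is better for both.
    A config is dominated if some other config is >= in both
    dimensions and strictly better in at least one.
    """
    front = []
    for i, (q1, s1) in enumerate(configs):
        dominated = any(
            (q2 >= q1 and s2 >= s1) and (q2 > q1 or s2 > s1)
            for j, (q2, s2) in enumerate(configs)
            if j != i
        )
        if not dominated:
            front.append((q1, s1))
    return front
```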
......@@ -111,7 +135,7 @@ in contrast to "valid" configurations which are the configurations that satisfy
(`tuner.kept_configs`).
`take_best_n` allows taking some extra close-optimal configurations in addition to Pareto-optimal ones.
500 iterations is for demonstration; in practice,
1000 iterations is for demonstration; in practice,
at least 10000 iterations are necessary on VGG16-sized models to converge to a set of good configurations.
Depending on hardware performance, this tuning should take several minutes to several tens of minutes.
......@@ -130,7 +154,8 @@ and visualize all configurations in a figure:
The generated figure should look like this:
.. image:: tuning_result.png
.. image:: _static/result_no_model.png
:target: _static/result_no_model.png
where the blue points show the QoS and speedup of all valid configurations,
and the "best" configurations are marked out in orange.
......@@ -148,11 +173,11 @@ To do that, simply use the argument `qos_model` when calling `tuner.tune()`:
tuner = app.get_tuner()
tuner.tune(
max_iter=500,
qos_tuner_threshold=2.1, # QoS threshold to guide tuner into
max_iter=1000,
qos_tuner_threshold=2.0, # QoS threshold to guide tuner into
qos_keep_threshold=3.0, # QoS threshold for which we actually keep the configurations
is_threshold_relative=True, # Thresholds are relative to baseline -- baseline_acc - 2.1
take_best_n=50,
is_threshold_relative=True, # Thresholds are relative to baseline -- baseline_acc - 2.0
take_best_n=20,
cost_model="cost_linear", # Use linear cost predictor
qos_model="qos_p1"
)
......@@ -162,3 +187,17 @@ when it learns about the behavior of each knob on each operator (DNN layer).
Because the configurations will end up with predicted QoS values after tuning,
this will add a *validation* stage at the end of tuning where the QoS of best configurations are empirically measured,
and the bad ones are removed.
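The validation stage can be sketched as re-measuring each predicted-best configuration and dropping those that miss the keep threshold. This is a hypothetical helper, not PredTuner's code; ``measure_qos`` stands in for empirical measurement:

```python
def validate_configs(configs, measure_qos, keep_threshold):
    """Keep only configs whose *measured* QoS meets the keep threshold.

    configs: list of (name, predicted_qos) pairs.
    measure_qos: callable that empirically measures a config's QoS.
    keep_threshold: minimum acceptable measured QoS.
    """
    kept = []
    for name, _predicted in configs:
        measured = measure_qos(name)
        if measured >= keep_threshold:
            # Replace the predicted QoS with the measured one
            kept.append((name, measured))
    return kept
```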
Following the procedure above to plot a figure of the configurations,
the generated figure should look like this,
with one extra subfigure (middle) comparing the predicted and measured QoS.
.. image:: _static/result_with_model.png
:target: _static/result_with_model.png
----------------------------------------------------------
This concludes the tutorial for installing and using PredTuner.
What we have just used is the PyTorch API of PredTuner.
:doc:`reference/index` shows the reference of this API along with two sets of lower-level APIs
that allow tuning applications that are not PyTorch DNNs.
General Application Autotuning API
==================================
.. autoclass:: predtuner.approxapp.ApproxApp
:members:
.. autoclass:: predtuner.approxapp.ApproxTuner
:members:
.. autoclass:: predtuner.approxapp.ApproxKnob
:members:
.. autoclass:: predtuner.approxapp.Config
:members:
PyTorch Autotuning API
======================
PredTuner Autotuning API
========================
.. autoclass:: predtuner.torchapp.TorchApp
:members:
:undoc-members:
:doc:`pytorch-app` documents a high-level API for autotuning PyTorch Modules.
.. autoclass:: predtuner.modeledapp.ApproxModeledTuner
:members:
:inherited-members:
:undoc-members:
PredTuner also supports predictive tuning of general applications that are not PyTorch Modules,
or even empirical tuning of general applications that don't support predictive models.
These lower-level APIs are documented in :doc:`modeled-app` and :doc:`approx-app` respectively.
.. toctree::
:maxdepth: 1
pytorch-app
modeled-app
approx-app
Predictive (Modeled) Autotuning API
===================================
.. autoclass:: predtuner.modeledapp.ModeledApp
:show-inheritance:
:members:
.. autoclass:: predtuner.modeledapp.ApproxModeledTuner
:show-inheritance:
:members:
.. autoclass:: predtuner.modeledapp.ValConfig
:show-inheritance:
:members:
Predictive Model Interface
----------------------------
.. autoclass:: predtuner.modeledapp.IQoSModel
:members:
.. autoclass:: predtuner.modeledapp.ICostModel
:members:
Predefined Predictive Models
----------------------------
Below is a list of cost and QoS models already defined:
* `predtuner.modeledapp.LinearCostModel`
* `predtuner.modeledapp.QoSModelP1`
* `predtuner.modeledapp.QoSModelP2`
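A linear cost model of the kind named above can be understood, roughly, as predicting a configuration's cost as a linear combination of per-operator costs, each baseline cost scaled down by the expected speedup of the knob applied to that operator. The following is a sketch of that idea under stated assumptions, not the actual `LinearCostModel` implementation:

```python
def predict_cost(baseline_costs, knob_speedups, config):
    """Estimate a config's total cost as a sum of scaled per-op costs.

    baseline_costs: {op_name: cost of running the op unapproximated}
    knob_speedups: {knob_name: expected speedup factor of that knob}
    config: {op_name: knob_name} -- the knob chosen for each op.
    Ops absent from `config` run at baseline (speedup 1.0).
    """
    total = 0.0
    for op, base_cost in baseline_costs.items():
        speedup = knob_speedups.get(config.get(op, "baseline"), 1.0)
        total += base_cost / speedup
    return total
```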
.. autoclass:: predtuner.modeledapp.LinearCostModel
:show-inheritance:
.. autoclass:: predtuner.modeledapp.QoSModelP1
:show-inheritance:
.. autoclass:: predtuner.modeledapp.QoSModelP2
:show-inheritance:
PyTorch Autotuning API
======================
.. autoclass:: predtuner.torchapp.TorchApp
:show-inheritance:
:members: get_tuner
.. autofunction:: predtuner.approxes.get_knobs_from_file
.. autofunction:: predtuner.torchutil.accuracy
Defining New Approximation Knobs
--------------------------------
.. autoclass:: predtuner.torchapp.TorchApproxKnob
:show-inheritance:
:members:
sphinx>=3.5
pydata-sphinx-theme==0.5.2
numpydoc>=1.1
\ No newline at end of file
doc/tuning_result.png

26.3 KiB

......@@ -42,10 +42,11 @@ baseline, _ = app.measure_qos_cost({}, False)
# Get a tuner object and start tuning!
tuner = app.get_tuner()
tuner.tune(
max_iter=500, # TODO: In practice, use at least 5000, or 10000
qos_tuner_threshold=2.1, # QoS threshold to guide tuner into
max_iter=1000, # TODO: In practice, use at least 5000, or 10000
qos_tuner_threshold=2.0, # QoS threshold to guide tuner into
qos_keep_threshold=3.0, # QoS threshold for which we actually keep the configurations
is_threshold_relative=True, # Thresholds are relative to baseline -- baseline_acc - 2.1
take_best_n=20, # Take the best 20 configs (not just the "strictly" best ones)
cost_model="cost_linear", # Use linear performance predictor
qos_model="qos_p1", # Use P1 QoS predictor
)
......
......@@ -2,9 +2,9 @@ from ._logging import config_pylogger
from .approxapp import ApproxApp, ApproxKnob, ApproxTuner
from .approxes import get_knobs_from_file
from .modeledapp import (
IPerfModel,
ICostModel,
IQoSModel,
LinearPerfModel,
LinearCostModel,
ModeledApp,
QoSModelP1,
QoSModelP2,
......
This diff is collapsed.
......@@ -393,6 +393,21 @@ def get_knobs_from_file(
filepath: PathLike = default_knob_file,
extra_name_to_class: Dict[str, Type[TorchApproxKnob]] = None,
) -> Set[TorchApproxKnob]:
"""get_knobs_from_file(filepath=default_knob_file, extra_name_to_class=None)
Constructs and returns a set of `TorchApproxKnob` from a knob declaration file.
`default_knob_file` points to a file that is contained in the predtuner package,
so just calling ``get_knobs_from_file()`` should provide a set of predefined knobs already.
:param filepath: the knob declaration file (JSON) to read from.
:param extra_name_to_class: a mapping from the name of the approximation to the
class (implementation) of the approximation.
If not given, only the builtin approximations will be considered
when parsing the declaration file.
:type extra_name_to_class: Dict[str, Type[TorchApproxKnob]]
:rtype: Set[TorchApproxKnob]
"""
import json
extra_name_to_class = extra_name_to_class or {}
......
This diff is collapsed.