Guy Jacob authored
This NCF implementation is based on the implementation found in the MLPerf Training GitHub repository, specifically on the last revision of the code before the switch to the extended dataset. See: https://github.com/mlperf/training/tree/fe17e837ed12974d15c86d5173fe8f2c188434d5/recommendation/pytorch

We've made several modifications to the code:

* Removed all MLPerf-specific code, including logging
* In ncf.py:
  * Added calls to Distiller compression APIs
  * Added progress indication in the training and evaluation flows
* In neumf.py:
  * Added an option to split the final FC layer
  * Replaced all functional calls with modules so they can be detected by Distiller
* In dataset.py:
  * Sped up data loading: on the first run, data is loaded from the CSVs and then pickled; on subsequent runs the pickle is loaded instead. This is much faster than the original implementation, but still quite slow.
  * Added progress indication during the data-loading process
* Removed some irrelevant content from README.md
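The pickle-based caching mentioned for dataset.py can be sketched as follows. This is a minimal illustration of the parse-once-then-pickle pattern, not the actual dataset.py code; the `load_ratings` function, its arguments, and the `.pkl` cache-path convention are all assumptions made for the example.

```python
import csv
import os
import pickle


def load_ratings(csv_path, cache_path=None):
    """Load (user, item, rating) triples from a CSV, caching the parsed result.

    Hypothetical helper, not the real dataset.py API. On the first run the
    CSV is parsed and the result is pickled next to it; on subsequent runs
    the pickle is deserialized directly, which is much faster than
    re-parsing the text file.
    """
    if cache_path is None:
        cache_path = csv_path + '.pkl'

    # Fast path: a cached pickle from a previous run exists.
    if os.path.exists(cache_path):
        with open(cache_path, 'rb') as f:
            return pickle.load(f)

    # Slow path: parse the CSV, then write the pickle for next time.
    ratings = []
    with open(csv_path, newline='') as f:
        for user, item, rating in csv.reader(f):
            ratings.append((int(user), int(item), float(rating)))
    with open(cache_path, 'wb') as f:
        pickle.dump(ratings, f)
    return ratings
```

In this scheme the cache is only rebuilt if the pickle file is deleted, so the CSV must not change underneath it; a more robust version would also compare file modification times.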