# Instructions for Training the ResNet Inpainting Model, Saving Predictions, and Evaluating Accuracy
We assume you have activated the `dav2` Conda environment. Alternatively, activate the `terra-torch3d` Conda environment and install the `pytorch-msssim` and `lpips` packages. You should also have generated delayed reprojections with your chosen depth model by following the instructions here.
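If you went the `terra-torch3d` route, a quick sanity check can confirm the extra dependencies are importable before training. The module names below are assumptions: the pip packages `pytorch-msssim` and `lpips` typically expose `pytorch_msssim` and `lpips`.

```python
import importlib.util

# Assumed module names for the pip packages pytorch-msssim and lpips.
required = ["pytorch_msssim", "lpips"]
missing = [name for name in required if importlib.util.find_spec(name) is None]
if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("All evaluation dependencies found.")
```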
## Train Model

- Update the file names and valid timesteps to load for the train, validation, and test splits in the dataset script here
- Update the path to the pretrained VisionNavNet in the model file here
- Run `python train_resnet_inpaint.py` with the desired arguments to train an inpainting model
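The split configuration you edit in the dataset script might look something like the sketch below. The video names and timestep ranges here are purely hypothetical placeholders; the real values depend on your data.

```python
# Hypothetical split definition -- video names and timestep ranges are
# placeholders for the values configured in the dataset script.
SPLITS = {
    "train": {"videos": ["video_00", "video_01"], "timesteps": range(0, 800)},
    "val":   {"videos": ["video_02"],             "timesteps": range(0, 200)},
    "test":  {"videos": ["video_03"],             "timesteps": range(0, 200)},
}

for name, cfg in SPLITS.items():
    print(name, cfg["videos"], len(cfg["timesteps"]))
```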
## Collect Predictions

- Save predictions from the pretrained ResNet inpainting model:
  - Update the path to the trained checkpoint to load and the names of the videos to evaluate in `save_resnet_inpaint.py`
  - Run `python save_resnet_inpaint.py` for each video in the dataset that you want to inpaint
- Save predictions from the classic Telea inpainting method:
  - Update the names of the videos to evaluate in `save_telea_inpaint.py`
  - Run `python save_telea_inpaint.py` for each video in the dataset that you want to inpaint
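Because both scripts are run once per video, a small driver loop can save typing. This sketch only builds and prints the commands; the video names and the `--video` flag are assumptions for illustration (check each script for how it actually selects a video).

```python
import shlex

# Hypothetical video names; replace with the videos configured in the scripts.
videos = ["video_00", "video_01"]

# Build one command per (script, video) pair. The --video flag is an
# assumption; the actual scripts may instead read video names from
# variables edited inside the file.
commands = [
    f"python {script} --video {shlex.quote(v)}"
    for script in ("save_resnet_inpaint.py", "save_telea_inpaint.py")
    for v in videos
]
for cmd in commands:
    print(cmd)
```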
## Evaluate Accuracy

- To compute the quantitative accuracy of the inpainted images:
  - Run `python eval_inpaint.py` for each video in the dataset that you want to evaluate
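`eval_inpaint.py` presumably reports reconstruction metrics such as MS-SSIM and LPIPS (hence the `pytorch-msssim` and `lpips` dependencies). As a minimal self-contained illustration of such a metric, here is PSNR in pure Python; it is not the repo's evaluation code, just a sketch of the general idea.

```python
import math

def psnr(pred, target, max_val=255.0):
    """Peak signal-to-noise ratio between two flat pixel sequences."""
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    if mse == 0:
        return float("inf")  # identical inputs
    return 10.0 * math.log10(max_val ** 2 / mse)

# Small pixel error -> high PSNR (in dB); larger errors lower the score.
print(psnr([100, 120, 130], [100, 118, 131]))
```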