Commit 0a8a3b31 authored by Neta Zmora

Bug fix: remove softmax layer from model loading code

We should only add softmax when we explicitly require it (as when
exporting to ONNX), because CrossEntropyLoss implicitly computes
softmax on the logits it receives as input.

This code was left there by mistake and should never have been
pushed to git.
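
As a quick illustration of why (a minimal sketch, not part of this commit): torch.nn.CrossEntropyLoss consumes raw logits and applies log-softmax internally, so feeding it softmax outputs applies softmax twice:

import torch

# CrossEntropyLoss expects raw logits; it applies log-softmax internally
# before computing the negative log-likelihood.
logits = torch.randn(4, 10)                        # batch of 4, 10 classes
targets = torch.randint(0, 10, (4,))
criterion = torch.nn.CrossEntropyLoss()

loss_ok = criterion(logits, targets)               # correct: pass logits directly

# Passing probabilities instead effectively applies softmax twice, which
# flattens the predicted distribution and distorts the gradients.
probs = torch.nn.functional.softmax(logits, dim=1)
loss_wrong = criterion(probs, targets)
print(loss_ok.item(), loss_wrong.item())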
parent e0bfc796
@@ -75,9 +75,5 @@ def create_model(pretrained, dataset, arch, parallel=True, device_ids=None):
     elif parallel:
         model = torch.nn.DataParallel(model, device_ids=device_ids)
-    # explicitly add a softmax layer, because it is useful when exporting to ONNX
-    model.original_forward = model.forward
-    softmax = torch.nn.Softmax(dim=1)
-    model.forward = lambda input: softmax(model.original_forward(input))
     model.cuda()
     return model
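
For the export case the message alludes to, one way to add the softmax only when it is explicitly required might look like the sketch below (export_to_onnx, dummy_input, and the output path are hypothetical illustrations, not code from this repository):

import torch

def export_to_onnx(model, dummy_input, path="model.onnx"):
    # Hypothetical helper: wrap the trained model with an explicit softmax
    # only at export time, so ONNX consumers receive probabilities while
    # training keeps feeding raw logits to CrossEntropyLoss.
    export_model = torch.nn.Sequential(model, torch.nn.Softmax(dim=1))
    export_model.eval()
    torch.onnx.export(export_model, dummy_input, path)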