pytorch/test/onnx/debug_embed_params.py
Edward Yang 173f224570 Turn on F401: Unused import warning. (#18598)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18598
ghimport-source-id: c74597e5e7437e94a43c163cee0639b20d0d0c6a

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18598 Turn on F401: Unused import warning.**

This was requested by someone at Facebook; this lint is turned
on for Facebook by default.  "Sure, why not."

I had to noqa a number of imports in __init__.  Hypothetically
we're supposed to use __all__ in this case, but I was too lazy
to fix it.  Left for future work.
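
A minimal sketch of the two options (the package layout and names here are
illustrative, not from this PR):

```python
# __init__.py -- option taken in this PR: silence F401 at the import site.
from .submodule import helper  # noqa: F401

# Alternative (left for future work): declare the re-export in __all__;
# pyflakes treats names listed in __all__ as used.
# from .submodule import helper
# __all__ = ['helper']
```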

Be careful!  flake8-2 and flake8-3 behave differently with
respect to import resolution for # type: comments.  flake8-3 will
report such an import as unused; flake8-2 will not.  For now, I just
noqa'd all these sites.
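
As an illustration of the kind of site that got a noqa (the names are made
up), an import referenced only from a # type: comment:

```python
from typing import List  # noqa: F401  (only used in the type comment below)


def total(xs):
    # type: (List[int]) -> int
    return sum(xs)
```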

All the changes were done by hand.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14687478

fbshipit-source-id: 30d532381e914091aadfa0d2a5a89404819663e3
2019-03-30 09:01:17 -07:00

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import sys

import torch
import torch.jit
from torch.autograd import Variable

import onnx
import caffe2.python.onnx.backend as c2
from test_pytorch_common import flatten

torch.set_default_tensor_type('torch.FloatTensor')

# Exit gracefully (rather than fail) when torch is unavailable, since this
# helper only makes sense for the caffe2-torch comparison tests.
try:
    import torch
except ImportError:
    print('Cannot import torch, hence caffe2-torch test will not run.')
    sys.exit(0)


def run_embed_params(proto, model, input, state_dict=None, use_gpu=True):
    """
    This is only a helper debug function so we can test the embed_params=False
    case as well on the PyTorch side.
    It should likely be removed from the release version of the code.
    """
    device = 'CPU'
    if use_gpu:
        device = 'CUDA'
    model_def = onnx.ModelProto.FromString(proto)
    onnx.checker.check_model(model_def)
    prepared = c2.prepare(model_def, device=device)

    if state_dict:
        parameters = []
        # The passed-in state_dict may have a different order. Make sure our
        # order is consistent with the model's order.
        # TODO: Even better: keyword arguments!
        for k in model.state_dict():
            if k not in state_dict:
                # If the PyTorch Module gained a parameter that the old
                # pre-trained state_dict does not have, simply fall back to
                # the model's own value for it.
                # TODO: Please don't export unnecessary parameters.
                parameters.append(model.state_dict()[k])
            else:
                parameters.append(state_dict[k])
    else:
        parameters = list(model.state_dict().values())

    # Bind each graph input (the data inputs followed by the parameters) to a
    # numpy array by name.
    W = {}
    for k, v in zip(model_def.graph.input, flatten((input, parameters))):
        if isinstance(v, Variable):
            W[k.name] = v.data.cpu().numpy()
        else:
            W[k.name] = v.cpu().numpy()

    caffe2_out = prepared.run(inputs=W)
    return caffe2_out
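
# Illustrative usage (a sketch, not part of the original file -- `model` and
# `x` are hypothetical placeholders): export without embedded parameters,
# then feed the weights back in through this helper.
#
#   import io
#   f = io.BytesIO()
#   torch.onnx._export(model, x, f, export_params=False)
#   caffe2_out = run_embed_params(f.getvalue(), model, x,
#                                 state_dict=model.state_dict(),
#                                 use_gpu=False)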