Commit Graph

40 Commits

Author SHA1 Message Date
Jeff Daily
01abb5af21 additional support for float8_e4m3fnuz and _e5m2fnuz (#115214)
Follow up to #107586.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115214
Approved by: https://github.com/peterbell10, https://github.com/malfet
2024-01-22 18:33:41 +00:00
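For context, a minimal sketch of what this support enables, assuming a PyTorch build that exposes the fnuz float8 dtypes (values shown are illustrative):

```python
import torch

# Round-trip float32 data through the two fnuz float8 variants. Arithmetic on
# float8 tensors is largely unsupported, so compute in a wider dtype and cast
# only at the boundaries.
x = torch.randn(4, dtype=torch.float32)
a = x.to(torch.float8_e4m3fnuz)
b = x.to(torch.float8_e5m2fnuz)
print(a.dtype, b.dtype)     # torch.float8_e4m3fnuz torch.float8_e5m2fnuz
print(a.to(torch.float32))  # values rounded to the nearest representable float8
```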
PyTorch MergeBot
b637fdc8b3 Revert "additional support for float8_e4m3fnuz and _e5m2fnuz (#115214)"
This reverts commit 74e1362499.

Reverted https://github.com/pytorch/pytorch/pull/115214 on behalf of https://github.com/PaliC due to breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/115214#issuecomment-1900815152))
2024-01-19 17:35:04 +00:00
Jeff Daily
74e1362499 additional support for float8_e4m3fnuz and _e5m2fnuz (#115214)
Follow up to #107586.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115214
Approved by: https://github.com/peterbell10
2024-01-19 00:50:18 +00:00
Nikita Shulga
d6744a698c [torch/csrc/onnx] Use nested namespaces (2/N) (#113992)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113992
Approved by: https://github.com/ZainRizvi
ghstack dependencies: #113991
2023-11-18 00:20:19 +00:00
Aaron Bockover
bd1229477d [ONNX] Add initial support for FP8 ONNX export (#107962)
This PR resurrects @tcherckez-nvidia's #106379 with changes to resolve conflicts against newer `main` and defines our own constants for the new ONNX types to [avoid breaking Meta's internal usage of an old ONNX](https://github.com/pytorch/pytorch/pull/106379#issuecomment-1675189340).

- `::torch::onnx::TensorProto_DataType_FLOAT8E4M3FN=17`
- `::torch::onnx::TensorProto_DataType_FLOAT8E5M2=19`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107962
Approved by: https://github.com/justinchuby, https://github.com/titaiwangms
2023-09-08 20:40:39 +00:00
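As a sanity check of the constants quoted above, a short sketch assuming an onnx package recent enough to define the FP8 types (onnx >= 1.14):

```python
import onnx

# The torch-side constants defined in this PR should match upstream ONNX.
assert onnx.TensorProto.FLOAT8E4M3FN == 17
assert onnx.TensorProto.FLOAT8E5M2 == 19
```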
cyy
054f3f1d8f [3/N] fix clang-tidy warnings in torch/csrc (#108024)
Apply fixes to some issues found by clang-tidy in torch/csrc.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108024
Approved by: https://github.com/Skylion007, https://github.com/albanD, https://github.com/malfet
2023-08-28 18:00:00 +00:00
PyTorch MergeBot
71be8f2223 Revert "Add initial support for FP8 ONNX export (#106379)"
This reverts commit 08704f96f0.

Reverted https://github.com/pytorch/pytorch/pull/106379 on behalf of https://github.com/kit1980 due to breaking multiple internal builds ([comment](https://github.com/pytorch/pytorch/pull/106379#issuecomment-1675192700))
2023-08-11 18:22:35 +00:00
Tal Cherckez
08704f96f0 Add initial support for FP8 ONNX export (#106379)
Add support for ONNX_NAMESPACE::TensorProto_DataType_FLOAT8E5M2 and ONNX_NAMESPACE::TensorProto_DataType_FLOAT8E4M3FN to enable export of torch models that use FP8 (E4M3 and E5M2) to ONNX (opset 19).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106379
Approved by: https://github.com/justinchuby, https://github.com/thiagocrepaldi, https://github.com/malfet
2023-08-10 01:02:45 +00:00
BowenBao
88d0235b73 [ONNX] Update CI test environment; Add symbolic functions (#94564)
* Update the CI test environment to install onnx and onnx-script.
* Add symbolic functions for `bitwise_or`, `convert_element_type` and `masked_fill_`.
* Update symbolic function for `slice` and `arange`.
* Update .pyi signature for `_jit_pass_onnx_graph_shape_type_inference`.

Co-authored-by: Wei-Sheng Chin <wschin@outlook.com>
Co-authored-by: Ti-Tai Wang <titaiwang@microsoft.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94564
Approved by: https://github.com/abock
2023-02-10 20:44:59 +00:00
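For reference, a sketch of the registration mechanism that such symbolic functions plug into; `aten::asinh` is only an illustrative op, not one added by this commit:

```python
import torch
from torch.onnx import register_custom_op_symbolic

# Map an ATen op to its ONNX equivalent; g.op emits an ONNX node.
def asinh_symbolic(g, input):
    return g.op("Asinh", input)

register_custom_op_symbolic("aten::asinh", asinh_symbolic, opset_version=9)
```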
AllenTiTaiWang
04b06c9627 [ONNX] Use optional op to keep None in results for ONNX internal tests (#84789)
All this time, PyTorch and ONNX have had different strategies for None in outputs, and in internal tests we flatten the torch outputs to see whether the rest of them match. However, this no longer works in scripting after the Optional node was introduced, since some Nones are kept.

#83184 forces script modules to keep all Nones from PyTorch, but in ONNX the model keeps only the ones generated with an Optional node and deletes the meaningless Nones.

This PR uses the Optional node to keep those meaningless Nones in the output as well, so that when script-module results are compared, PyTorch and ONNX have the same number of Nones.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84789
Approved by: https://github.com/BowenBao
2023-02-08 23:04:47 +00:00
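A minimal sketch of the scenario described above: a scripted module whose output tuple contains a None, which scripting preserves as an Optional value:

```python
import torch
from typing import Optional, Tuple

class M(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, Optional[torch.Tensor]]:
        # The second output is a None that scripting keeps in the results.
        return x + 1, None

scripted = torch.jit.script(M())
out, none_out = scripted(torch.ones(2))
print(out, none_out)  # tensor([2., 2.]) None
```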
AllenTiTaiWang
b27ac6dc56 [ONNX] Add full checker mode in torch.onnx.export (#83186)
Fix #82589
Why:
1. **full_check** works in the `onnx::checker::check_model` function by turning on **strict_mode** in `onnx::shape_inference::InferShapes()`, which I believe was the intention of this part of the code.
2. **strict_mode** catches failed shape type inference (an invalid ONNX model from the ONNX perspective), and ONNX Runtime can't run these invalid models, as it relies on ONNX shape type inference to optimize the ONNX graph. Why don't we set it to True by default? Some existing users run ONNX models on other platforms, such as Caffe2, which don't require a valid ONNX model.
3. This PR doesn't change the original behavior of `check_onnx_proto`, but adds a warning message for models that can't pass strict shape type inference, saying those models would fail on ONNX Runtime.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/83186
Approved by: https://github.com/justinchuby, https://github.com/thiagocrepaldi, https://github.com/jcwchen, https://github.com/BowenBao
2023-02-08 22:47:25 +00:00
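A sketch of the stricter validation discussed above, assuming `model.onnx` is a previously exported file; `full_check=True` turns on strict shape/type inference, which is what ONNX Runtime effectively requires:

```python
import onnx

model = onnx.load("model.onnx")
# Raises if strict shape/type inference fails, i.e. a model that ONNX
# Runtime would reject.
onnx.checker.check_model(model, full_check=True)
```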
BowenBao
806878518f [ONNX][Reland] Export node and value with scope name (#82040)
Introduce `_jit_pass_onnx_assign_node_and_value_names` to parse and assign
scoped names for nodes and values in the exported ONNX graph.
Module layer information is obtained from the `ONNXScopeName` captured in the `scope`
attribute of nodes. For nodes, the processed ONNX node name is stored in the
attribute `onnx_name`. For values, the processed ONNX output name is stored
as the `debugName`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82040
Approved by: https://github.com/AllenTiTaiWang, https://github.com/justinchuby, https://github.com/abock
2022-08-29 20:10:38 +00:00
PyTorch MergeBot
8e6207bcd8 Revert "[ONNX] Export node and value with scope name (#82040)"
This reverts commit 6a3666282d.

Reverted https://github.com/pytorch/pytorch/pull/82040 on behalf of https://github.com/weiwangmeta due to Diff reverted internally
2022-08-29 06:36:18 +00:00
BowenBao
6a3666282d [ONNX] Export node and value with scope name (#82040)
Introduce `_jit_pass_onnx_assign_node_and_value_names` to parse and assign
scoped names for nodes and values in the exported ONNX graph.
Module layer information is obtained from the `ONNXScopeName` captured in the `scope`
attribute of nodes. For nodes, the processed ONNX node name is stored in the
attribute `onnx_name`. For values, the processed ONNX output name is stored
as the `debugName`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82040
Approved by: https://github.com/AllenTiTaiWang, https://github.com/justinchuby, https://github.com/abock
2022-08-26 20:59:12 +00:00
BowenBao
daca0ee5e2 [ONNX] Introduce ONNXScopeName (#82038)
Update `_setup_trace_module_map` to always record module/layer info
in the `Scope` attribute for nodes.
Extend the `Scope` name to record not only the module type name, but also
the module object's variable name. Both names are formatted and stored
as the `name` attribute in `Scope`.
Introduce the `ONNXScopeName` class to manage the formatting and parsing.
Update the local function export code to adjust to this change.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82038
Approved by: https://github.com/AllenTiTaiWang, https://github.com/justinchuby, https://github.com/abock, https://github.com/malfet
2022-08-22 20:49:21 +00:00
shubhambhokare1
95d873855e [ONNX] Inline prim::PythonOp for Autograd Function Export (#74765)
Add a flag (inline_autograd) to enable inline export of models consisting of autograd functions. Currently, this flag should only be used with TrainingMode.EVAL, not for training.

An example:

If a model containing ``autograd.Function`` is as follows
```
                class AutogradFunc(torch.autograd.Function):
                  @staticmethod
                  def forward(ctx, i):
                      result = i.exp()
                      result = result.log()
                      ctx.save_for_backward(result)
                      return result
```
Then the model is exported as
```
                graph(%0 : Float):
                  %1 : Float = ^AutogradFunc(%0)
                  return (%1)
```
If inline_autograd is set to True, this will be exported as
```
                graph(%0 : Float):
                  %1 : Float = onnx::Exp(%0)
                  %2 : Float = onnx::Log(%1)
                  return (%2)
```

If one of the ops within the autograd module is not supported, that particular node is exported as-is, mirroring ONNX_FALLTHROUGH mode.

Fixes: #61813
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74765
Approved by: https://github.com/BowenBao, https://github.com/malfet
2022-08-03 23:30:19 +00:00
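A sketch of the commit's example completed into runnable form; the `inline_autograd` knob named above is quoted from the commit message, and whether inlining happens by default depends on the exporter version:

```python
import torch

class AutogradFunc(torch.autograd.Function):
    @staticmethod
    def forward(ctx, i):
        result = i.exp().log()
        ctx.save_for_backward(result)
        return result

    @staticmethod
    def backward(ctx, grad_output):
        # d/di log(exp(i)) == 1, so the gradient passes through unchanged.
        return grad_output

class M(torch.nn.Module):
    def forward(self, x):
        return AutogradFunc.apply(x)

# Export in EVAL mode, as the commit requires; with inlining active the graph
# contains onnx::Exp followed by onnx::Log instead of a PythonOp.
torch.onnx.export(M(), torch.randn(3), "autograd_fn.onnx",
                  training=torch.onnx.TrainingMode.EVAL)
```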
Justin Chu
491c2ed281 [ONNX] Use TORCH_WARN for warnings (#78441)
Warnings were output to `std::cerr` in the ONNX JIT passes, which prevented them from being filtered. This PR replaces them with `TORCH_WARN` so we get more Pythonic warnings.

- Use `TORCH_WARN` for warnings
- Wrap JIT passes with `wrap_pybind_function` when binding with Python to handle the warnings properly

Calm test outputs, nice:

![image](https://user-images.githubusercontent.com/11205048/171510581-67299e9a-2dcd-4950-9cf3-ed67431f1f0c.png)

![image](https://user-images.githubusercontent.com/11205048/171516351-98bd342b-5f0a-4877-98c2-3be863b7f795.png)

Fixes #77494
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78441
Approved by: https://github.com/garymm
2022-06-06 18:28:50 +00:00
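A small sketch of the benefit described above: once the exporter emits real Python warnings instead of writing to `std::cerr`, standard filtering applies:

```python
import warnings
import torch

with warnings.catch_warnings():
    warnings.simplefilter("ignore")  # silence exporter warnings for this call
    torch.onnx.export(torch.nn.Linear(2, 2), torch.randn(1, 2), "linear.onnx")
```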
BowenBao
679fc90cdb [ONNX] Support optional type (#68793) (#73284)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73284

Some important ops won't support optional type until opset 16,
so we can't fully test things end-to-end, but I believe this should
be all that's needed. Once ONNX Runtime supports opset 16,
we can do more testing and fix any remaining bugs.

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D34625646

Pulled By: malfet

fbshipit-source-id: 537fcbc1e9d87686cc61f5bd66a997e99cec287b

Co-authored-by: BowenBao <bowbao@microsoft.com>
Co-authored-by: neginraoof <neginmr@utexas.edu>
Co-authored-by: Nikita Shulga <nshulga@fb.com>
(cherry picked from commit 822e79f31ae54d73407f34f166b654f4ba115ea5)
2022-05-04 20:24:30 +00:00
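A sketch of what this enables once opset 16 is available, assuming a scripted model with an `Optional[Tensor]` argument:

```python
import torch
from typing import Optional

class M(torch.nn.Module):
    def forward(self, x: torch.Tensor, y: Optional[torch.Tensor] = None) -> torch.Tensor:
        # Scripting preserves the Optional branch instead of freezing it.
        if y is not None:
            return x + y
        return x

scripted = torch.jit.script(M())
torch.onnx.export(scripted, (torch.randn(2), torch.randn(2)),
                  "optional.onnx", opset_version=16)
```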
BowenBao
54a6942f8d [ONNX] ONNX Exporter logging (#71342)
Summary:
Add an ONNX exporter logging facility, supporting both the C++ and Python logging APIs. Logging can be turned on and off, and the output stream can be set to either `stdout` or `stderr`.

A few other changes:
* When an exception is raised in a pass, the current IR graph being processed is logged.
* When an exception is raised from `_jit_pass_onnx` (the pass that converts nodes from the `ATen` namespace to `ONNX`), both the ATen IR graph and the ONNX IR graph under construction are logged.
* The exception message for ConstantFolding is truncated to avoid being too verbose.
* Update the final printed IR graph with the node name from the ONNX ModelProto as a node attribute. Torch IR nodes do not have names; adding this to the printed IR graph helps debugging.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/71342

Reviewed By: msaroufim

Differential Revision: D34433473

Pulled By: malfet

fbshipit-source-id: 4b137dfd6a33eb681a5f2612f19aadf5dfe3d84a
(cherry picked from commit 67a8ebed5192c266f604bdcca931df6fe589699f)
2022-03-17 19:40:03 +00:00
BowenBao
b3cfc74f0f [ONNX] Capture annotated attributes for local function
Enables local function export to capture annotated attributes.
For example:
```python
class M(torch.nn.Module):
    num_layers: int

    def __init__(self, num_layers):
        super().__init__()
        self.num_layers = num_layers

    def forward(self, args):
        ...
```
`num_layers` will now be captured as an attribute of the local function `M`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72883
2022-02-28 18:56:18 +00:00
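A sketch of local-function export with the public `export_modules_as_functions` flag (which requires opset >= 15); with this change, the annotated `num_layers` attribute is captured on the local function for `M`:

```python
import torch

class M(torch.nn.Module):
    num_layers: int

    def __init__(self, num_layers):
        super().__init__()
        self.num_layers = num_layers

    def forward(self, x):
        return x * self.num_layers

class Top(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.m = M(3)

    def forward(self, x):
        return self.m(x)

# Export M as an ONNX local function; its annotated attribute rides along.
torch.onnx.export(Top(), torch.randn(2), "local_fn.onnx",
                  opset_version=15, export_modules_as_functions={M})
```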
BowenBao
2791725a84 Integrate full ONNX check into ONNX export API (#71125)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72988
2022-02-18 18:40:09 +00:00
BowenBao
32f6a1e2a2 [ONNX] First version of quantized model export: Support quantized.Linear (#69232)
Co-authored-by: David Fan <jiafa@microsoft.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72986
2022-02-18 18:27:26 +00:00
BowenBao
bbac8c9c48 [ONNX] List of files to consider for mergebot onnx rule (#72297)
Summary:
Based on past PRs, here is a non-exhaustive list of files to consider for the extension. The PR is not meant to be final. Based on feedback and discussion, files could be dropped from the list, or the PR could be updated to move code around so that the extension is no longer needed.

List of files below and description:

* These files are for converting from IR to ONNX proto. These should be used only for ONNX.
```
"torch/csrc/jit/serialization/export.*",
"torch/csrc/jit/serialization/onnx.*",
```

* This file is touched whenever pass signature is updated.
```
"torch/_C/__init__.pyi.in",
```

* These files are touched whenever a pass signature is updated. Somehow it has become convention that ONNX passes are also added here, but it may be possible to move them. Let me know what you think.
~~"torch/csrc/jit/python/init.cpp",~~
~~"torch/csrc/jit/python/script_init.cpp",~~
Update: Bowen will move onnx passes to files under onnx folder.

* ~~Touched when need new attr::xxx, or onnx::xxx.~~
~~"aten/src/ATen/core/interned_strings.h"~~
Update: Nikita will help separate this file.

malfet

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72297

Reviewed By: H-Huang

Differential Revision: D34254666

Pulled By: malfet

fbshipit-source-id: 032cfa590cbedf4648b7335fe8f09a2380ab14cb
(cherry picked from commit 88653eadbf)
2022-02-16 23:01:13 +00:00
BowenBao
eb4238fc26 Allow caffe2-specific graph transformations for OperatorExportTypes.ONNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (#67460) (#68490)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68490

The use of ATen as a fallback operator during ONNX conversion is important for increasing operator coverage, and can even provide more efficient implementations than some ONNX ops.

Currently this feature is available through `OperatorExportTypes.ONNX_ATEN_FALLBACK`,
but it also performs changes to the graph that are runnable only by Caffe2.

This PR restricts Caffe2-specific graph transformations for the `ONNX_ATEN_FALLBACK`
operator export type to builds where PyTorch has Caffe2 support (i.e., BUILD_CAFFE2=1 at build time).

The first version of this PR introduced a new operator export type, `ONNX_ATEN__STRICT_FALLBACK`,
which is essentially the same as `ONNX_ATEN_FALLBACK` but without the Caffe2 transformations.
It was preferred not to introduce a new operator export type, but to refine the existing ATen fallback one.

## BC-breaking note
### The global constant `torch.onnx.PYTORCH_ONNX_CAFFE2_BUNDLE` is removed in favor of a less visible `torch.onnx._CAFFE2_ATEN_FALLBACK`.
`PYTORCH_ONNX_CAFFE2_BUNDLE` is really a dead code flag always set to False.
One alternative would be fixing it, but #66658 disables Caffe2 build by default.
Making a Caffe2 feature a private one seems to make more sense for future deprecation.

### The method `torch.onnx.export` now defaults to ONNX when `operator_export_type` is not specified.
Previously, `torch.onnx.export`'s `operator_export_type` was intended to default to `ONNX_ATEN_FALLBACK` when `PYTORCH_ONNX_CAFFE2_BUNDLE` was set, but that would never happen, as `PYTORCH_ONNX_CAFFE2_BUNDLE` is always undefined.

 Co-authored-by: Nikita Shulga <nshulga@fb.com>

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D32483781

Pulled By: malfet

fbshipit-source-id: e9b447db9466b369e77d747188685495aec3f124
(cherry picked from commit 5fb1eb1b19)
2022-02-10 03:26:48 +00:00
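A sketch of selecting the ATen-fallback path discussed above; after this PR, whether the Caffe2-specific transforms also run depends on how PyTorch was built:

```python
import torch

torch.onnx.export(
    torch.nn.Linear(4, 4), torch.randn(1, 4), "linear_aten.onnx",
    operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK,
)
```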
hwangdeyu
c76c6e9bd3 [ONNX] Add BFloat16 type support when export to ONNX (#66788)
Summary:
- PyTorch and ONNX both support BFloat16; add this to unblock some mixed-precision training models.
- Support the PyTorch TNLG model using BFloat16 tensors for the inputs/outputs of the layers that run on the NPU.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66788

Reviewed By: jansel

Differential Revision: D32283510

Pulled By: malfet

fbshipit-source-id: 150d69b1465b2b917dd6554505eca58042c1262a
2021-12-14 12:23:32 -08:00
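A sketch of the export this unblocks; ONNX added BFloat16 in opset 13, so at least that opset is assumed (runtime kernel coverage may still vary):

```python
import torch

model = torch.nn.Linear(4, 4).to(torch.bfloat16)
x = torch.randn(1, 4, dtype=torch.bfloat16)
torch.onnx.export(model, x, "linear_bf16.onnx", opset_version=13)
```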
Bowen Bao
02e35ce17b [ONNX] Update onnx function export with comments and clean up (#66817) (#67803)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67803

* Addresses comments from #63589

[ONNX] remove torch::onnx::PRODUCER_VERSION (#67107)

Use constants from version.h instead.
This simplifies things since we no longer have to update
PRODUCER_VERSION for each release.

Also add TORCH_VERSION to version.h so that a string is available for
this purpose.

[ONNX] Set `ir_version` based on opset_version. (#67128)

This increases the odds that the exported ONNX model will be usable.
Before this change, we were setting the IR version to a value which may
be higher than what the model consumer supports.

Also some minor clean-up in the test code:
* Fix string replacement.
* Use a temporary file so as to not leave files around in the test
  current working directory.

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181306

Pulled By: malfet

fbshipit-source-id: 02f136d34ef8f664ade0bc1985a584f0e8c2b663

Co-authored-by: BowenBao <bowbao@microsoft.com>
Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
Co-authored-by: Nikita Shulga <nshulga@fb.com>
2021-11-05 10:35:35 -07:00
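A sketch of inspecting the fields the commits above touch, assuming `model.onnx` was produced by `torch.onnx.export`: the producer string now tracks the torch version, and `ir_version` is derived from the requested opset:

```python
import onnx

m = onnx.load("model.onnx")
print(m.producer_name, m.producer_version)      # e.g. "pytorch" and the torch version
print(m.ir_version, m.opset_import[0].version)  # IR version and export opset
```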
Gary Miguel
4b91355232 [ONNX] remove raw export type (#59160)
Summary:
[ONNX] remove raw export type

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59160

Reviewed By: tugsbayasgalan

Differential Revision: D28937039

Pulled By: SplitInfinity

fbshipit-source-id: 79bf91605526aa32a7304e75f50fe55d872bd4e8
2021-06-11 00:08:06 -07:00
Negin Raoof
b7b99ab0c8 [ONNX] Remove Aten ops from ONNX export (#37239)
Summary:
This PR adds a new operator export type to the exporter: ONNX_FALLTHROUGH.
This new type allows ops that are not supported to pass through.
This PR also removes all ATen ops from the ONNX operator export type mode.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37239

Reviewed By: hl475

Differential Revision: D21440509

Pulled By: houseroad

fbshipit-source-id: 38b826677cf3431ea44868efebefe1ff51c9aa75
2020-05-29 21:20:14 -07:00
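A sketch of the fallthrough mode this PR introduces: unsupported ops are left in the graph as-is instead of failing the export:

```python
import torch

torch.onnx.export(
    torch.nn.Linear(4, 4), torch.randn(1, 4), "linear_ft.onnx",
    operator_export_type=torch.onnx.OperatorExportTypes.ONNX_FALLTHROUGH,
)
```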
Lara Haidar
728c7dcea3 ONNX Update training ops and training amenable export API (#35567)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35567

Reviewed By: hl475

Differential Revision: D20715339

Pulled By: houseroad

fbshipit-source-id: ad88097e76b169035ab5814b769dc1bed54c6008
2020-03-29 23:14:25 -07:00
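A sketch of the training-amenable export path, assuming opset 12 (where Dropout and BatchNorm were updated): `TrainingMode.TRAINING` keeps such ops in their training form:

```python
import torch

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.Dropout(0.5))
torch.onnx.export(model, torch.randn(1, 4), "train.onnx",
                  opset_version=12,
                  training=torch.onnx.TrainingMode.TRAINING)
```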
Alban Desmaison
45e1be9762 Revert D19710370: [pytorch][PR] ONNX Update training ops and training amenable export API
Test Plan: revert-hammer

Differential Revision:
D19710370

Original commit changeset: e5e79d385529

fbshipit-source-id: d0114dc561a3415869805d3fbf43b92730bbcf54
2020-03-27 06:51:05 -07:00
Lara Haidar
025a0abe5a ONNX Update training ops and training amenable export API (#32950)
Summary:
- Update Dropout and Batchnorm in opset 12 : https://github.com/onnx/onnx/pull/2568
- Update api logic for exporting to ONNX training amenable models
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32950

Reviewed By: hl475

Differential Revision: D19710370

Pulled By: houseroad

fbshipit-source-id: e5e79d38552936966662c41d39ddf33be1ba3e35
2020-03-27 00:39:39 -07:00
BowenBao
0e548a76eb Upgrade exported ONNX IR version to 6 (#31025)
Summary:
Upgrade the IR version from 4 to 6; below is the change doc from ONNX. The upgrade should be backward compatible.

```
  // IR VERSION 5 published on March 18, 2019
  // - Add message TensorAnnotation.
  // - Add quantization annotation in GraphProto to map tensor with its scale and zero point quantization parameters.
  IR_VERSION_2019_3_18 = 0x0000000000000005;

  // IR VERSION 6 published on Sep 19, 2019
  // - Add support for sparse tensor constants stored in model.
  //   - Add message SparseTensorProto
  //   - Add sparse initializers
  IR_VERSION = 0x0000000000000006;
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31025

Reviewed By: hl475

Differential Revision: D18935444

Pulled By: houseroad

fbshipit-source-id: 9ba47f9657fa1a668db291cf04af07d5e8d73c21
2019-12-16 23:18:22 -08:00
Edward Yang
517c7c9861 Canonicalize all includes in PyTorch. (#14849)
Summary:
Anywhere we used #include "foo.h", we now say #include <foo.h>
Paths are adjusted to be rooted out of aten/src, torch/lib, or
the root level directory.

I modified CMakeLists.txt by hand to remove TH and THC from
the include paths.

I used the following script to do the canonicalization:

```
  import subprocess
  import re
  import os.path

  files = subprocess.check_output(['git', 'ls-files']).decode('utf-8').rstrip().split('\n')
  for fn in files:
      if not any(fn.endswith(suff) for suff in ['.cu', '.cpp', '.in', '.h', '.hpp', '.cu', '.cuh', '.cc']):
          continue
      if not any(fn.startswith(pref) for pref in ["aten/", "torch/"]):
          continue
      with open(fn, 'r') as f:
          c = f.read()
      def fmt(p):
          return "#include <{}>".format(p)
      def repl(m):
          p = m.group(1)
          if p in ["dlfcn.h", "unistd.h", "nvrtc.h", "cuda.h", "cuda_runtime.h", "cstdint", "cudnn.h", "Python.h", "cusparse.h", "cuda_runtime_api.h", "cuda_fp16.h", "cublas_v2.h", "stdint.h", "curand_kernel.h"]:
              return fmt(p)
          if any(p.startswith(pref) for pref in ["torch/csrc", "c10/", "ATen/", "caffe2/", "TH/", "THC/", "Eigen/", "gtest/", "zdl/", "gloo/", "onnx/", "miopen/"]):
              return fmt(p)
          for root in ["aten/src", "torch/lib", ""]:
              for bad_root in [os.path.dirname(fn), "aten/src/TH", "aten/src/THC", "torch/csrc"]:
                  new_p = os.path.relpath(os.path.join(bad_root, p), root)
                  if not new_p.startswith("../") and (os.path.exists(os.path.join(root, new_p)) or os.path.exists(os.path.join(root, new_p + ".in"))):
                      return fmt(new_p)
          print("ERROR: ", fn, p)
          return m.group(0)
      new_c = re.sub(r'#include "([^"]+)"', repl, c)
      if new_c != c:
          print(fn)
          with open(fn, 'w') as f:
              f.write(new_c)
```

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14849

Reviewed By: dzhulgakov

Differential Revision: D13363445

Pulled By: ezyang

fbshipit-source-id: 52361f878a672785f9306c9e9ab2513128092b68
2018-12-08 19:38:30 -08:00
Orion Reblitz-Richardson
0a6931cfee Only reference ONNX through onnx_pb.h (#11609)
Summary:
I think this is needed to land https://github.com/onnx/onnx/pull/1407 without CI errors.

cc mingzhe09088 houseroad
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11609

Reviewed By: houseroad

Differential Revision: D9803490

Pulled By: orionr

fbshipit-source-id: 26193f38ab0a2eef9ad7d0da9a0310dc40ef0f2d
2018-09-12 18:25:58 -07:00
Lu Fang
b23d59ce1a Make ONNX_ATEN_FALLBACK as internal default option
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/10629

Reviewed By: bddppq

Differential Revision: D9381106

fbshipit-source-id: 03d42c95d17a70a68fe0f38dad68f1793996dfce
2018-08-21 10:10:50 -07:00
Roy Li
f908b2b919 Use google protobuf in pytorch onnx import/export
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/8469

Reviewed By: houseroad

Differential Revision: D9102041

Pulled By: li-roy

fbshipit-source-id: 805c473745d181b71c7deebf0b9afd0f0849ba4f
2018-08-01 12:54:41 -07:00
Orion Reblitz-Richardson
5a7b4840d9 Move nanopb-generated ONNX to unique file name (#8773)
* Move nanopb-generated ONNX to unique file name

* fix other places
2018-06-22 09:51:56 -04:00
James Reed
04503962ff [ONNX] Add an ATen fallback pathway for ONNX export (#8273)
* ATen fallback for ONNX export

* Move to enum

* Fix model test

* Add comment

* Address comments

BC interface
2018-06-12 22:59:45 -07:00
James Reed
ef76e24f60 [JIT][script][ONNX] ScriptModule ONNX export + ONNX export for control flow nodes (#6608)
* ScriptModule ONNX export

* ScriptModule ONNX export

* Export for control flow nodes

* Add pretty-print capability for ONNX export testing

* Update tests and handling of mutliple GraphProto names

* Maybe bugfix?

* factor out code from export and pretty print
2018-04-19 23:45:03 -07:00
bddppq
c43c911662 Export onnx protobuf bindings to python (#6651)
* Export onnx protobuf bindings to python

* rename native onnx module to _onnx
2018-04-17 16:38:57 -07:00