Commit Graph

7 Commits

Author SHA1 Message Date
Jez Ng
4667e20b3f Delete a bunch of type-ignores (#113990)
* Replaced `ignore[import]` with mypy config file entries (see the sketch after this list)
* Removed a bunch of ignores around previously-fixed attr-defined /
  call-arg issues
* Fixed some invalid / undefined types; added a few more type-ignores to
  squelch the downstream errors this exposed
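
A minimal sketch of the first change, using a hypothetical untyped package `somepkg` (not a module touched by this PR): the inline ignore is dropped and the suppression moves into the mypy config.
```
# Before: per-import suppression in the source file
import somepkg  # type: ignore[import]

# After: a plain import; the suppression lives in the mypy config instead, e.g.
#   [mypy-somepkg.*]
#   ignore_missing_imports = True
import somepkg
```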

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113990
Approved by: https://github.com/eellison, https://github.com/Skylion007
ghstack dependencies: #113979
2023-11-18 02:48:38 +00:00
Aaron Gokaslan
cb856b08b2 [BE]: Attach cause to some exceptions and enable RUFF TRY200 (#111496)
Did some easy fixes surfaced by enabling TRY200. Most of these look like oversights rather than intentional omissions. The proper way to silence the rule intentionally is `raise ... from None`, which notes that you considered whether the exception should carry its cause and decided against it.
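
A minimal sketch of the pattern (not code from this PR), with a hypothetical `load_config` helper:
```
def load_config(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError as err:
        # TRY200-style fix: attach the original exception as the cause so the
        # traceback shows where the failure actually started.
        raise RuntimeError(f"could not load config from {path!r}") from err

# When suppressing the cause is deliberate, make that explicit instead:
#     raise RuntimeError("could not load config") from None
```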

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111496
Approved by: https://github.com/malfet
2023-10-19 21:56:36 +00:00
JackCaoG
08e49fe97a Make openxla and openxla_eval backends show up in list_backends (#107905)
The reason for keeping the non-aot (openxla_eval) backend is discussed in https://github.com/pytorch/xla/issues/5430#issuecomment-1683191662.
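
A minimal usage sketch, assuming torch_xla is installed so both backends are registered:
```
import torch._dynamo as dynamo

backends = dynamo.list_backends()
# After this change, both the aot ("openxla") and non-aot ("openxla_eval")
# entries should appear in the returned list.
assert "openxla" in backends
assert "openxla_eval" in backends
```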

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107905
Approved by: https://github.com/jansel
2023-08-25 21:52:17 +00:00
JackCaoG
139437bb84 Make Openxla dynamo backend take boxed input (#107260)
Fixes https://github.com/pytorch/xla/issues/5454

Also adds the inference (non-aot) backend back, since we see a speed regression when using the aot backend compared to the non-aot openxla backend. This is being tracked in https://github.com/pytorch/xla/issues/5430
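
A minimal sketch of the boxed calling convention, using the generic `aot_autograd` wrapper and `make_boxed_func` rather than the torch_xla implementation:
```
import torch
from torch._dynamo.backends.common import aot_autograd
from functorch.compile import make_boxed_func

def my_compiler(gm, example_inputs):
    # A boxed callable takes its inputs as a single list rather than as
    # positional arguments, which lets the caller drop references to them
    # early; make_boxed_func wraps gm.forward into that convention.
    return make_boxed_func(gm.forward)

my_backend = aot_autograd(fw_compiler=my_compiler)

@torch.compile(backend=my_backend)
def fn(x):
    return x.sin() + x.cos()

fn(torch.randn(8))
```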

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107260
Approved by: https://github.com/shunting314, https://github.com/jansel
2023-08-18 16:58:05 +00:00
JackCaoG
c9eb95cca4 Update XLA dynamo backend name (#106489)
This deprecates the old XLA dynamo backend and renames it to `openxla`.
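
A minimal usage sketch with the new name, assuming torch_xla is installed and an XLA device is available:
```
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
model = torch.nn.Linear(8, 8).to(device)

# "openxla" is the new backend name registered with dynamo.
compiled = torch.compile(model, backend="openxla")
out = compiled(torch.randn(4, 8, device=device))
```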

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106489
Approved by: https://github.com/jansel, https://github.com/shunting314
2023-08-03 20:00:37 +00:00
Jason Ansel
e071d72f3c Tag dynamo backends as debug/experimental (#93878)
Hides debug/experimental backends by default.

Before:
```
torch._dynamo.list_backends()
['aot_eager', 'aot_eager_decomp_partition', 'aot_torchxla_trace_once', 'aot_torchxla_trivial', 'aot_ts', 'aot_ts_nvfuser', 'cudagraphs', 'dynamo_accuracy_minifier_backend', 'dynamo_minifier_backend', 'eager', 'inductor', 'ipex', 'nvprims_aten', 'nvprims_nvfuser', 'onnxrt', 'tensorrt', 'torchxla_trace_once', 'torchxla_trivial', 'ts', 'tvm']
```

After:
```
torch._dynamo.list_backends()
['aot_ts_nvfuser', 'cudagraphs', 'inductor', 'ipex', 'nvprims_nvfuser', 'onnxrt', 'tensorrt', 'tvm']
```
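
Assuming `list_backends` takes an `exclude_tags` argument defaulting to the hidden tags (an assumption, not stated in this commit message), the full list should still be reachable:
```
import torch._dynamo as dynamo

# Default call hides backends tagged as debug/experimental.
print(dynamo.list_backends())

# Assumed signature: list_backends(exclude_tags=("debug", "experimental")).
# Passing an empty tuple would include the hidden backends as well.
print(dynamo.list_backends(exclude_tags=()))
```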

Fixes https://github.com/pytorch/pytorch/issues/93733

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93878
Approved by: https://github.com/voznesenskym
2023-02-04 00:50:51 +00:00
Jason Ansel
60e8c766b5 Refactor dynamo training backends (#93409)
This splits training.py into many files and moves them from `dynamo.optimizations.training` to `dynamo.backends.*`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93409
Approved by: https://github.com/ezyang
2023-02-03 03:07:15 +00:00