Ivan Yashchuk
0eea05b11e
Remove "prims_nvfuser" backend for TorchDynamo ( #88083 )
...
Removing "prims_nvfuser" backend according to the discussion in https://github.com/pytorch/torchdynamo/pull/1281#discussion_r979468355.
cc @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88083
Approved by: https://github.com/ezyang
2022-11-01 03:09:37 +00:00
Sherlock Huang
e271e823c7
Avoid calling logging.basicConfig ( #86959 )
...
Fixes https://github.com/pytorch/pytorch/issues/85952
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86959
Approved by: https://github.com/xwang233, https://github.com/davidberard98
2022-10-17 16:45:21 +00:00
Elias Ellison
8bd9fe3f49
Changes to prepare for fake tensors on in functorch by default ( #84432 )
...
Fixes some errors you run into in dynamo when turning on fake tensors. I'm waiting to flip the switch because I also need to get some fixes into dynamo and do benchmarking.
I could manually turn fake tensors off in functorch under dynamo and then turn them on here if requested, although the changes here are pretty minimal.
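For context, a fake tensor carries only metadata (shape, dtype, device) and propagates it through operations without allocating storage or doing real compute, which is what lets tracing stay cheap. A minimal pure-Python illustration of the idea (this is a conceptual sketch, not PyTorch's actual FakeTensor implementation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FakeTensor:
    """Metadata-only stand-in for a real tensor: no storage is allocated."""
    shape: tuple
    dtype: str

def fake_add(a: FakeTensor, b: FakeTensor) -> FakeTensor:
    """Propagate metadata for an elementwise add (same-shape case only)."""
    assert a.shape == b.shape and a.dtype == b.dtype
    return FakeTensor(a.shape, a.dtype)

def fake_matmul(a: FakeTensor, b: FakeTensor) -> FakeTensor:
    """Propagate metadata for a 2-D matrix multiply."""
    assert a.shape[1] == b.shape[0]
    return FakeTensor((a.shape[0], b.shape[1]), a.dtype)

x = FakeTensor((1024, 512), "float32")
w = FakeTensor((512, 256), "float32")
y = fake_matmul(x, w)  # shape inference without any FLOPs or memory
```

Running a whole model this way yields output shapes and dtypes for free, which is exactly what a tracer needs.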
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84432
Approved by: https://github.com/Chillee
2022-09-08 04:29:30 +00:00
Sergii Dymchenko
a0b3854548
Change seperate -> separate ( #83056 )
...
One instance was caught by Meta-internal "exact-word-misspell" linter in D38505529.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83056
Approved by: https://github.com/huydhn, https://github.com/seemethere
2022-08-09 23:11:34 +00:00
Sherlock Huang
dc3c1ade4b
Some fixes for FX pass with nvFuser backend ( #81911 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81911
Approved by: https://github.com/jjsjann123, https://github.com/IvanYashchuk, https://github.com/davidberard98
2022-07-22 19:49:33 +00:00
Edward Z. Yang
3c2c2cc947
cudagraphs dynamo backend ( #80566 )
...
This backend handles cases where the preexisting CUDA graphs
implementation in dynamo is unsound or has errors.
Requires this functorch bug fix: https://github.com/pytorch/functorch/pull/935
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80566
Approved by: https://github.com/ngimel , https://github.com/wconstab
2022-07-22 14:06:07 +00:00
Sherlock Huang
d625637c7c
Include aten.where.self in NvFuserOperatorSupport ( #81436 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81436
Approved by: https://github.com/davidberard98
2022-07-16 03:29:27 +00:00
Sherlock Huang
6b280e880a
Update NvFuserOperatorSupport ( #81311 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81311
Approved by: https://github.com/davidberard98
2022-07-12 21:19:37 +00:00
Sherlock Huang
fc10a63727
Prims+NvFuser Backend Prototype ( #80591 )
...
This PR integrates FX graph partitioner + Aten2Prims DecompositionInterpreter + Prims' TraceExecutor + naive caches for nvFuser.
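The "naive caches" part of this pipeline amounts to keying a compiled artifact on a stable description of the graph plus input metadata, so repeated calls skip recompilation. A hedged sketch of that general pattern (all names here are illustrative, not the PR's actual code; `eval` of a lambda stands in for an expensive nvFuser-style compile):

```python
# Naive compile cache: key on (graph description, input metadata) and
# reuse the compiled callable on subsequent calls.
_cache: dict = {}
compile_count = 0  # tracks how often the expensive path runs

def compile_graph(graph_src: str):
    """Stand-in for an expensive compilation step (hypothetical)."""
    global compile_count
    compile_count += 1
    return eval("lambda x: " + graph_src)

def cached_execute(graph_src: str, x: float):
    key = (graph_src, type(x).__name__)  # naive cache key
    if key not in _cache:
        _cache[key] = compile_graph(graph_src)
    return _cache[key](x)

cached_execute("x * 2 + 1", 3.0)  # first call compiles
cached_execute("x * 2 + 1", 4.0)  # cache hit: no recompilation
```

A real cache would also need to key on shapes, dtypes, and devices of the inputs, since a fused kernel is typically specialized to them.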
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80591
Approved by: https://github.com/jjsjann123, https://github.com/ezyang
2022-07-08 19:53:03 +00:00
Sherlock Huang
752c06e0e1
FX graph partitioner and fuser ( #79439 )
...
This PR introduces two components.
CapabilityBasedPartitioner for FX graphs: given a list of supported operators, this partitioner tries to form the largest subgraphs that contain only supported ops.
Fuser utility: given a list of nodes in an FX graph, it lifts them into a sub-GraphModule within the original graph.
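The core of capability-based partitioning can be sketched in a few lines: walk the nodes in topological order and greedily group maximal runs of supported ops, with unsupported ops acting as partition boundaries. This is a deliberate simplification of what `CapabilityBasedPartitioner` does (the real FX pass also checks data dependencies so that fused subgraphs cannot form cycles):

```python
def partition(nodes, supported):
    """Greedily group maximal contiguous runs of supported ops.

    nodes: op names in topological order; supported: set of op names.
    Returns a list of partitions (lists of node names); unsupported
    nodes act as partition boundaries. Simplified illustration only.
    """
    partitions, current = [], []
    for node in nodes:
        if node in supported:
            current.append(node)
        elif current:
            partitions.append(current)
            current = []
    if current:
        partitions.append(current)
    return partitions

graph = ["add", "mul", "custom_op", "relu", "sigmoid", "loop", "tanh"]
fusible = {"add", "mul", "relu", "sigmoid", "tanh"}
print(partition(graph, fusible))
# -> [['add', 'mul'], ['relu', 'sigmoid'], ['tanh']]
```

Each resulting partition is then a candidate for the fuser utility above, which would lift it into its own sub-GraphModule for a backend such as nvFuser to compile.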
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79439
Approved by: https://github.com/jjsjann123, https://github.com/davidberard98
2022-06-24 18:49:37 +00:00