Commit Graph

103 Commits

Author SHA1 Message Date
Maggie Moss
84fe848503 Fix pyrefly error syntax (2/n) (#166448)
Ensures pyrefly ignores silence only one error code.

After this, only ~40 files are left to clean up.

pyrefly check
lintrunner
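
A minimal illustration of the enforced style, assuming pyrefly's bracketed `# pyrefly: ignore[code]` comment syntax (the error-code name below is illustrative):

```python
# Before: a bare ignore can silence any pyrefly error reported on the line
x: int = "oops"  # pyrefly: ignore

# After: the suppression names exactly one error code
y: int = "oops"  # pyrefly: ignore[bad-assignment]
```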

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166448
Approved by: https://github.com/Skylion007
2025-10-29 00:36:40 +00:00
mansiag05
850ba8c96d [Code Clean] Clean asserts in torch/autograd. (#165627)
Replaces 78 assert statements across 10 files in torch.autograd with explicit if-checks raising AssertionError, preventing the checks from being disabled by the Python -O flag. This ensures error checking remains active in optimized builds.

Partially fixes #164878
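
A minimal sketch of the transformation (function and message are hypothetical, not taken from the PR):

```python
# Before: stripped entirely when running under `python -O`
def check_grad(grad):
    assert grad is not None, "grad must not be None"

# After: the check survives optimized builds
def check_grad_explicit(grad):
    if grad is None:
        raise AssertionError("grad must not be None")
```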

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165627
Approved by: https://github.com/albanD
2025-10-20 23:03:47 +00:00
Maggie Moss
f414aa8e0d Add pyrefly suppressions (3/n) (#164588)
Adds suppressions so that pyrefly will typecheck clean: https://github.com/pytorch/pytorch/issues/163283

Test plan:
dmypy restart && python3 scripts/lintrunner.py -a
pyrefly check

step 1: uncomment lines in the pyrefly.toml file
step 2: run pyrefly check
step 3: add suppressions, clean up unused suppressions
before: https://gist.github.com/maggiemoss/bb31574ac8a59893c9cf52189e67bb2d

after:

 0 errors (1,970 ignored)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164588
Approved by: https://github.com/oulgen
2025-10-03 22:03:03 +00:00
Yu, Guangye
ad81eeb7c7 Refactor to use torch.accelerator.device_index instead of torch.cuda.device for generic device context manager (#148880)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/148880
Approved by: https://github.com/EikanWang, https://github.com/albanD
ghstack dependencies: #148864
2025-04-25 09:45:25 +00:00
Xuehai Pan
f3fce597e9 [BE][Easy][17/19] enforce style for empty lines in import segments in torch/[a-c]*/ and torch/[e-n]*/ (#129769)
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501. Most changes are auto-generated by the linter.

You can review these PRs via:

```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```
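
For reference, a sketch of the import-segment style being enforced (module choices are illustrative): standard-library, third-party, and first-party imports separated by exactly one blank line.

```python
import os
import sys

import numpy as np

import torch
from torch.autograd import Function
```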

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129769
Approved by: https://github.com/ezyang
2024-08-04 10:24:09 +00:00
Aaron Orenstein
62bcdc0ac9 Flip default value for mypy disallow_untyped_defs [4/11] (#127841)
See #127836 for details.
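
Roughly, flipping `disallow_untyped_defs` to true makes mypy reject functions without annotations (a hypothetical example):

```python
def scale(x, factor):  # error: Function is missing a type annotation
    return x * factor

def scale_typed(x: float, factor: float) -> float:  # OK
    return x * factor
```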

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127841
Approved by: https://github.com/oulgen
2024-06-08 18:36:48 +00:00
Xuehai Pan
67ef2683d9 [BE] wrap deprecated function/class with typing_extensions.deprecated (#127689)
Use `typing_extensions.deprecated` for deprecation annotation if possible. Otherwise, add `category=FutureWarning` to `warnings.warn("message")` if the category is missing.

Note that only warnings whose messages contain `[Dd]eprecat(ed|ion)` are updated in this PR.

Resolves #126888

- #126888

This PR is split from PR #126898.

- #126898
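
A sketch of the two patterns described above (function names are illustrative, not from the PR):

```python
import warnings

from typing_extensions import deprecated

@deprecated("Use `new_api` instead.", category=FutureWarning)
def old_api() -> None: ...

def legacy_api() -> None:
    # Where the decorator doesn't apply, the warning category is made explicit:
    warnings.warn("legacy_api is deprecated.", category=FutureWarning)
```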

------

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127689
Approved by: https://github.com/Skylion007
2024-06-02 12:30:43 +00:00
PyTorch MergeBot
033e733021 Revert "[BE] wrap deprecated function/class with typing_extensions.deprecated (#126898)"
This reverts commit 749a132fb0.

Reverted https://github.com/pytorch/pytorch/pull/126898 on behalf of https://github.com/fbgheith due to switching typing-extensions from 4.3.0 to 4.9.0 causing an internal failure ([comment](https://github.com/pytorch/pytorch/pull/126898#issuecomment-2142884456))
2024-05-31 19:47:24 +00:00
Xuehai Pan
749a132fb0 [BE] wrap deprecated function/class with typing_extensions.deprecated (#126898)
Use `typing_extensions.deprecated` for deprecation annotation if possible. Otherwise, add `category=FutureWarning` to `warnings.warn("message")` if the category is missing.

Note that only warnings whose messages contain `[Dd]eprecat(ed|ion)` are updated in this PR.

UPDATE: Use `FutureWarning` instead of `DeprecationWarning`.

Resolves #126888

- #126888

Pull Request resolved: https://github.com/pytorch/pytorch/pull/126898
Approved by: https://github.com/albanD
2024-05-29 12:09:27 +00:00
Aaron Gokaslan
6de28e92d2 [BE]: Apply FURB118 (prev): replaces unnecessary lambdas with operator. (#116027)
This replaces a bunch of unnecessary lambdas with the operator package. This is semantically equivalent, but the operator package is faster and arguably more readable. When the FURB rules are taken out of preview, I will enable them as a ruff check.
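
For example (a sketch, not code from the PR):

```python
import operator

# Before: ad-hoc lambdas
add = lambda x, y: x + y
first = lambda seq: seq[0]

# After: semantically equivalent, faster, arguably clearer
add = operator.add
first = operator.itemgetter(0)
```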

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116027
Approved by: https://github.com/malfet
2023-12-20 19:35:08 +00:00
Edward Z. Yang
3bf922a6ce Apply UFMT to low traffic torch modules (#106249)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106249
Approved by: https://github.com/Skylion007
2023-07-29 23:37:30 +00:00
shibo19
2961ea80f5 Deprecate "Type" and support more devices for save_on_cpu (#103245)
Fixes #ISSUE_NUMBER
1. The class named "Type" is no longer used anywhere, so I added a warning message saying it will be removed in the future.
2. Added an arg (default "cuda") to save_on_cpu so that it can support more device types (like privateuse1).
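
A minimal usage sketch of save_on_cpu (the new device argument is omitted here; its exact keyword name comes from the PR):

```python
import torch
from torch.autograd.graph import save_on_cpu

x = torch.randn(4, 4, requires_grad=True)
with save_on_cpu(pin_memory=False):
    # tensors saved for backward inside this block are packed to CPU
    y = (x * x).sum()
y.backward()
```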
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103245
Approved by: https://github.com/soulitzer
2023-06-09 05:05:01 +00:00
Zafar
59052e39b8 [quant] qtensor resize (#36442)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36442

Test Plan: Imported from OSS

Differential Revision: D20984080

Pulled By: z-a-f

fbshipit-source-id: 7fcf24bd2f92f038b670f510118b012d8c7acc74
2020-04-25 15:52:35 -07:00
Sam Gross
895aebac08
Use Variable instead of Tensor in Function.forward (#4786)
The Tensor and Variable classes are being merged.
autograd.Function.forward is now called on Variables, but with "no-grad"
mode (torch.no_grad()) enabled.

One benefit is that we no longer have to explicitly track shared
storages.
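
A modern-API sketch of the behavior this describes: inside forward, grad mode is off, so operations on the inputs are not recorded:

```python
import torch

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x * x  # runs with grad recording disabled

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return 2 * x * grad_out
```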
2018-02-06 17:24:27 -05:00
gchanan
260a246192
Move repeat autograd to C++. (#4885) 2018-01-29 15:09:59 -05:00
Sam Gross
720c7b1e2c
Move repeat to torch/_utils.py (#4712)
This moves the implementation of repeat to _utils so that the autograd
function can call it directly instead of relying on forward being called
on tensors.

This also removes _range, which was previously necessary because we
shadowed the built-in range() function.
2018-01-17 17:30:43 -05:00
gchanan
e426020c87
Move prod, cumprod backwards to C++ (#4394)
* Add view_as as a native_function.

* Move prod, cumprod backwards to C++.

* Update for review requests.

* Review comments.

* Reorder slice parameters so dim is first.

* Update test_slice.

* Update test_autograd.

* Fix flake8.
2018-01-03 16:27:50 -05:00
Sam Gross
bec0349280 Implement Variable.cuda and Variable.type using ATen (#4139)
* Implement Variable.cuda using ATen

This adds an optional async flag to Tensor::copy_, which attempts to do
a non-blocking copy if one of the tensors is in pinned memory and
the other is a CUDA tensor.

* Perform cross-device copy in CopyBackwards

Also call torch.cuda._lazy_init() from Variable.cuda()

* Implement Variable.type via ATen

* Changes from review:

 - remove copy_out
 - remove unnecessary include
 - fix default device for .cuda()

* Combine if statements in dispatch_type
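
A sketch of the copy behavior described above; the flag was spelled `async` at the time and is `non_blocking` in modern PyTorch:

```python
import torch

if torch.cuda.is_available():
    cpu_t = torch.randn(1024, pin_memory=True)
    # pinned CPU source + CUDA destination allows an asynchronous copy
    gpu_t = cpu_t.cuda(non_blocking=True)
```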
2017-12-18 01:54:35 -05:00
Sam Gross
ed640010ce
Delete unused autograd functions (#3856) 2017-11-24 14:31:11 -05:00
Sam Gross
4518793aa2
Implement indexing in ATen (#3725)
Implements basic and advanced indexing using ATen tensors/variables.
Basic indexing is translated at the Python-binding level
(python_variable_indexing.cpp) to slice/squeeze/unsqueeze/select calls.
Advanced indexing is implemented in ATen in terms of take() and put()
calls.
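
Roughly, the split looks like this (a sketch of the semantics, not the implementation):

```python
import torch

t = torch.arange(12).reshape(3, 4)

# Basic indexing: lowered to slice/select-style view ops
row_slice = t[1, :2]          # view of row 1, first two columns

# Advanced indexing: index tensors, implemented via take()/put()
idx = torch.tensor([0, 2])
gathered = t[idx]             # rows 0 and 2, returned as a copy
```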
2017-11-21 13:19:00 -05:00
Sam Gross
fde355f7d4
Allow in-place operations on views (#3384)

Adds VariableViewImpl, a subclass of VariableImpl which has a pointer to
the base Variable on which it is a view. In-place operations on views
change the grad_fn of the base.

Note that in-place operations only work on views that are the first output of the function that created them. All C++/ATen-implemented functions have this behavior, but it's possible to write Python-implemented autograd functions that do not. In-place operations on these views will raise an exception.

Fixes #3313
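
A small sketch of the resulting semantics:

```python
import torch

base = torch.randn(4, requires_grad=True).clone()  # non-leaf, so in-place is allowed
view = base[:2]       # shares storage with `base`
view.mul_(2)          # in-place op on the view...
print(base.grad_fn)   # ...also rewires the grad_fn of the base
```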
2017-11-06 18:19:56 -05:00
Sam Gross
48fe5d4622
Move select and permute to ATen/C++ (#3421)
2017-11-02 15:17:36 -04:00
Edward Z. Yang
8fbe003d4e Miscellaneous ONNX fixes and behavior changes.
- Deleted Addmm/Concat Function class, as this is now native ATen operator

- Resurrected ONNX operator for Concat (now called 'cat')

- Add a "fake" Expand ONNX operator, which we now do the optimization on;
  this helps prevent us from emitting a warning that 'expand' is not supported.
  We still fail if any of these Expand operators make it to the final model,
  until we actually formalize Expand in ONNX.  This also simplifies the
  fuseBroadcast code, because single-return ONNX nodes don't get select nodes.

- New error reporting strategy.  If we fail to export an operator because of
  something, we emit a warning, but otherwise keep going.  At the very end,
  in export.cpp, we now check if there are any ATen operators left over.  If
  there are, we bug out.  This assumes that ATen is lower case and ONNX is upper
  case.  You're now supposed to 'return _unimplemented(msg)' in these cases.

- New toString() method on Graph, for getting the string graph (useful for
  slapping it into error messages.)

- Some of the legacy symbolics (still in Python symbolic method of Function
  subclass) have been cleaned up for clarity.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-10-29 23:50:34 -04:00
bddppq
ac8f56656d Adapt ONNX Slice op changes (#3316) 2017-10-28 00:03:29 -04:00
James Reed
869bdeb936 Symbolic implementation of Index supporting tuple of slices. (#3294) 2017-10-27 02:39:38 +05:30
Edward Z. Yang
9989bb1a43 Export index constants as long, not int (onnx-caffe2 needs it.) (#3274)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-10-25 09:50:33 +02:00
Gregory Chanan
4b1e85d266 Remove split/chunk python autograd. 2017-10-24 19:33:37 -04:00
Lu Fang
5691b0b8d2 Fix the Slice changes in ONNX (#3216) 2017-10-24 14:12:54 -04:00
Edward Z. Yang
53fe804322 Make ONNX work with new C++ autograd world.
The general strategy is there is a new module, torch.onnx.symbolic, which
contains a function for every ATen method name with the ONNX translation.
While implementing this, I took the opportunity to expunge all references
of 'g' from the public API; instead, it is managed by a global variable in
torch.onnx which tracks the "current graph".
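
A hypothetical sketch of a function in that style: one per ATen op name, emitting ONNX nodes onto the current graph `g`:

```python
def add(g, self, other):
    # translate ATen `add` into an ONNX Add node on graph `g`
    return g.op("Add", self, other)
```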

Other changes:

- If you pass a Tensor to op as an argument, it will now automatically be
  converted into a Constant ONNX node, which removes the need to build
  these Constant nodes by hand.

- Rename value to other, wherever there is both a Scalar and Tensor overload.
  This way, keyword dispatch can work uniformly in both cases.

- Deleted any autograd Function classes that both had a symbolic and were ported
  to the new C++ autograd implementation.  There may still be some straggling
  classes that didn't have symbolic.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-10-20 15:38:01 -04:00
SsnL
fce3ed19e5 Change device_id to device in python land (#3133)
* change device_id to device in python land

* cuda/random.py
2017-10-17 00:54:26 +02:00
Junjie Bai
e4701e63f6 Fix exporting Reshape with single torch.Size argument 2017-10-02 23:29:49 +02:00
Trevor Killeen
ad414908d7 Advanced Indexing with variables for autograd (#2590) 2017-09-20 14:50:07 -04:00
Adam Paszke
b66d90c84f Add a pass to remove all non-standard ONNX nodes before export (#225) 2017-09-19 10:53:32 -04:00
Adam Paszke
28828e033f Make certain functions traceable 2017-09-19 10:53:32 -04:00
Soumith Chintala
462f95ed6d fix bug in autograd type() for non-default GPU input 2017-09-13 15:33:37 -04:00
Zach DeVito
3c61b59fd4 codemod primspec -> symbol, PrimSpec -> Symbolic 2017-09-06 13:45:39 -04:00
Zach DeVito
6d8d5bab4c Codemod Toffee -> ONNX, toffee -> onnx. Change file names to match 2017-09-06 13:45:39 -04:00
Edward Z. Yang
4fc54af010 Code review comments.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
Lu Fang
f1e4de9a63 Add primspec for Sub, Index, Chunk, and Embedding 2017-09-05 17:48:55 -04:00
Edward Z. Yang
394ff072eb Update to latest ToffeeIR operator schema.
- Conv no longer supports bias, so we create an explicit broadcasted
  addition afterwards.  There is one minor problem, however, which is that
  ConvTranspose in Caffe2 has mandatory bias.  So there's a hack.
  See Note [Caffe2ConvTranspose] for the details.
- Squeeze: dims -> axes
- Transpose: axes -> perm
- Reshape lost its extra output (yay!)

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
Zach DeVito
52e693022a helper methods appendNewNode and NewNode for python Graph API
uses suffixes to disambiguate attribute types
2017-09-05 17:48:55 -04:00
Edward Z. Yang
5c82aefa24 Fix bug in Transpose export.
This is a case of two wrongs making a right. There was a pair of
related bugs:

- We incorrectly translated Transpose as if it were a Permute;
  but Torch transpose actually is a *swap* between dimensions.

- Why didn't we ever notice it?  In all of our tests, a transpose
  was *solely* done to get a weight matrix into the correct form.
  But Caffe2's FC operator *implicitly* does a transpose on
  the weight matrix.

This commit fixes both of these problems.
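
The distinction at the heart of the bug, sketched:

```python
import torch

t = torch.randn(2, 3, 4)
swapped = t.transpose(0, 2)    # shape (4, 3, 2): swaps exactly two dims
permuted = t.permute(1, 2, 0)  # shape (3, 4, 2): reorders all dims
```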

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
Edward Z. Yang
b5833551f3 Documentation, and inplace support.
This adds the PyTorch API user documentation for Toffee.
To make the example work, I also converted all "inplace"
ops to export out-of-place in Toffee.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
Edward Z. Yang
57eb8bd288 Frontend refactor, and some documentation.
- BC BREAKING: export now also takes a mandatory file-ish argument, specifying
  the file to export the protobuf to.  I rewrote the tests to use BytesIO to
  get out the string so they could parse it again.

- BC BREAKING: export no longer returns the tensors that were computed.  To
  get these, use the internal _export function.

- Multiple inputs to models are now supported by passing a tuple to input.
  (Old API of a single Variable still works.)

- Keyword arguments to models are now supported via kwargs keyword arg.

- Renamed embed_params to export_params, and it now defaults to True.

- Toffee tests now live in their own test_toffee.py file.  I had to
  rename a pile of expect files for this.

- Removed defunct torch.toffee imports from autograd to solve module import
  cycle.

- Helper function _with_file_like to abstract over opening file-ish arguments,
  taken from torch.save()
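
A sketch of the reworked frontend under these assumptions (shown with the modern torch.onnx.export spelling; the Toffee-era name differed):

```python
import io

import torch

model = torch.nn.Linear(3, 2)
x = torch.randn(1, 3)
buf = io.BytesIO()  # any file-like object satisfies the file-ish argument
torch.onnx.export(model, (x,), buf, export_params=True)
```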

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
Edward Z. Yang
1f77d482d5 Don't insert Transpose if it is no-op.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
Priya Goyal
1e0171f436 Super resolution network (#148) 2017-09-05 17:48:55 -04:00
Zach DeVito
dc6378d891 merge fixes for Squeeze and ConvTranspose 2017-09-05 17:48:55 -04:00
Zach DeVito
35bddb6b7e pr feedback 2017-09-05 17:48:55 -04:00
Zach DeVito
c9f7f2eff4 Change pipeline for exporting to toffeeIR
previously:
  PythonOp/CppOp Graph -> ToffeeIR, primspecs worked with protobufs
now:
  PythonOp/CppOp --ToToffeeIR--> jit::Graph of in-memory ToffeeIR -> protobufs of ToffeeIR

This commit lets primspec functions work directly with JIT IR nodes,
which makes it possible to do a lot more stuff in those functions.
2017-09-05 17:48:55 -04:00
Priya Goyal
91dcf2938a Miscellaneous fixes needed to make caffe2 E2E 2017-09-05 17:48:55 -04:00