Commit Graph

185 Commits

Edward Z. Yang
3bf922a6ce Apply UFMT to low traffic torch modules (#106249)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106249
Approved by: https://github.com/Skylion007
2023-07-29 23:37:30 +00:00
Justin Chu
4cc1745b13 [BE] f-stringify torch/ and scripts (#105538)
This PR is a follow up on the pyupgrade series to convert more strings to use f-strings using `flynt`.

- https://docs.python.org/3/reference/lexical_analysis.html#f-strings
- https://pypi.org/project/flynt/

Command used:

```
flynt torch/ -ll 120
flynt scripts/ -ll 120
flynt tools/ -ll 120
```

and excluded `collect_env.py`
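For illustration (not part of the diff), this is the kind of rewrite flynt performs; the `-ll 120` flag caps the length of rewritten lines at 120 characters:

```python
name, count = "torch", 3

# Before: the older formatting styles flynt targets
old_style = "{} has {} commits".format(name, count)
percent_style = "%s has %d commits" % (name, count)

# After: the equivalent f-string flynt emits
fstring = f"{name} has {count} commits"
```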

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105538
Approved by: https://github.com/ezyang, https://github.com/malfet
2023-07-21 19:35:24 +00:00
shibo19
2961ea80f5 Deprecate "Type" and support more devices for save_on_cpu (#103245)
Fixes #ISSUE_NUMBER
1. The class named "Type" is no longer used anywhere, so I add a warning message ahead of removing it in the future.
2. Add an arg (default "cuda") to save_on_cpu so that it can support more device types (like privateuse1).
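In today's API this hook lives at `torch.autograd.graph.save_on_cpu`; a minimal sketch of what it does (illustrative, not taken from the PR):

```python
import torch
from torch.autograd.graph import save_on_cpu

a = torch.randn(8, requires_grad=True)
with save_on_cpu():
    # Tensors saved for backward are offloaded to CPU and copied back
    # on demand; this PR generalizes which device type they return to.
    loss = (a * a).sum()
loss.backward()
```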
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103245
Approved by: https://github.com/soulitzer
2023-06-09 05:05:01 +00:00
Sam Estep
4753100a3b Un-ignore F403 in .flake8 (#55838)
Summary:
Generally wildcard imports are bad for the reasons described here: https://www.flake8rules.com/rules/F403.html

This PR replaces wildcard imports with an explicit list of imported items where possible, and adds a `# noqa: F403` comment in the other cases (mostly re-exports in `__init__.py` files).

This is a prerequisite for https://github.com/pytorch/pytorch/issues/55816, because currently [`tools/codegen/dest/register_dispatch_key.py` simply fails if you sort its imports](https://github.com/pytorch/pytorch/actions/runs/742505908).
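A sketch of the replacement pattern (module names here are illustrative, not from the PR):

```python
# Before: wildcard import, flagged by F403 because flake8 cannot tell
# which names it actually brings into scope:
#   from os.path import *

# After: an explicit list of the names the module uses:
from os.path import basename, join

# Intentional re-exports in __init__.py files keep the wildcard,
# annotated instead:
#   from ._internal import *  # noqa: F403

path = join("torch", "utils")
```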

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55838

Test Plan: CI. You can also run `flake8` locally.

Reviewed By: jbschlosser

Differential Revision: D27724232

Pulled By: samestep

fbshipit-source-id: 269fb09cb4168f8a51fd65bfaacc6cda7fb87c34
2021-04-13 09:24:07 -07:00
Zafar
59052e39b8 [quant] qtensor resize (#36442)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36442

Test Plan: Imported from OSS

Differential Revision: D20984080

Pulled By: z-a-f

fbshipit-source-id: 7fcf24bd2f92f038b670f510118b012d8c7acc74
2020-04-25 15:52:35 -07:00
Negin Raoof
ebc216a076 Opset 11 updates (#28225)
Summary:
This PR contains:
1- pad updates for opset11 symbolic
2- Updated avg_pool for opset11
3- TopK updates for opset 11
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28225

Reviewed By: hl475

Differential Revision: D18282928

Pulled By: houseroad

fbshipit-source-id: aff2cabca9a155a9b475e35fed69a678544d6669
2019-11-04 12:16:12 -08:00
なるみ
d83389d327 Ignore F401 in all __init__.py without putting noqa (#25823)
Summary:
By adding `per-file-ignores = __init__.py: F401` into `.flake8` with `flake8>=3.7`, we can ignore F401 in all `__init__.py` files without putting `# noqa: F401` line by line.

http://flake8.pycqa.org/en/latest/user/options.html?highlight=per-file-ignores#cmdoption-flake8-per-file-ignores
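The resulting `.flake8` fragment looks like this (a sketch of the mechanism, not the repository's full config):

```
[flake8]
per-file-ignores =
    __init__.py: F401
```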
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25823

Differential Revision: D17252182

Pulled By: soumith

fbshipit-source-id: 87b174075b79e4078953a7521bd1a8f82405646b
2019-10-23 15:28:13 -07:00
Edward Yang
173f224570 Turn on F401: Unused import warning. (#18598)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18598
ghimport-source-id: c74597e5e7437e94a43c163cee0639b20d0d0c6a

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18598 Turn on F401: Unused import warning.**

This was requested by someone at Facebook; this lint is turned
on for Facebook by default.  "Sure, why not."

I had to noqa a number of imports in __init__.  Hypothetically
we're supposed to use __all__ in this case, but I was too lazy
to fix it.  Left for future work.

Be careful!  flake8-2 and flake8-3 behave differently with
respect to import resolution for # type: comments.  flake8-3 will
report an import unused; flake8-2 will not.  For now, I just
noqa'd all these sites.

All the changes were done by hand.
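An illustrative case of the `# type:` comment issue (not from the commit): the import below is referenced only inside a type comment, so flake8-3 reports it unused and it needs a `noqa`:

```python
import typing  # noqa: F401  (used only in the "# type:" comment below)

def head(xs):
    # type: (typing.List[int]) -> int
    return xs[0]
```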

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14687478

fbshipit-source-id: 30d532381e914091aadfa0d2a5a89404819663e3
2019-03-30 09:01:17 -07:00
Tongzhou Wang
1c01eabd3c
Codemod to update our codebase to 0.4 standard (#6641)
* Codemod to update our codebase to 0.4 standard

* Update some of the test scripts

* remove Variable in test_clip_grad_value

* fix _symbolic_override_wrapper_maker
2018-04-17 22:06:54 -04:00
Sam Gross
895aebac08
Use Variable instead of Tensor in Function.forward (#4786)
The Tensor and Variable classes are being merged.
autograd.Function.forward is now called on Variables, but with "no-grad"
mode (torch.no_grad()) enabled.

One benefit is that we no longer have to explicitly track shared
storages.
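A minimal sketch of the resulting contract, written in the modern staticmethod-style API rather than the 2018 one: inside `forward`, grad recording is disabled, so anything `backward` needs must be saved explicitly:

```python
import torch

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Runs under the "no-grad" mode described above: these ops are
        # not traced, so we stash what backward will need.
        ctx.save_for_backward(x)
        return x * x

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return 2 * x * grad_out

a = torch.tensor([3.0], requires_grad=True)
Square.apply(a).sum().backward()
```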
2018-02-06 17:24:27 -05:00
gchanan
260a246192
Move repeat autograd to C++. (#4885) 2018-01-29 15:09:59 -05:00
gchanan
1fee7cd626 Delete some dead expand code. (#4755) 2018-01-19 22:27:17 -05:00
Sam Gross
a8bdce38fe
Replace PowConstant (#4711) 2018-01-17 17:30:56 -05:00
Sam Gross
720c7b1e2c
Move repeat to torch/_utils.py (#4712)
This moves the implementation of repeat to _utils so that the autograd
function can call it directly instead of relying on forward being called
on tensors.

This also removes _range, which was previously necessary because we
shadowed the built-in range() function.
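For context, the semantics of the `repeat` being moved (an illustration, not code from the PR):

```python
import torch

x = torch.tensor([1, 2, 3])
y = x.repeat(2, 2)   # tile the 1-D tensor into a (2, 6) result
```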
2018-01-17 17:30:43 -05:00
gchanan
e426020c87
Move prod, cumprod backwards to C++ (#4394)
* Add view_as as a native_function.

* Move prod, cumprod backwards to C++.

* Update for review requests.

* Review comments.

* Reorder slice parameters so dim is first.

* Update test_slice.

* Update test_autograd.

* Fix flake8.
2018-01-03 16:27:50 -05:00
Sam Gross
bec0349280 Implement Variable.cuda and Variable.type using ATen (#4139)
* Implement Variable.cuda using ATen

This adds an optional async flag to Tensor::copy_, which attempts to do
a non-blocking copy if the one of the tensors is in pinned memory and
the other is a CUDA tensor.

* Perform cross-device copy in CopyBackwards

Also call torch.cuda._lazy_init() from Variable.cuda()

* Implement Variable.type via ATen

* Changes from review:

 - remove copy_out
 - remove unnecessary include
 - fix default device for .cuda()

* Combine if statements in dispatch_type
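A sketch of the usage this enables (hypothetical helper; the `async` flag described above later became `non_blocking`). The copy can overlap with other work only when the source is in pinned host memory and the target is a CUDA tensor:

```python
import torch

def to_device(cpu_tensor, device):
    if torch.cuda.is_available() and device.type == "cuda":
        # Pinned (page-locked) host memory allows a non-blocking copy.
        return cpu_tensor.pin_memory().to(device, non_blocking=True)
    return cpu_tensor.to(device)  # plain synchronous copy otherwise

x = torch.randn(4, 4)
y = to_device(x, torch.device("cuda" if torch.cuda.is_available() else "cpu"))
```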
2017-12-18 01:54:35 -05:00
Dmytro Dzhulgakov
709fcfda8a Now actually fix padding (the tests are added in onnx-pytorch) (#3893)
* Now actually fix padding (the tests are added in onnx-pytorch)

* fix test
2017-11-27 23:39:48 -05:00
Sam Gross
ed640010ce
Delete unused autograd functions (#3856) 2017-11-24 14:31:11 -05:00
Sam Gross
4518793aa2
Implement indexing in ATen (#3725)
Implements basic and advanced indexing using ATen tensors/variables.
Basic indexing is translated at the Python-binding level
(python_variable_indexing.cpp) to slice/squeeze/unsqueeze/select calls.
Advanced indexing is implemented in ATen in terms of take() and put()
calls.
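A sketch of the two indexing flavors this commit distinguishes (an illustration, not from the PR):

```python
import torch

t = torch.arange(12).reshape(3, 4)

# Basic indexing: ints and slices, lowered to slice/select-style calls
basic = t[1:, 2]                # column 2 of rows 1..2

# Advanced indexing: index tensors, lowered to take()/put()-style calls
idx = torch.tensor([0, 2])
advanced = t[idx, idx]          # elements (0, 0) and (2, 2)
```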
2017-11-21 13:19:00 -05:00
Adam Paszke
cf407213f9 Clean up stochastic function related dead code (#3782) 2017-11-20 12:44:45 -05:00
Fritz Obermeyer
1f64c2ef91 Rename pyro.distributions.Multinomial -> .Categorical (#3766)
* Rename distributions.Multinomial -> distributions.Categorical

* Rename Multinomial -> Categorical

* Update docs

* Update variable.py

* Update distributions.py

* Update variable.py
2017-11-18 16:10:07 -05:00
Sam Gross
2453bc2876
Implement clamp using ATen (#3739) 2017-11-17 13:12:36 -05:00
Sam Gross
fde355f7d4
Allow in-place operations on views (#3384)
Allow in-place operations on views

Adds VariableViewImpl, a subclass of VariableImpl which has a pointer to
the base Variable on which it is a view. In-place operations on views
change the grad_fn of the base.

Note that in-place operations only work on views that are the first output of the function that created them. All C++/ATen implemented functions have this behavior, but it's possible to write Python-implemented autograd functions that do not. In-place operations on these view will raise an exception.

Fixes #3313
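A small sketch of the behavior (not from the PR): an in-place op on a view rewrites the graph of its base, so gradients flow correctly through the mutation:

```python
import torch

x = torch.ones(4, requires_grad=True)
y = x * 1        # non-leaf base tensor
v = y[:2]        # v is a view into y

v.mul_(2)        # in-place op on the view updates y's grad_fn too
y.sum().backward()

# Only the first two entries were scaled, and the gradient reflects it.
```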
2017-11-06 18:19:56 -05:00
Sam Gross
48fe5d4622
Move select and permute to ATen/C++ (#3421)
Move select and permute to ATen/C++
2017-11-02 15:17:36 -04:00
Edward Z. Yang
8fbe003d4e Miscellaneous ONNX fixes and behavior changes.
- Deleted Addmm/Concat Function class, as this is now native ATen operator

- Resurrected ONNX operator for Concat (now called 'cat')

- Add a "fake" Expand ONNX operator, which we now do the optimization on;
  this helps prevent us from emitting a warning that 'expand' is not supported.
  We still fail if any of these Expand operators make it to the final model,
  until we actually formalize Expand in ONNX.  This also simplifies the
  fuseBroadcast code, because single-return ONNX nodes don't get select nodes.

- New error reporting strategy.  If we fail to export an operator because of
  something, we emit a warning, but otherwise keep going.  At the very end,
  in export.cpp, we now check if there are any ATen operators left over.  If
  there are, we bug out.  This assumes that ATen is lower case and ONNX is upper
  case.  You're now supposed to 'return _unimplemented(msg)' in these cases.

- New toString() method on Graph, for getting the string graph (useful for
  slapping it into error messages.)

- Some of the legacy symbolics (still in Python symbolic methods of Function
  subclasses) have been cleaned up for clarity.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-10-29 23:50:34 -04:00
bddppq
ac8f56656d Adapt ONNX Slice op changes (#3316) 2017-10-28 00:03:29 -04:00
James Reed
869bdeb936 Symbolic implementation of Index supporting tuple of slices. (#3294) 2017-10-27 02:39:38 +05:30
Edward Z. Yang
9989bb1a43 Export index constants as long, not int (onnx-caffe2 needs it.) (#3274)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-10-25 09:50:33 +02:00
Gregory Chanan
4b1e85d266 Remove split/chunk python autograd. 2017-10-24 19:33:37 -04:00
Lu Fang
5691b0b8d2 Fix the Slice changes in ONNX (#3216) 2017-10-24 14:12:54 -04:00
Edward Z. Yang
53fe804322 Make ONNX work with new C++ autograd world.
The general strategy is there is a new module, torch.onnx.symbolic, which
contains a function for every ATen method name with the ONNX translation.
While implementing this, I took the opportunity to expunge all references
of 'g' from the public API; instead, it is managed by a global variable in
torch.onnx which tracks the "current graph".

Other changes:

- If you pass a Tensor to op as an argument, it will now automatically be
  converted into a Constant ONNX node.  This lets us remove needing to
  implement ONNX

- Rename value to other, wherever there is both a Scalar and Tensor overload.
  This way, keyword dispatch can work uniformly in both cases.

- Deleted any autograd Function classes that both had a symbolic and were ported
  to the new C++ autograd implementation.  There may still be some straggling
  classes that didn't have symbolic.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-10-20 15:38:01 -04:00
Sam Gross
d9b89a352c Replace StochasticFunctions v2 (#3165)
This removes the StochasticFunctions for bernoulli, multinomial, and
normal and replaces them with classes in the torch.distributions
package. Each distribution supports the differentiable log_prob function
that returns the log of the pdf/pmf of the samples.

The current StochasticFunction implementation has a few problems: it can
be painful to use when there are multiple stochastic outputs which need
to be back-propagated through. It also requires that we store grad_fns
on Variables that have requires_grad=False in order to find stochastic
nodes.
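A minimal sketch of the replacement pattern: sample from a distribution, then backpropagate through `log_prob` (a REINFORCE-style surrogate), instead of marking stochastic nodes on the graph:

```python
import torch
from torch.distributions import Bernoulli

probs = torch.tensor([0.25], requires_grad=True)
dist = Bernoulli(probs)
action = dist.sample()            # non-differentiable draw

# log_prob is differentiable w.r.t. probs, so the surrogate loss
# carries gradients back to the distribution parameters.
loss = -dist.log_prob(action)
loss.backward()
```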
2017-10-19 15:05:07 -04:00
SsnL
fce3ed19e5 Change device_id to device in python land (#3133)
* change device_id to device in python land

* cuda/random.py
2017-10-17 00:54:26 +02:00
Lu Fang
3261e1337a Use 0D (1-element) tensor instead of 1D tensor 2017-10-16 17:47:36 -04:00
Lu Fang
93e1749c85 Add ONNX support for AddConstant and SubConstant 2017-10-16 17:47:36 -04:00
Lu Fang
a1deb2d47f Move the exception logic to the helper function 2017-10-16 16:57:16 -04:00
Lu Fang
cad9438bb9 Add unit tests for onnx helper functions 2017-10-16 16:57:16 -04:00
Lu Fang
864bd934b0 Add a helper function to check broadcasting (#3115) 2017-10-13 23:22:16 -04:00
Sam Gross
61bb0d2954 Remove unused parameter 'input' from Tanh 2017-10-13 01:31:48 +02:00
Edward Z. Yang
b9cd45adcf Add note about inplace status in ONNX and JIT.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-10-13 01:31:22 +02:00
Lu Fang
9ef39a50ee Fix the broadcast in Addmm's symbolic (#3063)
* Fix the broadcast in Addmm's symbolic

* fix the non-matching dimension cases

* Add exception for non-supported case, remove onnx test cases (moved to onnx-pytorch repo)

* remove the test_onnx.py in run_test.sh

* lint the code
2017-10-11 22:23:11 -04:00
Sam Gross
7bc154f8ea Remove unused argument 'input' to Sigmoid_updateGradInput (#3079) 2017-10-11 23:52:50 +02:00
bddppq
bd9b4df6e9 Add support for exporting MulConstant, DivConstant and Softmax to ONNX (#2923)
* Add support for exporting MulConstant and Softmax

* Add support for MulConstant in autograd execution

* Also add support for DivConstant
2017-10-11 13:03:33 -04:00
Lu Fang
8d8a99c244 Add ONNX Pad reflect and edge mode support (#3048) 2017-10-10 17:02:08 -04:00
Lu Fang
c489445c46 Add ONNX support for Mean (#2956) 2017-10-03 18:16:45 -04:00
Edward Z. Yang
6fbdf40284 Translate addmm into Gemm operator / fix alpha-beta mixup / execute in JIT.
The alpha/beta naming in addmm was flipped; this commit fixes that
problem.  It also fixes the ONNX export of alpha/beta parameters.
Finally, it supports executing matmul in the JIT.
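For reference, the convention this commit fixes (an illustration, not the commit's own test):

```python
import torch

M = torch.zeros(2, 2)
a = torch.ones(2, 3)
b = torch.ones(3, 2)

# out = beta * M + alpha * (a @ b): alpha scales the matrix product,
# beta scales the added input -- the pairing that had been flipped.
out = torch.addmm(M, a, b, beta=1.0, alpha=0.5)
```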

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-10-03 17:23:43 -04:00
Junjie Bai
e4701e63f6 Fix exporting Reshape with single torch.Size argument 2017-10-02 23:29:49 +02:00
Alykhan Tejani
621603169c initialize new tensor 2017-10-02 09:53:21 -04:00
Alykhan Tejani
ca644ca204 Add inplace zero to variable (#2212) 2017-10-02 14:02:24 +02:00
Junjie Bai
287f434900 Add support for exporting Addmm with alpha != 1 or beta != 1 2017-09-23 11:17:27 -04:00