grep_linter.py was using the `-P` flag of `grep`, which is available in
GNU grep but notably *not* available in the BSD grep that is installed
on Macs.
Use `-E` instead, which uses ERE instead of PCRE. Sadly, we were actually
using two PCRE features in our linters:
- Negative lookaheads. I changed these to less-accurate-but-still-good-enough
versions that use `[^...]` expressions (see the sketch below).
- Apparently ERE doesn't support the `\t` atom, lol. So I used a literal tab
character instead (and then had to disable the TAB linter for
`.lintrunner.toml`, lol).
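As a rough illustration of the lookahead rewrite, here is a minimal sketch in Python (whose `re` module stands in for PCRE); the pattern is hypothetical, not one of the actual linter rules:
```python
import re

# PCRE-style negative lookahead: match cudaMalloc unless followed by "Managed".
pcre_style = re.compile(r"cudaMalloc(?!Managed)")

# ERE-compatible approximation with a character class: match cudaMalloc
# followed by anything other than "M" (or end of line). Less accurate --
# it also skips e.g. "cudaMallocMock" -- but good enough for a linter.
ere_style = re.compile(r"cudaMalloc([^M]|$)")

for line in ["cudaMalloc(&p, n);", "cudaMallocManaged(&p, n);"]:
    print(line, bool(pcre_style.search(line)), bool(ere_style.search(line)))
```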
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76947
Approved by: https://github.com/ezyang
This PR allows users to author CUDA kernels in Python.
```python
import torch
from torch.cuda.jiterator import create_jit_fn

code_string = "template <typename T> T my_kernel(T x, T y, T alpha) { return -x * y + x - y + alpha; }"
jitted_fn = create_jit_fn(code_string, alpha=0)
a = torch.rand(3, device='cuda')
b = torch.rand(3, device='cuda')
result = jitted_fn(a, b, alpha=1.0)
```
Limitations:
- Only supports elementwise kernels
- 1 to 8 tensor inputs (zero-input kernels, e.g. factory functions, are not supported)
- Input tensors must live on a CUDA device
- CPU Scalars are not supported
- kwargs must be pre-declared when calling `create_jit_fn`
- kwargs must be convertible to `at::Scalar`, one of float64, int64_t, or bool (complex is not supported for now)
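For example, under these rules (a small sketch reusing `jitted_fn`, `a`, and `b` from the snippet above):
```python
# alpha was pre-declared in create_jit_fn, so it can be overridden per call:
result = jitted_fn(a, b, alpha=2.5)

# A kwarg that was never declared is rejected:
# jitted_fn(a, b, beta=1.0)  # error: beta was not pre-declared
```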
TODOs:
- [x] consolidate union and c10::variant implementation
- [x] plug into existing op testing framework
- [ ] rename files, place files in the right folder
- [ ] place util functions in the right file
- [x] enforce assumptions in the Python interface, e.g. <8 inputs, kwargs types
- [x] Add user-facing documentation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76394
Approved by: https://github.com/mruberry
We would, for some reason, report formatting-based lints as occurring at
line 1, column 1. This removes those bogus positions for now. Maybe
eventually we can recover better line numbers from the formatting diff
and post a message for each diff cluster, but that requires actual
changes to the linting engine.
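A minimal sketch of the shape of the fix, assuming the JSON lint-message format that the linter adapters emit (field names follow pytorch's adapters but are shown here only as an illustration):
```python
import json

# Formatting lints apply to the whole file, so rather than pinning them to a
# fake location (line 1, column 1), the adapter can leave line/char unset.
message = {
    "path": "torch/foo.py",
    "line": None,   # previously hard-coded to 1
    "char": None,   # previously hard-coded to 1
    "code": "BLACK",
    "severity": "advice",
    "name": "format",
    "original": None,
    "replacement": None,
    "description": "Run `lintrunner -a` to apply the suggested formatting.",
}
print(json.dumps(message))
```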
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75928
Approved by: https://github.com/janeyx99
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74563
This flag is passed inconsistently across the generate_code program
invocations. Nevertheless, nothing consumes it (its consumer was
removed in #25353), so we can safely remove it.
ghstack-source-id: 152249818
Test Plan: Should be a no-op, rely on CI.
Reviewed By: malfet
Differential Revision: D35053096
fbshipit-source-id: 3ad19e83ca14649b514dc163c3caff6cbd118e14
(cherry picked from commit a43f05bb43553249caac3c3479986cbc45d286ae)
Summary:
As it should never be negative, should it?
Also, add `torch/csrc/deploy` to the list of clang-format-checked folders (as they are internally).
Last but not least: clang-tidy correctly identifies `totNumModules <= SIZE_MAX / sizeof(struct _frozen) - 1` as an unneeded, always-true check (as `totNumModules` is int32, while `SIZE_MAX` is int64 and `sizeof(struct _frozen)` is less than 4GB ;) )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74978
Reviewed By: suo, tugsbayasgalan
Differential Revision: D35261476
Pulled By: malfet
fbshipit-source-id: 8a3432d2d9e96ded3f08baee14ccb43d2635a67d
(cherry picked from commit 21f6c33166c8e4e16dcac0248cb9006f69e222a1)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74387
Add temporary Python bindings for flatbuffer to test ScriptModule save / load.
Test Plan: unittest
Reviewed By: iseeyuan
Differential Revision: D34968080
fbshipit-source-id: d23b16abda6e4b7ecf6b1198ed6e00908a3db903
(cherry picked from commit 5cbbc390c5f54146a1c469106ab4a6286c754325)
Summary:
Also enables the Bazel build to run lazy codegen. The Bazel (OSS) build feeds off the same filelists as cmake/buck (build_variables.bzl), so enabling it is easier than keeping it disabled.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74111
Test Plan: Run CI and verify test_lazy_ops is running via OSS cmake builds
Reviewed By: bdhirsh
Differential Revision: D34772403
fbshipit-source-id: 8a63f58b9536e6ac1be530667932176ef2549496
(cherry picked from commit e807ffb1918853d10b924fdc24f85ee5b1a39021)
Summary:
Update the flatbuffer-generated header and add it to the clang-format ignore list
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73810
Test Plan: CI
Reviewed By: iseeyuan
Differential Revision: D34652217
Pulled By: qihqi
fbshipit-source-id: fe281afd25d618d2e4852d6b76b813e2fbee0ddc
(cherry picked from commit 095ee360b573506ac946de142bd266b8d3bac58e)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68247
This splits `Functions.h`, `Operators.h`, `NativeFunctions.h` and
`NativeMetaFunctions.h` into separate headers per operator base name.
With `at::sum` as an example, we can include:
```cpp
<ATen/core/sum.h> // Like Functions.h
<ATen/core/sum_ops.h> // Like Operators.h
<ATen/core/sum_native.h> // Like NativeFunctions.h
<ATen/core/sum_meta.h> // Like NativeMetaFunctions.h
```
The umbrella headers are still being generated, but all they do is
include from the `ATen/ops` folder.
Further, `TensorBody.h` now only includes the operators that have
method variants, which means files that only include `Tensor.h` don't
need to be rebuilt when you modify function-only operators. Currently
there are about 680 operators that don't have method variants, so this
is potentially a significant win for incremental builds.
Test Plan: Imported from OSS
Reviewed By: mrshenli
Differential Revision: D32596272
Pulled By: albanD
fbshipit-source-id: 447671b2b6adc1364f66ed9717c896dae25fa272
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68292
- `noqa` was typo'd to be the same as `type: ignore`
- generalize clang-tidy initialization and use it for clang_format as well
- Add a script that lets you update the binaries in s3 relatively easily
Test Plan: Imported from OSS
Reviewed By: malfet
Differential Revision: D32403934
Pulled By: suo
fbshipit-source-id: 4e21b22605216f013d87d636a205707ca8e0af36
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68192
- Run on exactly the same stuff as the existing linter checks.
- Exclude deploy interpreter headers from being reported.
Test Plan: Imported from OSS
Reviewed By: janeyx99
Differential Revision: D32364023
Pulled By: suo
fbshipit-source-id: c27eca4a802534875d609d004fa9f6fca59ae6a5
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68069
- executable bit
- cub include
- raw CUDA API usage
Test Plan: Imported from OSS
Reviewed By: janeyx99
Differential Revision: D32286559
Pulled By: suo
fbshipit-source-id: 21d58e259c951424f9c6cbf1dac6d79fe7236aa4
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67942
- Change "name" to "code" for consistency with linttool and LintMessage
format.
- Change "args" and "init_args" to "command" and "init_command" for
consistency with internal representation.
Test Plan: Imported from OSS
Reviewed By: H-Huang
Differential Revision: D32250606
Pulled By: suo
fbshipit-source-id: 557fef731bab9adca7ab1e7cc41b996956076b05
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67932
Also various improvements to grep_linter.py, including the ability to
specify a replacement pattern.
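As a sketch of what a replacement pattern enables (the rule below is illustrative, not one of grep_linter.py's actual flags or rules):
```python
import re

# Hypothetical rule: flag tab characters and offer a fixed-up line.
pattern, replacement = r"\t", "    "

line = "\treturn 1"
if re.search(pattern, line):
    suggested = re.sub(pattern, replacement, line)
    # grep_linter.py can report the suggested replacement alongside the lint
    # message, so tools like `lintrunner -a` can apply the fix automatically.
    print(suggested)
```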
Test Plan: Imported from OSS
Reviewed By: H-Huang
Differential Revision: D32250603
Pulled By: suo
fbshipit-source-id: e07eb182e9473a268e2b805a68a859b91228bfbb
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67936
- Add the strict config
- Make the patterns exactly match the current CI
- Add init_args
Test Plan: Imported from OSS
Reviewed By: H-Huang
Differential Revision: D32250605
Pulled By: suo
fbshipit-source-id: a71d434bf6024db4462260a460a1bc2d9ac66a32
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67894
As title. Confirmed that the code base passes by running:
```
lintrunner --paths-cmd='git grep -Il ""' --take NEWLINE
```
and seeing that it passes
Test Plan: Imported from OSS
Reviewed By: H-Huang
Differential Revision: D32250604
Pulled By: suo
fbshipit-source-id: de9bcba635d21f8832bb25147b19b7b2e8802247
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67890
Adding another linter. I also added a generic initializer that installs
the right pip packages (you can invoke it by running `lintrunner init`).
Differential Revision: D32197366
Test Plan: Imported from OSS
Reviewed By: driazati
Pulled By: suo
fbshipit-source-id: 82844e78f1ee3047220d8444874eab41d7cc0e9e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67872
As title. This demonstrates some of the nice features of lintrunner:
- Uniform error reporting means you get a nice diff of the changes for
free.
- You can run with `-a` to just accept the changes (no need to tell people
to run a special regenerate command, since the linter adapter already knows how).
Differential Revision: D32187386
Test Plan: Imported from OSS
Reviewed By: driazati
Pulled By: suo
fbshipit-source-id: 71de6b042730be80ff6794652039e9bc655a72b1
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67496
gen_autograd.py doesn't use `Declarations.yaml` any more, and removing
the dependency allows it to run in parallel with
`tools/codegen/gen.py`.
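A minimal sketch of the consequence (the invocations below are assumptions for illustration, not the actual build wiring): with the `Declarations.yaml` dependency gone, the two generators can be launched concurrently.
```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical invocations; the real build system wires these up itself.
cmds = [
    ["python", "tools/setup_helpers/generate_code.py"],
    ["python", "-m", "tools.codegen.gen"],
]

def run(cmd):
    # Neither step reads the other's output anymore, so they can overlap.
    return subprocess.run(cmd, check=True)

with ThreadPoolExecutor(max_workers=len(cmds)) as pool:
    list(pool.map(run, cmds))
```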
Test Plan: Imported from OSS
Reviewed By: dagitses, ejguan
Differential Revision: D32027251
Pulled By: albanD
fbshipit-source-id: 2cc0bbe36478e6ec497f77a56ab8d01c76145703