Commit Graph

47378 Commits

Author SHA1 Message Date
kshitij12345
a732bbea23 [meta] Add meta support for fft ops (#79311)
As per title
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79311
Approved by: https://github.com/ezyang
2022-06-13 01:56:42 +00:00
kshitij12345
bd1a35dfc8 [meta] diag ops, trace (#79341)
meta registration for `diag.out` and update test skips/expectedFailures
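
For context, a meta registration computes only output metadata (shape, dtype) without touching data. The shape rule for `diag` can be sketched in plain Python (an illustration only, ignoring the diagonal-offset argument; not the actual registration):

```python
def diag_meta_shape(shape):
    # Shape-only rule for torch.diag (offset 0):
    # a 1-D input of length n yields an n x n matrix;
    # a 2-D input yields its main diagonal, length min(rows, cols).
    if len(shape) == 1:
        return (shape[0], shape[0])
    if len(shape) == 2:
        return (min(shape),)
    raise ValueError("diag expects a 1-D or 2-D input")

print(diag_meta_shape((4,)))    # (4, 4)
print(diag_meta_shape((2, 5)))  # (2,)
```
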
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79341
Approved by: https://github.com/ezyang
2022-06-12 18:45:03 +00:00
Michael Suo
aa681e401d [ci] fix env var propagation
- Add `CI` env var propagation to some places it was missing
- Use the append redirect `>>` instead of `>`, which incorrectly
  overwrites the file.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79374

Approved by: https://github.com/ezyang
2022-06-12 18:33:29 +00:00
Edward Z. Yang
213a8fc992 Put symint overloads on a different name
Due to implicit conversion shenanigans, having both IntArrayRef
and SymIntArrayRef overloads makes {} ambiguous.  While we could
fix this by making a single unified type that accepts all the overloads
we want, an easier fix was to just push the SymIntArrayRef overload
to its own name.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79281

Approved by: https://github.com/suo
2022-06-12 14:36:39 +00:00
Wei Wei
f23685f008 [transformer] BT enablement on fairseq - pytorch change (#79186)
The fairseq diff is split into two parts.

The first diff (this one) creates a mask left-align function to check the mask condition for nested tensors. It is necessary for TorchScript deployment.

The second diff (D37082681) forks the inference path inside the forward function: if a checkpoint file is loaded and inference is performed, we deploy BT; otherwise, fairseq takes over.
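
A mask left-align check of the kind described can be sketched in plain Python (the function name and exact semantics here are assumptions for illustration, not the fairseq code): a boolean keep-mask is left aligned when every row packs its True entries before any False.

```python
def is_left_aligned(mask):
    """Check that each row of a boolean keep-mask has all True values
    packed at the left, i.e. True...True False...False with no gaps."""
    for row in mask:
        seen_false = False
        for keep in row:
            if keep:
                if seen_false:
                    return False  # a True after a False: not left aligned
            else:
                seen_false = True
    return True

print(is_left_aligned([[True, True, False], [True, False, False]]))  # True
print(is_left_aligned([[True, False, True]]))                        # False
```
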

Reviewed By: mikekgfb

Differential Revision: D36057338

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79186
Approved by: https://github.com/erichan1
2022-06-12 05:56:26 +00:00
Taylor Robie
c321dfe1b5 move tree tests to the start of test_profiler.py
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79301

Approved by: https://github.com/davidchencsl
2022-06-12 04:24:30 +00:00
pritam
a81be44410 Fix shard_module to appropriately deal with sub process groups.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79264

`shard_module` API didn't work correctly with a sub-pg since
`dist.scatter` actually takes the global rank as input for `src`.

Fixing this by passing the appropriate global rank to `dist.scatter`.

Differential Revision: [D37062766](https://our.internmc.facebook.com/intern/diff/D37062766/)

Approved by: https://github.com/fduwjj, https://github.com/wanchaol
2022-06-12 03:50:45 +00:00
PyTorch MergeBot
5413580f9e Revert "[JIT] Propagate profiled information to DifferentiableGraph outputs"
This reverts commit 1d2a6c2e94.

Reverted https://github.com/pytorch/pytorch/pull/78875 on behalf of https://github.com/davidberard98 due to Internal failures were bisected to this change
2022-06-12 00:14:08 +00:00
PyTorch MergeBot
66460c4a6a Revert "Fixes maybe_broadcast to actually broadcast only when needed (#79298)"
This reverts commit 1cb1c2c08c.

Reverted https://github.com/pytorch/pytorch/pull/79298 on behalf of https://github.com/suo due to Broke FakeTensor tests on master, see: 1cb1c2c08c
2022-06-11 23:36:18 +00:00
Mike Ruberry
1cb1c2c08c Fixes maybe_broadcast to actually broadcast only when needed (#79298)
Adds a `same_shape` util and updates maybe_broadcast to use it; previously maybe_broadcast was always broadcasting because its equality check was always failing.
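
The shape-level idea can be sketched in plain Python (names mirror the commit message, but the bodies are simplified assumptions, not the actual util):

```python
def same_shape(a, b):
    # True only when the two shapes match exactly, dimension by dimension
    return len(a) == len(b) and all(x == y for x, y in zip(a, b))

def maybe_broadcast_shape(shape, target):
    # Only broadcast (here: adopt the target shape) when the shapes differ;
    # an equality check that always failed previously meant every call broadcast
    if same_shape(shape, target):
        return shape
    return target
```
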
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79298
Approved by: https://github.com/ezyang
2022-06-11 22:04:47 +00:00
Michael Suo
30fb2c4aba [lint] autoformat test/cpp and torch/csrc
Let's have some fun.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78828

Approved by: https://github.com/ezyang
2022-06-11 21:11:16 +00:00
Mikayla Gawarecki
1ec30a6647 Add offsets-based reduction to segment_reduce (CPU, CUDA)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78907

Approved by: https://github.com/cpuhrsch
2022-06-11 17:43:42 +00:00
Michael Suo
c978b609f7 [ci] remove IN_CI env var
The conventional env var to set is `CI`. Both CircleCI and GHA set it, so
IN_CI is unnecessary.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79229

Approved by: https://github.com/janeyx99
2022-06-11 17:16:30 +00:00
Michael Suo
f51d5233f2 [ci] fix GITHUB_ACTIONS env var checks
`GITHUB_ACTIONS` is set to `true`, but some of our code checks that it
is `1`. Make the checks more general.
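
A sketch of the more general check (an assumption of the pattern, not the exact code from the PR):

```python
import os

def env_flag_set(name):
    # Accept the common truthy spellings: GITHUB_ACTIONS is set to "true",
    # while other CI systems export "1"
    return os.environ.get(name, "").lower() in ("1", "true", "yes", "on")

os.environ["GITHUB_ACTIONS"] = "true"
print(env_flag_set("GITHUB_ACTIONS"))  # True
```
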

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79290

Approved by: https://github.com/janeyx99
2022-06-11 17:16:30 +00:00
Michael Suo
a8bafd2659 [ci] do not pass GITHUB_RUN_ID explicitly to docker image
This is already covered by the `--env-file`, so passing it explicitly is
redundant.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79247

Approved by: https://github.com/janeyx99, https://github.com/seemethere
2022-06-11 17:16:29 +00:00
yanbing-j
2edcc9e600 Update ideep for ideep conv changes -v2 (#79258)
The original PR https://github.com/pytorch/pytorch/pull/78238 was closed; this reopens the change.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79258
Approved by: https://github.com/frank-wei
2022-06-11 14:19:03 +00:00
George Qi
164029f783 masked logasumexp/logaddexp
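
For context, `logaddexp` computes `log(exp(a) + exp(b))` stably by factoring out the larger term; a scalar sketch (the masked variants additionally skip masked-out elements):

```python
import math

def logaddexp(a, b):
    # log(exp(a) + exp(b)), computed stably by factoring out the larger term
    m = max(a, b)
    if m == float("-inf"):  # both inputs -inf: the empty-sum identity
        return float("-inf")
    return m + math.log(math.exp(a - m) + math.exp(b - m))

print(logaddexp(1000.0, 1000.0))  # ~1000.6931; naive exp(1000) would overflow
```
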
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78291

Approved by: https://github.com/cpuhrsch
2022-06-11 05:46:36 +00:00
lezcano
9fc2518a8a Update and improve the heuristics for linalg.lu_solve
This PR adds getrf_cublas to the functions considered in the heuristics
for lu_solve.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73878

Approved by: https://github.com/nikitaved, https://github.com/IvanYashchuk, https://github.com/mruberry
2022-06-11 04:06:40 +00:00
lezcano
1880d293b6 Expose lu_factor_batched_cublas
We had bindings for this function, but they were not exposed as a
function inside ATen. This PR exposes them.

This function is tested in the next PR of the stack, where it is used in
linalg.lu_solve.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73877

Approved by: https://github.com/nikitaved, https://github.com/IvanYashchuk, https://github.com/mruberry
2022-06-11 04:06:40 +00:00
lezcano
54949a5abc Simplify and optimize linalg.solve
This PR heavily simplifies the code of `linalg.solve`. At the same time,
this implementation saves quite a few copies of the input data in some
cases (e.g. A is contiguous)

We also implement it in such a way that the derivative goes from
computing two LU decompositions and two LU solves to no LU
decompositions and one LU solve. The derivative also avoids a number of
unnecessary copies (at least the copies of two matrices).
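
The saving comes from the standard backward identities for a linear solve (a sketch for the default left case, not the literal autograd code): with X = A^{-1} B and output cotangent \bar{X},

```latex
\bar{B} = A^{-\mathsf{T}} \, \bar{X}, \qquad \bar{A} = -\,\bar{B} \, X^{\mathsf{T}}
```

so by reusing the LU factors of A from the forward pass, the backward needs only one LU solve (for \bar{B}) and one matmul.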

On top of this, we add a `left` kw-only arg that allows the user to
solve `XA = B` rather concisely.
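
The right-hand-side case can be reduced to an ordinary solve via transposition: XA = B iff AᵀXᵀ = Bᵀ. A pure-Python 2x2 sketch of that equivalence (illustrative only; the actual API is `torch.linalg.solve`):

```python
def matmul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def solve2x2(A, B):
    """Solve A @ X = B for 2x2 A via Cramer's rule (X = inv(A) @ B)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    inv = [[A[1][1] / det, -A[0][1] / det],
           [-A[1][0] / det, A[0][0] / det]]
    return matmul(inv, B)

def solve_right(A, B):
    # X @ A = B  <=>  A^T @ X^T = B^T, i.e. what a left=False solve computes
    return transpose(solve2x2(transpose(A), transpose(B)))

A = [[2.0, 1.0], [1.0, 3.0]]
B = [[1.0, 0.0], [0.0, 1.0]]
X = solve_right(A, B)
# verify X @ A == B up to floating-point rounding
assert all(abs(matmul(X, A)[i][j] - B[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```
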

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74046

Approved by: https://github.com/nikitaved, https://github.com/IvanYashchuk, https://github.com/mruberry
2022-06-11 04:06:40 +00:00
Akshay Parashar
65a37923f9 [Static Runtime] Exception handling during fork subgraph execution (#79292)
Summary:
- Exception handling was not performed during forked subgraph execution.
- The forked subgraph can throw a runtime exception; the future returned by prim::fork needs to capture it so that aten::wait can propagate it.

Test Plan:
local test cases:
  - buck test caffe2/benchmarks/static_runtime/fb:test_fb_operators
  - buck test mode/opt caffe2/benchmarks/static_runtime:static_runtime_cpptest
  - buck test mode/opt caffe2/test:static_runtime

Async execution of the subgraph is tested by adding pytorch profiler hooks on the StaticRuntime execution via the code below. Async execution in the threadpool is verified by checking the trace.
  with profile(activities=[ProfilerActivity.CPU]) as prof:
      static_runtime_module(inputs)
  prof.export_chrome_trace("trace.json")

Differential Revision: D37072493

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79292
Approved by: https://github.com/mikeiovine
2022-06-11 03:11:49 +00:00
Michael Andreas Dagitses
ab2ca95dd1 turn on -Werror=unused-variable in our Bazel CPU build
Summary:
We also fix any existing issues. Note that we only do this for the CPU
build because nvcc is considered a C++ toolchain but it does not have
the same flag support. Adding flags to the GPU build will cause nvcc
errors.

Test Plan: Built locally, rely on CI to confirm.

Reviewers: malfet

Subscribers:

Tasks:

Tags:

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79156

Approved by: https://github.com/seemethere, https://github.com/osalpekar, https://github.com/albanD
2022-06-11 02:46:34 +00:00
Mikayla Gawarecki
e727539c29 Support multi-dimensional lengths in segment_reduce to support pytorch_scatter.segment_* functionalities (CUDA)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77061

Approved by: https://github.com/cpuhrsch
2022-06-11 01:45:22 +00:00
anjali411
38350acf8f Autogen Tags enum, and allow specifying tags while defining an op
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79322

Approved by: https://github.com/albanD
2022-06-11 00:29:32 +00:00
drisspg
bdcee8f995 update is_same_size to work with nested tensor dispatch
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79297

Approved by: https://github.com/soulitzer
2022-06-11 00:07:27 +00:00
drisspg
10dfdf2066 engine changes
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79296

Approved by: https://github.com/soulitzer
2022-06-11 00:07:27 +00:00
mingfeima
50e1301293 qcat: use direct memcpy when all the inputs and output share the same scale and zero_point
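
Why the shortcut is valid: with affine quantization `q = round(x / scale) + zero_point`, the dequantize, concatenate, requantize round trip returns each integer value unchanged whenever all inputs and the output share scale and zero_point, so the integer buffers can simply be memcpy'd. A scalar sketch:

```python
def quantize(x, scale, zp):
    return round(x / scale) + zp

def dequantize(q, scale, zp):
    return (q - zp) * scale

def qcat_slow(qs, scale, zp):
    # dequantize each element, "concatenate", requantize the result
    return [quantize(dequantize(q, scale, zp), scale, zp) for q in qs]

qa, qb = [12, 7, 130], [0, 255, 64]
scale, zp = 0.05, 128
# the round trip is the identity on the raw integers: a memcpy suffices
assert qcat_slow(qa + qb, scale, zp) == qa + qb
```
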
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71903

Approved by: https://github.com/jerryzh168
2022-06-10 23:59:14 +00:00
Sergii Dymchenko
77d5eb7d1c Remove linux-xenial-py3.7-gcc7-no-ops from distributed checks (#79320)
The workflow was removed some time ago.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79320
Approved by: https://github.com/malfet
2022-06-10 23:55:05 +00:00
Michael Andreas Dagitses
606b234336 turn on -Werror=unused-function in our Bazel CPU build
Summary:
We also fix any existing issues. Note that we only do this for the CPU
build because nvcc is considered a C++ toolchain but it does not have
the same flag support. Adding flags to the GPU build will cause nvcc
errors.

Test Plan: Built locally, rely on CI to confirm.

Reviewers: malfet

Subscribers:

Tasks:

Tags:

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79154

Approved by: https://github.com/seemethere, https://github.com/osalpekar, https://github.com/albanD
2022-06-10 22:11:54 +00:00
albanD
4aca751921 remove spurious warning in amp (#79203)
fix https://github.com/pytorch/pytorch/issues/72527
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79203
Approved by: https://github.com/anjali411
2022-06-10 21:53:58 +00:00
Svetlana Karslioglu
68136828e0 Add Design Philosophy (#79248)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79248
Approved by: https://github.com/albanD
2022-06-10 21:21:05 +00:00
Mengwei Liu
24050a5801 [RFC][Codegen] Add custom namespace support (#78015)
Summary:
Adds a feature that allows users to specify namespaces for operators and kernels.

# Feature
There's a feature request to allow DSL to:
1. take in an operator namespace other than `aten`.
2. take in a kernel that is in a different namespace than `at::native`.

For both features, we only allow a single namespace level for the sake of simplicity. If the user specifies `custom::function` as the kernel, the codegen will depend on `custom::native::function`, where `native` is hardcoded.

# Proposal

For feature 1, add a `namespace` attribute to the data class `NativeFunction`. The namespace is extracted by matching the pattern "::" on the `func` variable. For `NativeFunctionsGroup` there's an assumption that all variants (function, inplace, out) share the same namespace. By default (if not specified) the namespace is "aten".

For feature 2, add a `namespace` attribute to the `BackendMetadata` class, similarly matching the pattern "::" on the kernel field. Remove the `cpp_namespace` field from the `register_dispatch_key` data class. By default (if not specified) the namespace for a kernel is "at::native".
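
The "::" parsing described above can be sketched as follows (a simplified assumption for illustration, not the actual torchgen code):

```python
def split_namespace(name, default):
    # "custom::gelu" -> ("custom", "gelu"); "gelu" -> (default, "gelu").
    # Only a single namespace level is allowed, per the proposal above.
    if "::" in name:
        ns, _, base = name.partition("::")
        return ns, base
    return default, name

print(split_namespace("custom::gelu", "aten"))  # ('custom', 'gelu')
print(split_namespace("gelu.out", "aten"))      # ('aten', 'gelu.out')
```
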

Test Plan:
Example yaml entries:
```
- func: custom::gelu.out(Tensor self, *, str approximate='none', Tensor(a!) out) -> Tensor(a!)
  structured: True
  structured_inherits: TensorIteratorBase
  device_check: NoCheck   # TensorIterator
  python_module: nn
  dispatch:
    CPU: custom::gelu_out_cpu
    CUDA: custom::gelu_out_cuda
    MPS: custom::gelu_out_mps

- func: custom::gelu_(Tensor(a!) self, *, str approximate='none') -> Tensor(a!)
  structured_delegate: gelu.out
  device_check: NoCheck   # TensorIterator
  python_module: nn
  dispatch:
    NestedTensorCPU, NestedTensorCUDA: custom::NestedTensor_gelu_

- func: custom::gelu(Tensor self, *, str approximate='none') -> Tensor
  structured_delegate: gelu.out
  device_check: NoCheck   # TensorIterator
  python_module: nn
  dispatch:
    MkldnnCPU: custom::mkldnn_gelu
    QuantizedCPU: custom::gelu_quantized_cpu
    NestedTensorCPU, NestedTensorCUDA: custom::NestedTensor_gelu
```

see generated code:

`RegisterCPU.cpp`:
```
TORCH_LIBRARY_IMPL(aten, CPU, m) {
  ...
}
TORCH_LIBRARY_IMPL(custom, CPU, m) {
    m.impl("gelu", TORCH_FN(wrapper_gelu));
    m.impl("gelu.out", TORCH_FN(wrapper_gelu_out_out));
    m.impl("gelu_", TORCH_FN(wrapper_gelu_));
};
```
```
struct structured_gelu_out_cpu_inplace final : public custom::native::structured_gelu_out_cpu {
    structured_gelu_out_cpu_inplace(Tensor& self) : outputs_{std::ref(self)} {}

    void set_output_strided(
        int64_t output_idx, IntArrayRef sizes, IntArrayRef strides,
        TensorOptions options, DimnameList names
    ) override {

        const auto& out = outputs_[output_idx].get();
        check_inplace(out, sizes, options);

        auto maybe_proxy = maybe_create_proxy(out, sizes, strides, options);
        if (C10_UNLIKELY(maybe_proxy.has_value())) {
            proxy_outputs_[output_idx] = c10::ExclusivelyOwned<Tensor>(std::move(maybe_proxy).value());
        }

        if (!names.empty()) {
          namedinference::propagate_names(outputs_[output_idx], names);
        }
        // super must happen after, so that downstream can use maybe_get_output
        // to retrieve the output
        custom::native::structured_gelu_out_cpu::set_output_raw_strided(output_idx, sizes, strides, options, names);
    }

    void set_output_raw_strided(
        int64_t output_idx, IntArrayRef sizes, IntArrayRef strides,
        TensorOptions options, DimnameList names
    ) override {

        const auto& out = outputs_[output_idx].get();
        check_inplace(out, sizes, options);

        if (!names.empty()) {
          namedinference::propagate_names(outputs_[output_idx], names);
        }
        // super must happen after, so that downstream can use maybe_get_output
        // to retrieve the output
        custom::native::structured_gelu_out_cpu::set_output_raw_strided(output_idx, sizes, strides, options, names);
    }

    const Tensor& maybe_get_output(int64_t output_idx) override {
      return proxy_outputs_[output_idx].has_value() ? **proxy_outputs_[output_idx] : outputs_[output_idx].get();

    }
    std::array<std::reference_wrapper<Tensor>, 1> outputs_;
    std::array<c10::optional<c10::ExclusivelyOwned<Tensor>>, 1> proxy_outputs_;
};
```

`RegisterSchema.cpp`
```
TORCH_LIBRARY(aten, m) {
  ...
}
TORCH_LIBRARY(custom, m) {
    m.def("gelu.out(Tensor self, *, str approximate='none', Tensor(a!) out) -> Tensor(a!)");

    m.def("gelu_(Tensor(a!) self, *, str approximate='none') -> Tensor(a!)");

    m.def("gelu(Tensor self, *, str approximate='none') -> Tensor");
};
```

Differential Revision: D36558459

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78015
Approved by: https://github.com/bdhirsh
2022-06-10 21:04:36 +00:00
PyTorch MergeBot
d28e9e145b Revert "[nvfuser_upstream_push] nvfuser code base bump 060822 (#79147)"
This reverts commit 49c41b87a2.

Reverted https://github.com/pytorch/pytorch/pull/79147 on behalf of https://github.com/janeyx99 due to Broke 11.3 builds on trunk 49c41b87a2
2022-06-10 20:55:10 +00:00
PyTorch MergeBot
bcd7a20953 Revert "turn on -Werror=unused-function in our Bazel CPU build"
This reverts commit 67d313a032.

Reverted https://github.com/pytorch/pytorch/pull/79154 on behalf of https://github.com/malfet due to Breaks bazel build: 67d313a032
2022-06-10 20:43:03 +00:00
Hansong Zhang
2e8312c704 [vulkan] replication_pad2d.glsl: use clamp() instead of min(max()) (#79291)
Summary:
clamp — constrain a value to lie between two further values

use `clamp` instead of `min(max())` to calculate the coordinate.
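
The equivalence in plain Python (GLSL's `clamp` has the same semantics for `lo <= hi`):

```python
def clamp(x, lo, hi):
    # constrain x to lie in [lo, hi]; identical to min(max(x, lo), hi)
    return min(max(x, lo), hi)

assert clamp(-3, 0, 9) == min(max(-3, 0), 9) == 0   # below the range
assert clamp(5, 0, 9) == min(max(5, 0), 9) == 5     # inside the range
assert clamp(12, 0, 9) == min(max(12, 0), 9) == 9   # above the range
```
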

Test Plan:
```
buck run //xplat/caffe2:pt_vulkan_api_test_binAppleMac

[ RUN      ] VulkanAPITest.replication_pad2d
[       OK ] VulkanAPITest.replication_pad2d (161 ms)
```

Differential Revision: D37063026

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79291
Approved by: https://github.com/SS-JIA
2022-06-10 20:35:37 +00:00
kshitij12345
5e656eaae5 [refs] ravel (#78421)
As per title
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78421
Approved by: https://github.com/mruberry
2022-06-10 20:20:13 +00:00
kshitij12345
3d77017674 [primTorch] refs: masked_fill (#78132)
TODO

* [x] Add error inputs
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78132
Approved by: https://github.com/mruberry
2022-06-10 20:19:48 +00:00
PyTorch MergeBot
b712467cd1 Revert "Add mutation checks for tensor inputs"
This reverts commit 83c0a2bc38.

Reverted https://github.com/pytorch/pytorch/pull/79078 on behalf of https://github.com/davidberard98 due to broke bazel build-and-test, see [https://github.com/pytorch/pytorch/runs/6836001002?check_suite_focus=true](https://github.com/pytorch/pytorch/runs/6836001002?check_suite_focus=true%22)
2022-06-10 20:15:30 +00:00
kshitij12345
7b307e5fca [meta] angle, angle.out (#79278)
meta registration for `angle, angle.out`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79278
Approved by: https://github.com/anjali411
2022-06-10 20:06:31 +00:00
jjsjann123
49c41b87a2 [nvfuser_upstream_push] nvfuser code base bump 060822 (#79147)
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

Bug fixes and minor refactor

Commits were squashed to work around the GitHub API.
Commits actually in this PR from the devel branch:

```
4c60e7dff22a494632370e5df55c011007340d06 Add examples infrastructure for using nvFuser in a standalone program (#1725)
02a05d98334ffa580d73ccb28fdb8c577ad296fe Fix issue #1751 (#1753)
8a69aa320bd7629e1709fe5ceb7104d2c88ec84c Refactor NvFuser transpose API to match eager mode behavior (#1746)
ffdf6b7709048170d768217fcd7083fc8387f932 Remove BroadcastWithoutStride. (#1738)
02bab16035e70734450c02124f5cdaa95cf5749d Fix flipping of a boolean flag (#1745)
465d66890c8242e811224359cbdb1c2915490741 cleanup (#1744)
26d354e68720bc7dd2d3b1338ac01b707a230b6a fixing noncontig broadcast (#1742)
856b6b2f9073662dd98ca22ba6c3540e20eb1cdd Add IterDomainBuilder (#1736)
1fd974f912cd4c1e21cbd16e2abb23598d66a02f fixing warning for gcc7 (#1732)
de2740a43a869f8272c2648e091d7b8235097db9 disabling complex in python tests for #1730 (#1733)
fbbbe0a2e7c7a63e0e2719b8bfccb759b714221a fixing MSVC build (#1728)
b5feee5e2b28be688dbddc766f3c0220389c8175 Fix the fused reduction runtime kernel (#1729)
5247682dff5980bb66edf8d3aac25dea2ef2ced5 Re-entrant GroupedGridReduction (#1727)
```

RUN_TORCHBENCH: nvfuser
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79147
Approved by: https://github.com/davidberard98
2022-06-10 19:37:42 +00:00
PyTorch MergeBot
35eda5f959 [DataPipe] Correcting deprecation version
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79302

Approved by: https://github.com/ejguan
2022-06-10 19:31:29 +00:00
samdow
5e926aafab add utils for checking that all modes are in the same scope and finding the outermost mode
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78847

Approved by: https://github.com/ezyang, https://github.com/zou3519
2022-06-10 19:31:05 +00:00
Scott Wolchok
fff1948b02 [PyTorch] intrusive_ptr: don't guarantee release_resources will be called
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76767

We're spending a virtual function call in the common case
where there are no weak references just to save a small amount of care
in intrusive_ptr_target subclasses that override release_resources, of
which there aren't very many.

Differential Revision: [D36109757](https://our.internmc.facebook.com/intern/diff/D36109757/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D36109757/)!

Approved by: https://github.com/ezyang
2022-06-10 19:30:35 +00:00
Scott Wolchok
cfd697c979 [PyTorch] add intrusive_ptr exclusive ownership benchmark
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76766

None of the existing benchmarks captured the cost of the final destruction of an intrusive_ptr.

Differential Revision: [D36109756](https://our.internmc.facebook.com/intern/diff/D36109756/)

Approved by: https://github.com/ezyang
2022-06-10 19:26:28 +00:00
Akshay Parashar
26f2376b78 [Static Runtime] support fork and wait operations on Static Runtime (#79211)
Summary:
- Initial support for fork was done on the JIT interpreter. This patch enables async execution on Static Runtime.

- For each forked node, a separate runtime is created for the execution of the subgraph. Async execution is handled by the aten::ParallelThreadPoolNative threadpool.

- aten::wait waits on the future of fork to be completed

Test Plan:
local test cases:
    - buck test caffe2/benchmarks/static_runtime/fb:test_fb_operators
    - buck test mode/opt caffe2/benchmarks/static_runtime:static_runtime_cpptest
    - buck test mode/opt caffe2/test:static_runtime

Async execution of the subgraph is tested by adding pytorch profiler hooks on the StaticRuntime execution via the code below. Async execution in the threadpool is verified by checking the trace.

    with profile(activities=[ProfilerActivity.CPU]) as prof:
        static_runtime_module(inputs)
    prof.export_chrome_trace("trace.json")

Differential Revision: D37044513

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79211
Approved by: https://github.com/mikeiovine
2022-06-10 19:11:03 +00:00
Rohan Varma
ec86070922 Checkpoint util
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78704

Approved by: https://github.com/zhaojuanmao
2022-06-10 18:37:36 +00:00
Michael Andreas Dagitses
67d313a032 turn on -Werror=unused-function in our Bazel CPU build
Summary:
We also fix any existing issues. Note that we only do this for the CPU
build because nvcc is considered a C++ toolchain but it does not have
the same flag support. Adding flags to the GPU build will cause nvcc
errors.

Test Plan: Built locally, rely on CI to confirm.

Reviewers: malfet

Subscribers:

Tasks:

Tags:

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79154

Approved by: https://github.com/seemethere, https://github.com/osalpekar, https://github.com/albanD
2022-06-10 18:30:08 +00:00
samdow
3734fcc8f8 add ability to push a mode if the current mode is an ancestor
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78822

Approved by: https://github.com/ezyang, https://github.com/zou3519
2022-06-10 18:27:04 +00:00
goldenxuett
83c0a2bc38 Add mutation checks for tensor inputs
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79078

Approved by: https://github.com/davidberard98, https://github.com/Krovatkin
2022-06-10 18:17:33 +00:00
swang392
6bea742c10 Revised test cases for isGreen (#79223)
Relates to #76700

**Overview:** Revised and added new test cases to `test_print_latest_commits.py` to test the functionality of `isGreen`. All tests are successful!

**Test Plan:** Check that everything is passing

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79223
Approved by: https://github.com/janeyx99
2022-06-10 17:46:43 +00:00