Commit Graph

52 Commits

Author SHA1 Message Date
PyTorch MergeBot
e3243203b0 Revert "Add Python to CompositeImplicitAutograd (#82333)"
This reverts commit 1a20c69385.

Reverted https://github.com/pytorch/pytorch/pull/82333 on behalf of https://github.com/osalpekar due to failing executorch tests internally (D38252636) caused by changes in graph tracing
2022-07-29 00:46:27 +00:00
Edward Z. Yang
1a20c69385 Add Python to CompositeImplicitAutograd (#82333)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82333
Approved by: https://github.com/zou3519
2022-07-28 18:18:51 +00:00
PyTorch MergeBot
df36ccbd81 Revert "add a reinplacing FX pass (#80897)"
This reverts commit 3ef7a6921d.

Reverted https://github.com/pytorch/pytorch/pull/80897 on behalf of https://github.com/malfet due to broke windows trunk tests, see 3ef7a6921d
2022-07-27 22:32:03 +00:00
Brian Hirsh
3ef7a6921d add a reinplacing FX pass (#80897)
Adds a "reinplacing" FX transform, that goes through an FX graph and tries to convert out-of-place op calls into inplace calls whenever possible.

Followups from this PR include:
- Set up TorchBench, and run the whole torchbench suite using AOTAutograd + functionalize + the reinplacing transform to surface any issues (this is what I'm currently working on). Right now, I have some basic unit tests just to sanity-check that the general logic makes sense.
- Add any missing inplace ops. This is mostly the `*_scatter*` ops, e.g. `diagonal_scatter_`, because these ops will commonly show up in an FX graph after running functionalization.

The criteria for when you can swap an op `b = a.add(...)` with `a.add_(...)` are:
(1) An inplace variant of the operator with the same schema needs to exist (`aten.add` -> `aten.add_`)
(2) `a` (**or any of its aliases**) can't be used as an input to any other operators later on in the graph
(3) `a` can't be one of the inputs to the entire graph. It also can't be an **alias** of any of the inputs ***

*** One thing to note: (3) means that we can't technically guarantee that we'll get back **all** memory usage that we lost from functionalization. Functionalization converts input mutations into out-of-place calls, and then adds a `copy_()` to the end of the graph to preserve semantics.

I added logic to handle `copy_()` in this PR because it's a pretty important optimization in the context of functionalization: any program that performs input mutations will have a `copy_()` in it after running functionalization.

There are some examples in the test file, but I think staring at an example of where re-inplacing is/isn't allowed to run is helpful:
```
# Before functionalization
def foo(a):
    tmp1 = a.add_(1)
    tmp2 = a.add(2)

# After functionalization
def foo(a):
    tmp1 = a.add(1)
    tmp2 = a.add(2)
    ...
    a.copy_(tmp1)

# After re-inplacing
def foo(a):
    # the first add() is safe to re-inplace even though a is a program input,
    # because a's data is overwritten later by a copy_()
    tmp1 = a.add_(1)
    # the second add() is NOT safe to re-inplace, because:
    # (1) a and tmp1 are aliased. Note that they weren't aliased in the original
    #     program, but they are now that we've done some re-inplacing.
    # (2) tmp1 is used as an input later in the program
    tmp2 = a.add(2)
    ...
    a.copy_(tmp1)
```
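
To make the criteria above concrete, here is a minimal sketch (not the actual pass from this PR) of what a naive reinplacing transform could look like with `torch.fx`. It only handles `torch.add` -> `Tensor.add_`, ignores alias tracking entirely, and the function name is hypothetical:

```
import torch
import torch.fx

def naive_reinplace(gm: torch.fx.GraphModule) -> torch.fx.GraphModule:
    # Hypothetical, simplified sketch: criteria checks only, no alias analysis.
    order = {n: i for i, n in enumerate(gm.graph.nodes)}
    for node in list(gm.graph.nodes):
        # Criterion (1): we only know the inplace variant for torch.add here.
        if node.op != "call_function" or node.target is not torch.add:
            continue
        if not node.args or not isinstance(node.args[0], torch.fx.Node):
            continue
        self_arg = node.args[0]
        # Criterion (3): don't mutate graph inputs (aliases are not tracked here).
        if self_arg.op == "placeholder":
            continue
        # Criterion (2): the mutated tensor must not be read by any later node.
        # Newly inserted nodes were placed before already-visited nodes, so
        # treat them as "earlier" (order -1).
        if any(order.get(user, -1) > order[node] for user in self_arg.users):
            continue
        # Rewrite b = torch.add(a, x) into a.add_(x), reusing a's storage.
        with gm.graph.inserting_before(node):
            new_node = gm.graph.call_method("add_", node.args, node.kwargs)
        node.replace_all_uses_with(new_node)
        gm.graph.erase_node(node)
    gm.graph.lint()
    gm.recompile()
    return gm
```

Run on a `torch.fx.symbolic_trace`d graph, this only rewrites `torch.add` calls whose first argument is an intermediate that is never read again; the real pass additionally has to reason about aliasing and the trailing `copy_()` discussed above.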

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80897
Approved by: https://github.com/ezyang
2022-07-27 19:11:15 +00:00
Nikolay Korovaiko
d2c47d559c Revert "Revert "Enabling SymInt in autograd; take 3 (#81145)"" ; make sure is_intlist checks for symintnodes (#82189)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82189
Approved by: https://github.com/ezyang
2022-07-26 20:47:11 +00:00
PyTorch MergeBot
c078476eb0 Revert "Enabling SymInt in autograd; take 3 (#81145)"
This reverts commit 032facd6e6.

Reverted https://github.com/pytorch/pytorch/pull/81145 on behalf of https://github.com/jeanschmidt due to breaking internal builds
2022-07-22 11:15:20 +00:00
Nikolay Korovaiko
032facd6e6 Enabling SymInt in autograd; take 3 (#81145)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81145
Approved by: https://github.com/ezyang
2022-07-22 00:14:50 +00:00
Brian Hirsh
ec77d35bda remove backend keys from FunctionalTensorWrapper, update TensorImpl::is_device methods (#81471)
It's kinda annoying to have wrapper subclass tensors (like `FunctionalTensorWrapper`) include backend dispatch keys in their keyset, because when we occasionally write something buggy, we'll send the wrapper tensor to the backend kernel (which usually segfaults). By ensuring that wrapper tensors don't get backend keys, we'll get a nicer error when that happens.
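
A rough way to see the keyset in question from Python, assuming the private `torch._to_functional_tensor` and `torch._C._dispatch_keys` helpers (neither is part of this PR, and both are internal APIs):

```
import torch

# Hypothetical inspection snippet; internal APIs, not part of this PR.
t = torch.randn(2)
wrapped = torch._to_functional_tensor(t)  # wraps t in a FunctionalTensorWrapper
print(torch._C._dispatch_keys(t))         # includes a backend key such as CPU
print(torch._C._dispatch_keys(wrapped))   # after this change the wrapper has no
                                          # backend keys, so a stray dispatch to a
                                          # CPU kernel errors instead of segfaulting
```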

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81471
Approved by: https://github.com/ezyang
2022-07-21 21:47:29 +00:00
Brian Hirsh
5bd7abf281 functionalization: fix for mutable ops with different type promotion rules (#81702)
fixes https://github.com/pytorch/pytorch/issues/81618

At some point it looks like this became broken (you can see the updated expect test looks better now, and the original was just returning a constant).

I also got a repro that was failing with an assert, which I confirmed now passes:

```
# imports added for completeness (the original snippet omitted them)
import torch
from functorch import functionalize
from torch.fx.experimental.proxy_tensor import make_fx

def foo(t, y):
    out_1 = torch.ones(1)
    return torch.add(t, y, out=out_1)

g = make_fx(functionalize(foo))(torch.tensor([1]), torch.tensor([1]))
print(g.code)
out1 = functionalize(foo)(torch.tensor([1]), torch.tensor([1]))
out2 = foo(torch.tensor([1]), torch.tensor([1]))
print(out1 == out2)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81702
Approved by: https://github.com/ezyang
2022-07-19 18:52:35 +00:00
Edward Z. Yang
c09617f98f Revert "Revert "Remove python key when setting functional tensor metadata (#81401)"" (#81456)
This reverts commit f2bb25a758.

For the gory story see https://github.com/pytorch/pytorch/issues/73537
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81456
Approved by: https://github.com/Chillee
2022-07-15 03:53:40 +00:00
Edward Z. Yang
cce2f0d0e4 Disable test_functionalization.py under torchdynamo (#81458)
Tracked at https://github.com/pytorch/pytorch/issues/81457

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81458
Approved by: https://github.com/anijain2305
2022-07-14 16:56:56 +00:00
PyTorch MergeBot
f2bb25a758 Revert "Remove python key when setting functional tensor metadata (#81401)"
This reverts commit b0199c06f6.

Reverted https://github.com/pytorch/pytorch/pull/81401 on behalf of https://github.com/clee2000 due to broke trunk win force_on_cpu tests https://github.com/pytorch/pytorch/runs/7329017706?check_suite_focus=true
2022-07-13 21:55:47 +00:00
Edward Z. Yang
b0199c06f6 Remove python key when setting functional tensor metadata (#81401)
Fixes https://github.com/pytorch/pytorch/issues/81365

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81401
Approved by: https://github.com/bdhirsh
2022-07-13 19:57:40 +00:00
Brian Hirsh
f2dcb11bac basic SymInt test for functionalization (#80418)
`expand` is one of a handful of ops with SymInt support today, so this PR gives a basic test that shows functionalization properly mapping `expand.SymInt` -> `expand_copy.SymInt`. I added the logic to handle this properly in https://github.com/pytorch/pytorch/pull/80251, but didn't add a test for it. (see the [code](https://github.com/pytorch/pytorch/pull/80251/files#diff-da7d91d9e59774e3ee8d120a0f97e52058b73125fd7edd55b5c2e71d4ce5629dR330))

I want to add a more comprehensive test that also shows something more E2E (using `PySymInt`s to avoid baking in shapes, running functionalization, and fx-tracing the output to show that functionalization ran properly), but I think it's currently blocked on some other work.

At least today, `FakeSymbolicTensor` doesn't play well with `make_fx` (though @Chillee raised the question of whether it should).
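
As a rough illustration (not the PR's test, and using concrete shapes rather than SymInts), the view-to-view_copy mapping can be observed like this, assuming `functorch.functionalize` with `remove="mutations_and_views"`:

```
import torch
from functorch import functionalize
from torch.fx.experimental.proxy_tensor import make_fx

def f(x):
    # expand is a view op; functionalization should trace it as expand_copy
    return x.expand(4, 3) + 1

g = make_fx(functionalize(f, remove="mutations_and_views"))(torch.randn(1, 3))
print(g.code)  # expect an expand_copy call rather than expand in the graph
```
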
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80418
Approved by: https://github.com/ezyang, https://github.com/albanD
2022-07-12 01:46:16 +00:00
Brian Hirsh
f84b30f790 fix functionalization regression introduced by ProxyTorchDispatchMode, migrate testing to make_fx (#80416)
`ProxyTorchDispatchMode` was added recently as part of `make_fx`, and it was silently causing the meta tensor calls used inside of functionalization to get baked into the graph. The regression wasn't caught because the functionalization tests in core don't use `make_fx`, and the tests in functorch aren't as comprehensive.

Now that `make_fx` is in core, I also ported the functionalization test infra over to use it, which would have caught the regression. This also makes the tests cleaner, since mode-based tracing lets us pick up factory functions in the trace output.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80416
Approved by: https://github.com/ezyang, https://github.com/albanD
2022-07-12 01:46:16 +00:00
Animesh Jain
1d90d6ee60 Setup for running PyTorch tests with TorchDynamo and skips for known failing tests (#80106)
@ezyang I am going to keep adding more skips in this PR for now. And once we have the CI running, I will replace with the appropriate decorators.

cc @mlazos , we should add those tests in test_ops.py in this PR as well

cc @jansel
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80106
Approved by: https://github.com/ezyang, https://github.com/jansel
2022-07-07 18:57:33 +00:00
Brian Hirsh
960758b0b7 fix overload ambiguity with functional ops; fix _foreach op grouping (#80556)
This should fix the last issue that @anijain2305 hit when running ResNet with TorchDynamo <> functionalization.

Today if you try to call an `OpOverloadPacket` from Python with some arguments, we will use the types of those arguments to perform overload resolution. With some functional variants of ops, this can be ambiguous.

Today this affects just one op: `_fused_moving_avg_obs_fq_helper`, although it would potentially affect e.g. `native_batch_norm` in the future.

Example:
```
# There are technically two overloads:
# torch.ops.aten._fused_moving_avg_obs_fq_helper.default (returns 2 outputs, mutates 4 of its inputs in place)
# torch.ops.aten._fused_moving_avg_obs_fq_helper.functional (returns 6 outputs, mutates none of its inputs)

# We pick the wrong one - no way to know that we should pick the functional one, just from the call site.
outs = torch.ops.aten._fused_moving_avg_obs_fq_helper(a, a, a, a, a, a, a, 1.0, 0, 1, 0)
# raises an error - tries to call the overload with only 2 returns
return _fused_moving_avg_obs_fq_helper_functional[5]
```

Specifically, functionalization will bake `_fused_moving_avg_obs_fq_helper.functional` into the graph, but when AOTAutograd tries to compile with TorchScript, it needs to strip the overload name (TS doesn't know how to parse overload names directly, so we have to drop the name and let it infer the right overload at runtime later), and it picks the wrong one.

The situation is pretty similar to inplace: `ops.aten.add` and `ops.aten.add_` represent two different `OverloadPacket` objects; they can't be overloads of the same op, because their schemas would be ambiguous (the alias annotations are different, but that isn't enough to disambiguate).
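
For reference, the packet-vs-overload distinction looks like this from Python (a generic illustration using `aten.add`, not the ambiguous op above):

```
import torch

x, y = torch.randn(3), torch.randn(3)
packet = torch.ops.aten.add            # OpOverloadPacket: overload resolved from arg types
overload = torch.ops.aten.add.Tensor   # OpOverload: pinned to a single schema
assert torch.equal(packet(x, y), overload(x, y))
```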

In this PR, I try to fix the situation in a pretty similar way to how we handle `inplace` in the data model: `inplace` ops get their own base operator name, but they are represented as a flag inside of `BaseOperatorName` in the data model.

Two other important changes that I made as part of this PR:

(1) Originally, there were ~100 different `*_functional` operators: e.g. we had operators named `resize.functional` and `zero.functional`. The `_functional` bit isn't actually necessary in most cases: it's only necessary for operators that **also** have a `SchemaKind.mutable` variant, where `_fused_moving_avg_obs_fq_helper` is the only op that fits that description today. So I removed the unnecessary notion of "functional" from those other ops. I also added a bunch of assertions to force this restriction.

I think that makes more sense in the long run, because it eliminates an unnecessary difference in the model. E.g. we don't have `add_.Tensor` and `add.Tensor_functional`. We just have `add_.Tensor` and `add.Tensor`.

(2) I noticed that we actually still weren't pairing up a bunch of `_foreach` operators correctly, because their input arguments were different (`self` vs. `tensors`). Since they're private APIs, I went ahead and changed the argument names directly so they get matched up. Before this PR, we were generating a separate `_foreach_add` and `_foreach_add.functional` variant in a bunch of cases that really did the same thing (but happened to have a different name for the first argument).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80556
Approved by: https://github.com/ezyang, https://github.com/albanD
2022-07-06 12:45:11 +00:00
Brian Hirsh
adf8060600 add a new alias key for functional to view op decompositions
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79615

Approved by: https://github.com/zou3519
2022-06-15 23:18:09 +00:00
PyTorch MergeBot
d2200e38f7 Revert "fix _unsafe_view schema to work with functionalization"
This reverts commit 46234df5f1.

Reverted https://github.com/pytorch/pytorch/pull/79148 on behalf of https://github.com/janeyx99 due to Broke 11.3 tests on trunk and on PR, see 46234df5f1
2022-06-10 13:09:00 +00:00
Brian Hirsh
46234df5f1 fix _unsafe_view schema to work with functionalization
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79148

Approved by: https://github.com/albanD
2022-06-10 01:45:04 +00:00
Brian Hirsh
92229adf0c add special handling for resize_() in functionalization pass
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77714

Approved by: https://github.com/ezyang
2022-05-26 16:15:44 +00:00
Brian Hirsh
e9c54ae1c2 functionalization: remove some unnecessary view_copies in inplace views
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77713

Approved by: https://github.com/ezyang
2022-05-26 16:15:44 +00:00
Brian Hirsh
7ff091fc4e move Functionalize dispatch key closer to backends
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77132

Approved by: https://github.com/ezyang, https://github.com/zou3519
2022-05-26 16:15:43 +00:00
Brian Hirsh
5cc258ec9e make block_diag composite compliant
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77716

Approved by: https://github.com/zou3519
2022-05-26 16:15:42 +00:00
Brian Hirsh
07e4533403 reland of as_strided support for functionalization; introduce as_strided_scatter
This reverts commit a95f1edd85.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78199

Approved by: https://github.com/ezyang
2022-05-24 22:40:44 +00:00
PyTorch MergeBot
a95f1edd85 Revert "as_strided support for functionalization; introduce as_strided_scatter"
This reverts commit 3a921f2d26.

Reverted https://github.com/pytorch/pytorch/pull/77128 on behalf of https://github.com/suo due to breaking ROCm tests on master (3a921f2d26). ROCm tests are no longer run on PRs; add a `ciflow/trunk` label if you want to run them.
2022-05-24 20:19:12 +00:00
Brian Hirsh
2eea5eff62 functionalization: fix bug with multiple views of same base
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77129

Approved by: https://github.com/ezyang
2022-05-24 19:56:43 +00:00
Brian Hirsh
3a921f2d26 as_strided support for functionalization; introduce as_strided_scatter
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77128

Approved by: https://github.com/ezyang
2022-05-24 18:20:31 +00:00
Brian Hirsh
7ddc1425ff functionalization fix for inplace comparison ops
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77125

Approved by: https://github.com/ezyang
2022-05-24 18:20:31 +00:00
Brian Hirsh
22d566acda functionalization fix for inplace_view ops
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77126

Approved by: https://github.com/ezyang
2022-05-24 18:20:30 +00:00
Brian Hirsh
0161e9eb00 [test] attempt to functionalize ops with mutable positional-only args
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76320

Approved by: https://github.com/ezyang
2022-05-19 18:50:34 +00:00
Edward Z. Yang
b5bc954a71 Fix optional dtype/layout/memory_format pycall; fix memory format
Double-header bug fix:

- As reported by jansel, dtypes are still showing up as integers
  when the schema is an optional dtype.  This is simple enough to
  fix and I added a test for it.  But while I was at it...

- I noticed that the THPMemoryFormat_new idiom with an "unused" name
  doesn't actually work: the repr of the returned memory format
  object is wrong, and this shows up when we try to log the args/kwargs.
  So I fixed memory format to do it properly, along with everything
  else.

Fixes https://github.com/pytorch/pytorch/issues/77135

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77543

Approved by: https://github.com/albanD, https://github.com/jansel
2022-05-16 16:46:08 +00:00
Brian Hirsh
47dd092bae add a new at::lift operator, fix torch.tensor for functionalization
This reverts commit 85bd65a880.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77285

Approved by: https://github.com/albanD, https://github.com/ezyang
2022-05-12 13:31:19 +00:00
PyTorch MergeBot
85bd65a880 Revert "[test] try to fix torch.tensor for functionalization"
This reverts commit 9edee09ed6.

Reverted https://github.com/pytorch/pytorch/pull/76319 on behalf of https://github.com/janeyx99
2022-05-11 18:48:42 +00:00
Brian Hirsh
9edee09ed6 [test] try to fix torch.tensor for functionalization
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76319

Approved by: https://github.com/ezyang
2022-05-11 17:27:34 +00:00
Edward Z. Yang
f2eed9400d Register PrimTorch refs as decompositions.
For the most part, PrimTorch refs have the same signature as their
ATen equivalents.  I modify most PrimTorch refs to register themselves
as decompositions, using the prim name they wrap to find the aten name
(except for a few cases where the prim/aten names mismatch).  There are
some exclusions, falling into one of two categories:

- The torch equivalent was already implemented as a CompositeImplicitAutograd
  decomposition in C++

- The ref doesn't support enough features (e.g., the real deal has more
  kwargs / overloads than are currently implemented)

PrimTorch refs are written as a single function that supports all
overloads, and this style is convenient for cases where we have a bundle
of overloads for what morally is a single overload with a Union type
on an argument (which we ought to have supported in
native_functions.yaml but blah); to support registering a single decomp
for all the overloads, we modify register_decomposition to register
to ALL overloads if you pass it an overload packet.  This is technically
BC breaking but no tests started failing because of it.
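
A hedged sketch of what registering one decomposition for every overload of an op might look like. It assumes the `torch._decomp.register_decomposition` API (which may differ from what existed at the time of this commit) and uses `aten.silu` with a private table purely as an example:

```
import torch
from torch._decomp import register_decomposition

aten = torch.ops.aten
my_decomp_table = {}

# Passing the OpOverloadPacket (aten.silu) rather than a specific overload
# should register the decomposition for every overload in the packet.
@register_decomposition(aten.silu, my_decomp_table)
def silu_decomp(x):
    return x * torch.sigmoid(x)

print(list(my_decomp_table.keys()))  # expect one entry per registered silu overload
```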

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76835

Approved by: https://github.com/Chillee, https://github.com/mruberry
2022-05-06 20:11:45 +00:00
Brian Hirsh
40d96f0afd Revert "functionalization: add support for zero_()"
This reverts commit 7d44b3675b.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76375

Approved by: https://github.com/datumbox, https://github.com/albanD
2022-04-26 19:27:27 +00:00
Brian Hirsh
640ce6bc9b functionalization bugfix: using owning type when unwrapping tensors
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76125

Approved by: https://github.com/ezyang
2022-04-25 22:00:19 +00:00
Brian Hirsh
ea5209c9fd functionalization: add native fill() op
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76084

Approved by: https://github.com/ezyang
2022-04-25 21:34:16 +00:00
Brian Hirsh
7d44b3675b functionalization: add support for zero_()
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75913

Approved by: https://github.com/albanD
2022-04-25 21:31:48 +00:00
soulitzer
81722f6630 Fix autograd.functional tests to not fail with logging tensor
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76057

Approved by: https://github.com/albanD
2022-04-20 20:32:40 +00:00
albanD
cd0591dff3 Change default TLS behavior in dispatch to favor is-a style
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75827

Approved by: https://github.com/ezyang
2022-04-20 17:32:29 +00:00
Brian Hirsh
7ca03dcdfc avoid some unnecessary view_copy calls
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75819

Approved by: https://github.com/ezyang
2022-04-18 20:38:55 +00:00
Brian Hirsh
204df13d42 teach ivalue about List[Optional[Tensor]], fix fallbacks
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75716

Approved by: https://github.com/ezyang
2022-04-18 20:05:26 +00:00
Brian Hirsh
4c7b4b5770 fix out= op handling for functionalization
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75818

Approved by: https://github.com/ezyang
2022-04-18 20:05:21 +00:00
Brian Hirsh
cb17973a2b split out functionalization codegen to use view_copy operators
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75302

Approved by: https://github.com/ezyang
2022-04-18 20:05:21 +00:00
Brian Hirsh
9429dbb434 make functionalization work better with subclasses
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73441

Approved by: https://github.com/ezyang, https://github.com/albanD
2022-04-04 15:33:27 +00:00
Jane Xu
b40dbdc49f Fix test ownership lint (#71554)
Summary:
I noticed after creating https://github.com/pytorch/pytorch/issues/71553 that the test ownership lint was not working properly.

This fixes my egregious mistake and fixes the broken lints.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/71554

Reviewed By: malfet

Differential Revision: D33690732

Pulled By: janeyx99

fbshipit-source-id: ba4dfbcd98038e4afd63e326832ae40935d2501e
(cherry picked from commit 1bbc3d343a)
2022-01-21 18:24:42 +00:00
Brian Hirsh
0de7a618a3 functionalization: update is_aliased() logic (#68881)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/68881

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D32647614

Pulled By: bdhirsh

fbshipit-source-id: 6bec50d3e54419d1707d0b6c0c6729bcc1ced1f0
2021-12-02 09:19:12 -08:00
Brian Hirsh
53bfb00ee1 [bugfix] TensorList args in functionalization pass (#68395)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68395

At the time that I wrote the pass, I thought that `c10::TensorList` and `c10::List<Tensor>` were the same thing. But it looks like a `TensorList` is actually an `ArrayRef<Tensor>`. This led to a nasty bug when I tried to add conditional functionalization to `block_diag`, where in the boxed kernel, I would:

(1) unwrap the first `IValue` by calling `.toTensorList()` (this actually returns a `List<Tensor>`, not a `TensorList`).
(2) call `TensorList to_functional_tensor(List<Tensor>)` to get out a `TensorList` with the functionalized tensors
(3) wrap that back into an `IValue` and put it on the stack.

Somewhere in that sequence of operations, something bad happens and we segfault. Fixing up the signature of `to_functional_tensor` to be `List<Tensor> to_functional_tensor(List<Tensor>)` fixes the bug. I have a feeling that there's a latent TensorList-related bug in the boxing/unboxing logic that made this worse, but I'm okay sticking with my narrow fix for now.

Additionally tested by running `pytest test/test_ops.py test/test_vmap.py -v -k block_diag` on top of this PR: https://github.com/pytorch/functorch/pull/235

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D32448258

Pulled By: bdhirsh

fbshipit-source-id: 3b2b6c7cd5e4c29533d0502f24272d826bfe03c1
2021-11-17 15:50:30 -08:00