Commit Graph

39 Commits

Author SHA1 Message Date
cyy
77f2883c41 [Reland2] fix missing-prototypes warnings in torch_cpu (Part 4) (#102228)
This PR relands the changes introduced in PR https://github.com/pytorch/pytorch/pull/100849. The old PR turned the nnc_* functions static. We now add declarations for them and hope that internal builds will pass.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102228
Approved by: https://github.com/albanD
2023-06-02 22:04:44 +00:00
PyTorch MergeBot
32ce06a5ab Revert "[Reland] fix missing-prototypes warnings in torch_cpu (Part 4) (#101949)"
This reverts commit 4f2c007a1b.

Reverted https://github.com/pytorch/pytorch/pull/101949 on behalf of https://github.com/osalpekar due to As noted in @izaitsevfb's comment, we are still seeing linker errors, this time due to `nnc_prepacked_linear_clamp_run` being made a static function. ([comment](https://github.com/pytorch/pytorch/pull/101949#issuecomment-1560226880))
2023-05-23 22:53:47 +00:00
cyy
4f2c007a1b [Reland] fix missing-prototypes warnings in torch_cpu (Part 4) (#101949)
This PR relands the changes introduced in PR #100849. The old PR turned `nnc_aten_embedding` into a static function; however, it is actually used in torch/csrc/jit/tensorexpr/operators/misc.cpp.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101949
Approved by: https://github.com/albanD
2023-05-22 10:53:07 +00:00
PyTorch MergeBot
498c34e8e8 Revert " fix missing-prototypes warnings in torch_cpu (Part 4) (#100849)"
This reverts commit c2f28d1c1d.

Reverted https://github.com/pytorch/pytorch/pull/100849 on behalf of https://github.com/izaitsevfb due to fails internal Meta builds, including fbcode and android, see D46009888: ld.lld: error: undefined symbol: nnc_aten_embedding ([comment](https://github.com/pytorch/pytorch/pull/100849#issuecomment-1555105800))
2023-05-19 19:05:15 +00:00
cyy
c2f28d1c1d fix missing-prototypes warnings in torch_cpu (Part 4) (#100849)
This PR fixes more missing-prototypes violations in the torch_cpu source, following PRs #100053, #100147, and #100245.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100849
Approved by: https://github.com/albanD
2023-05-18 03:49:45 +00:00
cyy
799521fae5 Fixes 96676 (#96714)
Fixes #96676

PR #95942 changed some function implementations to take const parameters by const reference. However, GetBackendDevice was missed and retains the old signature. This quick fix resolves the type mismatch.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96714
Approved by: https://github.com/antoniojkim, https://github.com/Skylion007
2023-03-14 19:00:59 +00:00
cyy
d0e4ca233e some reference and move fixes (#95942)
This PR introduces several modifications:
1. Const function parameters that can be passed by reference now take a const reference.
2. More opportunities for passing by value are identified and changed accordingly.
3. Some use-after-move errors are fixed.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95942
Approved by: https://github.com/Skylion007
2023-03-10 03:44:09 +00:00
cyy
1a32db15e7 Some performance fixes (#94034)
Applies some performance fixes

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94034
Approved by: https://github.com/Skylion007
2023-02-04 02:17:48 +00:00
Jiewen Tan
3b8245ab12 [LTC] Make ComputePostOrder accept const T pointers (#88773)
Summary:
Since `c10::ArrayRef` now supports `c10::ArrayRef<const T>`, let's restore `ComputePostOrder` to accept `const Node*` again, which is more suitable for the context of the given helpers.

Test Plan:
CI.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88773
Approved by: https://github.com/JackCaoG
2022-11-10 18:34:19 +00:00
Antonio Kim
d37dc6f698 Make LazyGraphExecutor extensible (#87218)
Add `LazyGraphExecutor` to the backend interface so that it is extensible by a vendor backend.

I've made some preliminary methods virtual. I'm not sure whether we want to make all methods in `LazyGraphExecutor` virtual.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87218
Approved by: https://github.com/wconstab, https://github.com/alanwaketan
2022-10-21 14:28:14 +00:00
Brian Hirsh
4a2d2e5e40 Change API type Tensor[] for structured kernels. (#73350)
Partially fixes: #66328

This PR:
- adds support for `ITensorList` to the dispatcher for:
  - computing the dispatch key
  - boxing and unboxing `ITensorList`
- modified the codegen for structured kernels:
  - codegen APIs use `ITensorList` instead of `ArrayRef<Tensor>`

**Changes summary:**

- Signature changes due to the different APIs:
  - dispatcher API (e.g. `BatchingRegistrations.cpp`)
  - C++ API (e.g. `TensorShape.cpp`)
- Miscellaneous functions used by codegen'd functions (e.g. `FunctionalTensorWrapper.*`)
- Dispatcher changes for handling `ITensorList` correctly (e.g. `DispatchKeyExtractor.h`)
- Signature changes of `at::cat` due to the need for `const` inside `TensorBody.h`
- Forward declarations of `ITensorList` (e.g. `MethodOperators.h`)
- Codegen changes, special casing structured kernels (e.g. `gen.py`)

**Short description of structured kernels special casing:**

I introduced, mainly, 5 types of changes to the codegen for generating code depending on
whether the kernel is structured or not:

1. Added a `structured_type_override` flag to the `argument_type` function definition of
the affected APIs (mainly the dispatcher and C++ APIs).
  - `api/cpp.py`, `api/dispatcher.py`, `api/native.py`
2. Added a `structured_type_override` member to the signature
classes (e.g. `CppSignature`), since `FunctionSchema` doesn't really know whether the
function is structured or not
  - `api/types.py`
3. Added a `part_of_structured_group` to the `NativeFunction` class, which is just a
convenience function to forward to `structured_type_override` wherever needed
  - `model.py`
4. Appropriately changed the rest of the codegen, whenever it used either the signature
classes or the `arguments` function directly
5. Added a check for `const ITensorList&` type wherever there was a check for `TensorList`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73350
Approved by: https://github.com/bdhirsh
2022-09-26 21:46:38 +00:00
Antonio Kim
7c3102f3f0 Add ShouldSyncTensor interface (#84418)
Adds a `ShouldSyncTensor` interface to allow for output pruning when a vendor does not support retrieving the value of a certain output.

CC: @wconstab @JackCaoG @Krovatkin
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84418
Approved by: https://github.com/wconstab
2022-09-07 05:03:02 +00:00
Nikolay Korovaiko
4aac42cc98 [LT] Add a new backend interface [DUP of the original] (#81662)
This is a dup of https://github.com/pytorch/pytorch/pull/76517 which is failing because Jiewen needs to resign the CLA.

Summary:
This commit introduces a new pair of BackendImplInterface methods: GetDefaultDeviceOrdinal
and SetDefaultDeviceOrdinal. They allow a backend to specify its own default
device ordinal, e.g., 1 for XLA and 0 for CUDA/CPU.

Test Plan:
./build/bin/test_lazy --gtest_filter=BackendDeviceTest.*

ghstack-source-id: b4adfef49253e51bffbbf40d356188a92c98994d
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76517

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81662
Approved by: https://github.com/JackCaoG, https://github.com/wconstab
2022-07-19 01:15:22 +00:00
Nikolay Korovaiko
4b54a946a8 add MetricReport API (#79678)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79678
Approved by: https://github.com/JackCaoG, https://github.com/wconstab
2022-07-06 20:06:17 +00:00
Nikolay Korovaiko
7bf1b1dc31 switch computation to computationPtr (#79491)
Make `ExecuteComputation` take a pointer to `Computation` so it can be overridden and extended.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79491
Approved by: https://github.com/JackCaoG, https://github.com/wconstab
2022-06-23 23:48:16 +00:00
Michael Suo
30fb2c4aba [lint] autoformat test/cpp and torch/csrc
Let's have some fun.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78828

Approved by: https://github.com/ezyang
2022-06-11 21:11:16 +00:00
Antonio Kim
55be35ae39 Fix 'Code below assumes there is at least one tensor arg' assumption (#76917)
Previously, when codegening ops like `zeros_` or `ones_`, we'd hit a `Code below assumes there is at least one tensor arg` error. This check is not entirely correct, which is what causes the error to be thrown. There are ops like the ones mentioned that pass in a `device` parameter that can be used in place of the "first tensor".

CC: @wconstab @desertfire @henrytwo @ke1337
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76917
Approved by: https://github.com/desertfire
2022-05-18 17:58:47 +00:00
Antonio Kim
c218263207 [LTC] Mark Step Indicator (#76840)
Proposed solution for #76826

Basically, this adds a context which is only "active" when called within a mark step. Any backend can then use it to check whether it is inside a mark-step context.

I've also added an example warning in the TS backend so that we now see the following:
```python
>>> import torch
>>> import torch._lazy
>>> import torch._lazy.ts_backend
>>> torch._lazy.ts_backend.init()
>>> a = torch.tensor([1, 2, 3, 4], device="lazy")
>>> b = torch.tensor([5, 6, 7, 8], device="lazy")
>>> c = a + b
>>> c
[W ts_backend_impl.cpp:187] Compile outside of mark step
tensor([ 6,  8, 10, 12], device='lazy:0')
>>> d = a * b
>>> torch._lazy.mark_step()
>>> d
tensor([ 5, 12, 21, 32], device='lazy:0')
```
Though it is mainly an example, and I will be happy to remove it if this warning is not desired.

Fixes #76826

CC: @wconstab @desertfire @henrytwo @ke1337

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76840
Approved by: https://github.com/desertfire
2022-05-16 15:47:29 +00:00
Wonjoo Lee
28dfed962a Remove deprecated string torch::lazy::BackendDevice constructor (#76506)
Summary:
Remove deprecated string torch::lazy::BackendDevice constructor, re-landing part of https://github.com/pytorch/pytorch/pull/76264.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76506

Reviewed By: dagitses

Differential Revision: D35993059

Pulled By: alanwaketan

fbshipit-source-id: c859331919447ecfa56a9a57a3324305b7904fc2
(cherry picked from commit 2e99b1533425693a6575405f8806551efb2002e4)
2022-05-03 06:10:01 +00:00
Antonio Kim
f3f327e103 Decouple LTC from TS Backend using Lazy IR Builder
Next stage of breaking up https://github.com/pytorch/pytorch/pull/74710

An IR builder class is introduced to decouple core lazy tensors from explicit usage of `TsNode`.

Requires https://github.com/pytorch/pytorch/pull/75324 to be merged in first.

**Background**
- there are ~5 special ops used in lazy core but defined as `: public {Backend}Node` (DeviceData, Expand, Scalar, ...)
- we currently require that all nodes derive from {Backend}Node, so that backends can make this assumption safely
- it is hard to have shared 'IR classes' in core/ because they depend on 'Node'

**Motivation**

1. avoid copy-paste of "special" node classes for each backend
2. in general decouple and remove all dependencies that LTC has on the TS backend

**Summary of changes**
- new 'IRBuilder' interface that knows how to make 5 special ops
- move 'special' node classes to `ts_backend/`
- implement TSIRBuilder that makes the special TS Nodes
- new backend interface API to get the IRBuilder
- update core code to call the builder

CC: @wconstab @JackCaoG @henrytwo

Partially Fixes #74628

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75433
Approved by: https://github.com/wconstab
2022-04-28 02:07:02 +00:00
Jiewen Tan
a28b132bc2 Revert D35860266: [pytorch][PR] Update torch::lazy::BackendDevice to have a new default ordinal
Test Plan: revert-hammer

Differential Revision:
D35860266 (f9d07ae644)

Original commit changeset: 554ebe16a068

Original Phabricator Diff: D35860266 (f9d07ae644)

fbshipit-source-id: 325c54aa2e87e51134115213352b3d33a81b7edf
(cherry picked from commit bbd74bf34a534d1b87aadff9790038e3dbbfa9c8)
2022-04-27 18:11:24 +00:00
Henry Tu
a39a2c3969 Enable LTC Input/Output Mapping (#75828)
Summary:
This PR enables Input/Output aliasing for Lazy Tensor Core. `SetUpAlias` is a virtual function that can be overridden in a vendor's custom `LoweringContext` implementation.

The return type of `LoweringContext::GetResultShape` has also been updated to return a `c10::optional` value, since `GetResultShape` isn't currently implemented for the TorchScript backend.

The changes here mirror the interface used by `torch_xla`: https://github.com/pytorch/xla/blob/master/torch_xla/csrc/tensor.cpp#L1548-L1549

cc: antoniojkim ke1337 wconstab silvasean

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75828

Reviewed By: Krovatkin

Differential Revision: D35952593

Pulled By: wconstab

fbshipit-source-id: e20b11e44e0e1beda1b1c47aa3a8b611afd97b7f
(cherry picked from commit bcbc9ef01ef8eb84667e5c42edc10d38d5d78395)
2022-04-27 17:06:01 +00:00
Wonjoo Lee
f9d07ae644 Update torch::lazy::BackendDevice to have a new default ordinal (#76264)
Summary:
Fixes https://github.com/pytorch/xla/issues/3490. Updates `torch::lazy::BackendDevice` with changes below:

1. Remove the no-op string constructor.
2. Update default ordinal to `-1`.
3. Add an `is_valid` function to check if `ordinal` is valid/non-default (`ordinal >= 0`).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76264

Reviewed By: mrshenli

Differential Revision: D35860266

Pulled By: alanwaketan

fbshipit-source-id: 554ebe16a0683d37b00270c4f35163bf690bfe28
(cherry picked from commit b941d10e8545dfecfb34e4d5c24a29a1cc49bc4b)
2022-04-25 23:57:18 +00:00
Antonio Kim
2c2c13d21b Decouple Lazy Node Shape Cache (#75324)
Summary:
Next stage of breaking up https://github.com/pytorch/pytorch/pull/74710

Move shape cache implementation to the backend interface. Also, clean up some of the hashing logic in the base node class.

CC: wconstab JackCaoG henrytwo

Partially Fixes https://github.com/pytorch/pytorch/issues/74628

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75324

Reviewed By: anjali411

Differential Revision: D35730823

Pulled By: wconstab

fbshipit-source-id: cf6fa326319b9324e5f422a78817b6fb5bf7e9b8
(cherry picked from commit faec5043df56639e2fd23de2d91ae796e4f3df70)
2022-04-21 17:27:05 -07:00
Will Constable
3547f20872 Land remaining parts of Torchscript Lazy Tensor backend (#74111)
Summary:
Also enables bazel build to run lazy codegen.  Bazel (oss) build feeds off the same filelists as cmake/buck (build_variables.bzl), so enabling it is easier than keeping it disabled.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74111

Test Plan: Run CI and verify test_lazy_ops is running via OSS cmake builds

Reviewed By: bdhirsh

Differential Revision: D34772403

fbshipit-source-id: 8a63f58b9536e6ac1be530667932176ef2549496
(cherry picked from commit e807ffb1918853d10b924fdc24f85ee5b1a39021)
2022-03-22 23:14:03 +00:00
Will Constable
d67a265881 Sync lazy_tensor_staging to master (#74311)
Summary:
This merges changes that have already been reviewed/landed onto lazy_tensor_staging branch.  It combines changes from multiple PRs into one diff.

updated from lazy_tensor_staging on 3/16

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74311

Test Plan:
Run CI to ensure compilation on various platforms
Run unit tests on lazy_tensor_staging branch with source version of all these diffs

Reviewed By: desertfire

Differential Revision: D34929235

fbshipit-source-id: babbc3bbeabc5b8107ee9284ed7765887a148622
(cherry picked from commit d91577a6557343ec536f6859e4808ec1a8a9b685)
2022-03-17 16:08:57 +00:00
Will Constable
b4173b80b7 Use intrusive_ptr for LazyTensor (#73445)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73445

Refactors the whole codebase to use LazyTensorPtr (defined as c10::intrusive_ptr) to enable XLA to use a derived class XlaLazyTensor and override functionality.

This PR is just the first step; we will need to add a factory class that XLA can override in its backend to actually hook up its derived tensor class.

Parallel PR on lazy_tensor_staging: #73429

Test Plan: tested via lazy_tensor_staging test_ptltc and torchbench and CI

Reviewed By: ezyang

Differential Revision: D34481918

fbshipit-source-id: 01176b127df6b79039aa1bc57bc6da5505161f87
(cherry picked from commit 52b9ae4e22d2703d44c6436311d79d40bd62c6aa)
2022-03-03 06:27:35 +00:00
Jiewen Tan
338eb1b2b3 [LTC] Export torch::lazy::GetBackendDevice() (#70963)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70963

This commit exports torch::lazy::GetBackendDevice().

Test Plan: CI in the lazy_tensor_staging branch.

Reviewed By: wconstab

Differential Revision: D33468938

Pulled By: alanwaketan

fbshipit-source-id: f65599c9238bf6b4f4ffbd5194befdc267272831
2022-01-07 13:13:18 -08:00
Jiewen Tan
87139d8532 [LTC] Sync LazyGraphExecutor and LazyTensor with the staging branch (#70867)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70867

This commit syncs LazyGraphExecutor and LazyTensor with the staging branch's
latest changes.

Test Plan: CI in the lazy_tensor_staging branch.

Reviewed By: wconstab, desertfire

Differential Revision: D33440005

Pulled By: alanwaketan

fbshipit-source-id: 0dd72643dbf81a87fc4b05019b6564fcb28f1979
2022-01-07 01:51:53 -08:00
Jiewen Tan
ab57f6d12c [LTC] Upstream utils to extract BackendDevice from at::Tensor (#70069)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70069

This commit upstreams utils to extract BackendDevice from at::Tensor.

Test Plan: ./build/bin/test_lazy --gtest_filter=BackendDeviceTest.GetBackendDevice*

Reviewed By: samdow

Differential Revision: D33293160

Pulled By: alanwaketan

fbshipit-source-id: 78647239f90b4d04adce84ae6022b8983ad30c09
2021-12-23 12:42:03 -08:00
Michael Suo
795af1578c Revert D33172665: [LTC] Upstream utils to extract BackendDevice from at::Tensor
Test Plan: revert-hammer

Differential Revision:
D33172665 (121d067999)

Original commit changeset: b334ee358ea7

Original Phabricator Diff: D33172665 (121d067999)

fbshipit-source-id: 8bff43cddfc5d30483ec5cea8eff037aab9d1cfa
2021-12-22 21:12:49 -08:00
Jiewen Tan
121d067999 [LTC] Upstream utils to extract BackendDevice from at::Tensor (#70069)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70069

This commit upstreams utils to extract BackendDevice from at::Tensor.

Test Plan: ./build/bin/test_lazy --gtest_filter=BackendDeviceTest.GetBackendDevice*

Reviewed By: wconstab

Differential Revision: D33172665

Pulled By: alanwaketan

fbshipit-source-id: b334ee358ea7b031bbffb0a16fa634715dba83f5
2021-12-22 18:15:45 -08:00
Jiewen Tan
e02d836cb2 [LTC] Upstream LTCTensorImpl (#70062)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70062

This commit upstreams LTCTensorImpl from the lazy_tensor_staging branch.
It inherits from c10::TensorImpl and thus manages the lifetime/storage
of LazyTensor.

Test Plan: ./build/bin/test_lazy --gtest_filter=LazyTensorImplTest.*

Reviewed By: desertfire

Differential Revision: D33171186

Pulled By: alanwaketan

fbshipit-source-id: 6af9f91cc7c7e997f120cb89a7bcd6785c03ace0
2021-12-22 03:21:52 -08:00
Bin Bao
0bbe21b172 [LT] Upstream more util functions (#69098)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69098

Add the following utils: helpers, ir_dump_util, and
tensor_util. Some of the util functions might be better organized by
grouping them into different files, but we can leave that for later.

Test Plan: Imported from OSS

Reviewed By: alanwaketan

Differential Revision: D32758480

Pulled By: desertfire

fbshipit-source-id: 2a0707879f0c49573380b4c8227a3c916c99bf9a
2021-12-04 08:42:35 -08:00
Jiewen Tan
e6c435bf96 [LTC] Upstream helpers for c10::Device <=> BackendDevice (#69064)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69064

This commit upstreams helpers for converting a c10::Device to
BackendDevice and vice versa.

Test Plan: ./build/bin/test_lazy --gtest_filter=BackendDeviceTest.FromAten:BackendDeviceTest.ToAten

Reviewed By: wconstab

Differential Revision: D32732607

Pulled By: alanwaketan

fbshipit-source-id: 0dd233d37a4a30fc4b22dba322ddd85d4cb3635b
2021-12-01 12:15:32 -08:00
Bin Bao
b83e8d560b [LT] Sync LTC branch changes on torch/csrc/lazy/core (#69012)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69012

Some changes to torch/csrc/lazy/core were done on the
lazy_tensor_staging branch (https://github.com/pytorch/pytorch/pull/68427).
Merge those back into the trunk.

Test Plan: Imported from OSS

Reviewed By: wconstab

Differential Revision: D32708696

Pulled By: desertfire

fbshipit-source-id: e54b978f2bdb9c7db27880f60246fdf1e8b41019
2021-11-30 07:09:15 -08:00
Jiewen Tan
cd4e31ff21 [LTC] Add some comments to BackendDevice() (#68156)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68156

[skip ci]

Test Plan: Imported from OSS

Reviewed By: wconstab

Differential Revision: D32346302

Pulled By: alanwaketan

fbshipit-source-id: 06de6afbe2f937511abce485b24cec0a85bfbe97
2021-11-11 12:43:56 -08:00
Will Constable
d6e6064efc [LT] Upstream backend interfaces (#67927)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67927

BackendData - represents 'tensor data' in opaque backend storage
LoweringContext - interface for performing backend-specific IR lowering
BackendImplInterface - interface for lazy tensors backends to implement

Reorgs backend-related files into lazy/backend subdir

includes a few small fixes, which were made on lazy_tensor_staging but need to be back-ported to master.

Test Plan: used by lazy_tensor_staging branch

Reviewed By: desertfire

Differential Revision: D32142032

fbshipit-source-id: 828c717bcd0d511876e64ad209b50f7bfb10cec5
2021-11-10 12:55:31 -08:00
Jiewen Tan
6011c35a79 [LTC] Upstream class BackendDevice (#68027)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68027

This commit upstreams class BackendDevice to master, which is a backend-specific
representation of the actual hardware, for instance, CPU, GPU, or
TPU.

This concept is important for backends like XLA, which need to tell the
actual hardware type apart from the c10::DeviceType::Lazy virtual device during
both IR construction and lowering.

Test Plan: ./build/bin/test_lazy --gtest_filter=BackendDeviceTest.*

Reviewed By: wconstab

Differential Revision: D32261838

Pulled By: alanwaketan

fbshipit-source-id: 579c3fc5f9da7847c887a383c6047e8ecb9cc5bc
2021-11-10 07:05:43 -08:00