Previously, the process would crash or certain elements would be silently ignored. Now an InvalidArgument error is raised instead.
PiperOrigin-RevId: 384844020
Change-Id: Iba44417e383bdd0e1abc4012bfca83b2377dd335
One step in getting rid of deprecated and brittle include functionality in the tablegen rules.
Note that this doesn't move everything to td_library yet, but uses td_library where adding a new library was the easiest way to keep everything building.
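For reference, a td_library target looks roughly like the following Starlark sketch (illustrative names, not the actual targets touched here):
```
load("@llvm-project//mlir:tblgen.bzl", "td_library")

td_library(
    name = "ExampleOpsTdFiles",
    srcs = ["ExampleOps.td"],
    includes = ["include"],
    deps = ["@llvm-project//mlir:OpBaseTdFiles"],
)
```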
PiperOrigin-RevId: 384840190
Change-Id: I0c420024421f74ac98e3bf61577439544372ec67
- Added the delimiter between shape dimensions, matching the EagerTensor case
- Updated the associated test to include the case from the original bug report; without this fix, it now fails with an error
PiperOrigin-RevId: 384824827
Change-Id: Iada04d329348365df17d5e971a5ab085ff90f85b
TFLM issues should now be created at github.com/tensorflow/tflite-micro
PiperOrigin-RevId: 384822209
Change-Id: I3a64986eabe3d60d9f66672dd266799cb9646875
The remat XLA tests run on CPU, GPU, and TPU. The tests work by calling the experimental_get_compiler_ir API to trigger XLA compilation and retrieve the HLO proto string. From the HLO proto string, the memory usage is calculated.
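A minimal sketch of the pattern the tests use (illustrative function and shapes; the real tests additionally compute memory usage from the returned proto):
```
import tensorflow as tf

@tf.function(jit_compile=True)
def f(x):
  return tf.math.tanh(x) * 2.0

x = tf.ones([16, 16])
# experimental_get_compiler_ir returns a callable that produces the IR for
# the requested stage; "hlo_serialized" yields the HLO proto as bytes.
hlo_proto = f.experimental_get_compiler_ir(x)(stage="hlo_serialized")
```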
PiperOrigin-RevId: 384821536
Change-Id: If5d6325c05b2d192b040ae2d060e9df0579477a6
The remat XLA tests run on CPU, GPU, and TPU. The tests work by calling the experimental_get_compiler_ir API to trigger XLA compilation and retrieve the HLO proto string. From the HLO proto string, the memory usage is calculated.
PiperOrigin-RevId: 384818305
Change-Id: I8324df150263158e33ef47693e94f5632522b43b
XLA doesn't lower tf.control_dependencies into the HLO graph. Add a fake data dependency on the upstream gradients so that recomputation happens after them.
Also add memory test cases.
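Roughly, the workaround looks like this sketch (an illustrative helper, not the exact code in this change):
```
import tensorflow as tf

def force_after(tensor, upstream_grad):
  # tf.control_dependencies is not lowered into the HLO graph, so create a
  # real data dependency instead: `tensor` now consumes `upstream_grad`
  # without its value changing. (A sufficiently aggressive optimizer could
  # fold this away; the point is only to illustrate the idea.)
  zero = tf.cast(tf.reduce_sum(upstream_grad) * 0, tensor.dtype)
  return tensor + zero
```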
PiperOrigin-RevId: 384816813
Change-Id: I579e51f9ff1f845dd75fd355d46d1d913e64a200
This CL removes previous support for periodically exporting the performance model information to a file and improves the tfstreamz/ integration to avoid the need to maintain static state.
In addition, this CL extends gauges with support for int64 / string / bool futures -- that is, values that are computed at the time they are requested.
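The gauge change lives in the C++ monitoring code; the sketch below is a Python illustration of the idea with hypothetical names, not the tfstreamz API:
```
import time

start_time = time.time()

class LazyGauge:
  """Hypothetical illustration: a gauge cell that stores a callable.

  Rather than keeping a cached int64/string/bool up to date, the metric
  value is computed when it is read.
  """

  def __init__(self, compute):
    self._compute = compute

  def value(self):
    return self._compute()  # evaluated at collection time

uptime_seconds = LazyGauge(lambda: int(time.time() - start_time))
print(uptime_seconds.value())
```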
PiperOrigin-RevId: 384812318
Change-Id: I3f82fc8f4b1ec5a1e0d3ac2a6d45b872f0ad815b
- Added the delimiter between shape dimensions, matching the EagerTensor case
- Updated the associated test to include the case from the original bug report; without this fix, it now fails with an error
PiperOrigin-RevId: 384810264
Change-Id: I222bb5b55197155003c3a8a2110b4a2196252f6a
XLA doesn't lower tf.control_dependencies into the HLO graph. Add a fake data dependency on the upstream gradients so that recomputation happens after them.
Also add memory test cases.
PiperOrigin-RevId: 384805186
Change-Id: I17396d0a24723d597ea11ba7843117075298770a
With TFLM moving to its own repository, we should no longer have a dependency
from TfLite to TFLM.
https://github.com/tensorflow/tflite-micro/pull/275 adds a TFLM-specific
implementation of op_macros.h in the TFLM tree and we can now remove all the
TFLM code from the tensorflow tree.
PiperOrigin-RevId: 384787132
Change-Id: I4d9babfa45413dd41309f30f11f8f732aec9b6fa
Custom devices see ops with resource inputs placed on them after cl/351603218.
PiperOrigin-RevId: 384786315
Change-Id: Ic6931c3049bb6fda068676c20a7875158f61417e
When running shape functions, some functions (such as `MutableHashTableShape`)
produce extra output information in the form of a `ShapeAndType` struct. The
shapes embedded in this struct are owned by an inference context that is
cleaned up almost immediately; if the upstream code attempts to access this
shape information, it can trigger a segfault.
`ShapeRefiner` already mitigates this for normal output shapes by cloning them
(thus placing the newly created shapes under the ownership of an inference
context that will not be destroyed), but the same was not done for shapes and
types. This commit fixes that by applying the same cloning logic to output
shapes and types.
PiperOrigin-RevId: 384761124
Change-Id: I07c0c42d29dfbb55bfa13ec1f09ef825fb0a1a1d
Towards removing a typedef. This change replaces uses of the typedef with the
underlying type; it doesn't cover all cases.
PiperOrigin-RevId: 384758927
Change-Id: I1abb77274c22164cf1ea7b5c860146ef91c308da
This means .pack(...) cannot start implicitly converting things to tensors, since pack([[1., 2.], [3., 4.], [5., 6.]]) and the like are ambiguous (a list of parallel tensors vs. a parallel tensor of vectors). But I think that constraint is OK.
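For example (a sketch assuming an experimental ParallelDevice with two component devices; the module path and constructor may differ across versions):
```
import tensorflow as tf
from tensorflow.python.distribute.parallel_device import parallel_device

# Assumes two component devices are available.
device = parallel_device.ParallelDevice(
    components=["/device:CPU:0", "/device:CPU:1"])

t0 = tf.constant([1., 2.])
t1 = tf.constant([3., 4.])
packed = device.pack([t0, t1])  # OK: one explicit tensor per component

# Ambiguous (list of parallel tensors vs. parallel tensor of vectors),
# so no implicit conversion:
# device.pack([[1., 2.], [3., 4.], [5., 6.]])
```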
PiperOrigin-RevId: 384739350
Change-Id: I5301b6ac99e189ec5acfc68e9de27266a7363d14
Toolchains have now been migrated to the external repository under tensorflow/toolchains.
PiperOrigin-RevId: 384724191
Change-Id: Ieae6a574bb8957640ebd75bd080ad1b546959fb3
Without the accompanying fix, the new test will fail with the error:
```
FailedPreconditionError: {{function_node __wrapped__MapDataset_Targuments_0}}
Could not find required function definition __inference_Dataset_map_square_11
[[{{node MapDataset}}]] [Op:MapDataset]
```
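For context, the failing pipeline is roughly of this shape (a sketch inferred from the `square` map function named in the error; the actual test may differ):
```
import tensorflow as tf

def square(x):
  return x * x

ds = tf.data.Dataset.range(5).map(square)
# Without the fix, running a (de)serialized form of this dataset could not
# find the map function's definition and raised the error above.
print(list(ds.as_numpy_iterator()))
```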
PiperOrigin-RevId: 384722451
Change-Id: I558a918fb3f258dbb6f55b92f4b73d79beccaa11