Lint TensorFlow 2.10 release notes

8bitmp3 2022-09-16 20:13:30 +01:00 committed by GitHub
parent f3678f1c1b
commit 9b10cefa3e

@@ -49,7 +49,7 @@ This release contains contributions from many people at Google, as well as:
* Causal attention in `keras.layers.Attention` and `keras.layers.AdditiveAttention` is now specified in the `call()` method via the `use_causal_mask` argument (rather than in the constructor), for consistency with other layers.
* Some files in `tensorflow/python/training` have been moved to `tensorflow/python/tracking` and `tensorflow/python/checkpoint`. Please update your imports accordingly; the old files will be removed in Release 2.11.
- * `tf.keras.optimizers.experimental.Optimizer` will graduate in Release 2.11, which means `tf.keras.optimizers.Optimizer` will be an alias of `tf.keras.optimizers.experimental.Optimizer`. The current `tf.keras.optimizers.Optimizer` will continue to be supported as `tf.keras.optimizers.legacy.Optimizer`, e.g.,`tf.keras.optimizers.legacy.Adam`. Most users won't be affected by this change, but please check the [API doc](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/experimental) if any API used in your workflow is changed or deprecated, and make adaptions. If you decide to keep using the old optimizer, please explicitly change your optimizer to `tf.keras.optimizers.legacy.Optimizer`.
+ * `tf.keras.optimizers.experimental.Optimizer` will graduate in Release 2.11, which means `tf.keras.optimizers.Optimizer` will be an alias of `tf.keras.optimizers.experimental.Optimizer`. The current `tf.keras.optimizers.Optimizer` will continue to be supported as `tf.keras.optimizers.legacy.Optimizer`, e.g., `tf.keras.optimizers.legacy.Adam`. Most users won't be affected by this change, but please check the [API doc](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/experimental) if any API used in your workflow is changed or deprecated, and make adaptations. If you decide to keep using the old optimizer, please explicitly change your optimizer to `tf.keras.optimizers.legacy.Optimizer`.
* RNG behavior change for `tf.keras.initializers`. Keras initializers will now use stateless random ops to generate random numbers.
* Both seeded and unseeded initializers will always generate the same values every time they are called (for a given variable shape). For unseeded initializers (`seed=None`), a random seed will be created and assigned at initializer creation (different initializer instances get different seeds).
* An unseeded initializer will raise a warning if it is reused (called) multiple times. This is because it would produce the same values each time, which may not be intended.
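As a quick illustration of the optimizer note above, a minimal sketch of explicitly pinning the legacy optimizer (the model here is a placeholder):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])

# Opt out of the 2.11 graduation by naming the legacy class explicitly,
# as the note above recommends.
optimizer = tf.keras.optimizers.legacy.Adam(learning_rate=1e-3)
model.compile(optimizer=optimizer, loss="mse")
```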
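And a small sketch of the stateless initializer behavior described above: repeated calls yield identical values, and reusing an unseeded initializer triggers the warning mentioned in the last bullet.

```python
import tensorflow as tf

init = tf.keras.initializers.GlorotUniform()  # unseeded: a seed is assigned at creation
a = init(shape=(2, 2))
b = init(shape=(2, 2))  # same values as `a`; this reuse logs a warning
assert bool(tf.reduce_all(a == b))

# A different instance gets a different seed, so its values differ from `a`.
other = tf.keras.initializers.GlorotUniform()
```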
@@ -70,7 +70,7 @@ This release contains contributions from many people at Google, as well as:
* Use the constructors such as `tensorflow::errors::InvalidArgument` to create a status using an error code without accessing it.
* Use the free functions such as `tensorflow::errors::IsInvalidArgument` if needed.
* As a last resort, use e.g. `static_cast<tensorflow::errors::Code>(error::Code::INVALID_ARGUMENT)` or `static_cast<int>(code)` for comparisons.
- * `tensorflow::StatusOr` will also become in the future alias to `absl::StatusOr`, so use `StatusOr::value` instead of `StatusOr::ConsumeValueOrDie`.
+ * `tensorflow::StatusOr` will also become an alias to `absl::StatusOr` in the future, so use `StatusOr::value` instead of `StatusOr::ConsumeValueOrDie`.
## Major Features and Improvements
@@ -78,12 +78,12 @@ This release contains contributions from many people at Google, as well as:
* New operations supported:
* tflite SelectV2 now supports 5D.
- * tf.einsum is supported with multiple unknown shapes.
- * tf.unsortedsegmentprod op is supported.
- * tf.unsortedsegmentmax op is supported.
- * tf.unsortedsegmentsum op is supported.
+ * `tf.einsum` is supported with multiple unknown shapes.
+ * `tf.unsortedsegmentprod` op is supported.
+ * `tf.unsortedsegmentmax` op is supported.
+ * `tf.unsortedsegmentsum` op is supported.
* Updates to existing operations:
- * tfl.scatter_nd now supports I1 for update arg.
+ * `tfl.scatter_nd` now supports I1 for the `update` arg.
* Upgraded Flatbuffers from v1.12.0 to v2.0.5.
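A minimal sketch of exercising one of the newly supported ops end to end through the TFLite converter; `tf.math.unsorted_segment_sum` is the Python entry point for the `tf.unsortedsegmentsum` op listed above.

```python
import tensorflow as tf

@tf.function(input_signature=[
    tf.TensorSpec([8], tf.float32),
    tf.TensorSpec([8], tf.int32),
])
def segment_sum(data, segment_ids):
    # Lowered to the newly supported unsorted segment sum TFLite op.
    return tf.math.unsorted_segment_sum(data, segment_ids, num_segments=4)

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [segment_sum.get_concrete_function()])
tflite_model = converter.convert()
```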
* `tf.keras`:
@@ -93,10 +93,10 @@ This release contains contributions from many people at Google, as well as:
* Added `subset="both"` support in `tf.keras.utils.image_dataset_from_directory`, `tf.keras.utils.text_dataset_from_directory`, and `audio_dataset_from_directory`, to be used with the `validation_split` argument, for returning both dataset splits at once, as a tuple.
* Added `tf.keras.utils.split_dataset` utility to split a `Dataset` object or a list/tuple of arrays into two `Dataset` objects (e.g. train/test).
* Added step granularity to `BackupAndRestore` callback for handling distributed training failures & restarts. The training state can now be restored at the exact epoch and step at which it was previously saved before failing.
- * Added [`tf.keras.dtensor.experimental.optimizers.AdamW`](https://www.tensorflow.org/api_docs/python/tf/keras/dtensor/experimental/optimizers/AdamW). This optimizer is similar as the existing [`keras.optimizers.experimental.AdamW`](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/experimental/AdamW), and works in the DTensor training use case.
+ * Added [`tf.keras.dtensor.experimental.optimizers.AdamW`](https://www.tensorflow.org/api_docs/python/tf/keras/dtensor/experimental/optimizers/AdamW). This optimizer is similar to the existing [`keras.optimizers.experimental.AdamW`](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/experimental/AdamW), and works in the DTensor training use case.
* Improved masking support for [`tf.keras.layers.MultiHeadAttention`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MultiHeadAttention).
* Implicit masks for `query`, `key` and `value` inputs will automatically be used to compute a correct attention mask for the layer. These padding masks will be combined with any `attention_mask` passed in directly when calling the layer. This can be used with [`tf.keras.layers.Embedding`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding) with `mask_zero=True` to automatically infer a correct padding mask.
- * Added a `use_causal_mask` call time arugment to the layer. Passing `use_causal_mask=True` will compute a causal attention mask, and optionally combine it with any `attention_mask` passed in directly when calling the layer.
+ * Added a `use_causal_mask` call-time argument to the layer. Passing `use_causal_mask=True` will compute a causal attention mask, and optionally combine it with any `attention_mask` passed in directly when calling the layer.
* Added `ignore_class` argument in the loss `SparseCategoricalCrossentropy` and metrics `IoU` and `MeanIoU`, to specify a class index to be ignored during loss/metric computation (e.g. a background/void class).
* Added [`tf.keras.models.experimental.SharpnessAwareMinimization`](https://www.tensorflow.org/api_docs/python/tf/keras/models/experimental/SharpnessAwareMinimization). This class implements the sharpness-aware minimization technique, which boosts model performance on various tasks, e.g., ResNet on image classification.
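A couple of short sketches of the dataset utilities above; the `images/` directory is a placeholder.

```python
import numpy as np
import tensorflow as tf

# subset="both" returns the train and validation splits in one call.
train_ds, val_ds = tf.keras.utils.image_dataset_from_directory(
    "images/", validation_split=0.2, subset="both", seed=1)

# split_dataset splits a Dataset (or arrays) into two Dataset objects.
left, right = tf.keras.utils.split_dataset(
    tf.data.Dataset.from_tensor_slices(np.random.rand(1000, 4)),
    left_size=0.8)
```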
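The improved `MultiHeadAttention` masking above, sketched end to end: the `Embedding` layer's implicit padding mask and the call-time causal mask are combined automatically.

```python
import tensorflow as tf

emb = tf.keras.layers.Embedding(input_dim=100, output_dim=32, mask_zero=True)
mha = tf.keras.layers.MultiHeadAttention(num_heads=4, key_dim=8)

tokens = tf.constant([[5, 9, 7, 0, 0]])  # trailing zeros are padding
x = emb(tokens)                          # carries an implicit padding mask
y = mha(query=x, value=x, use_causal_mask=True)
```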
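And the `ignore_class` argument above in brief, with 255 as a placeholder void-label index:

```python
import tensorflow as tf

loss = tf.keras.losses.SparseCategoricalCrossentropy(ignore_class=255)
metric = tf.keras.metrics.MeanIoU(num_classes=21, ignore_class=255)
```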
@@ -105,12 +105,12 @@ This release contains contributions from many people at Google, as well as:
* Added support for cross-trainer data caching in `tf.data` service. This saves computation resources when concurrent training jobs train from the same dataset. See [Sharing `tf.data` service with concurrent trainers](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service#sharing_tfdata_service_with_concurrent_trainers) for more details.
* Added `dataset_id` to `tf.data.experimental.service.register_dataset`. If provided, `tf.data` service will use the provided ID for the dataset. If the dataset ID already exists, no new dataset will be registered. This is useful if multiple training jobs need to use the same dataset for training. In this case, users should call `register_dataset` with the same `dataset_id`.
* Added a new field, `inject_prefetch`, to `tf.data.experimental.OptimizationOptions`. If it is set to `True`, `tf.data` will now automatically add a `prefetch` transformation to datasets that end in synchronous transformations. This enables data generation to be overlapped with data consumption, at the cost of a small increase in memory usage due to buffering.
- * Added a new value to `tf.data.Options.autotune.autotune_algorithm`: STAGE_BASED. If the autotune algorithm is set to STAGE_BASED, then it runs a new algorithm that can get the same performance with lower CPU/memory usage.
+ * Added a new value to `tf.data.Options.autotune.autotune_algorithm`: `STAGE_BASED`. If the autotune algorithm is set to `STAGE_BASED`, then it runs a new algorithm that can get the same performance with lower CPU/memory usage.
* Added [`tf.data.experimental.from_list`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/from_list), a new API for creating `Dataset`s from lists of elements.
* Graduated experimental APIs:
* [`tf.data.Dataset.counter`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset/#counter), which creates `Dataset`s of indefinite sequences of numbers.
* [`tf.data.Dataset.ignore_errors`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset/#ignore_errors), which drops erroneous elements from `Dataset`s.
- * Added [`tf.data.Dataset.rebatch](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#rebatch), a new API for rebatching the elements of a dataset.
+ * Added [`tf.data.Dataset.rebatch`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#rebatch), a new API for rebatching the elements of a dataset.
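A sketch of the shared `dataset_id` flow above; the dispatcher address and ID are placeholders, and each concurrent job would run the same registration call.

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(100)

# With an explicit dataset_id, a second job registering the same ID reuses
# the existing dataset instead of registering a new one.
dataset_id = tf.data.experimental.service.register_dataset(
    service="grpc://dispatcher:5000",   # placeholder dispatcher address
    dataset=dataset,
    dataset_id="shared-training-data",  # placeholder shared ID
)
ds = tf.data.experimental.service.from_dataset_id(
    processing_mode="parallel_epochs",
    service="grpc://dispatcher:5000",
    dataset_id=dataset_id,
    element_spec=dataset.element_spec,
)
```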
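And the graduated/new `Dataset` APIs above in brief:

```python
import tensorflow as tf

# from_list builds a Dataset from a list of (possibly structured) elements.
ds = tf.data.experimental.from_list([(1, "a"), (2, "b"), (3, "c")])

# rebatch re-chunks an already batched dataset without manual unbatching.
rebatched = tf.data.Dataset.range(10).batch(4).rebatch(2)
```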
* `tf.distribute`:
@@ -118,7 +118,7 @@ This release contains contributions from many people at Google, as well as:
* `tf.math`:
- * Added `tf.math.approx_max_k` and `tf.math.approx_min_k` which are the optimized alternatives to `tf.math.top_k` on TPU. The performance difference range from 8 to 100 times depending on the size of k. When running on CPU and GPU, a non-optimized XLA kernel is used.
+ * Added `tf.math.approx_max_k` and `tf.math.approx_min_k`, which are optimized alternatives to `tf.math.top_k` on TPU. The performance difference ranges from 8 to 100 times depending on the size of `k`. When running on CPU and GPU, a non-optimized XLA kernel is used.
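A one-line sketch of the approximate top-k APIs above (optimized on TPU; CPU and GPU fall back to a non-optimized XLA kernel):

```python
import tensorflow as tf

x = tf.random.uniform([16384])
values, indices = tf.math.approx_max_k(x, k=10)           # approximate top-k
low_values, low_indices = tf.math.approx_min_k(x, k=10)   # approximate bottom-k
```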
* `tf.train`:
@@ -129,7 +129,7 @@ This release contains contributions from many people at Google, as well as:
* Added an optional parameter: `warn`. This parameter controls whether or not warnings will be printed when operations in the provided `fn` fall back to a while loop.
* XLA:
- * MWMS is now compilable with XLA.
+ * `tf.distribute.MultiWorkerMirroredStrategy` is now compilable with XLA.
* [Compute Library for the Arm® Architecture (ACL)](https://github.com/ARM-software/ComputeLibrary) is supported for the aarch64 CPU XLA runtime.
* CPU performance optimizations:
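The `warn` bullet above appears without its parent item (elided by the hunk boundary); assuming it refers to `tf.vectorized_map`, a sketch of silencing the fallback warning:

```python
import tensorflow as tf

def fn(x):
    # bincount generally has no vectorized lowering, so pfor may fall back
    # to a while loop; warn=False suppresses the warning about that.
    return tf.math.bincount(x, minlength=8)

out = tf.vectorized_map(fn, tf.constant([[1, 2], [3, 4]]), warn=False)
```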
@@ -142,7 +142,7 @@ This release contains contributions from many people at Google, as well as:
## Bug Fixes and Other Changes
- * New argument `experimental_device_ordinal` in `LogicalDeviceConfiguration` to control the order of logical devices. (GPU only)
+ * New argument `experimental_device_ordinal` in `LogicalDeviceConfiguration` to control the order of logical devices (GPU only).
* `tf.keras`:
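Circling back to the `experimental_device_ordinal` bullet above, a minimal sketch of carving one physical GPU into two ordered logical devices:

```python
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_logical_device_configuration(
        gpus[0],
        [
            # The ordinal fixes where each logical device lands in the
            # visible device order.
            tf.config.LogicalDeviceConfiguration(
                memory_limit=1024, experimental_device_ordinal=0),
            tf.config.LogicalDeviceConfiguration(
                memory_limit=1024, experimental_device_ordinal=1),
        ],
    )
```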