mirror of
https://github.com/zebrajr/tensorflow.git
synced 2025-12-07 12:20:24 +01:00
Batch norm docs fix applied to _fused_batch_norm as well
PiperOrigin-RevId: 157530527
This commit is contained in:
parent abd4aa49a7
commit 5c73d01024
@@ -158,16 +158,18 @@ def _fused_batch_norm(
   Can be used as a normalizer function for conv2d and fully_connected.

-  Note: When is_training is True the moving_mean and moving_variance need to be
-  updated, by default the update_ops are placed in `tf.GraphKeys.UPDATE_OPS` so
-  they need to be added as a dependency to the `train_op`, example:
+  Note: when training, the moving_mean and moving_variance need to be updated.
+  By default the update ops are placed in `tf.GraphKeys.UPDATE_OPS`, so they
+  need to be added as a dependency to the `train_op`. For example:

+  ```python
   update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
   with tf.control_dependencies(update_ops):
     train_op = optimizer.minimize(loss)
+  ```

   One can set updates_collections=None to force the updates in place, but that
-  can have speed penalty, especially in distributed settings.
+  can have a speed penalty, especially in distributed settings.

   Args:
     inputs: A tensor with 2 or more dimensions, where the first dimension has
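The note in the diff above concerns batch norm's moving statistics. As a plain-Python sketch (not TensorFlow code; the helper `update_moving_stats` and its `decay` argument are illustrative, with `decay` playing the role of `batch_norm`'s decay parameter), the update that the collected ops perform is an exponential moving average of the per-batch mean and variance:

```python
# Illustrative sketch: the moving-average update that batch norm's
# update ops apply during training. Pure Python; no TensorFlow needed.
def update_moving_stats(moving_mean, moving_var, batch, decay=0.99):
    n = len(batch)
    batch_mean = sum(batch) / n
    batch_var = sum((x - batch_mean) ** 2 for x in batch) / n
    # Exponential moving average toward the current batch statistics.
    new_mean = decay * moving_mean + (1 - decay) * batch_mean
    new_var = decay * moving_var + (1 - decay) * batch_var
    return new_mean, new_var
```

Because these updates are side effects rather than inputs to the loss, the graph will silently skip them unless they are wired in as dependencies of `train_op`, which is exactly what the `tf.control_dependencies(update_ops)` pattern in the docstring accomplishes.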