New landing page and leftnav for Programmer's Guide.
PiperOrigin-RevId: 165660897
Commit 00594ecdd6 (parent 7359fec792)
@ -1,69 +0,0 @@
# Tensor Ranks, Shapes, and Types

TensorFlow programs use a tensor data structure to represent all data. You can
think of a TensorFlow tensor as an n-dimensional array or list.
A tensor has a static type and dynamic dimensions. Only tensors may be passed
between nodes in the computation graph.

## Rank

In the TensorFlow system, tensors are described by a unit of dimensionality
known as *rank*. Tensor rank is not the same as matrix rank. Tensor rank
(sometimes referred to as *order*, *degree*, or *n-dimension*) is the number
of dimensions of the tensor. For example, the following tensor (defined as a
Python list) has a rank of 2:

    t = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

A rank-two tensor is what we typically think of as a matrix; a rank-one tensor
is a vector. For a rank-two tensor you can access any element with the syntax
`t[i, j]`. For a rank-three tensor you would need to address an element with
`t[i, j, k]`.

Rank | Math entity | Python example
--- | --- | ---
0 | Scalar (magnitude only) | `s = 483`
1 | Vector (magnitude and direction) | `v = [1.1, 2.2, 3.3]`
2 | Matrix (table of numbers) | `m = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]`
3 | 3-Tensor (cube of numbers) | `t = [[[2], [4], [6]], [[8], [10], [12]], [[14], [16], [18]]]`
n | n-Tensor (you get the idea) | `....`
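
For instance, here is a minimal sketch of inspecting rank and indexing an
element, assuming the TensorFlow 1.x Python API that this guide targets (the
variable names are only illustrative):

```python
import tensorflow as tf

t = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# Converting the nested Python list yields a rank-2 tensor of shape [3, 3].
matrix = tf.constant(t)
print(matrix.shape.ndims)  # 2 (the rank)
print(matrix.shape)        # (3, 3)

with tf.Session() as sess:
    # Access a single element with the t[i, j] syntax described above.
    print(sess.run(matrix[1, 2]))  # 6
```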

## Shape

The TensorFlow documentation uses three notational conventions to describe
tensor dimensionality: rank, shape, and dimension number. The following table
shows how these relate to one another:

Rank | Shape | Dimension number | Example
--- | --- | --- | ---
0 | [] | 0-D | A 0-D tensor. A scalar.
1 | [D0] | 1-D | A 1-D tensor with shape [5].
2 | [D0, D1] | 2-D | A 2-D tensor with shape [3, 4].
3 | [D0, D1, D2] | 3-D | A 3-D tensor with shape [1, 4, 3].
n | [D0, D1, ... Dn-1] | n-D | A tensor with shape [D0, D1, ... Dn-1].

Shapes can be represented via Python lists / tuples of ints, or with a
@{tf.TensorShape}.
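
For example, a small sketch (again assuming the TensorFlow 1.x API) showing
shapes written as Python lists and as @{tf.TensorShape} objects:

```python
import tensorflow as tf

# A rank-3 tensor with shape [1, 4, 3], given as a Python list of ints.
t = tf.zeros([1, 4, 3])
print(t.shape.as_list())  # [1, 4, 3]
print(t.shape.ndims)      # 3

# The same idea as a tf.TensorShape; None marks a dimension whose size is
# unknown until runtime.
s = tf.TensorShape([None, 4, 3])
print(s.as_list())        # [None, 4, 3]
```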

## Data types

In addition to dimensionality, tensors have a data type. You can assign any one
of the following data types to a tensor:

Data type | Python type | Description
--- | --- | ---
`DT_FLOAT` | `tf.float32` | 32-bit floating point.
`DT_DOUBLE` | `tf.float64` | 64-bit floating point.
`DT_INT8` | `tf.int8` | 8-bit signed integer.
`DT_INT16` | `tf.int16` | 16-bit signed integer.
`DT_INT32` | `tf.int32` | 32-bit signed integer.
`DT_INT64` | `tf.int64` | 64-bit signed integer.
`DT_UINT8` | `tf.uint8` | 8-bit unsigned integer.
`DT_UINT16` | `tf.uint16` | 16-bit unsigned integer.
`DT_STRING` | `tf.string` | Variable-length byte array. Each element of a tensor is a byte array.
`DT_BOOL` | `tf.bool` | Boolean.
`DT_COMPLEX64` | `tf.complex64` | Complex number made of two 32-bit floating-point numbers: real and imaginary parts.
`DT_COMPLEX128` | `tf.complex128` | Complex number made of two 64-bit floating-point numbers: real and imaginary parts.
`DT_QINT8` | `tf.qint8` | 8-bit signed integer used in quantized ops.
`DT_QINT32` | `tf.qint32` | 32-bit signed integer used in quantized ops.
`DT_QUINT8` | `tf.quint8` | 8-bit unsigned integer used in quantized ops.
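
For example, a minimal sketch (assuming the TensorFlow 1.x API) of setting and
converting data types:

```python
import tensorflow as tf

# The dtype is inferred from the values unless given explicitly.
a = tf.constant([1, 2, 3])                          # tf.int32
b = tf.constant([1.0, 2.0, 3.0], dtype=tf.float64)  # tf.float64

# tf.cast converts a tensor from one data type to another.
c = tf.cast(a, tf.float32)
print(a.dtype, b.dtype, c.dtype)
```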
@ -53,10 +53,6 @@ TensorFlow assigns operations to devices, and the
@{$deep_cnn$CIFAR-10 tutorial} for an example model that
uses multiple GPUs.

#### What are the different types of tensors that are available?

TensorFlow supports a variety of different data types and tensor shapes. See the
@{$dims_types$ranks, shapes, and types reference} for more details.

## Running a TensorFlow computation

@ -171,7 +167,8 @@ available. These operations allow you to build sophisticated
@{$reading_data$input pipelines}, at the cost of making the
TensorFlow computation somewhat more complicated. See the how-to documentation for
@{$reading_data#creating-threads-to-prefetch-using-queuerunner-objects$using `QueueRunner` objects to drive queues and readers}
for more information on how to use them.
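
As a rough sketch of what that queue-based pattern looks like (assuming the
TensorFlow 1.x `tf.train` API; the CSV filename and its two-column layout are
made up for illustration):

```python
import tensorflow as tf

# A queue of filenames; a QueueRunner fills it from a background thread.
filename_queue = tf.train.string_input_producer(["data.csv"], num_epochs=1)

reader = tf.TextLineReader()
_, line = reader.read(filename_queue)
col1, col2 = tf.decode_csv(line, record_defaults=[[0.0], [0.0]])

with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())  # needed because of num_epochs
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        while not coord.should_stop():
            print(sess.run([col1, col2]))
    except tf.errors.OutOfRangeError:
        pass  # the reader ran out of records
    finally:
        coord.request_stop()
        coord.join(threads)
```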

## Variables

@ -240,11 +237,6 @@ to encode the batch size as a Python constant, but instead to use a symbolic
* Use @{tf.reduce_mean} instead
  of `tf.reduce_sum(...) / batch_size`.

* If you use
  @{$reading_data#feeding$placeholders for feeding input},
  you can specify a variable batch dimension by creating the placeholder with
  [`tf.placeholder(..., shape=[None, ...])`](../api_docs/python/io_ops.md#placeholder). The
  `None` element of the shape corresponds to a variable-sized dimension, as in
  the sketch below.
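
A minimal sketch of both points, assuming the TensorFlow 1.x API (the feature
size and batch sizes are only illustrative):

```python
import numpy as np
import tensorflow as tf

# The first (batch) dimension is None, so any batch size is accepted.
x = tf.placeholder(tf.float32, shape=[None, 10])

# reduce_mean divides by the actual number of elements, so no Python-level
# batch_size constant is needed.
loss = tf.reduce_mean(tf.square(x))

with tf.Session() as sess:
    print(sess.run(loss, feed_dict={x: np.ones((32, 10))}))   # batch of 32
    print(sess.run(loss, feed_dict={x: np.ones((128, 10))}))  # batch of 128
```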

## TensorBoard

@ -269,36 +261,33 @@ the flag --host=localhost. This should quiet any security warnings.
## Extending TensorFlow

See the how-to documentation for
@{$adding_an_op$adding a new operation to TensorFlow}.

#### My data is in a custom format. How do I read it using TensorFlow?

There are three main options for dealing with data in a custom format.

The easiest option is to write parsing code in Python that transforms the data
into a numpy array. Then use @{tf.contrib.data.Dataset.from_tensor_slices} to
create an input pipeline from the in-memory data.
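
For instance, a minimal sketch assuming the `tf.contrib.data` API described
here (the `features` and `labels` arrays stand in for whatever your own parsing
code produces):

```python
import numpy as np
import tensorflow as tf

# Pretend these came from your custom Python parsing code.
features = np.random.rand(100, 4).astype(np.float32)
labels = np.random.randint(0, 2, size=100)

dataset = tf.contrib.data.Dataset.from_tensor_slices((features, labels))
dataset = dataset.shuffle(buffer_size=100).batch(32)

iterator = dataset.make_one_shot_iterator()
next_batch = iterator.get_next()

with tf.Session() as sess:
    batch_features, batch_labels = sess.run(next_batch)
    print(batch_features.shape, batch_labels.shape)  # (32, 4) (32,)
```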

If your data doesn't fit in memory, try doing the parsing in the Dataset
pipeline. Start with an appropriate file reader, like
@{tf.contrib.data.TextLineDataset}. Then convert the dataset by
@{tf.contrib.data.Dataset.map$mapping} appropriate operations over it.
Prefer predefined TensorFlow operations such as @{tf.decode_raw},
@{tf.decode_csv}, @{tf.parse_example}, or @{tf.image.decode_png}.
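
A sketch of that pattern, again assuming the `tf.contrib.data` API (the
`data.csv` filename and its column layout, two floats and an integer label, are
made up):

```python
import tensorflow as tf

def parse_line(line):
    # Each record looks like "0.5,1.2,3": two float features and an int label.
    fields = tf.decode_csv(line, record_defaults=[[0.0], [0.0], [0]])
    features = tf.stack(fields[:2])
    label = fields[2]
    return features, label

dataset = tf.contrib.data.TextLineDataset("data.csv")
dataset = dataset.map(parse_line).batch(32)

iterator = dataset.make_one_shot_iterator()
features, labels = iterator.get_next()
```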

If your data is not easily parsable with the built-in TensorFlow operations,
consider converting it, offline, to a format that is easily parsable, such
as @{tf.python_io.TFRecordWriter$`TFRecord`} format.
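
For example, a small sketch of such an offline conversion, assuming the
TensorFlow 1.x `tf.python_io` and `tf.train.Example` APIs (the output filename
and feature names are illustrative):

```python
import tensorflow as tf

writer = tf.python_io.TFRecordWriter("converted.tfrecords")
for label, text in [(0, b"first record"), (1, b"second record")]:
    example = tf.train.Example(features=tf.train.Features(feature={
        "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
        "text": tf.train.Feature(bytes_list=tf.train.BytesList(value=[text])),
    }))
    writer.write(example.SerializeToString())
writer.close()
```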

The more efficient method to customize the parsing behavior is to
@{$adding_an_op$add a new op written in C++} that parses your data format. The
@{$new_data_formats$guide to handling new data formats} has more information
about the steps for doing this.

#### How do I define an operation that takes a variable number of inputs?

The TensorFlow op registration mechanism allows you to define inputs that are a
single tensor, a list of tensors with the same type (for example when adding
together a variable-length list of tensors), or a list of tensors with different
types (for example when enqueuing a tuple of tensors to a queue). See the
how-to documentation for
@{$adding_an_op#list-inputs-and-outputs$adding an op with a list of inputs or outputs}
for more details of how to define these different input types.

## Miscellaneous
@ -1,38 +1,45 @@
# Programmer's Guide

The documents in this unit dive into the details of writing TensorFlow
code. This section begins with the following guides, each of which
explains a particular aspect of TensorFlow:

* @{$variables$Variables: Creation, Initialization, Saving, Loading, and
  Sharing}, which details the mechanics of TensorFlow Variables.
* @{$dims_types$Tensor Ranks, Shapes, and Types}, which explains Tensor
  rank (the number of dimensions), shape (the size of each dimension),
  and datatypes.
* @{$threading_and_queues$Threading and Queues}, which explains TensorFlow's
  rich queuing system.
* @{$reading_data$Reading Data}, which documents three different mechanisms
  for getting data into a TensorFlow program.

The following guide is helpful when training a complex model over multiple
days:

* @{$supervisor$Supervisor: Training Helper for Days-Long Trainings}, which
  explains how to gracefully handle system crashes during a lengthy training
  session.

TensorFlow provides a debugger named `tfdbg`, which is documented in the
following guide:

* @{$debugger$Debugging TensorFlow Programs},
  which walks you through the use of `tfdbg` within an application. It covers
  using `tfdbg` with both the low-level TensorFlow API and the Estimator API.

To learn about the TensorFlow versioning scheme, consult:

* @{$version_compat$The TensorFlow Version Compatibility Guide}, which explains
  TensorFlow's versioning nomenclature and compatibility rules.

We conclude this section with a FAQ about TensorFlow programming:

* @{$faq$Frequently Asked Questions}

For TensorFlow 1.3, we revised this document extensively. The units are now
as follows:

* @{$programmers_guide/tensors$Tensors}, which explains how to create,
  manipulate, and access Tensors--the fundamental object in TensorFlow.
* @{$programmers_guide/variables$Variables}, which details how
  to represent shared, persistent state in your program.
* @{$programmers_guide/graphs$Graphs and Sessions}, which explains:
    * dataflow graphs, which are TensorFlow's representation of computations
      as dependencies between operations.
    * sessions, which are TensorFlow's mechanism for running dataflow graphs
      across one or more local or remote devices.
  If you are programming with the low-level TensorFlow API, this unit
  is essential. If you are programming with a high-level TensorFlow API
  such as Estimators or Keras, the high-level API creates and manages
  graphs and sessions for you, but understanding graphs and sessions
  can still be helpful.
* @{$programmers_guide/estimators$Estimators}, which introduces a high-level
  TensorFlow API that greatly simplifies ML programming.
* @{$programmers_guide/saved_model$Saving and Restoring}, which
  explains how to save and restore variables and models.
* @{$programmers_guide/datasets$Input Pipelines}, which explains how to
  set up data pipelines to read data sets into your TensorFlow program.
* @{$programmers_guide/threading_and_queues$Threading and Queues}, which
  explains TensorFlow's older system for multi-threaded, queue-based input
  pipelines. Beginning with TensorFlow 1.2, we recommend using the
  `tf.contrib.data` module instead, which is documented in the
  "Input Pipelines" unit.
* @{$programmers_guide/embedding$Embeddings}, which introduces the concept
  of embeddings, provides a simple example of training an embedding in
  TensorFlow, and explains how to view embeddings with the TensorBoard
  Embedding Projector.
* @{$programmers_guide/debugger$Debugging TensorFlow Programs}, which
  explains how to use the TensorFlow debugger (tfdbg).
* @{$programmers_guide/supervisor$Supervisor: Training Helper for Days-Long Trainings},
  which explains how to gracefully handle system crashes during lengthy
  training sessions. (We have not revised this document for v1.3.)
* @{$programmers_guide/version_compat$TensorFlow Version Compatibility},
  which explains backward compatibility guarantees and non-guarantees.
* @{$programmers_guide/faq$FAQ}, which contains frequently asked
  questions about TensorFlow. (We have not revised this document for v1.3,
  except to remove some obsolete information.)

@ -1,15 +1,13 @@
index.md
tensors.md
variables.md
dims_types.md
graphs.md
estimators.md
saved_model.md
datasets.md
threading_and_queues.md
reading_data.md
embedding.md
debugger.md
supervisor.md
meta_graph.md
version_compat.md
faq.md