Mirror of https://github.com/zebrajr/tensorflow.git (synced 2025-12-06 12:20:11 +01:00)
Merge changes from github.
END_PUBLIC

--- Commit d77b99809 authored by Yong Tang<yong.tang.github@outlook.com> Committed by gunan<gunan@google.com>: Update docs for `begin_params_axis` (#13979). This fixes the issue raised in #13975, where `begin_shift_axis` should actually be `begin_params_axis`. Fixes #13975. Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
--- Commit e6a242b4e authored by Yifei Feng<fengyifei2026@gmail.com> Committed by gunan<gunan@google.com>: Add GCC/Compiler version to issue template. (#14113) As suggested in #13930.
--- Commit 7ece1c0b8 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Moving model_pruning library to tf.contrib. PiperOrigin-RevId: 174214419
--- Commit 693325c83 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Log the full traceback in Coordinator.request_stop if it's available. PiperOrigin-RevId: 174213375
--- Commit 6c4a769ab authored by Mark Daoust<markdaoust@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Delete duplicate label_image script. The version in examples/label_image is more complete (with image size and normalization options), so it can be used with `mobilenets`. Also: removed bazel from main tutorial instructions. PiperOrigin-RevId: 174212674
--- Commit 7a5b81c29 authored by Yao Zhang<yaozhang@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Materialize shape for ShapeN. PiperOrigin-RevId: 174211500
--- Commit 78041b1dd authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Internal change. PiperOrigin-RevId: 174211190
--- Commit 2118fcf62 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: BUILD cleanup in contrib/tensor_forest/... PiperOrigin-RevId: 174201884
--- Commit 6849ef8f6 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Internal change. PiperOrigin-RevId: 174197506
--- Commit 37370d98f authored by resec<resec0109@gmail.com> Committed by gunan<gunan@google.com>: Support more Android arch in Makefile build (#12806) * Support more Android arch in Makefile build * update Makefile * fix MARCH_OPTION * persist multiple architectures across builds * persistence bug fix * add -latomic to linker flags for benchmark * Change ANDROID_OS_ARCH to ANDROID_HOST_OS_ARCH
--- Commit c40d54173 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Exposes recall_at_top_k under tf.metrics. PiperOrigin-RevId: 174189641
--- Commit 18bf5b2d9 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Return a classifier score of the same type as the logits. PiperOrigin-RevId: 174184871
--- Commit 9da02be11 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Make 'collections' a list, as documented and expected by downstream custom getters.
PiperOrigin-RevId: 174184867
--- Commit 16b0bb095 authored by loki der quaeler<quaeler@users.noreply.github.com> Committed by gunan<gunan@google.com>: Adding a feed for boolean tensors to TensorFlowInferenceInterface (#14059) * Sublime Text index-ignore file (a copy of .gitignore) * Adding the requested implementation to TensorFlowInferenceInterface * Removing Sublime Text .ignore file from remote repository * indeed there was
--- Commit fa9d8aab4 authored by Urs Köster<ursk@users.noreply.github.com> Committed by gunan<gunan@google.com>: Add 'log_progress' argument for tf.estimator.Estimator's evaluate function (#13695) * Add argument for tf.estimator.Estimator's evaluate function * add log_progress argument to ._convert_eval_steps_to_hooks for TPU estimator * log only every 10th step if more than 100 iterations in _StopAfterNEvalsHook * ensure last step is logged and aim for 10 outputs total
--- Commit 07a91dac5 authored by nolan liu<nolan.liou@gmail.com> Committed by gunan<gunan@google.com>: Make the `gather` CPU kernel multi-threaded. (#12246) * Change the gather op to multi-thread. * Modify the gather kernel of the XLA compiler in order to be compatible with the multi-threaded CPU kernel. * Add prefetch logic to gather op kernel. * Update the indentation of gather op kernel code. * Update the gather kernel code for multiple threads. * Remove reference to earlier version of code in gather functor. * Change the framework_lite dep of gather_functor to framework. * Remove mutex guard in gather functor.
--- Commit a956486be authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Remove an erroneous __attribute__((...)) tag. There is no __attribute__((guarded)) or __attribute__((pt_guarded)) attribute in Clang, and if we turn on warnings for unknown attributes (which are currently turned off), this causes build failures. This means that, when the warnings are turned off, this is simply a no-op. PiperOrigin-RevId: 174134252
--- Commit 27412f3b6 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Add compiler/tf2xla/sharding_util.h with utilities for getting the core device from a Node. PiperOrigin-RevId: 174133602
--- Commit ab4349a26 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: BUILD cleanup in selected packages in contrib/... PiperOrigin-RevId: 174115744
--- Commit 4aa90bfd3 authored by Justin Lebar<jlebar@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: [XLA] Add HLO matchers that check parameter numbers and GTE indices. This lets you do EXPECT_THAT(foo, op::Parameter(42)); and EXPECT_THAT(bar, op::GetTupleElement(baz, 8)); PiperOrigin-RevId: 174113597
--- Commit f97e7c69b authored by Olivia Nordquist<nolivia@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Partially exposing the _set_attr and _get_attr methods in Python. PiperOrigin-RevId: 174113043
--- Commit 8e732a312 authored by Artem Belevich<tra@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Prefer cubin over PTX when we launch CUDA kernels. Native GPU code, if we have it, should be preferred over JIT compilation of PTX. PiperOrigin-RevId: 174110646
--- Commit 2ccf3aba4 authored by Eugene Brevdo<ebrevdo@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Permanently remove several modules from tf.contrib.bayesflow.
These modules are very infrequently used and will not be developed moving forward. Removing this code paves the way for remaining modules in tf.contrib.bayesflow to move to their own repo. PiperOrigin-RevId: 174110067
--- Commit ef7052fbd authored by Andrew Selle<aselle@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Open source build support for TensorFlow Lite Toco. - Handle proto incompatibilities - Mixed bazel compatibility fixes. - Add link to absl libraries. PiperOrigin-RevId: 174103981
--- Commit d6a9cd40c authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Fix "hides overloaded virtual function" error in default/gpu_tracer.cc when compiled with -Werror,-Woverloaded-virtual. PiperOrigin-RevId: 174101519
--- Commit b242a7988 authored by Mustafa Ispir<ispir@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Set metric variable initializers as lambda. PiperOrigin-RevId: 174100686
--- Commit 57b1c5621 authored by Alan Yee<alyee@ucsd.edu> Committed by drpngx<drpngx@users.noreply.github.com>: Add deprecation notes (#12614) * Update lookup_ops.py Minor comment fix * Update metrics_ops.py Add deprecated notes * Update tensor_util.py Update deprecated note on remove_squeezable_dimensions * Update metric_ops.py Add deprecated notes
--- Commit 453dd5848 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: K-FAC: Support for tf.AUTO_REUSE when re-using registrations. Multi-tower support for FullFB, NaiveDiagonalFB. Removal of LayerCollection.generic_registrations. PiperOrigin-RevId: 174092003
--- Commit 0a7be5a2f authored by Sanjoy Das<sanjoy@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Rename (Add|Get)ProfileResult to something more specific; NFC. PiperOrigin-RevId: 174084570
--- Commit f1916f8f6 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: - Remove slice hack to properly initialize missing entries in weight matrices - Add real support for EmbeddingColumns / input_layer() - Fix warmstarting for non-PartitionedVariables. PiperOrigin-RevId: 174083777
--- Commit f567ddf87 authored by Alex Sergeev<alexander.sergeev@live.com> Committed by drpngx<drpngx@users.noreply.github.com>: Add tf.sysconfig.get_compile_flags() & tf.sysconfig.get_link_flags() for custom operators (#13496) * Add flags for custom op compilation * Move ABI logic into version_info.cc * Add #include <string> to be able to read _GLIBCXX_USE_CXX11_ABI value. * Make flags to be lists * Add _flag to cxx11_abi * Address review comment. * Move CXX import to the top level. * Add goldens update
--- Commit 0cddb9bca authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Go: Update generated wrapper functions for TensorFlow ops. PiperOrigin-RevId: 174074499
--- Commit ba8c38959 authored by Neal Wu<wun@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Change wide_deep.md and wide.md to reference the TensorFlow official models version rather than the tf.contrib.learn version. PiperOrigin-RevId: 174074112
--- Commit f3006422c authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Make `RunTrainOpsHook` public. PiperOrigin-RevId: 174073925
--- Commit 21dafd6d2 authored by A.
Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Update ops-related pbtxt files. PiperOrigin-RevId: 174073569
--- Commit 66fc99a3b authored by Artem Belevich<tra@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: [XLA:GPU] Short-circuit compilation of no-op IR -> empty PTX. There's no point constructing/running the LLVM pipeline if we know that we have no kernels in the IR we've generated for the given HLO op. This is often the case for ops we can optimize away at the HLO level. PiperOrigin-RevId: 174072540
--- Commit c911d0f16 authored by Dhananjay Nakrani<dhananjayn@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Switch over Python calls to RandomPoissonV2. Part 2 of supporting int32/64 in tf.random_poisson(). PiperOrigin-RevId: 174071745
--- Commit b5d5326c6 authored by Justin Lebar<jlebar@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: [XLA:GPU] Fix race condition in gpu_compiler.cc. We were racing on libdevice_dir_. PiperOrigin-RevId: 174070334
--- Commit 35939d2d3 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: [TF:XLA] Fix string to HLO opcode conversion for atan2, complex, imag and real. Make sure that we can't forget opcodes by auto-generating the conversion functions. Add auto-generated functions to test HLOs for properties (like IsVariadic, IsComparison, etc.) This makes changing HLO more robust and easier because there are fewer places to update when adding or removing an HLO opcode. Also: * Fix IsElementwiseBinary for atan2. * Add a unit test for HLO opcode helpers. * Express IsElementwiseBinary in terms of IsElementwise() and operand_count() to avoid having to keep the two in sync manually. PiperOrigin-RevId: 174069664
--- Commit 3b845c80d authored by Allen Lavoie<allenl@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Disable resnet50_graph_test under TSAN due to timeouts. PiperOrigin-RevId: 174066937
--- Commit 8a09bbc4a authored by Igor Ganichev<iga@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Add TFE_Py_TensorShapeSlice function. TFE_Py_TensorShapeSlice takes a list of EagerTensors and returns a list of their i'th dimensions. This utility is fairly niche but it is simple and reduces SPINN training time by over 12%. PiperOrigin-RevId: 174065044
--- Commit 585432cc2 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Refactor ArgMin / ArgMax index ops as XlaHelpers. PiperOrigin-RevId: 174061370
--- Commit e6faa845c authored by Michael Case<mikecase@chromium.org> Committed by gunan<gunan@google.com>: Merge v1.4-rc1 back into master branch. (#13960) * Update RELEASE NOTES for TensorFlow 1.4 * Update the version strings for TF 1.4-rc0. * Update version strings in POM files missed by update script. * Pin TensorBoard 0.4 to TensorFlow 1.4 * Fixing the name of the disabled test. (#13592) * Revert "Implementing ghost batch norm as defined in https://arxiv.org/pdf/1705.08741." This reverts commit 125f7afa4a. * Disable iterator_ops_test on Windows for 1.4 release (#13609) * Disable failing Windows tests for r1.4 release. testRemoteIteratorUsingRemoteCallOpDirectSessionGPUCPU test is failing with "TypeError: only integer scalar arrays can be converted to a scalar index" on the Windows GPU Release bot. Disabling test. * Fix typo. * Also disable iterator_ops_test from contrib/.
* Add contributing authors to 1.4 Release notes. Thanks! * Fixes to authors. Removed duplicate and removed googler from contributing author list. * Fixes and additions to release notes. Added line about Keras moving into core. Added line about CUDA/cuDNN versions. Added line about custom ops. * Fixing a master regression (#13562) * Update version strings for 1.4.0rc1 * Remaining cherry-picks for 1.4.0rc1 (#13700) * Java: Tweak to address some Javadoc errors. PiperOrigin-RevId: 171987329 * Fix S3 BUILD not including files explicitly. This causes remote builds to fail since the AWS headers were missing. PiperOrigin-RevId: 171718021 * Add missing default config setting in aws.BUILD (#13662) * Remove setting AWS logging for S3 file system. Was causing issues with tests. Can repro test failures on Macs by running... bazel test --config=s3 --cache_test_results=no --test_output=streamed //tensorflow/core/kernels:control_flow_ops_test Possible reason for the error is a symbol collision with AWS logging code. One possible solution would be to split out another shared object for the S3 filesystem op which does not link in libtensorflow_framework.so. This is done, for example, by libforestprotos.so in tensorflow/contrib/tensor_forest/BUILD PiperOrigin-RevId: 171246381 * Relanding change to add config to enable S3 file system support. Pass --config=s3 argument to Bazel to build with S3 file system support. Change was originally rolled back due to a failure it caused in //tensorflow/core/kernels:control_flow_ops_test on Macs, which is now fixed. PiperOrigin-RevId: 171579378 * Update release notes about Amazon S3 file system support being default. * Add documentation to sloppy_interleave function PiperOrigin-RevId: 171303413 * Add `cudnn_rnn_ops` to the Windows build. Fixes #13696. * Creating a patch for the wrong links that still point to dev. (#13753) * tfdbg release notes in r1.4 * Fix ambiguous type comparison in s3_crypto.cc (#13758) tensorflow/contrib/s3/s3_crypto.cc(74): error C2666: 'std::fpos<_Mbstatet>::operator ==': 3 overloads have similar conversions could be 'bool std::fpos<_Mbstatet>::operator ==(std::streamoff) const' or 'bool std::fpos<_Mbstatet>::operator ==(const std::fpos<_Mbstatet> &) We were seeing this compilation error on Windows builds. * Set estimator run_config default random seed to None. This makes it aligned with other parts of TF. Many users are not aware of the impact of a non-random seed; for example, it may lead to training on only a small fraction of the training data due to preemptions. We're changing the default behavior since we consider this a bug fix. PiperOrigin-RevId: 172519268 * Move global_step_read dependency to model_fn instead of input_fn. PiperOrigin-RevId: 172366972 * [tf.data] Fix broken implementation of `Dataset.from_generator()` on Windows. Due to a mix-up between NumPy's default array element type for a Python `int` on Windows and Linux, a tf.py_func() in `Dataset.from_generator()` would appear to return the wrong type on Windows (np.int32 instead of np.int64). All code using `Dataset.from_generator()` on Windows was previously broken. This change fixes both `tf.data.Dataset.from_generator()` and `tf.contrib.data.Dataset.from_generator()`. It also enables test coverage for this method on Windows, which should prevent future breakage. PiperOrigin-RevId: 172346533 * Update RELEASE notes for change to run_config random seed. * Disable probable timeout flake on Ubuntu machines. PiperOrigin-RevId: 172408922 * Disabling failing contrib tests.
* Disable S3 on Windows due to build issues. * Update serving_input_fn argument name to serving_input_receiver_fn PiperOrigin-RevId: 172787460 * Update the C++ API guide (#13858) - Adds the standard warning at the top that people may want the master branch - Includes a documentation fix for 1.4 (cc_binary -> tf_cc_binary to avoid undefined symbols). * Add known Dataset issue to RELEASE.md. (#13870) Adding info about issue using Unicode strings with Datasets. * Fixes to merge. * Fix spelling of tensorflow in install_sources.md
--- Commit 6eac524ef authored by cglewis<clewis@iqt.org> Committed by cglewis<clewis@iqt.org>: Use 'LABEL maintainer=' in Dockerfile * Use 'LABEL maintainer=' in Dockerfile This fix is a follow-up of #13961 to replace `MAINTAINER` with `LABEL maintainer=` in Dockerfiles. The keyword `MAINTAINER` has long been deprecated and is replaced by `LABEL`, which is much more flexible and is easily searchable through `docker inspect`. This fix replaces the remaining `MAINTAINER` with `LABEL`. Signed-off-by: Charlie Lewis <clewis@iqt.org> * Additional `MAINTAINER` -> `LABEL` Signed-off-by: Charlie Lewis <clewis@iqt.org>
--- Commit 469970260 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Modify quantization to support add ops that occur after Conv2D. PiperOrigin-RevId: 174058697
--- Commit 938643b56 authored by Amit Patankar<amitpatankar@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Replace the docker check with an OS check. PiperOrigin-RevId: 174057778
--- Commit 5f1a66ccb authored by Igor Saprykin<isaprykin@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Add more recovery functionality to MonitoredSession.run_step_fn. The current implementation wouldn't recover from one of `_PREEMPTION_ERRORS` during a fetch through the raw session that is made available to the step_fn. The changelist presents a way to map the desired functionality to the hierarchy of _MonitoredSession > (possibly!) _RecoverableSession > _CoordinatedSession > _HookedSession. PiperOrigin-RevId: 174053865
--- Commit 9a2b0983a authored by Yifei Feng<fengyifei2026@gmail.com> Committed by gunan<gunan@google.com>: Add apt-key for Ubuntu keyserver (#14114)
--- Commit 479ee24a0 authored by Asim Shankar<asimshankar@gmail.com> Committed by gunan<gunan@google.com>: eager: Update broken link in README (#14136)
--- Commit ad7bb2b9e authored by Asim Shankar<asimshankar@gmail.com> Committed by gunan<gunan@google.com>: eager: Update broken links in guide.md (#14135)
--- Commit c37ebf0d5 authored by Thomas Deegan<tadeegan@gmail.com> Committed by gunan<gunan@google.com>: Resolve //tensorflow relative to the tensorflow repo so that tfcompile.bzl can be correctly loaded from another Bazel project (#14103)
--- Commit b2ff3ad96 authored by Mustafa Ispir<ispir@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Added GraphKeys.METRIC_VARIABLE collection. Added all variables under tf.metrics and tf.contrib.metrics into this collection. This will enable replication of the model for evaluation. When we replicate a metric in multiple towers (say, for each GPU we replicate the same model/metric), we cannot reduce the output of the metrics. On the other hand, the internal state (local variables) of those metrics can be reduced via sum. PiperOrigin-RevId: 174051559
--- Commit 98dad195d authored by A.
Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Adds sigmoid to the list of operations that can be recomputed. PiperOrigin-RevId: 174047825
--- Commit 123749fb1 authored by Yuan (Terry) Tang<terrytangyuan@users.noreply.github.com> Committed by Martin Wicke<martin.wicke@gmail.com>: Remove Scikit Flow link and description (#14036)
--- Commit 0d118e4dc authored by Benoit Steiner<bsteiner@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Implemented tensorflow::port::NominalCPUFrequency() PiperOrigin-RevId: 174041196
--- Commit 648993e82 authored by Andrew Harp<andrew.harp@gmail.com> Committed by Andrew Harp<andrew.harp@gmail.com>: Delete extraneous file
--- Commit c2ff8a5ab authored by Mark Daoust<markdaoust@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Delete backticks. PiperOrigin-RevId: 174030921
--- Commit 333ba224d authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Dependency information for Skylark macros. PiperOrigin-RevId: 174023371
--- Commit 9ee0cecec authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Shrink the model size for unit test. PiperOrigin-RevId: 174001263
--- Commit c44f67a7e authored by Yifei Feng<fengyifei2026@gmail.com> Committed by gunan<gunan@google.com>: Disable clang_format check. (#14115) Different clang_format versions can produce different formats with the same style option. This check might be too strict. Disable for now.
--- Commit a6a618843 authored by Asim Shankar<ashankar@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: eager: Documentation and example models. - Updated README - A preliminary "User's Guide" - A few example models, some with benchmarks. PiperOrigin-RevId: 173996303
--- Commit de38e5dff authored by ???<dev@goodow.com> Committed by GitHub<noreply@github.com>: Fix broken link
--- Commit cd81bc8e0 authored by Rohan Jain<rohanj@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Adds a PrefetchWithFn op to contrib/data. Along with the FunctionBufferingResource, this can be used to prefetch and fill up a buffer by making repeated function calls. Also fixes a TODO in the ProcessFLR implementation to respect alloc_attrs for Rendezvous calls. PiperOrigin-RevId: 173990680
--- Commit 17695212c authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: [TF:XLA] Don't pass HLO operands in HandleAtan2. This makes it consistent with the rest of the Visit methods where we only pass the HLO itself. PiperOrigin-RevId: 173990595
--- Commit 113be5746 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: A few profiler improvements: 1. Track the full allocation history of each tensor, visualized in the timeline. 2. Better ProfileContext for tracing step selection. 3. Small bug fix. PiperOrigin-RevId: 173988293
--- Commit 6d1263cdf authored by Justin Lebar<jlebar@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: [XLA] Remove dead opcode kIndex. PiperOrigin-RevId: 173987428
--- Commit a4b5356e4 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: [TF:XLA] Reduce boilerplate code in HLO visitors. Only pass the HloInstruction into visitor methods. This makes changing instructions and visitors easier.
PiperOrigin-RevId: 173983398
--- Commit d9cee35b6 authored by LevineHuang<levinehuang@163.com> Committed by Benoit Steiner<benoitsteiner@users.noreply.github.com>: Typo fix in file 'fully_connected_feed.py' (#14033) * Typo fix in file 'fully_connected_feed.py' * Minor edits to coding style
--- Commit bb7ed1c88 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: K-FAC: Multi-tower ConvNet example. PiperOrigin-RevId: 173982527
--- Commit 2ba529856 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Initial add of docs for TensorFlow on Mobile. PiperOrigin-RevId: 173980290
--- Commit 187453d61 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Change momentum optimizer to allow callable learning_rate and momentum parameters. This can be useful for implementing learning rate decay. PiperOrigin-RevId: 173975321
--- Commit 542b323e5 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Register quint16/qint16 for GatherOp. PiperOrigin-RevId: 173974904
--- Commit 309e34061 authored by Allen Lavoie<allenl@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Avoid uncollectable cycles with a separate deleter object for resources. PiperOrigin-RevId: 173972515
--- Commit 73fdaf0b5 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Summary-writing support for Evaluators. PiperOrigin-RevId: 173971621
--- Commit 72be26dc8 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: [tf.data] Iterator Save and Restore for Dataset.from_tensors(..), Dataset.from_tensor_slices(..) and dataset.concatenate(..). PiperOrigin-RevId: 173971324
--- Commit 09f62ab38 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Speeding up the case for sparse float columns that have only 1 value. PiperOrigin-RevId: 173971121
--- Commit c315cf1ee authored by Shanqing Cai<cais@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Internal-only changes. PiperOrigin-RevId: 173968246
--- Commit 293ba20be authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Make learning_rate_decay.piecewise_constant work in Eager mode. PiperOrigin-RevId: 173967531
--- Commit 0e6abfcda authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: K-FAC: Example for multi-tower support for MNIST MLP. PiperOrigin-RevId: 173967370
--- Commit b46c196e9 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: * Add graph rewrite rule that removes repeated application of scalar unary ops that are involutions (their own inverse). * Update rewrite rule for Transpose to also handle ConjugateTranspose. PiperOrigin-RevId: 173967184
--- Commit ff5c276ad authored by Stephan Hoyer<shoyer@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Longer README for tf.contrib.labeled_tensor. PiperOrigin-RevId: 173966577
--- Commit 558f146e1 authored by A.
Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Go: Update generated wrapper functions for TensorFlow ops. PiperOrigin-RevId: 173966068
--- Commit f9a673cb7 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: In the overloaded HloVerifier::CheckShape, include the failing instruction in the error message. PiperOrigin-RevId: 173965368
--- Commit 302ab0ff7 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Update ops-related pbtxt files. PiperOrigin-RevId: 173965174
--- Commit 89120eb68 authored by Alexandre Passos<apassos@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: scatter_update for resource variables. PiperOrigin-RevId: 173963715
--- Commit 8f7903b4c authored by Justine Tunney<jart@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Introduce SQLite SummaryWriterInterface. This change allows tensors to be written from the graph, as they flow, directly to the database. Many of the important details haven't been implemented yet. This has been done with the new summary interface that's going to be used with eager. PiperOrigin-RevId: 173961448
--- Commit 9aaa49a4e authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Avoid using variables as booleans (similarly to tensors). PiperOrigin-RevId: 173956625
--- Commit a60cd87c4 authored by Alexandre Passos<apassos@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: No need for unique variable names in eager. PiperOrigin-RevId: 173954805
--- Commit f17f389d8 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Add a workaround in the Grappler arithmetic optimizer for the "Add" op not being marked commutative. This will allow Grappler to dedup nodes Add(x,y) and Add(y,x). PiperOrigin-RevId: 173950586
--- Commit e40eb810a authored by Shanqing Cai<cais@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: TFE: Add errors for classic tf.summary.* ops and FileWriter. PiperOrigin-RevId: 173949980
--- Commit 25620825b authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Dataset: Adds eager warnings to make_initializable_iterator and make_one_shot_iterator. PiperOrigin-RevId: 173949737
--- Commit 1d6dae88e authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Add check to tf.device when called with a function in eager mode. PiperOrigin-RevId: 173947845
--- Commit 3639aa7ff authored by Alexandre Passos<apassos@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Always run iterator deleter in eager mode for safety. PiperOrigin-RevId: 173947019
--- Commit efcbf6e34 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Supported in this CL: * Attaching sharding descriptors to HLO ops * Partitioning the HLO graph into per-device computations based on those sharding descriptors. * All operator support for device placement and ops replicated on all devices. * Elementwise op support for tiled shardings. * 2D Convolution support for tiled shardings (no stride or dilation support).
PiperOrigin-RevId: 173946036
--- Commit 682a6ed64 authored by Jon Shlens<shlens@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Update the documentation for sample_distorted_bounding_box. PiperOrigin-RevId: 173943029
--- Commit 4f6e6ea4c authored by Sanjoy Das<sanjoy@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Fix typo in comment; NFC. PiperOrigin-RevId: 173942305
--- Commit 07584221f authored by Anna R<annarev@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Set visibility to HIDDEN for hidden Python ops in ApiDef. PiperOrigin-RevId: 173942001
--- Commit 35cc8bb0a authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: K-FAC: Multiple minibatches support for LayerCollection.register_conv2d() PiperOrigin-RevId: 173941279
--- Commit 32f3c3a43 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Go: Update generated wrapper functions for TensorFlow ops. PiperOrigin-RevId: 173933228
--- Commit 8cc7b47a4 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Update ops-related pbtxt files. PiperOrigin-RevId: 173932574
--- Commit b9337de5b authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: K-FAC: Multi-tower support for ConvKFCBasicFB. PiperOrigin-RevId: 173932013
--- Commit 1b6b7e208 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Add registration for op AddV2, which is identical to Add, except that it does not implement string concatenation. This allows us to mark AddV2 is_commutative and is_aggregate, which will allow optimizers more freedom. PiperOrigin-RevId: 173931848
--- Commit 629e6d0c1 authored by Joshua V. Dillon<jvdillon@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Bugfix: Make `tf.contrib.distributions.Independent` tests not flaky. PiperOrigin-RevId: 173921378
--- Commit 4b63f47d9 authored by Justin Lebar<jlebar@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: [XLA:CPU] Don't crash if someone tries to do dot(X, X) or dot(X, X^T). PiperOrigin-RevId: 173919310
--- Commit 89582677c authored by Alexandre Passos<apassos@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: EagerVariableStore, for compatibility with functional layers. PiperOrigin-RevId: 173915730
--- Commit cef680b53 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Enable shape inference on functions in grappler. PiperOrigin-RevId: 173914941
--- Commit e8ac0b48f authored by Akshay Agrawal<akshayka@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Report a nicer error message when differentiating a function that returns None in eager. PiperOrigin-RevId: 173914883
--- Commit 85f8d9240 authored by Eugene Brevdo<ebrevdo@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: [tensorflow training input] If SparseTensors are used in batch* ops, ensure restoration. This forces the ST restore op to be called if any tensors are accessed at the output of the batch, thus fixing a memory leak. Solution suggested by Derek Murray. Fixes #13999.
PiperOrigin-RevId: 173904309
--- Commit 7fd261602 authored by Skye Wanderman-Milne<skyewm@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Add TF_GraphVersions() to C API and use in Graph.graph_def_versions(). PiperOrigin-RevId: 173902666
--- Commit 4723f8f6e authored by RJ Ryan<rjryan@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Support SymbolicGradient for functions with non-trainable arguments. The non-trainable arguments end up with None as their incoming out_grad, which is not a valid input to SymbolicGradient (inputs have to be convertible to Tensor, and None isn't). PiperOrigin-RevId: 173901727
--- Commit 494672475 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Added "NOTE: You may only install TensorFlow on 64-bit machines" to all the TensorFlow Install guides. PiperOrigin-RevId: 173899394
--- Commit b73743e3a authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Remove accidental disabling of (already manual) tests. PiperOrigin-RevId: 173898910
--- Commit ce0238198 authored by Skye Wanderman-Milne<skyewm@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Add ability to fetch return nodes and unused input mappings from C API GraphDef import. This change introduces yet another ImportGraphDef function to the C API (TF_GraphImportGraphDefWithResults), but this one has extensible return values so we shouldn't have to add more in the future. This change also modifies the ImportGraphDef C interface to manage all string data for the user. PiperOrigin-RevId: 173894710
--- Commit ef4490f63 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: BUILD cleanup in contrib/... PiperOrigin-RevId: 173889798
--- Commit 2e54fd6de authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Adds eager execution compatibility note in Readers, Queues, and QueueRunner. Raises a RuntimeError in base classes for QueueBase, ReaderBase, and QueueRunner. PiperOrigin-RevId: 173888425
--- Commit 32ab30cb0 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Fixes typo in compatibility. PiperOrigin-RevId: 173887031
--- Commit 325c8e5ef authored by Justine Tunney<jart@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Improve C++ SQLite veneer - Use shared_ptr for Sqlite - Don't need unique_ptr on SqliteStatement - Don't need db namespace - Include SQL in error statuses. PiperOrigin-RevId: 173802267
--- Commit 0eba15fe6 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Adds eager compatibility message for PartitionedVariable. PiperOrigin-RevId: 173772851
--- Commit e7645b629 authored by Justin Lebar<jlebar@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: [XLA] DOT dumper: Handle fusion nodes nested inside other nodes (e.g. map).
PiperOrigin-RevId: 173752314
--- Commit 8ec7540e0 authored by Shanqing Cai<cais@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: TFE: Fix pip test for tf.contrib.summary. Fixes test failure in tensorflow/contrib/summary:summary_ops_test, e.g., http://ci.tensorflow.org/job/tensorflow-cl-cpu-python3-pip/10933/console PiperOrigin-RevId: 173749502
--- Commit c16797ec3 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Adds eager execution compatibility note in Estimators. Raises a RuntimeError in Estimator base class. PiperOrigin-RevId: 173744765
--- Commit e8a62a30b authored by ???<dev@goodow.com> Committed by GitHub<noreply@github.com>: Fix minor typo
--- Commit 36696ad58 authored by ???<dev@goodow.com> Committed by Larry Tin<dev@goodow.com>: tf.zeros doesn't accept a tensor argument: ValueError: Shape must be rank 1 but is rank 0 for 'zeros_2' (op: 'Fill') with input shapes: [], [].
--- Commit 9f4b12bb5 authored by Justin Lebar<jlebar@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: [XLA] DOT dumper: Print constant shape when we elide the constant's value. For example, instead of "operand 1 = %constant.42", we now print "operand 1 = %constant.42 (f32[100])". PiperOrigin-RevId: 173741373
--- Commit 45c5118f0 authored by Mark Heffernan<meheff@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: When creating an HloModule from an HloProto, construct the HloModuleConfig with a correct ProgramShape which matches the shapes of the entry computation. Previously the module config had a bogus or default-constructed ProgramShape. PiperOrigin-RevId: 173741104
--- Commit 09a89ae57 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Add `tf.contrib.distributions.bijectors.Reshape`. PiperOrigin-RevId: 173740491
--- Commit 729db035e authored by Mark Daoust<markdaoust@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Allow compatibility notes in class, property and module doc-strings. PiperOrigin-RevId: 173739674
--- Commit ca56fa49a authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Go: Update generated wrapper functions for TensorFlow ops. PiperOrigin-RevId: 173739110
--- Commit 48df7c972 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Update ops-related pbtxt files. PiperOrigin-RevId: 173738765
--- Commit fb2c84cb2 authored by Jeremy Lau<lauj@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Internal change. PiperOrigin-RevId: 173738655
--- Commit 245a5c171 authored by Akshay Agrawal<akshayka@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Make functional_ops compatible with eager execution by ignoring caching devices when in eager mode. PiperOrigin-RevId: 173737949
--- Commit d1c59bd37 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Add tf.quantize op, which is the same as tf.quantize_v2. PiperOrigin-RevId: 173735986
--- Commit 3ff9c8d2a authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Fix typos in Linear Model Tutorial samples: 1. test_file_name is undefined (should be test_file.name) 2.
train_file_name is undefined (should be train_file.name) PiperOrigin-RevId: 173733442
--- Commit abbab2430 authored by Michael Case<mikecase@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Add bazel mirror links for newly added workspace dependencies. PiperOrigin-RevId: 173732606
--- Commit 46a577feb authored by Derek Murray<mrry@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: [CMake] Generate audio_ops wrappers in the CMake build. Fixes #14004. PiperOrigin-RevId: 173732397
--- Commit 7cb7f88c5 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Add count metric, a helper function that computes the total number or total weight of examples. PiperOrigin-RevId: 173731046
--- Commit e1d7615eb authored by Alexandre Passos<apassos@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Fix issue with gradients of functions which return multiple values. PiperOrigin-RevId: 173730922
--- Commit 80374a7b4 authored by Joshua V. Dillon<jvdillon@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Breaking change: Rename `tf.contrib.distributions.Independent` parameter from `reduce_batch_ndims` to `reinterpreted_batch_ndims`. Also change the default; the `reinterpreted_batch_ndims` default has the semantics of `tf.layers.flatten`, i.e., all batch dimensions except the first (batch axis 0) are interpreted as being part of the event. PiperOrigin-RevId: 173729585
--- Commit 5426a3c93 authored by Allen Lavoie<allenl@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Add tfe.get_optimizer_variables for fetching a list of variables which an optimizer has created. Useful for saving them if executing eagerly. PiperOrigin-RevId: 173726859
--- Commit 02f55400f authored by Alexandre Passos<apassos@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: custom_gradient functions should be able to return their inputs. PiperOrigin-RevId: 173723462
--- Commit 78bac7290 authored by Shanqing Cai<cais@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: TFE: Add compatibility doc string to add_to_collection() and friends. PiperOrigin-RevId: 173716912
--- Commit 9bf00c371 authored by Alexandre Passos<apassos@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Shorter import for tfe. PiperOrigin-RevId: 173716375
--- Commit 0bc432a44 authored by Shanqing Cai<cais@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: TFE: Add compatibility errors and doc strings to queues, input pipelines and Supervisor. PiperOrigin-RevId: 173712330
--- Commit e9af1af4f authored by Amit Patankar<amitpatankar@google.com> Committed by Amit Patankar<amitpatankar@google.com>: Fixing the sources docs in master.
--- Commit b31b08bb0 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Adds randomized tests for newly introduced complex and related ops. PiperOrigin-RevId: 173709206
--- Commit 466b9ecf8 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Report total number of bytes to be transferred when the curl request makes no progress.
PiperOrigin-RevId: 173707608
--- Commit 7c4e98eb4 authored by Igor Ganichev<iga@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Add Tensor._rank() getter. It appears to speed up the SPINN model by about 1%, which is not much, but this method is very simple and easier to use than len(tensor._shape_tuple()). PiperOrigin-RevId: 173703259
--- Commit d7cffe9c0 authored by Allen Lavoie<allenl@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Adds save and restore methods to tfe.Network. Save just saves the variables to a checkpoint. Restore either restores immediately or defers the restoration to variable creation time with a custom getter. PiperOrigin-RevId: 173703075
--- Commit 9158f974a authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Use tf.app.run in gcs_smoke, so that the flags are explicitly parsed, instead of parsed when first accessed. PiperOrigin-RevId: 173702828
--- Commit 3d39b32b9 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Fix a tfprof bug. Throws an error when the flops cannot be calculated. PiperOrigin-RevId: 173702740
--- Commit 73155f56a authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: [TF:XLA] Small code cleanup. Re-alphabetized. PiperOrigin-RevId: 173702336
--- Commit 32bcf46f1 authored by Mustafa Ispir<ispir@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: internal. PiperOrigin-RevId: 173697389
--- Commit 97484a4d9 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Update ops-related pbtxt files. PiperOrigin-RevId: 173690751
--- Commit 873ef2ca3 authored by Oleg Zabluda<ozabluda@gmail.com> Committed by GitHub<noreply@github.com>: Fix documentation error in tf.size() - output type
--- Commit 16538dab7 authored by Alexandre Passos<apassos@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Saves summaries in the mnist example. PiperOrigin-RevId: 173690505
--- Commit 6b05b36cd authored by Jiri Simsa<jsimsa@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Generalizing sloppy_interleave, making sloppiness an option. PiperOrigin-RevId: 173687797
--- Commit 7775a6604 authored by Michael Case<mikecase@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Internal Change. PiperOrigin-RevId: 173685895
--- Commit 5120e75cf authored by Yong Tang<yong.tang.github@outlook.com> Committed by Yong Tang<yong.tang.github@outlook.com>: Move `@compatibility(eager)` from class docstring to __init__ docstring. Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
--- Commit 7d7b2ec58 authored by Yong Tang<yong.tang.github@outlook.com> Committed by Yong Tang<yong.tang.github@outlook.com>: Also fixes `@end_compatiblity` -> `@end_compatibility`. Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
--- Commit 96dc501cd authored by Yong Tang<yong.tang.github@outlook.com> Committed by Yong Tang<yong.tang.github@outlook.com>: Fix incorrect annotation tag in tf.Variable. In tf.Variable the annotation tag of `@compatiblity` should be `@compatibility`.
--- Commit c22973867 authored by Mark Daoust<markdaoust@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Delete bad links (md links not supported in html blocks). PiperOrigin-RevId: 173680417
--- Commit 4198e27be authored by A.
Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: [XLA:CPU] [XLA:GPU] Adds compiler support for C64 primitive type, including relevant elementwise unary and binary op lowering for CPU and GPU. We use a named LLVM struct "complex64", laid out the same as std::complex<float>. This named struct is accessed via the llvm::Module, which required changes to accessors of PrimitiveTypeToIrType & friends. Ops that require atan2 (in particular, angle and log) are only supported on GPU at this point. LLVM lacks a CPU intrinsic for atan or atan2, whereas libdevice provides this for GPU. PiperOrigin-RevId: 173676849
--- Commit 4ae245a7d authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: n/a (internal change only) PiperOrigin-RevId: 173674697
--- Commit 0ccf5cf60 authored by A. Unique TensorFlower<gardener@tensorflow.org> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Limit the amount of logspam a use of GraphKeys.VARIABLES causes. Multiple copies of this warning next to each other often make logs unreadable. PiperOrigin-RevId: 173672701
--- Commit a7b872527 authored by Yong Tang<yong.tang.github@outlook.com> Committed by Yong Tang<yong.tang.github@outlook.com>: Fix an output typo in `ci_sanity.sh`. In the last PR #13924 (clang sanity check) the output message should be changed: `due to the absence of Python code changes` -> `due to the absence of .h or .cc code changes`. Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
--- Commit 58d2c5f50 authored by Yong Tang<yong.tang.github@outlook.com> Committed by Shanqing Cai<cais@google.com>: Add `SANITY_STEPS_DESC` for do_clang_format_check (#14030) * Add `SANITY_STEPS_DESC` for do_clang_format_check This fix is a follow-up to PR #13924 to add the corresponding description in `SANITY_STEPS_DESC`. See comment #13924#discussion_r147314599 for details. Signed-off-by: Yong Tang <yong.tang.github@outlook.com> * Update description for Clang Format Check Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
--- Commit 62a9ab28c authored by ???<dev@goodow.com> Committed by GitHub<noreply@github.com>: Fix broken link
--- Commit c6292a3f9 authored by Yong Tang<yong.tang.github@outlook.com> Committed by Yong Tang<yong.tang.github@outlook.com>: Sanitize decode_csv_op.cc with `clang-format -i`. Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
--- Commit 285ea3910 authored by Yong Tang<yong.tang.github@outlook.com> Committed by Yong Tang<yong.tang.github@outlook.com>: Add test cases for `double` support of `tf.decode_csv`. Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
--- Commit 73aaed655 authored by Yong Tang<yong.tang.github@outlook.com> Committed by Yong Tang<yong.tang.github@outlook.com>: Update docs for `double` support on `tf.decode_csv`. Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
--- Commit 3595d1613 authored by Yong Tang<yong.tang.github@outlook.com> Committed by Yong Tang<yong.tang.github@outlook.com>: Add `double` support for `tf.decode_csv`. In current TensorFlow, `tf.decode_csv` accepts `float`, `int32`, `int64`, `string` but not `double`. It seems adding `double` support makes sense as `StringToNumber` already supports the `double` type.
This fix adds `double` support for `tf.decode_csv`. Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
--- Commit 37d483fda authored by Sergii Khomenko<sergii.khomenko@stylight.com> Committed by Sergii Khomenko<sergii.khomenko@stylight.com>: Fix a typo
--- Commit 9c8a520b0 authored by Justine Tunney<jart@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: Add WriteEvent method to SummaryWriterInterface. Another change will follow that adds an op for this method. It will be useful for loading event logs into other types of summary writer implementations, like a database. This change might also make the new summary file writer go faster, due to less memory copying. PiperOrigin-RevId: 173640116
--- Commit a49455812 authored by Eugene Brevdo<ebrevdo@google.com> Committed by TensorFlower Gardener<gardener@tensorflow.org>: BEGIN_PUBLIC Automated g4 rollback of changelist 172654120

PiperOrigin-RevId: 174388998
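The `tf.decode_csv` commits a few entries above add `double` (float64) output support. As a hedged sketch only, here is what exercising that might look like in a TF 1.4-era graph/session program once a build containing the change is installed; the CSV data and column layout below are made up for illustration:

```python
import tensorflow as tf

# Two CSV columns: a float64 measurement and an int32 label. The empty
# float64 default is what the new `double` support in decode_csv enables.
record_defaults = [tf.constant([], dtype=tf.float64),
                   tf.constant([], dtype=tf.int32)]

lines = tf.constant(["3.14159265358979,1", "2.71828182845905,0"])
value_col, label_col = tf.decode_csv(lines, record_defaults=record_defaults)

with tf.Session() as sess:
    # value_col comes back as float64, label_col as int32.
    print(sess.run([value_col, label_col]))
```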
This commit is contained in:
parent bb5dfbea81
commit 88917888f5
@@ -19,6 +19,7 @@ If you open a GitHub issue, here is our policy:
- **TensorFlow version (use command below)**:
- **Python version**:
- **Bazel version (if compiling from source)**:
- **GCC/Compiler version (if compiling from source)**:
- **CUDA/cuDNN version**:
- **GPU model and memory**:
- **Exact command to reproduce**:
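The "(use command below)" note in the template refers to a version-printing snippet that is not part of this hunk. For reference, one common way to gather that information in TF 1.x, assuming TensorFlow is already installed, is:

```python
import tensorflow as tf

# Prints the git describe string and the release version,
# which is the information the issue template asks for.
print(tf.GIT_VERSION, tf.VERSION)
```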
RELEASE.md (21 changed lines)
@@ -19,6 +19,14 @@
  (with GPU and gradient support).
* Add a self-check on `import tensorflow` for Windows DLL issues.
* Add NCHW support to `tf.depth_to_space` on GPU.
* TensorFlow Debugger (tfdbg):
  * Add `eval` command to allow evaluation of arbitrary Python/numpy expressions
    in tfdbg command-line interface. See
    [Debugging TensorFlow Programs](https://www.tensorflow.org/programmers_guide/debugger)
    for more details.
  * Usability improvement: The frequently used tensor filter `has_inf_or_nan` is
    now added to `Session` wrappers and hooks by default. So there is no need
    for clients to call `.add_tensor_filter(tf_debug.has_inf_or_nan)` anymore.
* SinhArcsinh (scalar) distribution added to `contrib.distributions`.
* Make `GANEstimator` opensource.
* `Estimator.export_savedmodel()` now includes all valid serving signatures
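The tfdbg notes in the hunk above say the `has_inf_or_nan` filter is now registered on the `Session` wrappers by default. A minimal sketch of wrapping a session for the tfdbg CLI in TF 1.4 follows; the tiny graph here is only for illustration:

```python
import tensorflow as tf
from tensorflow.python import debug as tf_debug

x = tf.placeholder(tf.float32, name="x")
y = tf.log(x, name="y")  # produces -inf/nan for non-positive inputs

sess = tf.Session()
# The CLI wrapper ships with `has_inf_or_nan` pre-registered, so
# `run -f has_inf_or_nan` works without calling add_tensor_filter().
sess = tf_debug.LocalCLIDebugWrapperSession(sess)
sess.run(y, feed_dict={x: 0.0})
```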
@@ -60,10 +68,14 @@
* Fix `tf.contrib.distributions.Affine` incorrectly computing log-det-jacobian.
* Fix `tf.random_gamma` incorrectly handling non-batch, scalar draws.
* Resolved a race condition in TensorForest TreePredictionsV4Op.
* Google Cloud Storage file system and Hadoop file system support are now
  default build options.
* Google Cloud Storage file system, Amazon S3 file system, and Hadoop file
  system support are now default build options.
* Custom op libraries must link against libtensorflow_framework.so
  (installed at `tf.sysconfig.get_lib()`).
* Change `RunConfig` default behavior to not set a random seed, making random
  behavior independently random on distributed workers. We expect this to
  generally improve training performance. Models that do rely on determinism
  should set a random seed explicitly.

## Breaking Changes to the API
* The signature of the `tf.contrib.data.rejection_resample()` function has been
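The custom-op note above (link against libtensorflow_framework.so, installed at `tf.sysconfig.get_lib()`) pairs with the `tf.sysconfig.get_compile_flags()` / `get_link_flags()` helpers added in commit f567ddf87 earlier in this log. A hedged sketch of driving a g++ build from Python with them; the op source file name `zero_out.cc` is hypothetical:

```python
import subprocess
import tensorflow as tf

# get_compile_flags() includes the header path and the ABI define;
# get_link_flags() includes -L<libdir> and -ltensorflow_framework,
# so the resulting .so links against libtensorflow_framework.so.
compile_flags = tf.sysconfig.get_compile_flags()
link_flags = tf.sysconfig.get_link_flags()

cmd = (["g++", "-std=c++11", "-shared", "-fPIC", "zero_out.cc",
        "-o", "zero_out.so", "-O2"] + compile_flags + link_flags)
subprocess.check_call(cmd)
```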
@@ -74,6 +86,11 @@
* Remove seldom used and unnecessary `tf.contrib.data.Iterator.dispose_op()`.
* Reorder some TFGAN loss functions in a non-backwards compatible way.

## Known Issues
* In Python 3, `Dataset.from_generator()` does not support Unicode strings.
  You must convert any strings to bytes objects before yielding them from
  the generator.

## Thanks to our Contributors

This release contains contributions from many people at Google, as well as:
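For the Known Issue above (Python 3 `Dataset.from_generator()` and Unicode strings), a minimal sketch of the suggested workaround is to encode to bytes before yielding; the generator contents below are illustrative only:

```python
import tensorflow as tf

def gen():
    for s in [u"alpha", u"beta", u"gamma"]:
        # Work around the Known Issue: yield bytes, not unicode str, in Python 3.
        yield s.encode("utf-8")

dataset = tf.data.Dataset.from_generator(gen, output_types=tf.string)
iterator = dataset.make_one_shot_iterator()
next_elem = iterator.get_next()

with tf.Session() as sess:
    print(sess.run(next_elem))  # b'alpha'
```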
@@ -994,6 +994,7 @@ def main():
    environ_cp['TF_NEED_HDFS'] = '0'
    environ_cp['TF_NEED_JEMALLOC'] = '0'
    environ_cp['TF_NEED_OPENCL'] = '0'
    environ_cp['TF_NEED_S3'] = '0'
    environ_cp['TF_CUDA_CLANG'] = '0'

  if is_macos():
@@ -4,7 +4,7 @@

To use from your BUILD file, add the following line to load the macro:

load("//tensorflow/compiler/aot:tfcompile.bzl", "tf_library")
load("@org_tensorflow//tensorflow/compiler/aot:tfcompile.bzl", "tf_library")

Then call the macro like this:
@@ -16,14 +16,14 @@ tf_library(
)
"""

load("//tensorflow:tensorflow.bzl", "if_android", "tf_copts")
load("@org_tensorflow//tensorflow:tensorflow.bzl", "if_android", "tf_copts")

def tf_library(name, graph, config,
               freeze_checkpoint=None, freeze_saver=None,
               cpp_class=None, gen_test=True, gen_benchmark=True,
               visibility=None, testonly=None,
               tfcompile_flags=None,
               tfcompile_tool="//tensorflow/compiler/aot:tfcompile",
               tfcompile_tool="@org_tensorflow//tensorflow/compiler/aot:tfcompile",
               include_standard_runtime_deps=True, deps=None, tags=None):
  """Runs tfcompile to compile a TensorFlow graph into executable code.
@@ -121,7 +121,7 @@ def tf_library(name, graph, config,
        outs=[freeze_file],
        cmd=("$(location //tensorflow/python/tools:freeze_graph)" +
             freeze_args),
        tools=["//tensorflow/python/tools:freeze_graph"],
        tools=["@org_tensorflow//tensorflow/python/tools:freeze_graph"],
        tags=tags,
    )
    tfcompile_graph = freeze_file
@@ -207,22 +207,22 @@ def tf_library(name, graph, config,
      # These deps are required by all tf_library targets even if
      # include_standard_runtime_deps is False. Without them, the
      # generated code will fail to compile.
      "//tensorflow/compiler/tf2xla:xla_compiled_cpu_function",
      "//tensorflow/core:framework_lite",
      "@org_tensorflow//tensorflow/compiler/tf2xla:xla_compiled_cpu_function",
      "@org_tensorflow//tensorflow/core:framework_lite",
  ] + (need_xla_data_proto and [
      # If we're generating the program shape, we must depend on the proto.
      "//tensorflow/compiler/xla:xla_data_proto",
      "@org_tensorflow//tensorflow/compiler/xla:xla_data_proto",
  ] or []) + (include_standard_runtime_deps and [
      # TODO(cwhipkey): only depend on kernel code that the model actually needed.
      "//tensorflow/compiler/tf2xla/kernels:index_ops_kernel_argmax_float_1d",
      "//tensorflow/compiler/tf2xla/kernels:index_ops_kernel_argmax_float_2d",
      "//tensorflow/compiler/xla/service/cpu:cpu_runtime_avx",
      "//tensorflow/compiler/xla/service/cpu:cpu_runtime_neon",
      "//tensorflow/compiler/xla/service/cpu:cpu_runtime_sse4_1",
      "//tensorflow/compiler/xla/service/cpu:runtime_conv2d",
      "//tensorflow/compiler/xla/service/cpu:runtime_matmul",
      "//tensorflow/compiler/xla/service/cpu:runtime_single_threaded_conv2d",
      "//tensorflow/compiler/xla/service/cpu:runtime_single_threaded_matmul",
      "@org_tensorflow//tensorflow/compiler/tf2xla/kernels:index_ops_kernel_argmax_float_1d",
      "@org_tensorflow//tensorflow/compiler/tf2xla/kernels:index_ops_kernel_argmax_float_2d",
      "@org_tensorflow//tensorflow/compiler/xla/service/cpu:cpu_runtime_avx",
      "@org_tensorflow//tensorflow/compiler/xla/service/cpu:cpu_runtime_neon",
      "@org_tensorflow//tensorflow/compiler/xla/service/cpu:cpu_runtime_sse4_1",
      "@org_tensorflow//tensorflow/compiler/xla/service/cpu:runtime_conv2d",
      "@org_tensorflow//tensorflow/compiler/xla/service/cpu:runtime_matmul",
      "@org_tensorflow//tensorflow/compiler/xla/service/cpu:runtime_single_threaded_conv2d",
      "@org_tensorflow//tensorflow/compiler/xla/service/cpu:runtime_single_threaded_matmul",
      "//third_party/eigen3",
  ] or []) + (deps or []),
  tags=tags,

@@ -248,7 +248,7 @@ def tf_library(name, graph, config,
      name=("gen_" + test_name),
      testonly=1,
      srcs=[
          "//tensorflow/compiler/aot:test.cc",
          "@org_tensorflow//tensorflow/compiler/aot:test.cc",
          header_file,
      ],
      outs=[test_file],

@@ -264,13 +264,13 @@ def tf_library(name, graph, config,
      srcs=[test_file],
      deps=[
          ":" + name,
          "//tensorflow/compiler/tf2xla:xla_local_runtime_context",
          "//tensorflow/compiler/aot:runtime",
          "//tensorflow/compiler/aot:tf_library_test_main",
          "//tensorflow/compiler/xla:executable_run_options",
          "@org_tensorflow//tensorflow/compiler/tf2xla:xla_local_runtime_context",
          "@org_tensorflow//tensorflow/compiler/aot:runtime",
          "@org_tensorflow//tensorflow/compiler/aot:tf_library_test_main",
          "@org_tensorflow//tensorflow/compiler/xla:executable_run_options",
          "//third_party/eigen3",
          "//tensorflow/core:lib",
          "//tensorflow/core:test",
          "@org_tensorflow//tensorflow/core:lib",
          "@org_tensorflow//tensorflow/core:test",
      ],
      tags=tags,
  )

@@ -278,7 +278,7 @@ def tf_library(name, graph, config,
  if gen_benchmark:
    benchmark_name = name + "_benchmark"
    benchmark_file = benchmark_name + ".cc"
    benchmark_main = ("//tensorflow/compiler/aot:" +
    benchmark_main = ("@org_tensorflow//tensorflow/compiler/aot:" +
                      "benchmark_main.template")

    # Rule to rewrite benchmark.cc to produce the benchmark_file.

@@ -310,13 +310,13 @@ def tf_library(name, graph, config,
      linkopts = if_android(["-pie", "-s"]),
      deps=[
          ":" + name,
          "//tensorflow/compiler/tf2xla:xla_local_runtime_context",
          "//tensorflow/compiler/aot:benchmark",
          "//tensorflow/compiler/aot:runtime",
          "//tensorflow/compiler/xla:executable_run_options",
          "@org_tensorflow//tensorflow/compiler/tf2xla:xla_local_runtime_context",
          "@org_tensorflow//tensorflow/compiler/aot:benchmark",
          "@org_tensorflow//tensorflow/compiler/aot:runtime",
          "@org_tensorflow//tensorflow/compiler/xla:executable_run_options",
          "//third_party/eigen3",
      ] + if_android([
          "//tensorflow/compiler/aot:benchmark_extra_android",
          "@org_tensorflow//tensorflow/compiler/aot:benchmark_extra_android",
      ]),
      tags=tags,
  )

@@ -326,11 +326,11 @@ def target_llvm_triple():
  # TODO(toddw): Add target_triple for other targets. For details see:
  # http://llvm.org/docs/doxygen/html/Triple_8h_source.html
  return select({
      "//tensorflow:android_armeabi": "armv5-none-android",
      "//tensorflow:android_arm": "armv7-none-android",
      "//tensorflow:android_arm64": "aarch64-none-android",
      "//tensorflow:android_x86": "i686-none-android",
      "//tensorflow:linux_ppc64le": "ppc64le-ibm-linux-gnu",
      "//tensorflow:darwin": "x86_64-none-darwin",
      "@org_tensorflow//tensorflow:android_armeabi": "armv5-none-android",
      "@org_tensorflow//tensorflow:android_arm": "armv7-none-android",
      "@org_tensorflow//tensorflow:android_arm64": "aarch64-none-android",
      "@org_tensorflow//tensorflow:android_x86": "i686-none-android",
      "@org_tensorflow//tensorflow:linux_ppc64le": "ppc64le-ibm-linux-gnu",
      "@org_tensorflow//tensorflow:darwin": "x86_64-none-darwin",
      "//conditions:default": "x86_64-pc-linux",
  })

@@ -282,6 +282,22 @@ public class TensorFlowInferenceInterface {

  // Methods for taking a native Tensor and filling it with values from Java arrays.

  /**
   * Given a source array with shape {@link dims} and content {@link src}, copy the contents into
   * the input Tensor with name {@link inputName}. The source array {@link src} must have at least
   * as many elements as that of the destination Tensor. If {@link src} has more elements than the
   * destination has capacity, the copy is truncated.
   */
  public void feed(String inputName, boolean[] src, long... dims) {
    byte[] b = new byte[src.length];

    for (int i = 0; i < src.length; i++) {
      b[i] = src[i] ? (byte) 1 : (byte) 0;
    }

    addFeed(inputName, Tensor.create(Boolean.class, dims, ByteBuffer.wrap(b)));
  }

  /**
   * Given a source array with shape {@link dims} and content {@link src}, copy the contents into
   * the input Tensor with name {@link inputName}. The source array {@link src} must have at least

@@ -344,7 +344,7 @@ class GradientBoostedDecisionTreeModel(object):
                           learner_config.num_classes == 2)

  def _predict_and_return_dict(self, ensemble_handle, ensemble_stamp, mode):
    """Runs prediciton and returns a dictionary of the prediction results.
    """Runs prediction and returns a dictionary of the prediction results.

    Args:
      ensemble_handle: ensemble resource handle.

@@ -253,7 +253,6 @@ if (tensorflow_BUILD_PYTHON_TESTS)
    "${tensorflow_source_dir}/tensorflow/python/training/evaluation_test.py"
    # training tests
    "${tensorflow_source_dir}/tensorflow/python/training/basic_session_run_hooks_test.py" # Needs tf.contrib fix.
    "${tensorflow_source_dir}/tensorflow/python/training/localhost_cluster_performance_test.py" # Needs portpicker.
    "${tensorflow_source_dir}/tensorflow/python/training/quantize_training_test.py" # Needs quantization ops to be included in windows.
    "${tensorflow_source_dir}/tensorflow/python/training/supervisor_test.py" # Flaky I/O error on rename.
    "${tensorflow_source_dir}/tensorflow/python/training/server_lib_test.py" # Test occasionally deadlocks.

@@ -35,7 +35,7 @@ print(m)
This feature is in early stages and work remains to be done in terms of smooth
support for distributed and multi-GPU training and CPU performance.

- [Known issues](https://github.com/tensorflow/tensorflow/issues?q=is%3Aissue%20is%3Aopen%20label%3Aproj%3Aeager)
- [Known issues](https://github.com/tensorflow/tensorflow/issues?q=is%3Aissue%20is%3Aopen%20label%3Acomp%3Aeager)
- Feedback is welcome, please consider
  [filing an issue](https://github.com/tensorflow/tensorflow/issues/new) to provide it.

@@ -68,9 +68,9 @@ enabled.
A significant fraction of the [TensorFlow
API](https://www.tensorflow.org/api_docs/python/) consists of numerical
operations:
[arithmetic operations](https://www.tensorflow.org/api_docs/python/tf/matmul),
[matrix operations](https://www.tensorflow.org/api_docs/python/tf/matmul),
[linear algebra operations](https://www.tensorflow.org/api_docs/python/tf/linalg),
[arithmetic operations](https://www.tensorflow.org/api_guides/python/math_ops#Arithmetic_Operators),
[matrix operations](https://www.tensorflow.org/api_guides/python/math_ops#Matrix_Math_Functions),
[linear algebra operations](https://www.tensorflow.org/versions/master/api_docs/python/tf/linalg),
etc.

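For a concrete feel (a minimal, assumption-laden sketch: it presumes the `tf.contrib.eager` module described in this guide and a fresh Python process, since eager execution is enabled at program startup):

```python
import tensorflow as tf
import tensorflow.contrib.eager as tfe

tfe.enable_eager_execution()

# The op executes immediately and returns a concrete value rather than a
# symbolic graph node.
m = tf.matmul([[1.0, 2.0]], [[3.0], [4.0]])
print(m)
```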

With eager execution enabled, these operations consume and return

@@ -746,7 +746,7 @@ during graph construction.

`tf.summary` operations are *not* compatible with eager execution, but an
equivalent alternative exists in
[`tf.contrib.summary`](https://www.tensorflow.org/versions/master/api_guides/python/tf/contrib/summary/)
[`tf.contrib.summary`](https://www.tensorflow.org/versions/master/api_docs/python/tf/contrib/summary)
that is compatible with both eager execution and graph construction.

During model construction simply insert summary operations like

@@ -887,7 +887,7 @@ Some differences worth noting:

Please give eager execution a spin. This feature is in early stages and is
evolving, so we welcome your feedback via issues on GitHub (see [known
issues](https://github.com/tensorflow/tensorflow/labels/eager)).
issues](https://github.com/tensorflow/tensorflow/labels/comp:eager)).

You may want to browse through some sample code, including benchmarks for some:

@@ -77,10 +77,10 @@ def reduce_sum_n(tensors, name=None):
    return tensors[0]
  return math_ops.add_n(tensors, name=name_scope)

@deprecated(None,
            'Please switch to tf.confusion_matrix.remove_squeezable_dimensions.'
            'Note that order of the inputs and outputs of labels and '
            'predictions have also been switched.')
@deprecated(
    None, "Please switch to remove_squeezable_dimensions from "
    "tf.confusion_matrix. Note that the order of the inputs and outputs of "
    "labels and predictions have also been switched.")
def remove_squeezable_dimensions(predictions, labels, name=None):
  """Squeeze last dim if ranks of `predictions` and `labels` differ by 1.

@@ -47,7 +47,7 @@ such as the Wasserstein loss, gradient penalty, mutual information penalty, etc

* [evaluation](https://www.tensorflow.org/code/tensorflow/contrib/gan/python/eval/python/):
  Use `Inception Score` or `Frechet Distance` with a pretrained Inception
  network to evaluate your unconditional generative model. You can also also use
  network to evaluate your unconditional generative model. You can also use
  your own pretrained classifier for more specific performance numbers, or use
  other methods for evaluating conditional generative models.

@@ -2008,7 +2008,7 @@ def layer_norm(inputs,

  Given a tensor `inputs` of rank `R`, moments are calculated and normalization
  is performed over axes `begin_norm_axis ... R - 1`. Scaling and centering,
  if requested, is performed over axes `begin_shift_axis .. R - 1`.
  if requested, is performed over axes `begin_params_axis .. R - 1`.
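
  For example (an illustrative sketch only; the rank-3 input shape is
  hypothetical, and the argument names follow the description above):

      x = tf.placeholder(tf.float32, shape=[None, 10, 64])
      # Normalize over axes 1..2; learn scale and offset over the last axis.
      y = layer_norm(x, begin_norm_axis=1, begin_params_axis=-1)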

  By default, `begin_norm_axis = 1` and `begin_params_axis = -1`,
  meaning that normalization is performed over all but the first axis

@@ -1,6 +1,6 @@
FROM ubuntu:16.04

MAINTAINER Gunhan Gulsoy <gunan@google.com>
LABEL maintainer="Gunhan Gulsoy <gunan@google.com>"

# Install make build dependencies for TensorFlow.
RUN apt-get update

@@ -11,6 +11,8 @@
# the first for the host (the machine you're compiling on) and the second for
# the target (the machine you want the program to run on).

SHELL := /bin/bash

# Host compilation settings

# Find where we're running from, so we can store generated files here.

@@ -44,6 +46,11 @@ ifdef HEXAGON_LIBS
endif
endif # HEXAGON_LIBS

# If ANDROID_TYPES is not set assume __ANDROID_TYPES_SLIM__
ifeq ($(ANDROID_TYPES),)
ANDROID_TYPES := -D__ANDROID_TYPES_SLIM__
endif

# Try to figure out the host system
HOST_OS :=
ifeq ($(OS),Windows_NT)

@@ -58,6 +65,8 @@ else
endif
endif

HOST_ARCH := $(shell if [[ $(shell uname -m) =~ i[345678]86 ]]; then echo x86_32; else echo $(shell uname -m); fi)

# Where compiled objects are stored.
HOST_OBJDIR := $(MAKEFILE_DIR)/gen/host_obj/
HOST_BINDIR := $(MAKEFILE_DIR)/gen/host_bin/

@@ -216,7 +225,7 @@ ifeq ($(TARGET),LINUX)
endif
# If we're cross-compiling for the Raspberry Pi, use the right gcc.
ifeq ($(TARGET),PI)
CXXFLAGS += -D__ANDROID_TYPES_SLIM__ -DRASPBERRY_PI
CXXFLAGS += $(ANDROID_TYPES) -DRASPBERRY_PI
LDFLAGS := -Wl,--no-whole-archive
LIBS += -ldl -lpthread
LIBFLAGS += -Wl,--allow-multiple-definition -Wl,--whole-archive

@ -230,43 +239,93 @@ ifeq ($(TARGET),ANDROID)
|
|||
# NDK_ROOT=/path/to/your/ndk
|
||||
# You need to have an Android version of the protobuf libraries compiled to link
|
||||
# in. The compile_android_protobuf.sh script may help.
|
||||
# TODO(satok): Support all CPU architectures (Currently only armv7 is supported)
|
||||
|
||||
OS_PATH :=
|
||||
ANDROID_HOST_OS_ARCH :=
|
||||
ifeq ($(HOST_OS),LINUX)
|
||||
OS_PATH=linux
|
||||
ANDROID_HOST_OS_ARCH=linux
|
||||
endif
|
||||
ifeq ($(HOST_OS),OSX)
|
||||
OS_PATH=darwin
|
||||
ANDROID_HOST_OS_ARCH=darwin
|
||||
endif
|
||||
ifeq ($(HOST_OS),WINDOWS)
|
||||
$(error "windows is not supported.")
|
||||
endif
|
||||
|
||||
ifeq ($(HOST_ARCH),x86_32)
|
||||
ANDROID_HOST_OS_ARCH := $(ANDROID_HOST_OS_ARCH)-x86
|
||||
else
|
||||
ANDROID_HOST_OS_ARCH := $(ANDROID_HOST_OS_ARCH)-$(HOST_ARCH)
|
||||
endif
|
||||
|
||||
ifndef ANDROID_ARCH
|
||||
ANDROID_ARCH := armeabi-v7a
|
||||
endif
|
||||
|
||||
ifeq ($(ANDROID_ARCH),arm64-v8a)
|
||||
TOOLCHAIN := aarch64-linux-android-4.9
|
||||
SYSROOT_ARCH := arm64
|
||||
BIN_PREFIX := aarch64-linux-android
|
||||
MARCH_OPTION :=
|
||||
endif
|
||||
ifeq ($(ANDROID_ARCH),armeabi)
|
||||
TOOLCHAIN := arm-linux-androideabi-4.9
|
||||
SYSROOT_ARCH := arm
|
||||
BIN_PREFIX := arm-linux-androideabi
|
||||
MARCH_OPTION :=
|
||||
endif
|
||||
ifeq ($(ANDROID_ARCH),armeabi-v7a)
|
||||
TOOLCHAIN := arm-linux-androideabi-4.9
|
||||
SYSROOT_ARCH := arm
|
||||
BIN_PREFIX := arm-linux-androideabi
|
||||
MARCH_OPTION := -march=armv7-a -mfloat-abi=softfp -mfpu=neon
|
||||
endif
|
||||
ifeq ($(ANDROID_ARCH),mips)
|
||||
TOOLCHAIN := mipsel-linux-android-4.9
|
||||
SYSROOT_ARCH := mips
|
||||
BIN_PREFIX := mipsel-linux-android
|
||||
MARCH_OPTION :=
|
||||
endif
|
||||
ifeq ($(ANDROID_ARCH),mips64)
|
||||
TOOLCHAIN := mips64el-linux-android-4.9
|
||||
SYSROOT_ARCH := mips64
|
||||
BIN_PREFIX := mips64el-linux-android
|
||||
MARCH_OPTION :=
|
||||
endif
|
||||
ifeq ($(ANDROID_ARCH),x86)
|
||||
TOOLCHAIN := x86-4.9
|
||||
SYSROOT_ARCH := x86
|
||||
BIN_PREFIX := i686-linux-android
|
||||
MARCH_OPTION :=
|
||||
endif
|
||||
ifeq ($(ANDROID_ARCH),x86_64)
|
||||
TOOLCHAIN := x86_64-4.9
|
||||
SYSROOT_ARCH := x86_64
|
||||
BIN_PREFIX := x86-64-linux-android
|
||||
MARCH_OPTION :=
|
||||
endif
|
||||
|
||||
ifndef NDK_ROOT
|
||||
$(error "NDK_ROOT is not defined.")
|
||||
endif
|
||||
CXX := $(CC_PREFIX) $(NDK_ROOT)/toolchains/arm-linux-androideabi-4.9/prebuilt/$(OS_PATH)-x86_64/bin/arm-linux-androideabi-g++
|
||||
CC := $(CC_PREFIX) $(NDK_ROOT)/toolchains/arm-linux-androideabi-4.9/prebuilt/$(OS_PATH)-x86_64/bin/arm-linux-androideabi-gcc
|
||||
CXX := $(CC_PREFIX) $(NDK_ROOT)/toolchains/$(TOOLCHAIN)/prebuilt/$(ANDROID_HOST_OS_ARCH)/bin/$(BIN_PREFIX)-g++
|
||||
CC := $(CC_PREFIX) $(NDK_ROOT)/toolchains/$(TOOLCHAIN)/prebuilt/$(ANDROID_HOST_OS_ARCH)/bin/$(BIN_PREFIX)-gcc
|
||||
CXXFLAGS +=\
|
||||
--sysroot $(NDK_ROOT)/platforms/android-21/arch-arm \
|
||||
--sysroot $(NDK_ROOT)/platforms/android-21/arch-$(SYSROOT_ARCH) \
|
||||
-Wno-narrowing \
|
||||
-fomit-frame-pointer \
|
||||
-march=armv7-a \
|
||||
-mfloat-abi=softfp \
|
||||
-mfpu=neon \
|
||||
$(MARCH_OPTION) \
|
||||
-fPIE
|
||||
INCLUDES = \
|
||||
-I$(NDK_ROOT)/sources/android/support/include \
|
||||
-I$(NDK_ROOT)/sources/cxx-stl/gnu-libstdc++/4.9/include \
|
||||
-I$(NDK_ROOT)/sources/cxx-stl/gnu-libstdc++/4.9/libs/armeabi/include \
|
||||
-I$(NDK_ROOT)/sources/cxx-stl/gnu-libstdc++/4.9/libs/$(ANDROID_ARCH)/include \
|
||||
-I. \
|
||||
-I$(MAKEFILE_DIR)/downloads/ \
|
||||
-I$(MAKEFILE_DIR)/downloads/eigen \
|
||||
-I$(MAKEFILE_DIR)/downloads/gemmlowp \
|
||||
-I$(MAKEFILE_DIR)/downloads/nsync/public \
|
||||
-I$(MAKEFILE_DIR)/downloads/fft2d \
|
||||
-I$(MAKEFILE_DIR)/gen/protobuf/include \
|
||||
-I$(MAKEFILE_DIR)/gen/protobuf_android/$(ANDROID_ARCH)/include \
|
||||
-I$(PROTOGENDIR) \
|
||||
-I$(PBTGENDIR)
|
||||
|
||||
|
|
@ -277,19 +336,20 @@ $(TARGET_NSYNC_LIB) \
|
|||
-llog \
|
||||
-lz \
|
||||
-lm \
|
||||
-ldl
|
||||
-ldl \
|
||||
-latomic
|
||||
|
||||
LD := $(NDK_ROOT)/toolchains/arm-linux-androideabi-4.9/prebuilt/$(OS_PATH)-x86_64/arm-linux-androideabi/bin/ld
|
||||
LD := $(NDK_ROOT)/toolchains/$(TOOLCHAIN)/prebuilt/$(ANDROID_HOST_OS_ARCH)/$(BIN_PREFIX)/bin/ld
|
||||
|
||||
LDFLAGS := \
|
||||
-march=armv7-a \
|
||||
-L$(MAKEFILE_DIR)/gen/protobuf/lib \
|
||||
-L$(NDK_ROOT)/sources/cxx-stl/gnu-libstdc++/4.9/libs/armeabi-v7a \
|
||||
$(MARCH_OPTION) \
|
||||
-L$(MAKEFILE_DIR)/gen/protobuf_android/$(ANDROID_ARCH)/lib \
|
||||
-L$(NDK_ROOT)/sources/cxx-stl/gnu-libstdc++/4.9/libs/$(ANDROID_ARCH) \
|
||||
-fPIE \
|
||||
-pie \
|
||||
-v
|
||||
|
||||
AR := $(NDK_ROOT)/toolchains/arm-linux-androideabi-4.9/prebuilt/$(OS_PATH)-x86_64/bin/arm-linux-androideabi-ar
|
||||
AR := $(NDK_ROOT)/toolchains/$(TOOLCHAIN)/prebuilt/$(ANDROID_HOST_OS_ARCH)/bin/$(BIN_PREFIX)-ar
|
||||
ARFLAGS := r
|
||||
LIBFLAGS += -Wl,--allow-multiple-definition -Wl,--whole-archive
|
||||
|
||||
|
|
@ -313,6 +373,11 @@ $(TARGET_NSYNC_LIB) \
|
|||
ifdef ENABLE_EXPERIMENTAL_HEXNN_OPS
|
||||
CXXFLAGS += -DENABLE_EXPERIMENTAL_HEXNN_OPS
|
||||
endif
|
||||
|
||||
OBJDIR := $(OBJDIR)android_$(ANDROID_ARCH)/
|
||||
LIBDIR := $(LIBDIR)android_$(ANDROID_ARCH)/
|
||||
BINDIR := $(BINDIR)android_$(ANDROID_ARCH)/
|
||||
DEPDIR := $(DEPDIR)android_$(ANDROID_ARCH)/
|
||||
|
||||
endif # ANDROID
|
||||
# LINT.ThenChange(//tensorflow/contrib/android/cmake/CMakeLists.txt)
|
||||
|
|
@ -338,7 +403,7 @@ ifeq ($(TARGET),IOS)
|
|||
-Wno-c++11-narrowing \
|
||||
-mno-thumb \
|
||||
-DTF_LEAN_BINARY \
|
||||
-D__ANDROID_TYPES_SLIM__ \
|
||||
$(ANDROID_TYPES) \
|
||||
-fno-exceptions \
|
||||
-isysroot \
|
||||
${IPHONEOS_SYSROOT}
|
||||
|
|
@ -362,7 +427,7 @@ ifeq ($(TARGET),IOS)
|
|||
-Wno-c++11-narrowing \
|
||||
-mno-thumb \
|
||||
-DTF_LEAN_BINARY \
|
||||
-D__ANDROID_TYPES_SLIM__ \
|
||||
$(ANDROID_TYPES) \
|
||||
-fno-exceptions \
|
||||
-isysroot \
|
||||
${IPHONEOS_SYSROOT}
|
||||
|
|
@ -385,7 +450,7 @@ ifeq ($(TARGET),IOS)
|
|||
-DUSE_GEMM_FOR_CONV \
|
||||
-Wno-c++11-narrowing \
|
||||
-DTF_LEAN_BINARY \
|
||||
-D__ANDROID_TYPES_SLIM__ \
|
||||
$(ANDROID_TYPES) \
|
||||
-fno-exceptions \
|
||||
-isysroot \
|
||||
${IPHONEOS_SYSROOT}
|
||||
|
|
@ -409,7 +474,7 @@ ifeq ($(TARGET),IOS)
|
|||
-DUSE_GEMM_FOR_CONV \
|
||||
-Wno-c++11-narrowing \
|
||||
-DTF_LEAN_BINARY \
|
||||
-D__ANDROID_TYPES_SLIM__ \
|
||||
$(ANDROID_TYPES) \
|
||||
-fno-exceptions \
|
||||
-isysroot \
|
||||
${IPHONESIMULATOR_SYSROOT}
|
||||
|
|
@ -432,7 +497,7 @@ ifeq ($(TARGET),IOS)
|
|||
-DUSE_GEMM_FOR_CONV \
|
||||
-Wno-c++11-narrowing \
|
||||
-DTF_LEAN_BINARY \
|
||||
-D__ANDROID_TYPES_SLIM__ \
|
||||
$(ANDROID_TYPES) \
|
||||
-fno-exceptions \
|
||||
-isysroot \
|
||||
${IPHONESIMULATOR_SYSROOT}
|
||||
|
|
@ -655,12 +720,12 @@ clean:
|
|||
# Gets rid of all generated files except protobuf libs generated
|
||||
# before calling make. This allows users not to recompile proto libs everytime.
|
||||
clean_except_protobuf_libs:
|
||||
find $(MAKEFILE_DIR)/gen -mindepth 1 -maxdepth 1 ! -name "protobuf" ! -name "protobuf-host" -exec rm -r "{}" \;
|
||||
find $(MAKEFILE_DIR)/gen -mindepth 1 -maxdepth 1 ! -name "protobuf*" -exec rm -r "{}" \;
|
||||
rm -rf tensorflow/core/util/version_info.cc
|
||||
|
||||
# Gets rid of target files only, leaving the host alone. Also leaves the lib
|
||||
# directory untouched deliberately, so we can persist multiple architectures
|
||||
# across builds for iOS.
|
||||
# across builds for iOS and Android.
|
||||
cleantarget:
|
||||
rm -rf $(OBJDIR)
|
||||
rm -rf $(BINDIR)
|
||||
|
|
|
|||
|
|
@ -18,12 +18,15 @@
|
|||
set -e
|
||||
|
||||
usage() {
|
||||
echo "Usage: NDK_ROOT=<path to ndk root> $(basename "$0") [-s:t:Tx:X]"
|
||||
echo "Usage: NDK_ROOT=<path to ndk root> $(basename "$0") [-Es:t:Tx:a:X]"
|
||||
echo "-E enable experimental hexnn ops"
|
||||
echo "-s [sub_makefiles] sub makefiles separated by white space"
|
||||
echo "-t [build_target] build target for Android makefile [default=all]"
|
||||
echo "-T only build tensorflow"
|
||||
echo "-x [hexagon library path] copy and hexagon libraries in the specified path"
|
||||
echo "-a [architecture] Architecture of target android [default=armeabi-v7a] \
|
||||
(supported architecture list: \
|
||||
arm64-v8a armeabi armeabi-v7a mips mips64 x86 x86_64)"
|
||||
exit 1
|
||||
}
|
||||
|
||||
|
|
@ -32,13 +35,16 @@ if [[ -z "${NDK_ROOT}" ]]; then
|
|||
exit 1
|
||||
fi
|
||||
|
||||
while getopts "Es:t:Tx:" opt_name; do
|
||||
ARCH=armeabi-v7a
|
||||
|
||||
while getopts "Es:t:Tx:a:" opt_name; do
|
||||
case "$opt_name" in
|
||||
E) ENABLE_EXPERIMENTAL_HEXNN_OPS="true";;
|
||||
s) SUB_MAKEFILES="${OPTARG}";;
|
||||
t) BUILD_TARGET="${OPTARG}";;
|
||||
T) ONLY_MAKE_TENSORFLOW="true";;
|
||||
x) HEXAGON_LIB_PATH="${OPTARG}";;
|
||||
a) ARCH="${OPTARG}";;
|
||||
*) usage;;
|
||||
esac
|
||||
done
|
||||
|
|
@ -53,25 +59,23 @@ JOB_COUNT="${JOB_COUNT:-$(get_job_count)}"
|
|||
|
||||
HEXAGON_DOWNLOAD_PATH="tensorflow/contrib/makefile/downloads/hexagon"
|
||||
|
||||
# Remove any old files first.
|
||||
make -f tensorflow/contrib/makefile/Makefile cleantarget
|
||||
|
||||
if [[ "${ONLY_MAKE_TENSORFLOW}" != "true" ]]; then
|
||||
# Remove any old files first.
|
||||
make -f tensorflow/contrib/makefile/Makefile clean
|
||||
rm -rf tensorflow/contrib/makefile/downloads
|
||||
# Pull down the required versions of the frameworks we need.
|
||||
tensorflow/contrib/makefile/download_dependencies.sh
|
||||
# Compile protobuf for the target Android device architectures.
|
||||
CC_PREFIX="${CC_PREFIX}" NDK_ROOT="${NDK_ROOT}" \
|
||||
tensorflow/contrib/makefile/compile_android_protobuf.sh -c
|
||||
else
|
||||
# Only clean files generated by make
|
||||
make -f tensorflow/contrib/makefile/Makefile clean_except_protobuf_libs
|
||||
tensorflow/contrib/makefile/compile_android_protobuf.sh -c -a ${ARCH}
|
||||
fi
|
||||
|
||||
# Compile nsync for the host and the target Android device architecture.
|
||||
# Don't use export var=`something` syntax; it swallows the exit status.
|
||||
HOST_NSYNC_LIB=`tensorflow/contrib/makefile/compile_nsync.sh`
|
||||
TARGET_NSYNC_LIB=`CC_PREFIX="${CC_PREFIX}" NDK_ROOT="${NDK_ROOT}" \
|
||||
tensorflow/contrib/makefile/compile_nsync.sh -t android -a armeabi-v7a`
|
||||
tensorflow/contrib/makefile/compile_nsync.sh -t android -a ${ARCH}`
|
||||
export HOST_NSYNC_LIB TARGET_NSYNC_LIB
|
||||
|
||||
if [[ ! -z "${HEXAGON_LIB_PATH}" ]]; then
|
||||
|
|
@ -98,7 +102,8 @@ fi
|
|||
|
||||
if [[ -z "${BUILD_TARGET}" ]]; then
|
||||
make -j"${JOB_COUNT}" -f tensorflow/contrib/makefile/Makefile \
|
||||
TARGET=ANDROID NDK_ROOT="${NDK_ROOT}" CC_PREFIX="${CC_PREFIX}" \
|
||||
TARGET=ANDROID NDK_ROOT="${NDK_ROOT}" ANDROID_ARCH="${ARCH}" \
|
||||
CC_PREFIX="${CC_PREFIX}" \
|
||||
HOST_NSYNC_LIB="$HOST_NSYNC_LIB" TARGET_NSYNC_LIB="$TARGET_NSYNC_LIB" \
|
||||
HEXAGON_LIBS="${HEXAGON_LIBS}" HEXAGON_INCLUDE="${HEXAGON_INCLUDE}" \
|
||||
SUB_MAKEFILES="${SUB_MAKEFILES}" ${EXTRA_MAKE_ARGS[@]}
|
||||
|
|
@ -106,7 +111,8 @@ else
|
|||
# BUILD_TARGET explicitly uncommented to allow multiple targets to be
|
||||
# passed to make in a single build_all_android.sh invocation.
|
||||
make -j"${JOB_COUNT}" -f tensorflow/contrib/makefile/Makefile \
|
||||
TARGET=ANDROID NDK_ROOT="${NDK_ROOT}" CC_PREFIX="${CC_PREFIX}" \
|
||||
TARGET=ANDROID NDK_ROOT="${NDK_ROOT}" ANDROID_ARCH="${ARCH}" \
|
||||
CC_PREFIX="${CC_PREFIX}" \
|
||||
HOST_NSYNC_LIB="$HOST_NSYNC_LIB" TARGET_NSYNC_LIB="$TARGET_NSYNC_LIB" \
|
||||
HEXAGON_LIBS="${HEXAGON_LIBS}" HEXAGON_INCLUDE="${HEXAGON_INCLUDE}" \
|
||||
SUB_MAKEFILES="${SUB_MAKEFILES}" ${EXTRA_MAKE_ARGS[@]} ${BUILD_TARGET}
|
||||
|
|
|
|||
|
|
@ -71,10 +71,10 @@ then
|
|||
exit 1
|
||||
fi
|
||||
|
||||
GENDIR="$(pwd)/gen/protobuf"
|
||||
GENDIR="$(pwd)/gen/protobuf_android"
|
||||
HOST_GENDIR="$(pwd)/gen/protobuf-host"
|
||||
mkdir -p "${GENDIR}"
|
||||
mkdir -p "${HOST_GENDIR}"
|
||||
mkdir -p "${GENDIR}/${ARCHITECTURE}"
|
||||
|
||||
if [[ ! -f "./downloads/protobuf/autogen.sh" ]]; then
|
||||
echo "You need to download dependencies before running this script." 1>&2
|
||||
|
|
@ -153,7 +153,7 @@ then
|
|||
exit 1
|
||||
fi
|
||||
|
||||
./configure --prefix="${GENDIR}" \
|
||||
./configure --prefix="${GENDIR}/${ARCHITECTURE}" \
|
||||
--host="${bin_prefix}" \
|
||||
--with-sysroot="${SYSROOT}" \
|
||||
--disable-shared \
|
||||
|
|
|
|||
|
|
@ -423,7 +423,8 @@ def streaming_mean_tensor(values,
|
|||
updates_collections=updates_collections,
|
||||
name=name)
|
||||
|
||||
|
||||
@deprecated(None, "Please switch to tf.metrics.accuracy. Note that the order "
|
||||
"of the inputs of labels and predictions have been switched.")
|
||||
def streaming_accuracy(predictions,
|
||||
labels,
|
||||
weights=None,
|
||||
|
|
@ -1101,7 +1102,8 @@ def streaming_curve_points(labels=None,
|
|||
|
||||
return points, update_op
|
||||
|
||||
|
||||
@deprecated(None, "Please switch to tf.metrics.auc. Note that the order of "
|
||||
"the inputs of labels and predictions have been switched.")
|
||||
def streaming_auc(predictions,
|
||||
labels,
|
||||
weights=None,
|
||||
|
|
@ -1486,7 +1488,9 @@ def streaming_sensitivity_at_specificity(predictions,
|
|||
updates_collections=updates_collections,
|
||||
name=name)
|
||||
|
||||
|
||||
@deprecated(
|
||||
None, "Please switch to tf.metrics.precision_at_thresholds. Note that the "
|
||||
"order of of the inputs of labels and predictions have been switched.")
|
||||
def streaming_precision_at_thresholds(predictions,
|
||||
labels,
|
||||
thresholds,
|
||||
|
|
@ -1545,7 +1549,9 @@ def streaming_precision_at_thresholds(predictions,
|
|||
updates_collections=updates_collections,
|
||||
name=name)
|
||||
|
||||
|
||||
@deprecated(
|
||||
None, "Please switch to tf.metrics.recall_at_thresholds. Note that the "
|
||||
"order of of the inputs of labels and predictions have been switched.")
|
||||
def streaming_recall_at_thresholds(predictions,
|
||||
labels,
|
||||
thresholds,
|
||||
|
|
@ -1755,8 +1761,8 @@ def _at_k_name(name, k=None, class_id=None):
|
|||
return name
|
||||
|
||||
|
||||
@deprecated('2016-11-08', 'Please use `streaming_sparse_recall_at_k`, '
|
||||
'and reshape labels from [batch_size] to [batch_size, 1].')
|
||||
@deprecated("2016-11-08", "Please use `streaming_sparse_recall_at_k`, "
|
||||
"and reshape labels from [batch_size] to [batch_size, 1].")
|
||||
def streaming_recall_at_k(predictions,
|
||||
labels,
|
||||
k,
|
||||
|
|
@ -2389,7 +2395,7 @@ def streaming_sparse_average_precision_at_top_k(top_k_predictions,
|
|||
updates_collections=updates_collections,
|
||||
name=name)
|
||||
|
||||
|
||||
@deprecated(None, "Please switch to tf.metrics.mean.")
|
||||
def streaming_mean_absolute_error(predictions,
|
||||
labels,
|
||||
weights=None,
|
||||
|
|
|
|||
|
|
@ -35,7 +35,7 @@ message NodeDef {
|
|||
// CONSTRAINT ::= ("job:" JOB_NAME)
|
||||
// | ("replica:" [1-9][0-9]*)
|
||||
// | ("task:" [1-9][0-9]*)
|
||||
// | ("device:" ("gpu" | "cpu") ":" ([1-9][0-9]* | "*") )
|
||||
// | ("device:" [A-Za-z]* ":" ([1-9][0-9]* | "*") )
|
||||
//
|
||||
// Valid values for this string include:
|
||||
// * "/job:worker/replica:0/task:1/device:GPU:3" (full specification)
|
||||
|
|
|
|||
|
|
@ -1098,7 +1098,7 @@ tf_kernel_library(
|
|||
visibility = [":friends"],
|
||||
deps = [
|
||||
":bounds_check",
|
||||
"//tensorflow/core:framework_lite",
|
||||
"//tensorflow/core:framework",
|
||||
"//third_party/eigen3",
|
||||
],
|
||||
)
|
||||
|
|
|
|||
|
|
@ -91,9 +91,9 @@ class DecodeCSVOp : public OpKernel {
|
|||
} else {
|
||||
int32 value;
|
||||
OP_REQUIRES(ctx, strings::safe_strto32(fields[f], &value),
|
||||
errors::InvalidArgument(
|
||||
"Field ", f, " in record ", i,
|
||||
" is not a valid int32: ", fields[f]));
|
||||
errors::InvalidArgument("Field ", f, " in record ", i,
|
||||
" is not a valid int32: ",
|
||||
fields[f]));
|
||||
output[f]->flat<int32>()(i) = value;
|
||||
}
|
||||
break;
|
||||
|
|
@ -111,9 +111,9 @@ class DecodeCSVOp : public OpKernel {
|
|||
} else {
|
||||
int64 value;
|
||||
OP_REQUIRES(ctx, strings::safe_strto64(fields[f], &value),
|
||||
errors::InvalidArgument(
|
||||
"Field ", f, " in record ", i,
|
||||
" is not a valid int64: ", fields[f]));
|
||||
errors::InvalidArgument("Field ", f, " in record ", i,
|
||||
" is not a valid int64: ",
|
||||
fields[f]));
|
||||
output[f]->flat<int64>()(i) = value;
|
||||
}
|
||||
break;
|
||||
|
|
@ -130,13 +130,33 @@ class DecodeCSVOp : public OpKernel {
|
|||
} else {
|
||||
float value;
|
||||
OP_REQUIRES(ctx, strings::safe_strtof(fields[f].c_str(), &value),
|
||||
errors::InvalidArgument(
|
||||
"Field ", f, " in record ", i,
|
||||
" is not a valid float: ", fields[f]));
|
||||
errors::InvalidArgument("Field ", f, " in record ", i,
|
||||
" is not a valid float: ",
|
||||
fields[f]));
|
||||
output[f]->flat<float>()(i) = value;
|
||||
}
|
||||
break;
|
||||
}
|
||||
case DT_DOUBLE: {
|
||||
// If this field is empty or NA value, check if default is given:
|
||||
// If yes, use default value; Otherwise report error.
|
||||
if (fields[f].empty() || fields[f] == na_value_) {
|
||||
OP_REQUIRES(ctx, record_defaults[f].NumElements() == 1,
|
||||
errors::InvalidArgument(
|
||||
"Field ", f,
|
||||
" is required but missing in record ", i, "!"));
|
||||
output[f]->flat<double>()(i) =
|
||||
record_defaults[f].flat<double>()(0);
|
||||
} else {
|
||||
double value;
|
||||
OP_REQUIRES(ctx, strings::safe_strtod(fields[f].c_str(), &value),
|
||||
errors::InvalidArgument("Field ", f, " in record ", i,
|
||||
" is not a valid double: ",
|
||||
fields[f]));
|
||||
output[f]->flat<double>()(i) = value;
|
||||
}
|
||||
break;
|
||||
}
|
||||
case DT_STRING: {
|
||||
// If this field is empty or NA value, check if default is given:
|
||||
// If yes, use default value; Otherwise report error.
|
||||
|
|
@ -188,10 +208,9 @@ class DecodeCSVOp : public OpKernel {
|
|||
if (!quoted) {
|
||||
while (static_cast<size_t>(current_idx) < input.size() &&
|
||||
input[current_idx] != delim_) {
|
||||
OP_REQUIRES(ctx,
|
||||
(!use_quote_delim_ || input[current_idx] != '"') &&
|
||||
input[current_idx] != '\n' &&
|
||||
input[current_idx] != '\r',
|
||||
OP_REQUIRES(ctx, (!use_quote_delim_ || input[current_idx] != '"') &&
|
||||
input[current_idx] != '\n' &&
|
||||
input[current_idx] != '\r',
|
||||
errors::InvalidArgument(
|
||||
"Unquoted fields cannot have quotes/CRLFs inside"));
|
||||
field += input[current_idx];
|
||||
|
|
@ -219,11 +238,10 @@ class DecodeCSVOp : public OpKernel {
|
|||
}
|
||||
|
||||
OP_REQUIRES(
|
||||
ctx,
|
||||
(static_cast<size_t>(current_idx) < input.size() &&
|
||||
input[current_idx] == '"' &&
|
||||
(static_cast<size_t>(current_idx) == input.size() - 1 ||
|
||||
input[current_idx + 1] == delim_)),
|
||||
ctx, (static_cast<size_t>(current_idx) < input.size() &&
|
||||
input[current_idx] == '"' &&
|
||||
(static_cast<size_t>(current_idx) == input.size() - 1 ||
|
||||
input[current_idx + 1] == delim_)),
|
||||
errors::InvalidArgument("Quoted field has to end with quote "
|
||||
"followed by delim or end"));
|
||||
|
||||
|
|
|
|||
|
|
@ -28,7 +28,7 @@ namespace functor {
|
|||
#define DECLARE_GPU_SPECS_INDEX(T, Index) \
|
||||
template <> \
|
||||
int64 GatherFunctor<GPUDevice, T, Index>::operator()( \
|
||||
const GPUDevice& d, typename TTypes<T, 3>::ConstTensor Tparams, \
|
||||
OpKernelContext* ctx, typename TTypes<T, 3>::ConstTensor Tparams, \
|
||||
typename TTypes<Index>::ConstFlat Tindices, \
|
||||
typename TTypes<T, 3>::Tensor Tout); \
|
||||
extern template struct GatherFunctor<GPUDevice, T, Index>;
|
||||
|
|
|
|||
|
|
@ -23,6 +23,8 @@ limitations under the License.
|
|||
#include "tensorflow/core/kernels/bounds_check.h"
|
||||
#include "tensorflow/core/platform/prefetch.h"
|
||||
#include "tensorflow/core/platform/types.h"
|
||||
#include "tensorflow/core/framework/op_kernel.h"
|
||||
#include "tensorflow/core/util/work_sharder.h"
|
||||
|
||||
namespace tensorflow {
|
||||
typedef Eigen::ThreadPoolDevice CPUDevice;
|
||||
|
|
@ -32,7 +34,8 @@ namespace functor {
|
|||
// Helper method to copy using memcpy.
|
||||
template <typename T, typename Index, typename SliceIndex,
|
||||
SliceIndex static_slice_elems>
|
||||
SliceIndex HandleCopies(typename TTypes<T, 3>::ConstTensor params,
|
||||
SliceIndex HandleCopies(OpKernelContext* ctx,
|
||||
typename TTypes<T, 3>::ConstTensor params,
|
||||
typename TTypes<Index>::ConstFlat indices,
|
||||
SliceIndex slice_elems,
|
||||
typename TTypes<T, 3>::Tensor out) {
|
||||
|
|
@ -47,44 +50,64 @@ SliceIndex HandleCopies(typename TTypes<T, 3>::ConstTensor params,
|
|||
}
|
||||
// Compute slice_bytes here so that static knowledge is available
|
||||
const size_t slice_bytes = slice_elems * sizeof(T);
|
||||
for (SliceIndex b = 0; b < batch_size; b++) {
|
||||
for (SliceIndex i = 0; i < indices_size; i++) {
|
||||
const SliceIndex i_next = i + 1;
|
||||
const SliceIndex b_next = b + 1;
|
||||
if (i_next < indices_size) {
|
||||
port::prefetch<port::PREFETCH_HINT_T0>(¶ms(b, indices(i_next), 0));
|
||||
port::prefetch<port::PREFETCH_HINT_T0>(&out(b, i_next, 0));
|
||||
} else if (b_next < batch_size) {
|
||||
auto worker_threads = ctx->device()->tensorflow_cpu_worker_threads();
|
||||
mutex mu;
|
||||
// Store the value of invalidate index for printing error information, it's a shared variable.
|
||||
SliceIndex result = -1;
|
||||
auto work = [&] (int64 start, int64 end) {
|
||||
SliceIndex batch_idx = static_cast<SliceIndex>(start / indices_size);
|
||||
SliceIndex indices_idx = static_cast<SliceIndex>(start % indices_size);
|
||||
SliceIndex batch_idx_end = static_cast<SliceIndex>(end / indices_size);
|
||||
SliceIndex indices_idx_end = static_cast<SliceIndex>(end % indices_size);
|
||||
|
||||
while ((batch_idx < batch_idx_end) ||
|
||||
(batch_idx == batch_idx_end && indices_idx < indices_idx_end)) {
|
||||
SliceIndex i_next = indices_idx + 1;
|
||||
SliceIndex b_next = batch_idx + 1;
|
||||
if ((batch_idx == batch_idx_end && i_next < indices_idx_end) ||
|
||||
(i_next < indices_size)) {
|
||||
port::prefetch<port::PREFETCH_HINT_T0>(¶ms(batch_idx, indices(i_next), 0));
|
||||
port::prefetch<port::PREFETCH_HINT_T0>(&out(batch_idx, i_next, 0));
|
||||
b_next = batch_idx;
|
||||
} else if (b_next <= batch_idx_end) {
|
||||
port::prefetch<port::PREFETCH_HINT_T0>(¶ms(b_next, indices(0), 0));
|
||||
port::prefetch<port::PREFETCH_HINT_T0>(&out(b_next, 0, 0));
|
||||
i_next = 0;
|
||||
}
|
||||
const Index index = internal::SubtleMustCopy(indices(indices_idx));
|
||||
if (!FastBoundsCheck(index, limit)) {
|
||||
mutex_lock l(mu);
|
||||
result = indices_idx;
|
||||
return;
|
||||
}
|
||||
// Grab the index and check its validity. An earlier version of the
|
||||
// code checked it and then grabbed it from memory a second time, which
|
||||
// was a security risk since it could have changed in between.
|
||||
const Index index = internal::SubtleMustCopy(indices(i));
|
||||
if (!FastBoundsCheck(index, limit)) return i;
|
||||
// Copy using memcpy if possible, otherwise an Eigen loop
|
||||
// TODO(cwhipkey): avoid linking to framework to get Allocator (to improve
|
||||
// ahead-of-time compilation binary size).
|
||||
if (is_simple_type<T>::value) {
|
||||
// Avoid auto-promotion to Index from SliceIndex by casting.
|
||||
memcpy(out_base + (b * indices_size + i) * slice_elems,
|
||||
params_base + (b * static_cast<SliceIndex>(limit) +
|
||||
memcpy(out_base + (batch_idx * indices_size + indices_idx) * slice_elems,
|
||||
params_base + (batch_idx * static_cast<SliceIndex>(limit) +
|
||||
static_cast<SliceIndex>(index)) *
|
||||
slice_elems,
|
||||
slice_elems,
|
||||
slice_bytes);
|
||||
} else {
|
||||
// For non-"simple" types (e.g. strings).
|
||||
out.template chip<1>(i) = params.template chip<1>(index);
|
||||
out.template chip<1>(indices_idx) = params.template chip<1>(index);
|
||||
}
|
||||
indices_idx = i_next;
|
||||
batch_idx = b_next;
|
||||
}
|
||||
}
|
||||
return -1;
|
||||
};
|
||||
|
||||
Shard(worker_threads->num_threads, worker_threads->workers, batch_size*indices_size,
|
||||
slice_elems * sizeof(T), work);
|
||||
return result;
|
||||
}
|
||||
|
||||
template <typename T, typename Index>
|
||||
struct GatherFunctorCPU {
|
||||
int64 operator()(typename TTypes<T, 3>::ConstTensor params,
|
||||
int64 operator()(OpKernelContext* ctx,
|
||||
typename TTypes<T, 3>::ConstTensor params,
|
||||
typename TTypes<Index>::ConstFlat indices,
|
||||
typename TTypes<T, 3>::Tensor out) {
|
||||
const int64 N = indices.size();
|
||||
|
|
@ -94,16 +117,16 @@ struct GatherFunctorCPU {
|
|||
bool use_large = (slice_size > std::numeric_limits<int32>::max() ||
|
||||
params.size() > std::numeric_limits<int32>::max() ||
|
||||
N > std::numeric_limits<int32>::max());
|
||||
#define CALL(elems) \
|
||||
do { \
|
||||
if (use_large) { \
|
||||
bad_i = HandleCopies<T, Index, int64, elems>(params, indices, \
|
||||
slice_size, out); \
|
||||
} else { \
|
||||
const int32 small_slice = static_cast<int32>(slice_size); \
|
||||
bad_i = HandleCopies<T, Index, int32, elems>(params, indices, \
|
||||
small_slice, out); \
|
||||
} \
|
||||
#define CALL(elems) \
|
||||
do { \
|
||||
if (use_large) { \
|
||||
bad_i = HandleCopies<T, Index, int64, elems>(ctx, params, indices, \
|
||||
slice_size, out); \
|
||||
} else { \
|
||||
const int32 small_slice = static_cast<int32>(slice_size); \
|
||||
bad_i = HandleCopies<T, Index, int32, elems>(ctx, params, indices, \
|
||||
small_slice, out); \
|
||||
} \
|
||||
} while (0)
|
||||
|
||||
if (slice_size == 10)
|
||||
|
|
@ -120,18 +143,18 @@ struct GatherFunctorCPU {
|
|||
|
||||
template <typename Device, typename T, typename Index>
|
||||
struct GatherFunctor {
|
||||
int64 operator()(const Device& d, typename TTypes<T, 3>::ConstTensor params,
|
||||
int64 operator()(OpKernelContext* ctx, typename TTypes<T, 3>::ConstTensor params,
|
||||
typename TTypes<Index>::ConstFlat indices,
|
||||
typename TTypes<T, 3>::Tensor out);
|
||||
};
|
||||
|
||||
template <typename T, typename Index>
|
||||
struct GatherFunctor<CPUDevice, T, Index> {
|
||||
int64 operator()(const CPUDevice& d,
|
||||
int64 operator()(OpKernelContext* ctx,
|
||||
typename TTypes<T, 3>::ConstTensor params,
|
||||
typename TTypes<Index>::ConstFlat indices,
|
||||
typename TTypes<T, 3>::Tensor out) {
|
||||
return GatherFunctorCPU<T, Index>()(params, indices, out);
|
||||
return GatherFunctorCPU<T, Index>()(ctx, params, indices, out);
|
||||
}
|
||||
};
|
||||
|
||||
|
|
|
|||
|
|
@ -72,10 +72,11 @@ __global__ void GatherOpKernel(const T* params, const Index* indices, T* out,
|
|||
namespace functor {
|
||||
template <typename T, typename Index>
|
||||
struct GatherFunctor<GPUDevice, T, Index> {
|
||||
int64 operator()(const GPUDevice& d,
|
||||
int64 operator()(OpKernelContext* ctx,
|
||||
typename TTypes<T, 3>::ConstTensor params,
|
||||
typename TTypes<Index>::ConstFlat indices,
|
||||
typename TTypes<T, 3>::Tensor out) {
|
||||
const GPUDevice& d = ctx->eigen_gpu_device();
|
||||
const int64 out_size = out.size();
|
||||
if (out_size == 0) {
|
||||
// We need a check here since the CPU version does useful error checking
|
||||
|
|
|
|||
|
|
@ -106,7 +106,7 @@ class GatherOp : public OpKernel {
|
|||
auto out_flat = out->shaped<T, 3>({outer_size, N, inner_size});
|
||||
|
||||
functor::GatherFunctor<Device, T, Index> functor;
|
||||
int64 bad_i = functor(c->eigen_device<Device>(), params_flat,
|
||||
int64 bad_i = functor(c, params_flat,
|
||||
indices_flat, out_flat);
|
||||
|
||||
OP_REQUIRES(
|
||||
|
|
|
|||
|
|
@ -464,7 +464,7 @@ class ResourceGatherOp : public OpKernel {
|
|||
auto out_flat = out->shaped<T, 3>({1, N, out->NumElements() / N});
|
||||
|
||||
functor::GatherFunctor<Device, T, Index> functor;
|
||||
int64 bad_i = functor(c->eigen_device<Device>(), params_flat,
|
||||
int64 bad_i = functor(c, params_flat,
|
||||
indices_flat, out_flat);
|
||||
|
||||
OP_REQUIRES(
|
||||
|
|
|
|||
|
|
@ -248,7 +248,7 @@ TF_CALL_int64(HANDLE_TYPE_NAME_SYCL);
|
|||
#undef HANDLE_CASE
|
||||
|
||||
// --------------------------------------------------------------------------
|
||||
template <typename Device>
|
||||
template <typename Device, typename Tmultiples>
|
||||
class TileGradientOp : public OpKernel {
|
||||
public:
|
||||
explicit TileGradientOp(OpKernelConstruction* context) : OpKernel(context) {}
|
||||
|
|
@ -273,10 +273,10 @@ class TileGradientOp : public OpKernel {
|
|||
return;
|
||||
}
|
||||
|
||||
const gtl::ArraySlice<int32> multiples_array(multiples.flat<int32>().data(),
|
||||
input_dims);
|
||||
const gtl::ArraySlice<Tmultiples> multiples_array(
|
||||
multiples.flat<Tmultiples>().data(), input_dims);
|
||||
TensorShape output_shape;
|
||||
std::vector<int32> input_dim_size_vec;
|
||||
std::vector<Tmultiples> input_dim_size_vec;
|
||||
for (int i = 0; i < input_dims; ++i) {
|
||||
OP_REQUIRES(
|
||||
context, multiples_array[i] > 0,
|
||||
|
|
@ -337,19 +337,19 @@ class TileGradientOp : public OpKernel {
|
|||
private:
|
||||
template <DataType DT, int NDIM>
|
||||
void HandleCase(OpKernelContext* context,
|
||||
const std::vector<int32>& input_dims,
|
||||
const gtl::ArraySlice<int32>& multiples_array,
|
||||
const std::vector<Tmultiples>& input_dims,
|
||||
const gtl::ArraySlice<Tmultiples>& multiples_array,
|
||||
Tensor* result);
|
||||
|
||||
template <DataType DT, int NDIM>
|
||||
void HandleCaseImpl(OpKernelContext* context,
|
||||
const std::vector<int32>& input_dims,
|
||||
const gtl::ArraySlice<int32>& multiples_array,
|
||||
const std::vector<Tmultiples>& input_dims,
|
||||
const gtl::ArraySlice<Tmultiples>& multiples_array,
|
||||
Tensor* result) {
|
||||
typedef typename EnumToDataType<DT>::Type T;
|
||||
|
||||
bool reduction_only = true;
|
||||
std::vector<int> reduction_dims;
|
||||
std::vector<Tmultiples> reduction_dims;
|
||||
|
||||
for (int i = 0; i < NDIM; ++i) {
|
||||
if (input_dims[i] > multiples_array[i] && multiples_array[i] > 1) {
|
||||
|
|
@ -411,7 +411,8 @@ class TileGradientOp : public OpKernel {
|
|||
|
||||
template <typename T, int NDIM, int REDUCENDIM>
|
||||
void HandleReduce(OpKernelContext* context,
|
||||
const std::vector<int32>& reduce_dim_in, Tensor* result) {
|
||||
const std::vector<Tmultiples>& reduce_dim_in,
|
||||
Tensor* result) {
|
||||
static_assert(NDIM >= REDUCENDIM, "Too many reduced dimensions");
|
||||
Eigen::DSizes<Eigen::DenseIndex, REDUCENDIM> reduce_dim;
|
||||
Eigen::DSizes<Eigen::DenseIndex, NDIM> reshape_dim;
|
||||
|
|
@ -432,34 +433,41 @@ class TileGradientOp : public OpKernel {
|
|||
TF_DISALLOW_COPY_AND_ASSIGN(TileGradientOp);
|
||||
};
|
||||
|
||||
template <typename Device>
|
||||
template <typename Device, typename Tmultiples>
|
||||
template <DataType DT, int NDIM>
|
||||
inline void TileGradientOp<Device>::HandleCase(
|
||||
OpKernelContext* context, const std::vector<int32>& input_dims,
|
||||
const gtl::ArraySlice<int32>& multiples_array, Tensor* result) {
|
||||
inline void TileGradientOp<Device, Tmultiples>::HandleCase(
|
||||
OpKernelContext* context, const std::vector<Tmultiples>& input_dims,
|
||||
const gtl::ArraySlice<Tmultiples>& multiples_array, Tensor* result) {
|
||||
LOG(FATAL) << "TileGradientOp: Invalid combination of Device, DT and NDIM: "
|
||||
<< MakeTypeIndex<Device>().name() << ", " << DataTypeString(DT)
|
||||
<< ", " << NDIM;
|
||||
}
|
||||
|
||||
#define HANDLE_CASE(device, T, dtype, ndim) \
|
||||
#define HANDLE_CASE(device, T, dtype, Tmultiples, ndim) \
|
||||
template <> \
|
||||
template <> \
|
||||
void TileGradientOp<device>::HandleCase<dtype, ndim>( \
|
||||
OpKernelContext * context, const std::vector<int32>& input_dims, \
|
||||
const gtl::ArraySlice<int32>& multiples_array, Tensor* result) { \
|
||||
void TileGradientOp<device, Tmultiples>::HandleCase<dtype, ndim>( \
|
||||
OpKernelContext * context, const std::vector<Tmultiples>& input_dims, \
|
||||
const gtl::ArraySlice<Tmultiples>& multiples_array, Tensor* result) { \
|
||||
HandleCaseImpl<dtype, ndim>(context, input_dims, multiples_array, result); \
|
||||
}
|
||||
|
||||
// 0-D handled specially above
|
||||
#define HANDLE_CASE_DIM(device, T, dtype) \
|
||||
HANDLE_CASE(device, T, dtype, 1); \
|
||||
HANDLE_CASE(device, T, dtype, 2); \
|
||||
HANDLE_CASE(device, T, dtype, 3); \
|
||||
HANDLE_CASE(device, T, dtype, 4); \
|
||||
HANDLE_CASE(device, T, dtype, 5); \
|
||||
HANDLE_CASE(device, T, dtype, 6); \
|
||||
HANDLE_CASE(device, T, dtype, 7);
|
||||
#define HANDLE_CASE_DIM(device, T, dtype) \
|
||||
HANDLE_CASE(device, T, dtype, int32, 1); \
|
||||
HANDLE_CASE(device, T, dtype, int32, 2); \
|
||||
HANDLE_CASE(device, T, dtype, int32, 3); \
|
||||
HANDLE_CASE(device, T, dtype, int32, 4); \
|
||||
HANDLE_CASE(device, T, dtype, int32, 5); \
|
||||
HANDLE_CASE(device, T, dtype, int32, 6); \
|
||||
HANDLE_CASE(device, T, dtype, int32, 7); \
|
||||
HANDLE_CASE(device, T, dtype, int64, 1); \
|
||||
HANDLE_CASE(device, T, dtype, int64, 2); \
|
||||
HANDLE_CASE(device, T, dtype, int64, 3); \
|
||||
HANDLE_CASE(device, T, dtype, int64, 4); \
|
||||
HANDLE_CASE(device, T, dtype, int64, 5); \
|
||||
HANDLE_CASE(device, T, dtype, int64, 6); \
|
||||
HANDLE_CASE(device, T, dtype, int64, 7);
|
||||
|
||||
#define HANDLE_TYPE_NAME_CPU(T) \
|
||||
HANDLE_CASE_DIM(CPUDevice, T, DataTypeToEnum<T>::value);
|
||||
|
|
@ -514,9 +522,16 @@ REGISTER_KERNEL_BUILDER(Name("Tile")
|
|||
.HostMemory("multiples")
|
||||
.TypeConstraint<int64>("Tmultiples"),
|
||||
TileOp<CPUDevice, int64>);
|
||||
REGISTER_KERNEL_BUILDER(
|
||||
Name("TileGrad").Device(DEVICE_CPU).HostMemory("multiples"),
|
||||
TileGradientOp<CPUDevice>);
|
||||
REGISTER_KERNEL_BUILDER(Name("TileGrad")
|
||||
.Device(DEVICE_CPU)
|
||||
.HostMemory("multiples")
|
||||
.TypeConstraint<int32>("Tmultiples"),
|
||||
TileGradientOp<CPUDevice, int32>);
|
||||
REGISTER_KERNEL_BUILDER(Name("TileGrad")
|
||||
.Device(DEVICE_CPU)
|
||||
.HostMemory("multiples")
|
||||
.TypeConstraint<int64>("Tmultiples"),
|
||||
TileGradientOp<CPUDevice, int64>);
|
||||
|
||||
#if GOOGLE_CUDA
|
||||
#define REGISTER_GPU(type) \
|
||||
|
|
@ -537,7 +552,13 @@ REGISTER_KERNEL_BUILDER(
|
|||
.TypeConstraint<type>("T") \
|
||||
.TypeConstraint<int32>("Tmultiples") \
|
||||
.HostMemory("multiples"), \
|
||||
TileGradientOp<GPUDevice>);
|
||||
TileGradientOp<GPUDevice, int32>); \
|
||||
REGISTER_KERNEL_BUILDER(Name("TileGrad") \
|
||||
.Device(DEVICE_GPU) \
|
||||
.TypeConstraint<type>("T") \
|
||||
.TypeConstraint<int64>("Tmultiples") \
|
||||
.HostMemory("multiples"), \
|
||||
TileGradientOp<GPUDevice, int64>);
|
||||
|
||||
TF_CALL_float(REGISTER_GPU);
|
||||
TF_CALL_double(REGISTER_GPU);
|
||||
|
|
@ -569,7 +590,13 @@ TF_CALL_complex128(REGISTER_GPU)
|
|||
.TypeConstraint<type>("T") \
|
||||
.TypeConstraint<int32>("Tmultiples") \
|
||||
.HostMemory("multiples"), \
|
||||
TileGradientOp<SYCLDevice>);
|
||||
TileGradientOp<SYCLDevice, int32>); \
|
||||
REGISTER_KERNEL_BUILDER(Name("TileGrad") \
|
||||
.Device(DEVICE_SYCL) \
|
||||
.TypeConstraint<type>("T") \
|
||||
.TypeConstraint<int64>("Tmultiples") \
|
||||
.HostMemory("multiples"), \
|
||||
TileGradientOp<SYCLDevice, int64>);
|
||||
|
||||
TF_CALL_float(REGISTER_SYCL);
|
||||
TF_CALL_double(REGISTER_SYCL);
|
||||
|
|
|
|||
|
|
@ -56,35 +56,45 @@ representation of that entry.
|
|||
8- or 16-bit inputs and then aggregate the resulting counts.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("BitwiseAnd").BINARY_BITWISE().Doc(R"doc(
|
||||
REGISTER_OP("BitwiseAnd")
|
||||
.BINARY_BITWISE()
|
||||
.Doc(R"doc(
|
||||
Elementwise computes the bitwise AND of `x` and `y`.
|
||||
|
||||
The result will have those bits set, that are set in both `x` and `y`. The
|
||||
computation is performed on the underlying representations of `x` and `y`.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("BitwiseOr").BINARY_BITWISE().Doc(R"doc(
|
||||
REGISTER_OP("BitwiseOr")
|
||||
.BINARY_BITWISE()
|
||||
.Doc(R"doc(
|
||||
Elementwise computes the bitwise OR of `x` and `y`.
|
||||
|
||||
The result will have those bits set, that are set in `x`, `y` or both. The
|
||||
computation is performed on the underlying representations of `x` and `y`.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("BitwiseXor").BINARY_BITWISE().Doc(R"doc(
|
||||
REGISTER_OP("BitwiseXor")
|
||||
.BINARY_BITWISE()
|
||||
.Doc(R"doc(
|
||||
Elementwise computes the bitwise XOR of `x` and `y`.
|
||||
|
||||
The result will have those bits set, that are different in `x` and `y`. The
|
||||
computation is performed on the underlying representations of `x` and `y`.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("LeftShift").BINARY_BITWISE().Doc(R"doc(
|
||||
REGISTER_OP("LeftShift")
|
||||
.BINARY_BITWISE()
|
||||
.Doc(R"doc(
|
||||
Elementwise computes the bitwise left-shift of `x` and `y`.
|
||||
|
||||
If `y` is negative, or greater than or equal to the width of `x` in bits the
|
||||
result is implementation defined.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("RightShift").BINARY_BITWISE().Doc(R"doc(
|
||||
REGISTER_OP("RightShift")
|
||||
.BINARY_BITWISE()
|
||||
.Doc(R"doc(
|
||||
Elementwise computes the bitwise right-shift of `x` and `y`.
|
||||
|
||||
Performs a logical shift for unsigned integer types, and an arithmetic shift
|
||||
|
|
|
|||
|
|
@ -25,9 +25,8 @@ namespace tensorflow {
|
|||
namespace {
|
||||
|
||||
TEST(BackwardsCompatibilityTest, IsCompatible) {
|
||||
OpCompatibilityLib compatibility("tensorflow/core/ops",
|
||||
strings::StrCat("v", TF_MAJOR_VERSION),
|
||||
nullptr);
|
||||
OpCompatibilityLib compatibility(
|
||||
"tensorflow/core/ops", strings::StrCat("v", TF_MAJOR_VERSION), nullptr);
|
||||
|
||||
Env* env = Env::Default();
|
||||
int changed_ops = 0;
|
||||
|
|
|
|||
|
|
@ -2225,7 +2225,6 @@ this op will block until it does. This Op is optimized for
|
|||
performance.
|
||||
)doc");
|
||||
|
||||
|
||||
REGISTER_OP("StageSize")
|
||||
.Output("size: int32")
|
||||
.Attr("capacity: int >= 0 = 0")
|
||||
|
|
@ -2354,7 +2353,6 @@ REGISTER_OP("MapIncompleteSize")
|
|||
Op returns the number of incomplete elements in the underlying container.
|
||||
)doc");
|
||||
|
||||
|
||||
REGISTER_OP("MapClear")
|
||||
.Attr("capacity: int >= 0 = 0")
|
||||
.Attr("memory_limit: int >= 0 = 0")
|
||||
|
|
@ -2367,7 +2365,6 @@ REGISTER_OP("MapClear")
|
|||
Op removes all elements in the underlying container.
|
||||
)doc");
|
||||
|
||||
|
||||
// OrderedMap
|
||||
REGISTER_OP("OrderedMapStage")
|
||||
.Input("key: int64")
|
||||
|
|
|
|||
|
|
@ -925,27 +925,27 @@ use_image_if_no_bounding_boxes: Controls behavior if no bounding boxes supplied.
|
|||
)doc");
|
||||
|
||||
REGISTER_OP("SampleDistortedBoundingBoxV2")
|
||||
.Input("image_size: T")
|
||||
.Input("bounding_boxes: float")
|
||||
.Input("min_object_covered: float")
|
||||
.Output("begin: T")
|
||||
.Output("size: T")
|
||||
.Output("bboxes: float")
|
||||
.Attr("T: {uint8, int8, int16, int32, int64}")
|
||||
.Attr("seed: int = 0")
|
||||
.Attr("seed2: int = 0")
|
||||
.Attr("aspect_ratio_range: list(float) = [0.75, 1.33]")
|
||||
.Attr("area_range: list(float) = [0.05, 1.0]")
|
||||
.Attr("max_attempts: int = 100")
|
||||
.Attr("use_image_if_no_bounding_boxes: bool = false")
|
||||
.SetIsStateful()
|
||||
.SetShapeFn([](InferenceContext* c) {
|
||||
c->set_output(0, c->Vector(3));
|
||||
c->set_output(1, c->Vector(3));
|
||||
c->set_output(2, c->MakeShape({1, 1, 4}));
|
||||
return Status::OK();
|
||||
})
|
||||
.Doc(R"doc(
|
||||
.Input("image_size: T")
|
||||
.Input("bounding_boxes: float")
|
||||
.Input("min_object_covered: float")
|
||||
.Output("begin: T")
|
||||
.Output("size: T")
|
||||
.Output("bboxes: float")
|
||||
.Attr("T: {uint8, int8, int16, int32, int64}")
|
||||
.Attr("seed: int = 0")
|
||||
.Attr("seed2: int = 0")
|
||||
.Attr("aspect_ratio_range: list(float) = [0.75, 1.33]")
|
||||
.Attr("area_range: list(float) = [0.05, 1.0]")
|
||||
.Attr("max_attempts: int = 100")
|
||||
.Attr("use_image_if_no_bounding_boxes: bool = false")
|
||||
.SetIsStateful()
|
||||
.SetShapeFn([](InferenceContext* c) {
|
||||
c->set_output(0, c->Vector(3));
|
||||
c->set_output(1, c->Vector(3));
|
||||
c->set_output(2, c->MakeShape({1, 1, 4}));
|
||||
return Status::OK();
|
||||
})
|
||||
.Doc(R"doc(
|
||||
Generate a single randomly distorted bounding box for an image.
|
||||
|
||||
Bounding box annotations are often supplied in addition to ground-truth labels
|
||||
|
|
@ -1236,16 +1236,16 @@ method: A string specifying the interpolation method. Only 'bilinear' is
|
|||
// --------------------------------------------------------------------------
|
||||
|
||||
REGISTER_OP("NonMaxSuppression")
|
||||
.Input("boxes: float")
|
||||
.Input("scores: float")
|
||||
.Input("max_output_size: int32")
|
||||
.Output("selected_indices: int32")
|
||||
.Attr("iou_threshold: float = 0.5")
|
||||
.SetShapeFn([](InferenceContext* c) {
|
||||
.Input("boxes: float")
|
||||
.Input("scores: float")
|
||||
.Input("max_output_size: int32")
|
||||
.Output("selected_indices: int32")
|
||||
.Attr("iou_threshold: float = 0.5")
|
||||
.SetShapeFn([](InferenceContext* c) {
|
||||
c->set_output(0, c->Vector(c->UnknownDim()));
|
||||
return Status::OK();
|
||||
})
|
||||
.Doc(R"doc(
|
||||
.Doc(R"doc(
|
||||
Greedily selects a subset of bounding boxes in descending order of score,
|
||||
pruning away boxes that have high intersection-over-union (IOU) overlap
|
||||
with previously selected boxes. Bounding boxes are supplied as
|
||||
|
|
|
|||
|
|
@ -25,7 +25,6 @@ using shape_inference::ShapeHandle;
|
|||
|
||||
namespace {
|
||||
|
||||
|
||||
// Return in <out> the result of making the end of <s> a square matrix.
|
||||
Status MakeBatchSquareMatrix(InferenceContext* c, ShapeHandle input,
|
||||
ShapeHandle* out) {
|
||||
|
|
|
|||
|
|
@ -385,7 +385,7 @@ class TestOp : public OpKernel {
|
|||
REGISTER_KERNEL_BUILDER(Name("TestOpWithNoGrad").Device(DEVICE_CPU), TestOp);
|
||||
#ifdef TENSORFLOW_USE_SYCL
|
||||
REGISTER_KERNEL_BUILDER(Name("TestOpWithNoGrad").Device(DEVICE_SYCL), TestOp);
|
||||
#endif // TENSORFLOW_USE_SYCL
|
||||
#endif // TENSORFLOW_USE_SYCL
|
||||
|
||||
TEST_F(MathGradTest, Error_Reporting) {
|
||||
auto x = test::AsTensor<float>({-3.f});
|
||||
|
|
@ -557,11 +557,10 @@ TEST_F(MathGradTest, Acosh) {
|
|||
TEST_F(MathGradTest, Atanh) {
|
||||
auto x = test::AsTensor<float>({-0.3f, -0.2f, -0.1f, 0.1f, 0.2f, 0.3f},
|
||||
TensorShape({2, 3}));
|
||||
auto g = [](float x) {
|
||||
return 1.f / (1.f - x * x);
|
||||
};
|
||||
auto g = [](float x) { return 1.f / (1.f - x * x); };
|
||||
auto dx = test::AsTensor<float>(
|
||||
{g(-0.3f), g(-0.2f), g(-0.1f), g(0.1f), g(0.2f), g(0.3f)}, TensorShape({2, 3}));
|
||||
{g(-0.3f), g(-0.2f), g(-0.1f), g(0.1f), g(0.2f), g(0.3f)},
|
||||
TensorShape({2, 3}));
|
||||
auto ans = SymGrad("Atanh", x);
|
||||
test::ExpectClose(ans, dx);
|
||||
}
|
||||
|
|
@ -761,7 +760,7 @@ TEST_F(MathGradTest, Pow) {
|
|||
}
|
||||
}
|
||||
|
||||
//TODO{lukeiwanski}: Implement Complex Pow for SYCL
|
||||
// TODO{lukeiwanski}: Implement Complex Pow for SYCL
|
||||
#ifndef TENSORFLOW_USE_SYCL
|
||||
TEST_F(MathGradTest, ComplexPow) {
|
||||
auto x = test::AsTensor<complex64>({0.f, 2.f, -2.f}, TensorShape({3}));
|
||||
|
|
@ -781,7 +780,7 @@ TEST_F(MathGradTest, ComplexPow) {
|
|||
dy, test::AsTensor<complex64>({h(0.f, 2.f), h(2.f, 2.f), h(-2.f, 2.f)},
|
||||
TensorShape({3})));
|
||||
}
|
||||
#endif // TENSORFLOW_USE_SYCL
|
||||
#endif // TENSORFLOW_USE_SYCL
|
||||
|
||||
TEST_F(MathGradTest, Maximum) {
|
||||
auto x = test::AsTensor<float>({-3.f, -2.f, -1.f, 1.f, 2.f, 3.f},
|
||||
|
|
@ -943,7 +942,7 @@ TEST_F(MathGradTest, MatMul_11) {
|
|||
test::ExpectClose(dy, MatMul(dz, true, x, true));
|
||||
}
|
||||
|
||||
//TODO{lukeiwanski}: Implement BatchMatMul for SYCL
|
||||
// TODO{lukeiwanski}: Implement BatchMatMul for SYCL
|
||||
#ifndef TENSORFLOW_USE_SYCL
|
||||
TEST_F(MathGradTest, BatchMatMul_00) {
|
||||
auto x = test::AsTensor<float>({1.f, 2.f, 3.f, 4.f, 5.f, 6.f},
|
||||
|
|
@ -992,7 +991,7 @@ TEST_F(MathGradTest, BatchMatMul_11) {
|
|||
test::ExpectClose(dx, BatchMatMul(y, true, dz, true));
|
||||
test::ExpectClose(dy, BatchMatMul(dz, true, x, true));
|
||||
}
|
||||
#endif // TENSORFLOW_USE_SYCL
|
||||
#endif // TENSORFLOW_USE_SYCL
|
||||
|
||||
TEST_F(MathGradTest, Sum_dim0) {
|
||||
auto x = test::AsTensor<float>({-3.f, -2.f, -1.f, 1.f, 2.f, 3.f},
|
||||
|
|
|
|||
|
|
@ -235,7 +235,9 @@ value is computed as \\( \sqrt{a^2 + b^2}\\).
|
|||
.Attr("T: {half, float, double, complex64, complex128}") \
|
||||
.SetShapeFn(shape_inference::UnchangedShape)
|
||||
|
||||
REGISTER_OP("Neg").UNARY().Doc(R"doc(
|
||||
REGISTER_OP("Neg")
|
||||
.UNARY()
|
||||
.Doc(R"doc(
|
||||
Computes numerical negative value element-wise.
|
||||
I.e., \\(y = -x\\).
|
||||
)doc");
|
||||
|
|
@ -258,155 +260,217 @@ is the corresponding input gradient.
|
|||
)doc")
|
||||
.Deprecated(17, "Use ReciprocalGrad");
|
||||
|
||||
REGISTER_OP("Reciprocal").UNARY().Doc(R"doc(
|
||||
REGISTER_OP("Reciprocal")
|
||||
.UNARY()
|
||||
.Doc(R"doc(
|
||||
Computes the reciprocal of x element-wise.
|
||||
I.e., \\(y = 1 / x\\).
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("ReciprocalGrad").UNARY_GRADIENT_COMPLEX().Doc(R"doc(
|
||||
REGISTER_OP("ReciprocalGrad")
|
||||
.UNARY_GRADIENT_COMPLEX()
|
||||
.Doc(R"doc(
|
||||
Computes the gradient for the inverse of `x` wrt its input.
|
||||
|
||||
Specifically, `grad = -dy * y*y`, where `y = 1/x`, and `dy`
|
||||
is the corresponding input gradient.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("Square").UNARY().Doc(R"doc(
|
||||
REGISTER_OP("Square")
|
||||
.UNARY()
|
||||
.Doc(R"doc(
|
||||
Computes square of x element-wise.
|
||||
I.e., \\(y = x * x = x^2\\).
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("Sqrt").UNARY_COMPLEX().Doc(R"doc(
|
||||
REGISTER_OP("Sqrt")
|
||||
.UNARY_COMPLEX()
|
||||
.Doc(R"doc(
|
||||
Computes square root of x element-wise.
|
||||
I.e., \\(y = \sqrt{x} = x^{1/2}\\).
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("SqrtGrad").UNARY_GRADIENT_COMPLEX().Doc(R"doc(
|
||||
REGISTER_OP("SqrtGrad")
|
||||
.UNARY_GRADIENT_COMPLEX()
|
||||
.Doc(R"doc(
|
||||
Computes the gradient for the sqrt of `x` wrt its input.
|
||||
|
||||
Specifically, `grad = dy * 0.5 / y`, where `y = sqrt(x)`, and `dy`
|
||||
is the corresponding input gradient.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("Rsqrt").UNARY_COMPLEX().Doc(R"doc(
|
||||
REGISTER_OP("Rsqrt")
|
||||
.UNARY_COMPLEX()
|
||||
.Doc(R"doc(
|
||||
Computes reciprocal of square root of x element-wise.
|
||||
I.e., \\(y = 1 / \sqrt{x}\\).
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("Round").UNARY().Doc(R"doc(
|
||||
REGISTER_OP("Round")
|
||||
.UNARY()
|
||||
.Doc(R"doc(
|
||||
Rounds the values of a tensor to the nearest integer, element-wise.
|
||||
|
||||
Rounds half to even. Also known as banker's rounding. If you want to round
according to the current system rounding mode use std::rint.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("RsqrtGrad").UNARY_GRADIENT_COMPLEX().Doc(R"doc(
|
||||
REGISTER_OP("RsqrtGrad")
|
||||
.UNARY_GRADIENT_COMPLEX()
|
||||
.Doc(R"doc(
|
||||
Computes the gradient for the rsqrt of `x` wrt its input.
|
||||
|
||||
Specifically, `grad = dy * -0.5 * y^3`, where `y = rsqrt(x)`, and `dy`
|
||||
is the corresponding input gradient.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("Exp").UNARY_COMPLEX().Doc(R"doc(
|
||||
REGISTER_OP("Exp")
|
||||
.UNARY_COMPLEX()
|
||||
.Doc(R"doc(
|
||||
Computes exponential of x element-wise. \\(y = e^x\\).
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("Expm1").UNARY_COMPLEX().Doc(R"doc(
|
||||
REGISTER_OP("Expm1")
|
||||
.UNARY_COMPLEX()
|
||||
.Doc(R"doc(
|
||||
Computes exponential of x - 1 element-wise.
|
||||
I.e., \\(y = (\exp x) - 1\\).
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("Log").UNARY_COMPLEX().Doc(R"doc(
|
||||
REGISTER_OP("Log")
|
||||
.UNARY_COMPLEX()
|
||||
.Doc(R"doc(
|
||||
Computes natural logarithm of x element-wise.
|
||||
I.e., \\(y = \log_e x\\).
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("Log1p").UNARY_COMPLEX().Doc(R"doc(
|
||||
REGISTER_OP("Log1p")
|
||||
.UNARY_COMPLEX()
|
||||
.Doc(R"doc(
|
||||
Computes natural logarithm of (1 + x) element-wise.
|
||||
I.e., \\(y = \log_e (1 + x)\\).
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("Sinh").UNARY_COMPLEX().Doc(R"doc(
|
||||
REGISTER_OP("Sinh")
|
||||
.UNARY_COMPLEX()
|
||||
.Doc(R"doc(
|
||||
Computes hyperbolic sine of x element-wise.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("Cosh").UNARY_COMPLEX().Doc(R"doc(
|
||||
REGISTER_OP("Cosh")
|
||||
.UNARY_COMPLEX()
|
||||
.Doc(R"doc(
|
||||
Computes hyperbolic cosine of x element-wise.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("Tanh").UNARY_COMPLEX().Doc(R"doc(
|
||||
REGISTER_OP("Tanh")
|
||||
.UNARY_COMPLEX()
|
||||
.Doc(R"doc(
|
||||
Computes hyperbolic tangent of `x` element-wise.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("Asinh").UNARY_COMPLEX().Doc(R"doc(
|
||||
REGISTER_OP("Asinh")
|
||||
.UNARY_COMPLEX()
|
||||
.Doc(R"doc(
|
||||
Computes inverse hyperbolic sine of x element-wise.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("Acosh").UNARY_COMPLEX().Doc(R"doc(
|
||||
REGISTER_OP("Acosh")
|
||||
.UNARY_COMPLEX()
|
||||
.Doc(R"doc(
|
||||
Computes inverse hyperbolic cosine of x element-wise.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("Atanh").UNARY_COMPLEX().Doc(R"doc(
|
||||
REGISTER_OP("Atanh")
|
||||
.UNARY_COMPLEX()
|
||||
.Doc(R"doc(
|
||||
Computes inverse hyperbolic tangent of x element-wise.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("TanhGrad").UNARY_GRADIENT_COMPLEX().Doc(R"doc(
|
||||
REGISTER_OP("TanhGrad")
|
||||
.UNARY_GRADIENT_COMPLEX()
|
||||
.Doc(R"doc(
|
||||
Computes the gradient for the tanh of `x` wrt its input.
|
||||
|
||||
Specifically, `grad = dy * (1 - y*y)`, where `y = tanh(x)`, and `dy`
|
||||
is the corresponding input gradient.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("Lgamma").UNARY_REAL().Doc(R"doc(
|
||||
REGISTER_OP("Lgamma")
|
||||
.UNARY_REAL()
|
||||
.Doc(R"doc(
|
||||
Computes the log of the absolute value of `Gamma(x)` element-wise.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("Digamma").UNARY_REAL().Doc(R"doc(
|
||||
REGISTER_OP("Digamma")
|
||||
.UNARY_REAL()
|
||||
.Doc(R"doc(
|
||||
Computes Psi, the derivative of Lgamma (the log of the absolute value of
|
||||
`Gamma(x)`), element-wise.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("Erf").UNARY_REAL().Doc(R"doc(
|
||||
REGISTER_OP("Erf")
|
||||
.UNARY_REAL()
|
||||
.Doc(R"doc(
|
||||
Computes the Gauss error function of `x` element-wise.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("Erfc").UNARY_REAL().Doc(R"doc(
|
||||
REGISTER_OP("Erfc")
|
||||
.UNARY_REAL()
|
||||
.Doc(R"doc(
|
||||
Computes the complementary error function of `x` element-wise.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("Sigmoid").UNARY_COMPLEX().Doc(R"doc(
|
||||
REGISTER_OP("Sigmoid")
|
||||
.UNARY_COMPLEX()
|
||||
.Doc(R"doc(
|
||||
Computes sigmoid of `x` element-wise.
|
||||
|
||||
Specifically, `y = 1 / (1 + exp(-x))`.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("SigmoidGrad").UNARY_GRADIENT_COMPLEX().Doc(R"doc(
|
||||
REGISTER_OP("SigmoidGrad")
|
||||
.UNARY_GRADIENT_COMPLEX()
|
||||
.Doc(R"doc(
|
||||
Computes the gradient of the sigmoid of `x` wrt its input.
|
||||
|
||||
Specifically, `grad = dy * y * (1 - y)`, where `y = sigmoid(x)`, and
|
||||
`dy` is the corresponding input gradient.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("Sin").UNARY_COMPLEX().Doc(R"doc(
|
||||
REGISTER_OP("Sin")
|
||||
.UNARY_COMPLEX()
|
||||
.Doc(R"doc(
|
||||
Computes sin of x element-wise.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("Cos").UNARY_COMPLEX().Doc(R"doc(
|
||||
REGISTER_OP("Cos")
|
||||
.UNARY_COMPLEX()
|
||||
.Doc(R"doc(
|
||||
Computes cos of x element-wise.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("Tan").UNARY().Doc(R"doc(
|
||||
REGISTER_OP("Tan")
|
||||
.UNARY()
|
||||
.Doc(R"doc(
|
||||
Computes tan of x element-wise.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("Asin").UNARY().Doc(R"doc(
|
||||
REGISTER_OP("Asin")
|
||||
.UNARY()
|
||||
.Doc(R"doc(
|
||||
Computes asin of x element-wise.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("Acos").UNARY().Doc(R"doc(
|
||||
REGISTER_OP("Acos")
|
||||
.UNARY()
|
||||
.Doc(R"doc(
|
||||
Computes acos of x element-wise.
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("Atan").UNARY().Doc(R"doc(
|
||||
REGISTER_OP("Atan")
|
||||
.UNARY()
|
||||
.Doc(R"doc(
|
||||
Computes atan of x element-wise.
|
||||
)doc");
|
||||
|
||||
|
|
@ -960,28 +1024,36 @@ beta function.
|
|||
.Attr("T: realnumbertype") \
|
||||
.SetShapeFn(shape_inference::BroadcastBinaryOpShapeFn)
|
||||
|
||||
REGISTER_OP("Less").COMPARISON().Doc(R"doc(
|
||||
REGISTER_OP("Less")
|
||||
.COMPARISON()
|
||||
.Doc(R"doc(
|
||||
Returns the truth value of (x < y) element-wise.
|
||||
|
||||
*NOTE*: `Less` supports broadcasting. More about broadcasting
|
||||
[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("LessEqual").COMPARISON().Doc(R"doc(
|
||||
REGISTER_OP("LessEqual")
|
||||
.COMPARISON()
|
||||
.Doc(R"doc(
|
||||
Returns the truth value of (x <= y) element-wise.
|
||||
|
||||
*NOTE*: `LessEqual` supports broadcasting. More about broadcasting
|
||||
[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("Greater").COMPARISON().Doc(R"doc(
|
||||
REGISTER_OP("Greater")
|
||||
.COMPARISON()
|
||||
.Doc(R"doc(
|
||||
Returns the truth value of (x > y) element-wise.
|
||||
|
||||
*NOTE*: `Greater` supports broadcasting. More about broadcasting
|
||||
[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("GreaterEqual").COMPARISON().Doc(R"doc(
|
||||
REGISTER_OP("GreaterEqual")
|
||||
.COMPARISON()
|
||||
.Doc(R"doc(
|
||||
Returns the truth value of (x >= y) element-wise.
|
||||
|
||||
*NOTE*: `GreaterEqual` supports broadcasting. More about broadcasting
|
||||
|
|
@ -1003,14 +1075,18 @@ Returns the truth value of (x >= y) element-wise.
|
|||
"quint8, qint8, qint32, string, bool, complex128}") \
|
||||
.SetShapeFn(shape_inference::BroadcastBinaryOpShapeFn)
|
||||
|
||||
REGISTER_OP("Equal").EQUALITY_COMPARISON().Doc(R"doc(
|
||||
REGISTER_OP("Equal")
|
||||
.EQUALITY_COMPARISON()
|
||||
.Doc(R"doc(
|
||||
Returns the truth value of (x == y) element-wise.
|
||||
|
||||
*NOTE*: `Equal` supports broadcasting. More about broadcasting
|
||||
[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("NotEqual").EQUALITY_COMPARISON().Doc(R"doc(
|
||||
REGISTER_OP("NotEqual")
|
||||
.EQUALITY_COMPARISON()
|
||||
.Doc(R"doc(
|
||||
Returns the truth value of (x != y) element-wise.
|
||||
|
||||
*NOTE*: `NotEqual` supports broadcasting. More about broadcasting
|
||||
|
|
@ -1048,14 +1124,18 @@ Returns the truth value of NOT x element-wise.
|
|||
.SetIsCommutative() \
|
||||
.SetShapeFn(shape_inference::BroadcastBinaryOpShapeFn)
|
||||
|
||||
REGISTER_OP("LogicalAnd").BINARY_LOGICAL().Doc(R"doc(
|
||||
REGISTER_OP("LogicalAnd")
|
||||
.BINARY_LOGICAL()
|
||||
.Doc(R"doc(
|
||||
Returns the truth value of x AND y element-wise.
|
||||
|
||||
*NOTE*: `LogicalAnd` supports broadcasting. More about broadcasting
|
||||
[here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
|
||||
)doc");
|
||||
|
||||
REGISTER_OP("LogicalOr").BINARY_LOGICAL().Doc(R"doc(
|
||||
REGISTER_OP("LogicalOr")
|
||||
.BINARY_LOGICAL()
|
||||
.Doc(R"doc(
|
||||
Returns the truth value of x OR y element-wise.
|
||||
|
||||
*NOTE*: `LogicalOr` supports broadcasting. More about broadcasting
|
||||
|
|
@ -1995,12 +2075,12 @@ Status RangeSize(const Tensor* start_t, const Tensor* limit_t,
|
|||
T limit = limit_t->scalar<T>()();
|
||||
T delta = delta_t->scalar<T>()();
|
||||
if (start > limit && delta > 0) {
|
||||
return errors::InvalidArgument(
|
||||
"Requires start <= limit when delta > 0: ", start, "/", limit);
|
||||
return errors::InvalidArgument("Requires start <= limit when delta > 0: ",
|
||||
start, "/", limit);
|
||||
}
|
||||
if (start < limit && delta < 0) {
|
||||
return errors::InvalidArgument(
|
||||
"Requires start >= limit when delta < 0: ", start, "/", limit);
|
||||
return errors::InvalidArgument("Requires start >= limit when delta < 0: ",
|
||||
start, "/", limit);
|
||||
}
|
||||
if (delta == 0) {
|
||||
return errors::InvalidArgument("Requires delta != 0");
|
||||
|
|
|
|||
|
|
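The checks above surface through the public `tf.range` API; a small, non-authoritative sketch of the valid and invalid argument combinations:

```python
import tensorflow as tf

ok = tf.range(start=0, limit=10, delta=3)   # [0, 3, 6, 9]

# These combinations trigger the InvalidArgument errors built above:
#   tf.range(10, 0, 1)   -> "Requires start <= limit when delta > 0"
#   tf.range(0, 10, -1)  -> "Requires start >= limit when delta < 0"
#   tf.range(0, 10, 0)   -> "Requires delta != 0"
```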
@ -2176,9 +2176,9 @@ Status TopKShapeFn(InferenceContext* c) {
|
|||
DimensionHandle last_dim = c->Dim(input, -1);
|
||||
if (c->ValueKnown(last_dim) && c->ValueKnown(k_dim) &&
|
||||
c->Value(last_dim) < c->Value(k_dim)) {
|
||||
return errors::InvalidArgument(
|
||||
"input must have last dimension >= k = ", c->Value(k_dim), " but is ",
|
||||
c->Value(last_dim));
|
||||
return errors::InvalidArgument("input must have last dimension >= k = ",
|
||||
c->Value(k_dim), " but is ",
|
||||
c->Value(last_dim));
|
||||
}
|
||||
|
||||
// Replace last_dim with k_dim.
|
||||
|
|
@ -2278,9 +2278,9 @@ REGISTER_OP("NthElement")
|
|||
DimensionHandle last_dim = c->Dim(input, -1);
|
||||
if (c->ValueKnown(last_dim) && c->ValueKnown(n_dim) &&
|
||||
c->Value(last_dim) <= c->Value(n_dim)) {
|
||||
return errors::InvalidArgument(
|
||||
"Input must have last dimension > n = ", c->Value(n_dim), " but is ",
|
||||
c->Value(last_dim));
|
||||
return errors::InvalidArgument("Input must have last dimension > n = ",
|
||||
c->Value(n_dim), " but is ",
|
||||
c->Value(last_dim));
|
||||
}
|
||||
|
||||
// Reduce last_dim for output tensor
|
||||
|
|
|
|||
|
|
@ -95,14 +95,13 @@ TEST(NNOpsTest, NthElement_ShapeFn) {
|
|||
INFER_OK(op, "[?,3,?,21];[]", "[d0_0,d0_1,d0_2]");
|
||||
|
||||
INFER_ERROR("Shape must be at least rank 1 but is rank 0", op, "[];[]");
|
||||
INFER_ERROR("Input must have last dimension > n = 20 but is 1", op,
|
||||
"[1];[]");
|
||||
INFER_ERROR("Input must have last dimension > n = 20 but is 1", op, "[1];[]");
|
||||
INFER_ERROR("Input must have last dimension > n = 20 but is 20", op,
|
||||
"[1,2,3,20];[]");
|
||||
n_t = test::AsScalar<int32>(-1);
|
||||
INFER_ERROR(
|
||||
"Dimension size, given by scalar input 1, must be non-negative but is -1",
|
||||
op, "[1,2,3,4];[]");
|
||||
"Dimension size, given by scalar input 1, must be non-negative but is -1",
|
||||
op, "[1,2,3,4];[]");
|
||||
}
|
||||
|
||||
TEST(NNOpsTest, BatchNormWithGlobalNormalization_ShapeFn) {
|
||||
|
|
@ -386,9 +385,8 @@ TEST(NNOpsTest, Dilation2DBackpropFilter_ShapeFn) {
|
|||
}
|
||||
|
||||
TEST(NNOpsTest, MergeBothInputs_ShapeFn) {
|
||||
for (const char* op_name :
|
||||
{"ReluGrad", "Relu6Grad", "EluGrad", "SeluGrad", "SoftplusGrad",
|
||||
"SoftsignGrad"}) {
|
||||
for (const char* op_name : {"ReluGrad", "Relu6Grad", "EluGrad", "SeluGrad",
|
||||
"SoftplusGrad", "SoftsignGrad"}) {
|
||||
ShapeInferenceTestOp op(op_name);
|
||||
|
||||
INFER_OK(op, "?;?", "in0|in1");
|
||||
|
|
|
|||
|
|
@ -329,7 +329,7 @@ REGISTER_OP("DecodeCSV")
|
|||
.Input("records: string")
|
||||
.Input("record_defaults: OUT_TYPE")
|
||||
.Output("output: OUT_TYPE")
|
||||
.Attr("OUT_TYPE: list({float,int32,int64,string})")
|
||||
.Attr("OUT_TYPE: list({float,double,int32,int64,string})")
|
||||
.Attr("field_delim: string = ','")
|
||||
.Attr("use_quote_delim: bool = true")
|
||||
.Attr("na_value: string = ''")
|
||||
|
|
|
|||
|
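The `OUT_TYPE` change above adds `double` to the accepted column types; a hedged sketch of what that enables through the `tf.decode_csv` wrapper (the record contents below are invented):

```python
# Hypothetical example: a float64 default should be accepted once `double`
# is part of OUT_TYPE.
import tensorflow as tf

records = tf.constant(["1.5,2,foo"])
record_defaults = [tf.constant([0.0], dtype=tf.float64),   # float64 column
                   tf.constant([0], dtype=tf.int32),
                   tf.constant(["unknown"])]

col_a, col_b, col_c = tf.decode_csv(records, record_defaults,
                                    field_delim=",")
```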
|
@ -187,8 +187,8 @@ TEST(SparseOpsTest, SparseTensorDenseMatMul_ShapeFn) {
|
|||
|
||||
// second output dim comes from b, depending on adjoint_b value.
|
||||
INFER_OK(op, "?;?;?;?", "[?,?]");
|
||||
INFER_OK(op, "?;?;?;[?,?]", "[?,d3_1]"); // use d3_1, !adjoint_b.
|
||||
INFER_OK(op, "?;?;?;[1,2]", "[?,d3_1]"); // use d3_1, !adjoint_b.
|
||||
INFER_OK(op, "?;?;?;[?,?]", "[?,d3_1]"); // use d3_1, !adjoint_b.
|
||||
INFER_OK(op, "?;?;?;[1,2]", "[?,d3_1]"); // use d3_1, !adjoint_b.
|
||||
INFER_OK(op, "?;?;[2];[1,2]", "[?,d3_1]"); // use d3_1, !adjoint_b.
|
||||
|
||||
set_adjoints(false, true);
|
||||
|
|
|
|||
|
|
@ -45,7 +45,8 @@ static Status StatelessShape(shape_inference::InferenceContext* context) {
|
|||
.SetShapeFn(StatelessShape)
|
||||
|
||||
// This op is exposed through contrib/stateless only. The interface may change.
|
||||
REGISTER_STATELESS_OP("StatelessRandomUniform").Doc(R"doc(
|
||||
REGISTER_STATELESS_OP("StatelessRandomUniform")
|
||||
.Doc(R"doc(
|
||||
Outputs deterministic pseudorandom random values from a uniform distribution.
|
||||
|
||||
The generated values follow a uniform distribution in the range `[0, 1)`. The
|
||||
|
|
@ -60,7 +61,8 @@ output: Random values with specified shape.
|
|||
)doc");
|
||||
|
||||
// This op is exposed through contrib/stateless only. The interface may change.
|
||||
REGISTER_STATELESS_OP("StatelessRandomNormal").Doc(R"doc(
|
||||
REGISTER_STATELESS_OP("StatelessRandomNormal")
|
||||
.Doc(R"doc(
|
||||
Outputs deterministic pseudorandom values from a normal distribution.
|
||||
|
||||
The generated values will have mean 0 and standard deviation 1.
|
||||
|
|
@ -74,7 +76,8 @@ output: Random values with specified shape.
|
|||
)doc");
|
||||
|
||||
// This op is exposed through contrib/stateless only. The interface may change.
|
||||
REGISTER_STATELESS_OP("StatelessTruncatedNormal").Doc(R"doc(
|
||||
REGISTER_STATELESS_OP("StatelessTruncatedNormal")
|
||||
.Doc(R"doc(
|
||||
Outputs deterministic pseudorandom values from a truncated normal distribution.
|
||||
|
||||
The generated values follow a normal distribution with mean 0 and standard
|
||||
|
|
|
|||
|
|
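As the comments above note, these ops are exposed through contrib/stateless only; a minimal, non-authoritative sketch of the determinism they provide, assuming the TF 1.x contrib wrappers:

```python
# Same (shape, seed) pair -> same values on every run.
import tensorflow as tf

seed = tf.constant([12, 34], dtype=tf.int64)   # shape-[2] seed tensor

u = tf.contrib.stateless.stateless_random_uniform([2, 3], seed=seed)
n = tf.contrib.stateless.stateless_random_normal([2, 3], seed=seed)
t = tf.contrib.stateless.stateless_truncated_normal([2, 3], seed=seed)
```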
@ -24,7 +24,7 @@ limitations under the License.
|
|||
|
||||
// TF_VERSION_SUFFIX is non-empty for pre-releases (e.g. "-alpha", "-alpha.1",
|
||||
// "-beta", "-rc", "-rc.1")
|
||||
#define TF_VERSION_SUFFIX "-rc0"
|
||||
#define TF_VERSION_SUFFIX "-rc1"
|
||||
|
||||
#define TF_STR_HELPER(x) #x
|
||||
#define TF_STR(x) TF_STR_HELPER(x)
|
||||
|
|
@ -117,5 +117,7 @@ extern const char* tf_compiler_version();
|
|||
// The git commit designator when tensorflow was built
|
||||
// If no git repository, this will be "internal".
|
||||
extern const char* tf_git_version();
|
||||
// Value of the _GLIBCXX_USE_CXX11_ABI flag, or 0 if it's not set.
|
||||
extern const int tf_cxx11_abi_flag();
|
||||
|
||||
#endif // TENSORFLOW_CORE_PUBLIC_VERSION_H_
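These macros and declarations surface in Python as version metadata; a short, non-authoritative sketch:

```python
# With the suffix bumped to "-rc1", the assembled version string becomes
# "1.4.0-rc1" (assuming the major/minor/patch macros are unchanged).
import tensorflow as tf

print(tf.VERSION)           # e.g. "1.4.0-rc1"
print(tf.GIT_VERSION)       # tf_git_version(); "internal" without a git repo
print(tf.COMPILER_VERSION)  # tf_compiler_version()
```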
|
||||
|
|
|
|||
|
|
@ -1,4 +1,12 @@
|
|||
# C++ API
|
||||
|
||||
Note: By default [tensorflow.org](http://tensorflow.org) shows docs for the
|
||||
most recent stable version. The instructions in this doc require building from
|
||||
source. You will probably want to build from the `master` version of tensorflow.
|
||||
You should, as a result, be sure you are following the
|
||||
[`master` version of this doc](https://www.tensorflow.org/versions/master/api_guides/cc/guide),
|
||||
in case there have been any changes.
|
||||
|
||||
[TOC]
|
||||
|
||||
TensorFlow's C++ API provides mechanisms for constructing and executing a data
|
||||
|
|
@ -48,7 +56,9 @@ TensorFlow
|
|||
`BUILD` file in the same directory with the following contents:
|
||||
|
||||
```python
|
||||
cc_binary(
|
||||
load("//tensorflow:tensorflow.bzl", "tf_cc_binary")
|
||||
|
||||
tf_cc_binary(
|
||||
name = "example",
|
||||
srcs = ["example.cc"],
|
||||
deps = [
|
||||
|
|
@ -59,8 +69,10 @@ cc_binary(
|
|||
)
|
||||
```
|
||||
|
||||
You should be able to build and run the example using the following command
|
||||
(be sure to run `./configure` in your build sandbox first):
|
||||
Use `tf_cc_binary` rather than Bazel's native `cc_binary` to link in necessary
|
||||
symbols from `libtensorflow_framework.so`. You should be able to build and run
|
||||
the example using the following command (be sure to run `./configure` in your
|
||||
build sandbox first):
|
||||
|
||||
```shell
|
||||
bazel run -c opt //tensorflow/cc/example:example
|
||||
|
|
|
|||
|
|
@ -1,6 +1,6 @@
|
|||
# Writing TensorFlow Documentation
|
||||
|
||||
We welcome contributions to the Tensorflow documentation from the community.
|
||||
We welcome contributions to the TensorFlow documentation from the community.
|
||||
This document explains how you can contribute to that documentation. In
|
||||
particular, this document explains the following:
|
||||
|
||||
|
|
@ -8,28 +8,30 @@ particular, this document explains the following:
|
|||
* How to make conformant edits.
|
||||
* How to build and test your documentation changes before you submit them.
|
||||
|
||||
You can view Tensorflow documentation on https://www.tensorflow.org, and you
|
||||
can view and edit the raw files on Github. We're publishing our docs on Github
|
||||
so everybody can contribute. Whatever gets checked in tensorflow/docs_src will
|
||||
be published soon after on https://www.tensorflow.org.
|
||||
You can view TensorFlow documentation on https://www.tensorflow.org, and you
|
||||
can view and edit the raw files on
|
||||
[GitHub](https://www.tensorflow.org/code/tensorflow/docs_src/).
|
||||
We're publishing our docs on GitHub so everybody can contribute. Whatever gets
|
||||
checked in to `tensorflow/docs_src` will be published soon after on
|
||||
https://www.tensorflow.org.
|
||||
|
||||
Republishing TensorFlow documentation in different forms is absolutely allowed,
|
||||
but we are unlikely to accept other documentation formats (or the tooling to
|
||||
generate them) into our repository. If you do choose to republish our
|
||||
documentation in another form, please be sure to include:
|
||||
|
||||
* The version of the API this represents (i.e. r1.0, master, etc.)
|
||||
* The version of the API this represents (for example, r1.0, master, etc.)
|
||||
* The commit or version from which the documentation was generated
|
||||
* Where to get the latest documentation (that is, https://www.tensorflow.org)
|
||||
* The Apache 2.0 license.
|
||||
|
||||
## A Note on Versions
|
||||
## A note on versions
|
||||
|
||||
tensorflow.org, at root, shows documentation for the latest stable binary. This
|
||||
is the documentation you should be reading if you are using `pip` to install
|
||||
TensorFlow.
|
||||
|
||||
However, most developers will contribute documentation into the master Github
|
||||
However, most developers will contribute documentation into the master GitHub
|
||||
branch, which is published, occasionally,
|
||||
at [tensorflow.org/versions/master](https://www.tensorflow.org/versions/master).
|
||||
|
||||
|
|
@ -49,8 +51,9 @@ in the code:
|
|||
To modify the reference documentation, you edit the appropriate code comments.
|
||||
|
||||
Non-reference documentation (for example, the TensorFlow installation guides) is
|
||||
authored by humans. This documentation is located in the `tensorflow/docs_src`
|
||||
directory. Each subdirectory of `docs_src` contains a set of related Tensorflow
|
||||
authored by humans. This documentation is located in the
|
||||
[`tensorflow/docs_src`](https://www.tensorflow.org/code/tensorflow/docs_src/)
|
||||
directory. Each subdirectory of `docs_src` contains a set of related TensorFlow
|
||||
documentation. For example, the TensorFlow installation guides are all in the
|
||||
`docs_src/install` directory.
|
||||
|
||||
|
|
@ -183,7 +186,7 @@ documentation in the `/tmp/tfdocs` dir:
|
|||
|
||||
Note: You must set `src_dir` and `output_dir` to absolute file paths.
|
||||
|
||||
## Generating Python API Documentation
|
||||
## Generating Python API documentation
|
||||
|
||||
Ops, classes, and utility functions are defined in Python modules, such as
|
||||
`image_ops.py`. Python modules contain a module docstring. For example:
|
||||
|
|
@ -216,7 +219,7 @@ the following:
|
|||
Only top level modules (currently just `tf` and `tfdbg`) need to be manually
|
||||
added to the generate script.
|
||||
|
||||
### Sealing Modules
|
||||
### Sealing modules
|
||||
|
||||
Because the doc generator walks all visible symbols, and descends into anything
|
||||
it finds, it will document any accidentally exposed symbols. If a module only
|
||||
|
|
@ -242,7 +245,7 @@ following options for dealing with them:
|
|||
|
||||
We'll discuss these options in detail below.
|
||||
|
||||
#### Private Symbols and Imports
|
||||
#### Private symbols and imports
|
||||
|
||||
The easiest way to conform to the API sealing expectations is to make non-public
|
||||
symbols private (by prepending an underscore _). The doc generator respects
|
||||
|
|
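A small sketch of the underscore convention described above; the names here are hypothetical, not from the TensorFlow codebase:

```python
# Private helpers (leading underscore) are skipped by the doc generator;
# only the public function below is documented.
def _validate(item):
    return item is not None

def count_valid(items):
    """Counts the items that pass validation."""
    return sum(1 for item in items if _validate(item))
```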
@ -288,7 +291,7 @@ are public. All `@@`s will eventually be removed. If you see them, however,
|
|||
please do not randomly delete them as they are still in use by some of our
|
||||
systems.
|
||||
|
||||
#### Traversal Blacklist
|
||||
#### Traversal blacklist
|
||||
|
||||
If all else fails, you may add entries to the traversal blacklist in
|
||||
`generate_lib.py.` **Almost all entries in this list are an abuse of its
|
||||
|
|
@ -311,7 +314,7 @@ flags, ...) included for platform abstraction can be documented without
|
|||
documenting their interior. Its use beyond this purpose is a shortcut that may
|
||||
be acceptable for contrib, but not for core tensorflow.
|
||||
|
||||
## Op Documentation Style Guide
|
||||
## Op documentation style guide
|
||||
|
||||
Long, descriptive module-level documentation for modules should go in the API
|
||||
Guides in `docs_src/api_guides/python`.
|
||||
|
|
@ -334,7 +337,7 @@ is [here](https://daringfireball.net/projects/markdown/). You are allowed to
|
|||
use [MathJax](https://www.mathjax.org) notation for equations (see above for
|
||||
restrictions).
|
||||
|
||||
### Writing About Code
|
||||
### Writing about code
|
||||
|
||||
Put backticks around these things when they're used in text:
|
||||
|
||||
|
|
@ -375,7 +378,7 @@ Two notes about backticks for code samples in Markdown:
|
|||
However, do NOT indent four spaces and use backticks simultaneously. Use one
|
||||
or the other.
|
||||
|
||||
### Tensor Dimensions
|
||||
### Tensor dimensions
|
||||
|
||||
When you're talking about a tensor in general, don't capitalize the word tensor.
|
||||
When you're talking about the specific object that's provided to an op as an
|
||||
|
|
@ -500,7 +503,7 @@ def foo(x, y, name="bar"):
|
|||
"""
|
||||
```
|
||||
|
||||
## Description of the Docstring Sections
|
||||
## Description of the docstring sections
|
||||
|
||||
This section details each of the elements in docstrings.
|
||||
|
||||
|
|
|
|||
|
|
@ -20,7 +20,6 @@ The TensorFlow community has created many great projects around TensorFlow, incl
|
|||
* [Machine Learning with TensorFlow (Book & Code)](http://tensorflowbook.com)
|
||||
* [@jtoy's awesome "Awesome TensorFlow" list of awesome things](https://github.com/jtoy/awesome-tensorflow)
|
||||
* [TensorFlow tutorials](https://github.com/pkmital/tensorflow_tutorials)
|
||||
* [Scikit Flow - Simplified Interface for TensorFlow](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/learn/python/learn)
|
||||
* [Caffe to TensorFlow model converter](https://github.com/ethereon/caffe-tensorflow)
|
||||
* [Bitfusion's GPU-enabled AWS EC2 TensorFlow AMI](https://github.com/bitfusionio/amis/tree/master/awsmrkt-bfboost-ubuntu14-cuda75-tensorflow) ([Launch AMI](https://aws.amazon.com/marketplace/pp/B01EYKBEQ0))
|
||||
* [Rust language bindings](https://github.com/google/tensorflow-rust)
|
||||
|
|
|
|||
|
|
@ -38,7 +38,7 @@ enable TensorFlow for C:
|
|||
OS="linux" # Change to "darwin" for macOS
|
||||
TARGET_DIRECTORY="/usr/local"
|
||||
curl -L \
|
||||
"https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-${OS}-x86_64-1.4.0-rc0.tar.gz" |
|
||||
"https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-${OS}-x86_64-1.4.0-rc1.tar.gz" |
|
||||
sudo tar -C $TARGET_DIRECTORY -xz
|
||||
|
||||
The `tar` command extracts the TensorFlow C library into the `lib`
|
||||
|
|
|
|||
|
|
@ -38,7 +38,7 @@ steps to install this library and enable TensorFlow for Go:
|
|||
TF_TYPE="cpu" # Change to "gpu" for GPU support
|
||||
TARGET_DIRECTORY='/usr/local'
|
||||
curl -L \
|
||||
"https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-$(go env GOOS)-x86_64-1.4.0-rc0.tar.gz" |
|
||||
"https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-$(go env GOOS)-x86_64-1.4.0-rc1.tar.gz" |
|
||||
sudo tar -C $TARGET_DIRECTORY -xz
|
||||
|
||||
The `tar` command extracts the TensorFlow C library into the `lib`
|
||||
|
|
|
|||
|
|
@ -36,7 +36,7 @@ following to the project's `pom.xml` to use the TensorFlow Java APIs:
|
|||
<dependency>
|
||||
<groupId>org.tensorflow</groupId>
|
||||
<artifactId>tensorflow</artifactId>
|
||||
<version>1.4.0-rc0</version>
|
||||
<version>1.4.0-rc1</version>
|
||||
</dependency>
|
||||
```
|
||||
|
||||
|
|
@ -65,7 +65,7 @@ As an example, these steps will create a Maven project that uses TensorFlow:
|
|||
<dependency>
|
||||
<groupId>org.tensorflow</groupId>
|
||||
<artifactId>tensorflow</artifactId>
|
||||
<version>1.4.0-rc0</version>
|
||||
<version>1.4.0-rc1</version>
|
||||
</dependency>
|
||||
</dependencies>
|
||||
</project>
|
||||
|
|
@ -124,7 +124,7 @@ refer to the simpler instructions above instead.
|
|||
Take the following steps to install TensorFlow for Java on Linux or macOS:
|
||||
|
||||
1. Download
|
||||
[libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.4.0-rc0.jar),
|
||||
[libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.4.0-rc1.jar),
|
||||
which is the TensorFlow Java Archive (JAR).
|
||||
|
||||
2. Decide whether you will run TensorFlow for Java on CPU(s) only or with
|
||||
|
|
@ -143,7 +143,7 @@ Take the following steps to install TensorFlow for Java on Linux or macOS:
|
|||
OS=$(uname -s | tr '[:upper:]' '[:lower:]')
|
||||
mkdir -p ./jni
|
||||
curl -L \
|
||||
"https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-${TF_TYPE}-${OS}-x86_64-1.4.0-rc0.tar.gz" |
|
||||
"https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-${TF_TYPE}-${OS}-x86_64-1.4.0-rc1.tar.gz" |
|
||||
tar -xz -C ./jni
|
||||
|
||||
### Install on Windows
|
||||
|
|
@ -151,10 +151,10 @@ Take the following steps to install TensorFlow for Java on Linux or macOS:
|
|||
Take the following steps to install TensorFlow for Java on Windows:
|
||||
|
||||
1. Download
|
||||
[libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.4.0-rc0.jar),
|
||||
[libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.4.0-rc1.jar),
|
||||
which is the TensorFlow Java Archive (JAR).
|
||||
2. Download the following Java Native Interface (JNI) file appropriate for
|
||||
[TensorFlow for Java on Windows](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-cpu-windows-x86_64-1.4.0-rc0.zip).
|
||||
[TensorFlow for Java on Windows](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-cpu-windows-x86_64-1.4.0-rc1.zip).
|
||||
3. Extract this .zip file.
|
||||
|
||||
|
||||
|
|
@ -202,7 +202,7 @@ must be part of your `classpath`. For example, you can include the
|
|||
downloaded `.jar` in your `classpath` by using the `-cp` compilation flag
|
||||
as follows:
|
||||
|
||||
<pre><b>javac -cp libtensorflow-1.4.0-rc0.jar HelloTF.java</b></pre>
|
||||
<pre><b>javac -cp libtensorflow-1.4.0-rc1.jar HelloTF.java</b></pre>
|
||||
|
||||
|
||||
### Running
|
||||
|
|
@ -216,11 +216,11 @@ two files are available to the JVM:
|
|||
For example, the following command line executes the `HelloTF` program on Linux
|
||||
and macOS X:
|
||||
|
||||
<pre><b>java -cp libtensorflow-1.4.0-rc0.jar:. -Djava.library.path=./jni HelloTF</b></pre>
|
||||
<pre><b>java -cp libtensorflow-1.4.0-rc1.jar:. -Djava.library.path=./jni HelloTF</b></pre>
|
||||
|
||||
And the following command line executes the `HelloTF` program on Windows:
|
||||
|
||||
<pre><b>java -cp libtensorflow-1.4.0-rc0.jar;. -Djava.library.path=jni HelloTF</b></pre>
|
||||
<pre><b>java -cp libtensorflow-1.4.0-rc1.jar;. -Djava.library.path=jni HelloTF</b></pre>
|
||||
|
||||
If the program prints <tt>Hello from <i>version</i></tt>, you've successfully
|
||||
installed TensorFlow for Java and are ready to use the API. If the program
|
||||
|
|
|
|||
|
|
@ -81,22 +81,22 @@ TensorFlow with GPU support, but only if you do the following:
|
|||
You must pick the mechanism by which you install TensorFlow. The
|
||||
supported choices are as follows:
|
||||
|
||||
* [virtualenv](#InstallingVirtualenv)
|
||||
* [Virtualenv](#InstallingVirtualenv)
|
||||
* ["native" pip](#InstallingNativePip)
|
||||
* [Docker](#InstallingDocker)
|
||||
* [Anaconda](#InstallingAnaconda)
|
||||
* installing from sources, which is documented in
|
||||
[a separate guide](https://www.tensorflow.org/install/install_sources).
|
||||
|
||||
**We recommend the virtualenv installation.**
|
||||
**We recommend the Virtualenv installation.**
|
||||
[Virtualenv](https://virtualenv.pypa.io/en/stable/)
|
||||
is a virtual Python environment isolated from other Python development,
|
||||
incapable of interfering with or being affected by other Python programs
|
||||
on the same machine. During the virtualenv installation process,
|
||||
on the same machine. During the Virtualenv installation process,
|
||||
you will install not only TensorFlow but also all the packages that
|
||||
TensorFlow requires. (This is actually pretty easy.)
|
||||
To start working with TensorFlow, you simply need to "activate" the
|
||||
virtual environment. All in all, virtualenv provides a safe and
|
||||
virtual environment. All in all, Virtualenv provides a safe and
|
||||
reliable mechanism for installing and running TensorFlow.
|
||||
|
||||
Native pip installs TensorFlow directly on your system without going
|
||||
|
|
@ -125,26 +125,26 @@ Use that package at your own risk.
|
|||
|
||||
|
||||
<a name="InstallingVirtualenv"></a>
|
||||
## Installing with virtualenv
|
||||
## Installing with Virtualenv
|
||||
|
||||
Take the following steps to install TensorFlow with Virtualenv:
|
||||
|
||||
1. Install pip and virtualenv by issuing one of the following commands:
|
||||
1. Install pip and Virtualenv by issuing one of the following commands:
|
||||
|
||||
<pre>$ <b>sudo apt-get install python-pip python-dev python-virtualenv</b> # for Python 2.7
|
||||
$ <b>sudo apt-get install python3-pip python3-dev python-virtualenv</b> # for Python 3.n</pre>
|
||||
|
||||
2. Create a virtualenv environment by issuing one of the following commands:
|
||||
2. Create a Virtualenv environment by issuing one of the following commands:
|
||||
|
||||
<pre>$ <b>virtualenv --system-site-packages</b> <i>targetDirectory</i> # for Python 2.7
|
||||
$ <b>virtualenv --system-site-packages -p python3</b> <i>targetDirectory</i> # for Python 3.n</pre>
|
||||
|
||||
where <code><em>targetDirectory</em></code> specifies the top of the
|
||||
virtualenv tree. Our instructions assume that
|
||||
Virtualenv tree. Our instructions assume that
|
||||
<code><em>targetDirectory</em></code> is `~/tensorflow`, but you may
|
||||
choose any directory.
|
||||
|
||||
3. Activate the virtualenv environment by issuing one of the following
|
||||
3. Activate the Virtualenv environment by issuing one of the following
|
||||
commands:
|
||||
|
||||
<pre>$ <b>source ~/tensorflow/bin/activate</b> # bash, sh, ksh, or zsh
|
||||
|
|
@ -160,18 +160,18 @@ Take the following steps to install TensorFlow with Virtualenv:
|
|||
<pre>(tensorflow)$ <b>easy_install -U pip</b></pre>
|
||||
|
||||
5. Issue one of the following commands to install TensorFlow in the active
|
||||
virtualenv environment:
|
||||
Virtualenv environment:
|
||||
|
||||
<pre>(tensorflow)$ <b>pip install --upgrade tensorflow</b> # for Python 2.7
|
||||
(tensorflow)$ <b>pip3 install --upgrade tensorflow</b> # for Python 3.n
|
||||
(tensorflow)$ <b>pip install --upgrade tensorflow-gpu</b> # for Python 2.7 and GPU
|
||||
(tensorflow)$ <b>pip3 install --upgrade tensorflow-gpu</b> # for Python 3.n and GPU</pre>
|
||||
|
||||
If the preceding command succeeds, skip Step 6. If the preceding
|
||||
If the above command succeeds, skip Step 6. If the preceding
|
||||
command fails, perform Step 6.
|
||||
|
||||
6. (Optional) If Step 5 failed (typically because you invoked a pip version
|
||||
lower than 8.1), install TensorFlow in the active virtualenv environment
|
||||
lower than 8.1), install TensorFlow in the active Virtualenv environment
|
||||
by issuing a command of the following format:
|
||||
|
||||
<pre>(tensorflow)$ <b>pip install --upgrade</b> <i>tfBinaryURL</i> # Python 2.7
|
||||
|
|
@ -185,10 +185,10 @@ Take the following steps to install TensorFlow with Virtualenv:
|
|||
[here](#the_url_of_the_tensorflow_python_package). For example, if you
|
||||
are installing TensorFlow for Linux, Python 3.4, and CPU-only support,
|
||||
issue the following command to install TensorFlow in the active
|
||||
virtualenv environment:
|
||||
Virtualenv environment:
|
||||
|
||||
<pre>(tensorflow)$ <b>pip3 install --upgrade \
|
||||
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.4.0rc0-cp34-cp34m-linux_x86_64.whl</b></pre>
|
||||
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.4.0rc1-cp34-cp34m-linux_x86_64.whl</b></pre>
|
||||
|
||||
If you encounter installation problems, see
|
||||
[Common Installation Problems](#common_installation_problems).
|
||||
|
|
@ -199,14 +199,14 @@ If you encounter installation problems, see
|
|||
After installing TensorFlow,
|
||||
[validate the installation](#ValidateYourInstallation).
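The validation step referenced above boils down to running a short program inside the activated environment; a minimal sketch:

```python
# Run inside the activated Virtualenv after `pip install`.
import tensorflow as tf

hello = tf.constant("Hello, TensorFlow!")
with tf.Session() as sess:
    print(sess.run(hello))   # b'Hello, TensorFlow!'
```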
|
||||
|
||||
Note that you must activate the virtualenv environment each time you
|
||||
use TensorFlow. If the virtualenv environment is not currently active,
|
||||
Note that you must activate the Virtualenv environment each time you
|
||||
use TensorFlow. If the Virtualenv environment is not currently active,
|
||||
invoke one of the following commands:
|
||||
|
||||
<pre> $ <b>source ~/tensorflow/bin/activate</b> # bash, sh, ksh, or zsh
|
||||
$ <b>source ~/tensorflow/bin/activate.csh</b> # csh or tcsh</pre>
|
||||
|
||||
When the virtualenv environment is active, you may run
|
||||
When the Virtualenv environment is active, you may run
|
||||
TensorFlow programs from this shell. Your prompt will become
|
||||
the following to indicate that your tensorflow environment is active:
|
||||
|
||||
|
|
@ -293,7 +293,7 @@ take the following steps:
|
|||
|
||||
<pre>
|
||||
$ <b>sudo pip3 install --upgrade \
|
||||
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.4.0rc0-cp34-cp34m-linux_x86_64.whl</b>
|
||||
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.4.0rc1-cp34-cp34m-linux_x86_64.whl</b>
|
||||
</pre>
|
||||
|
||||
If this step fails, see
|
||||
|
|
@ -480,7 +480,7 @@ Take the following steps to install TensorFlow in an Anaconda environment:
|
|||
|
||||
<pre>
|
||||
(tensorflow)$ <b>pip install --ignore-installed --upgrade \
|
||||
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.4.0rc0-cp34-cp34m-linux_x86_64.whl</b></pre>
|
||||
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.4.0rc1-cp34-cp34m-linux_x86_64.whl</b></pre>
|
||||
|
||||
|
||||
<a name="ValidateYourInstallation"></a>
|
||||
|
|
@ -494,11 +494,11 @@ To validate your TensorFlow installation, do the following:
|
|||
|
||||
### Prepare your environment
|
||||
|
||||
If you installed on native pip, virtualenv, or Anaconda, then
|
||||
If you installed on native pip, Virtualenv, or Anaconda, then
|
||||
do the following:
|
||||
|
||||
1. Start a terminal.
|
||||
2. If you installed with virtualenv or Anaconda, activate your container.
|
||||
2. If you installed with Virtualenv or Anaconda, activate your container.
|
||||
3. If you installed TensorFlow source code, navigate to any
|
||||
directory *except* one containing TensorFlow source code.
|
||||
|
||||
|
|
@ -648,14 +648,14 @@ This section documents the relevant values for Linux installations.
|
|||
CPU only:
|
||||
|
||||
<pre>
|
||||
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.4.0rc0-cp27-none-linux_x86_64.whl
|
||||
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.4.0rc1-cp27-none-linux_x86_64.whl
|
||||
</pre>
|
||||
|
||||
|
||||
GPU support:
|
||||
|
||||
<pre>
|
||||
https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.4.0rc0-cp27-none-linux_x86_64.whl
|
||||
https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.4.0rc1-cp27-none-linux_x86_64.whl
|
||||
</pre>
|
||||
|
||||
Note that GPU support requires the NVIDIA hardware and software described in
|
||||
|
|
@ -667,14 +667,14 @@ Note that GPU support requires the NVIDIA hardware and software described in
|
|||
CPU only:
|
||||
|
||||
<pre>
|
||||
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.4.0rc0-cp34-cp34m-linux_x86_64.whl
|
||||
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.4.0rc1-cp34-cp34m-linux_x86_64.whl
|
||||
</pre>
|
||||
|
||||
|
||||
GPU support:
|
||||
|
||||
<pre>
|
||||
https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.4.0rc0-cp34-cp34m-linux_x86_64.whl
|
||||
https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.4.0rc1-cp34-cp34m-linux_x86_64.whl
|
||||
</pre>
|
||||
|
||||
Note that GPU support requires the NVIDIA hardware and software described in
|
||||
|
|
@ -686,14 +686,14 @@ Note that GPU support requires the NVIDIA hardware and software described in
|
|||
CPU only:
|
||||
|
||||
<pre>
|
||||
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.4.0rc0-cp35-cp35m-linux_x86_64.whl
|
||||
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.4.0rc1-cp35-cp35m-linux_x86_64.whl
|
||||
</pre>
|
||||
|
||||
|
||||
GPU support:
|
||||
|
||||
<pre>
|
||||
https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.4.0rc0-cp35-cp35m-linux_x86_64.whl
|
||||
https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.4.0rc1-cp35-cp35m-linux_x86_64.whl
|
||||
</pre>
|
||||
|
||||
|
||||
|
|
@ -705,14 +705,14 @@ Note that GPU support requires the NVIDIA hardware and software described in
|
|||
CPU only:
|
||||
|
||||
<pre>
|
||||
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.4.0rc0-cp36-cp36m-linux_x86_64.whl
|
||||
https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.4.0rc1-cp36-cp36m-linux_x86_64.whl
|
||||
</pre>
|
||||
|
||||
|
||||
GPU support:
|
||||
|
||||
<pre>
|
||||
https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.4.0rc0-cp36-cp36m-linux_x86_64.whl
|
||||
https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.4.0rc1-cp36-cp36m-linux_x86_64.whl
|
||||
</pre>
|
||||
|
||||
|
||||
|
|
|
|||
|
|
@ -13,21 +13,21 @@ Note: As of version 1.2, TensorFlow no longer provides GPU support on macOS.
|
|||
|
||||
You must pick the mechanism by which you install TensorFlow. The supported choices are as follows:
|
||||
|
||||
* virtualenv
|
||||
* Virtualenv
|
||||
* "native" pip
|
||||
* Docker
|
||||
* installing from sources, which is documented in
|
||||
[a separate guide](https://www.tensorflow.org/install/install_sources).
|
||||
|
||||
**We recommend the virtualenv installation.**
|
||||
**We recommend the Virtualenv installation.**
|
||||
[Virtualenv](https://virtualenv.pypa.io/en/stable)
|
||||
is a virtual Python environment isolated from other Python development,
|
||||
incapable of interfering with or being affected by other Python programs
|
||||
on the same machine. During the virtualenv installation process,
|
||||
on the same machine. During the Virtualenv installation process,
|
||||
you will install not only TensorFlow but also all the packages that
|
||||
TensorFlow requires. (This is actually pretty easy.)
|
||||
To start working with TensorFlow, you simply need to "activate" the
|
||||
virtual environment. All in all, virtualenv provides a safe and
|
||||
virtual environment. All in all, Virtualenv provides a safe and
|
||||
reliable mechanism for installing and running TensorFlow.
|
||||
|
||||
Native pip installs TensorFlow directly on your system without going through
|
||||
|
|
@ -53,30 +53,30 @@ However, within Anaconda, we recommend installing TensorFlow with the
|
|||
That is, the TensorFlow team neither tests nor maintains the conda package.
|
||||
Use that package at your own risk.
|
||||
|
||||
## Installing with virtualenv
|
||||
## Installing with Virtualenv
|
||||
|
||||
Take the following steps to install TensorFlow with Virtualenv:
|
||||
|
||||
1. Start a terminal (a shell). You'll perform all subsequent steps
|
||||
in this shell.
|
||||
|
||||
2. Install pip and virtualenv by issuing the following commands:
|
||||
2. Install pip and Virtualenv by issuing the following commands:
|
||||
|
||||
<pre> $ <b>sudo easy_install pip</b>
|
||||
$ <b>pip install --upgrade virtualenv</b> </pre>
|
||||
|
||||
3. Create a virtualenv environment by issuing a command of one
|
||||
3. Create a Virtualenv environment by issuing a command of one
|
||||
of the following formats:
|
||||
|
||||
<pre> $ <b>virtualenv --system-site-packages</b> <i>targetDirectory</i> # for Python 2.7
|
||||
$ <b>virtualenv --system-site-packages -p python3</b> <i>targetDirectory</i> # for Python 3.n
|
||||
</pre>
|
||||
|
||||
where <i>targetDirectory</i> identifies the top of the virtualenv tree.
|
||||
where <i>targetDirectory</i> identifies the top of the Virtualenv tree.
|
||||
Our instructions assume that <i>targetDirectory</i>
|
||||
is `~/tensorflow`, but you may choose any directory.
|
||||
|
||||
4. Activate the virtualenv environment by issuing one of the
|
||||
4. Activate the Virtualenv environment by issuing one of the
|
||||
following commands:
|
||||
|
||||
<pre>$ <b>source ~/tensorflow/bin/activate</b> # If using bash, sh, ksh, or zsh
|
||||
|
|
@ -98,7 +98,7 @@ Take the following steps to install TensorFlow with Virtualenv:
|
|||
|
||||
7. Optional. If Step 6 failed (typically because you invoked a pip version
|
||||
lower than 8.1), install TensorFlow in the active
|
||||
virtualenv environment by issuing a command of the following format:
|
||||
Virtualenv environment by issuing a command of the following format:
|
||||
|
||||
<pre> $ <b>pip install --upgrade</b> <i>tfBinaryURL</i> # Python 2.7
|
||||
$ <b>pip3 install --upgrade</b> <i>tfBinaryURL</i> # Python 3.n </pre>
|
||||
|
|
@ -114,7 +114,7 @@ Take the following steps to install TensorFlow with Virtualenv:
|
|||
TensorFlow in the active Virtualenv is as follows:
|
||||
|
||||
<pre> $ <b>pip3 install --upgrade \
|
||||
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.4.0rc0-py2-none-any.whl</b></pre>
|
||||
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.4.0rc1-py2-none-any.whl</b></pre>
|
||||
|
||||
If you encounter installation problems, see
|
||||
[Common Installation Problems](#common-installation-problems).
|
||||
|
|
@ -126,8 +126,8 @@ After installing TensorFlow,
|
|||
[validate your installation](#ValidateYourInstallation)
|
||||
to confirm that the installation worked properly.
|
||||
|
||||
Note that you must activate the virtualenv environment each time you
|
||||
use TensorFlow in a new shell. If the virtualenv environment is not
|
||||
Note that you must activate the Virtualenv environment each time you
|
||||
use TensorFlow in a new shell. If the Virtualenv environment is not
|
||||
currently active (that is, the prompt is not `(tensorflow)`), invoke
|
||||
one of the following commands:
|
||||
|
||||
|
|
@ -139,7 +139,7 @@ tensorflow environment is active:
|
|||
|
||||
<pre> (tensorflow)$ </pre>
|
||||
|
||||
When the virtualenv environment is active, you may run
|
||||
When the Virtualenv environment is active, you may run
|
||||
TensorFlow programs from this shell.
|
||||
|
||||
When you are done using TensorFlow, you may deactivate the
|
||||
|
|
@ -235,7 +235,7 @@ take the following steps:
|
|||
issue the following command:
|
||||
|
||||
<pre> $ <b>sudo pip3 install --upgrade \
|
||||
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.4.0rc0-py2-none-any.whl</b> </pre>
|
||||
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.4.0rc1-py2-none-any.whl</b> </pre>
|
||||
|
||||
If the preceding command fails, see
|
||||
[installation problems](#common-installation-problems).
|
||||
|
|
@ -344,7 +344,7 @@ Take the following steps to install TensorFlow in an Anaconda environment:
|
|||
TensorFlow for Python 2.7:
|
||||
|
||||
<pre> (tensorflow)$ <b>pip install --ignore-installed --upgrade \
|
||||
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.4.0rc0-py2-none-any.whl</b></pre>
|
||||
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.4.0rc1-py2-none-any.whl</b></pre>
|
||||
|
||||
|
||||
<a name="ValidateYourInstallation"></a>
|
||||
|
|
@ -358,11 +358,11 @@ To validate your TensorFlow installation, do the following:
|
|||
|
||||
### Prepare your environment
|
||||
|
||||
If you installed on native pip, virtualenv, or Anaconda, then
|
||||
If you installed on native pip, Virtualenv, or Anaconda, then
|
||||
do the following:
|
||||
|
||||
1. Start a terminal.
|
||||
2. If you installed with virtualenv or Anaconda, activate your container.
|
||||
2. If you installed with Virtualenv or Anaconda, activate your container.
|
||||
3. If you installed TensorFlow source code, navigate to any
|
||||
directory *except* one containing TensorFlow source code.
|
||||
|
||||
|
|
@ -517,7 +517,7 @@ This section documents the relevant values for Mac OS installations.
|
|||
|
||||
|
||||
<pre>
|
||||
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.4.0rc0-py2-none-any.whl
|
||||
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.4.0rc1-py2-none-any.whl
|
||||
</pre>
|
||||
|
||||
|
||||
|
|
@ -525,7 +525,7 @@ https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.4.0rc0-py2-none-a
|
|||
|
||||
|
||||
<pre>
|
||||
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.4.0rc0-py3-none-any.whl
|
||||
https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.4.0rc1-py3-none-any.whl
|
||||
</pre>
|
||||
|
||||
|
||||
|
|
|
|||
|
|
@ -355,10 +355,10 @@ Invoke `pip install` to install that pip package.
|
|||
The filename of the `.whl` file depends on your platform.
|
||||
For example, the following command will install the pip package
|
||||
|
||||
for TensorFlow 1.4.0rc0 on Linux:
|
||||
for TensorFlow 1.4.0rc1 on Linux:
|
||||
|
||||
<pre>
|
||||
$ <b>sudo pip install /tmp/tensorflow_pkg/tensorflow-1.4.0rc0-py2-none-any.whl</b>
|
||||
$ <b>sudo pip install /tmp/tensorflow_pkg/tensorflow-1.4.0rc1-py2-none-any.whl</b>
|
||||
</pre>
|
||||
|
||||
## Validate your installation
|
||||
|
|
@ -447,8 +447,8 @@ Stack Overflow and specify the `tensorflow` tag.
|
|||
**Linux**
|
||||
<table>
|
||||
<tr><th>Version:</th><th>CPU/GPU:</th><th>Python Version:</th><th>Compiler:</th><th>Build Tools:</th><th>cuDNN:</th><th>CUDA:</th></tr>
|
||||
<tr><td>tensorflow-1.4.0rc0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.4.5</td><td>N/A</td><td>N/A</td></tr>
|
||||
<tr><td>tensorflow_gpu-1.4.0rc0</td><td>GPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.4.5</td><td>6</td><td>8</td></tr>
|
||||
<tr><td>tensorflow-1.4.0rc1</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.4.5</td><td>N/A</td><td>N/A</td></tr>
|
||||
<tr><td>tensorflow_gpu-1.4.0rc1</td><td>GPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.4.5</td><td>6</td><td>8</td></tr>
|
||||
<tr><td>tensorflow-1.2.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.4.5</td><td>N/A</td><td>N/A</td></tr>
|
||||
<tr><td>tensorflow_gpu-1.2.0</td><td>GPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.4.5</td><td>5.1</td><td>8</td></tr>
|
||||
<tr><td>tensorflow-1.1.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>GCC 4.8</td><td>Bazel 0.4.2</td><td>N/A</td><td>N/A</td></tr>
|
||||
|
|
@ -460,19 +460,19 @@ Stack Overflow and specify the `tensorflow` tag.
|
|||
**Mac**
|
||||
<table>
|
||||
<tr><th>Version:</th><th>CPU/GPU:</th><th>Python Version:</th><th>Compiler:</th><th>Build Tools:</th><th>cuDNN:</th><th>CUDA:</th></tr>
|
||||
<tr><td>tensorflow-1.4.0rc0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.4.5</td><td>N/A</td><td>N/A</td></tr>
|
||||
<tr><td>ttensorflow-1.2.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.4.5</td><td>N/A</td><td>N/A</td></tr>
|
||||
<tr><td>ttensorflow-1.1.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.4.2</td><td>N/A</td><td>N/A</td></tr>
|
||||
<tr><td>ttensorflow_gpu-1.1.0</td><td>GPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.4.2</td><td>5.1</td><td>8</td></tr>
|
||||
<tr><td>ttensorflow-1.0.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.4.2</td><td>N/A</td><td>N/A</td></tr>
|
||||
<tr><td>ttensorflow_gpu-1.0.0</td><td>GPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.4.2</td><td>5.1</td><td>8</td></tr>
|
||||
<tr><td>tensorflow-1.4.0rc1</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.4.5</td><td>N/A</td><td>N/A</td></tr>
|
||||
<tr><td>tensorflow-1.2.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.4.5</td><td>N/A</td><td>N/A</td></tr>
|
||||
<tr><td>tensorflow-1.1.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.4.2</td><td>N/A</td><td>N/A</td></tr>
|
||||
<tr><td>tensorflow_gpu-1.1.0</td><td>GPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.4.2</td><td>5.1</td><td>8</td></tr>
|
||||
<tr><td>tensorflow-1.0.0</td><td>CPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.4.2</td><td>N/A</td><td>N/A</td></tr>
|
||||
<tr><td>tensorflow_gpu-1.0.0</td><td>GPU</td><td>2.7, 3.3-3.6</td><td>Clang from xcode</td><td>Bazel 0.4.2</td><td>5.1</td><td>8</td></tr>
|
||||
</table>
|
||||
|
||||
**Windows**

<table>
<tr><th>Version:</th><th>CPU/GPU:</th><th>Python Version:</th><th>Compiler:</th><th>Build Tools:</th><th>cuDNN:</th><th>CUDA:</th></tr>
<tr><td>tensorflow-1.4.0rc0</td><td>CPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>N/A</td><td>N/A</td></tr>
<tr><td>tensorflow_gpu-1.4.0rc0</td><td>GPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>6</td><td>8</td></tr>
<tr><td>tensorflow-1.4.0rc1</td><td>CPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>N/A</td><td>N/A</td></tr>
<tr><td>tensorflow_gpu-1.4.0rc1</td><td>GPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>6</td><td>8</td></tr>
<tr><td>tensorflow-1.2.0</td><td>CPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>N/A</td><td>N/A</td></tr>
<tr><td>tensorflow_gpu-1.2.0</td><td>GPU</td><td>3.5-3.6</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>5.1</td><td>8</td></tr>
<tr><td>tensorflow-1.1.0</td><td>CPU</td><td>3.5</td><td>MSVC 2015 update 3</td><td>Cmake v3.6.3</td><td>N/A</td><td>N/A</td></tr>
@@ -89,7 +89,7 @@ to all API functions in the same context. For example:

* Executing `v = tf.Variable(0)` adds to the graph a @{tf.Operation} that will
store a writeable tensor value that persists between @{tf.Session.run} calls.
The @{tf.Variable} object wraps this operation, and can be used [like a
tensor](#tensor-like-objects), which will read the current value of the
tensor](#tensor-like_objects), which will read the current value of the
stored value. The @{tf.Variable} object also has methods such as
@{tf.Variable.assign$`assign`} and @{tf.Variable.assign_add$`assign_add`} that
create @{tf.Operation} objects that, when executed, update the stored value.

@@ -100,7 +100,7 @@ to all API functions in the same context. For example:

when run, will apply those gradients to a set of variables.

Most programs rely solely on the default graph. However,
see [Dealing with multiple graphs](#dealing-with-multiple-graphs) for more
see [Dealing with multiple graphs](#programming_with_multiple_graphs) for more
advanced use cases. High-level APIs such as the @{tf.estimator.Estimator} API
manage the default graph on your behalf, and--for example--may create different
graphs for training and evaluation.
@@ -329,7 +329,7 @@ described below.

* **`graph`.** By default, a new @{tf.Session} will be bound to---and only able
to run operations in---the current default graph. If you are using multiple
graphs in your program (see [Programming with multiple
graphs](programming-with-multiple-graphs) for more details), you can specify
graphs](#programming_with_multiple_graphs) for more details), you can specify
an explicit @{tf.Graph} when you construct the session.

* **`config`.** This argument allows you to specify a @{tf.ConfigProto} that
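A minimal sketch (not from the diff) of passing an explicit graph and a `tf.ConfigProto` when constructing a session, as the `graph` and `config` arguments above describe; the option values are illustrative only:

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    c = tf.constant(42)

config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=False)
with tf.Session(graph=g, config=config) as sess:
    print(sess.run(c))  # runs ops from `g` rather than the default graph
```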
@@ -197,7 +197,7 @@ For example, here is how to make a vector of zeros with the same size as the

number of columns in a given matrix:

``` python
zeros = tf.zeros(tf.shape(my_matrix)[1])
zeros = tf.zeros(my_matrix.shape[1])
```

### Changing the shape of a `tf.Tensor`
@@ -12,7 +12,7 @@

# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""A simple smoke test that runs these examples for 1 training iteraton."""
"""A simple smoke test that runs these examples for 1 training iteration."""

from __future__ import absolute_import
from __future__ import division
@@ -109,7 +109,7 @@ def do_eval(sess,

labels_placeholder)
true_count += sess.run(eval_correct, feed_dict=feed_dict)
precision = float(true_count) / num_examples
print(' Num examples: %d  Num correct: %d  Precision @ 1: %0.04f' %
print('Num examples: %d  Num correct: %d  Precision @ 1: %0.04f' %
(num_examples, true_count, precision))
@@ -82,7 +82,7 @@ def deepnn(x):

W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])

h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

# Dropout - controls the complexity of the model, prevents co-adaptation of
@@ -1,5 +1,5 @@

FROM gcr.io/tensorflow/tensorflow:latest
MAINTAINER Vincent Vanhoucke <vanhoucke@google.com>
LABEL maintainer="Vincent Vanhoucke <vanhoucke@google.com>"

# Pillow needs libjpeg by default as of 3.0.
RUN apt-get update && apt-get install -y --no-install-recommends \
@@ -262,6 +262,7 @@ _allowed_symbols.extend([

'VERSION',
'GIT_VERSION',
'COMPILER_VERSION',
'CXX11_ABI_FLAG',
])

# Remove all extra symbols that don't have a docstring or are not explicitly

@@ -280,6 +281,7 @@ _exported_dunders = set([

'__version__',
'__git_version__',
'__compiler_version__',
'__cxx11_abi_flag__',
])

# Expose symbols minus dunders, unless they are whitelisted above.
@@ -45,6 +45,9 @@ tensorflow::ImportNumpy();

// Compiler
%constant const char* __compiler_version__ = tf_compiler_version();

// _GLIBCXX_USE_CXX11_ABI flag value
%constant const int __cxx11_abi_flag__ = tf_cxx11_abi_flag();

// Release the Python GIL for the duration of most methods.
%exception {
Py_BEGIN_ALLOW_THREADS;
@@ -528,6 +528,7 @@ class RunConfig(object):

"""Returns a new instance of `RunConfig` replacing specified properties.

Only the properties in the following list are allowed to be replaced:

- `model_dir`.
- `tf_random_seed`,
- `save_summary_steps`,
@@ -24,10 +24,12 @@ from tensorflow.python import pywrap_tensorflow

__version__ = pywrap_tensorflow.__version__
__git_version__ = pywrap_tensorflow.__git_version__
__compiler_version__ = pywrap_tensorflow.__compiler_version__
__cxx11_abi_flag__ = pywrap_tensorflow.__cxx11_abi_flag__

VERSION = __version__
GIT_VERSION = __git_version__
COMPILER_VERSION = __compiler_version__
CXX11_ABI_FLAG = __cxx11_abi_flag__

GRAPH_DEF_VERSION = pywrap_tensorflow.GRAPH_DEF_VERSION
GRAPH_DEF_VERSION_MIN_CONSUMER = (

@@ -39,7 +41,9 @@ __all__ = [

"__version__",
"__git_version__",
"__compiler_version__",
"__cxx11_abi_flag__",
"COMPILER_VERSION",
"CXX11_ABI_FLAG",
"GIT_VERSION",
"GRAPH_DEF_VERSION",
"GRAPH_DEF_VERSION_MIN_CONSUMER",
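Assuming a build that exposes the constants added in this hunk, they can be read directly off the `tensorflow` module (the actual values depend on the installed build):

```python
import tensorflow as tf

# Build metadata exported by versions.py; CXX11_ABI_FLAG is the new addition.
print(tf.VERSION)           # e.g. "1.4.0-rc1"
print(tf.GIT_VERSION)       # git describe string of the build
print(tf.COMPILER_VERSION)  # compiler used to build the binary
print(tf.CXX11_ABI_FLAG)    # 0 or 1, mirrors _GLIBCXX_USE_CXX11_ABI
```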
@@ -34,7 +34,7 @@ class DecodeCSVOpTest(test.TestCase):

out = sess.run(decode)

for i, field in enumerate(out):
if field.dtype == np.float32:
if field.dtype == np.float32 or field.dtype == np.float64:
self.assertAllClose(field, expected_out[i])
else:
self.assertAllEqual(field, expected_out[i])

@@ -85,6 +85,17 @@ class DecodeCSVOpTest(test.TestCase):

self._test(args, expected_out)

def testDouble(self):
args = {
"records": ["1.0", "-1.79e+308", '"1.79e+308"'],
"record_defaults": [np.array(
[], dtype=np.double)],
}

expected_out = [[1.0, -1.79e+308, 1.79e+308]]

self._test(args, expected_out)

def testInt64(self):
args = {
"records": ["1", "2", '"2147483648"'],
@@ -336,8 +336,8 @@ def size(input, name=None, out_type=dtypes.int32):

# pylint: disable=redefined-builtin
"""Returns the size of a tensor.

This operation returns an integer representing the number of elements in
`input`.
Returns a 0-D `Tensor` representing the number of elements in `input`
of type `out_type`. Defaults to tf.int32.

For example:

@@ -349,11 +349,15 @@ def size(input, name=None, out_type=dtypes.int32):

Args:
input: A `Tensor` or `SparseTensor`.
name: A name for the operation (optional).
out_type: (Optional) The specified output type of the operation
(`int32` or `int64`). Defaults to tf.int32.
out_type: (Optional) The specified non-quantized numeric output type
of the operation. Defaults to `tf.int32`.

Returns:
A `Tensor` of type `out_type`. Defaults to tf.int32.
A `Tensor` of type `out_type`. Defaults to `tf.int32`.

@compatibility(numpy)
Equivalent to np.size()
@end_compatibility
"""
return size_internal(input, name, optimize=True, out_type=out_type)

@@ -366,11 +370,11 @@ def size_internal(input, name=None, optimize=True, out_type=dtypes.int32):

input: A `Tensor` or `SparseTensor`.
name: A name for the operation (optional).
optimize: if true, encode the size as a constant when possible.
out_type: (Optional) The specified output type of the operation
(`int32` or `int64`). Defaults to tf.int32.
out_type: (Optional) The specified non-quantized numeric output type
of the operation. Defaults to `tf.int32`.

Returns:
A `Tensor` of type `out_type`.
A `Tensor` of type `out_type`. Defaults to `tf.int32`.
"""
with ops.name_scope(name, "Size", [input]) as name:
if isinstance(input, (sparse_tensor.SparseTensor,
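A small sketch (not part of the diff) of the behavior the reworded docstring describes, i.e. `tf.size` returning a 0-D tensor of `out_type`:

```python
import tensorflow as tf

t = tf.constant([[1, 2, 3], [4, 5, 6]])
n = tf.size(t)                        # 0-D tf.int32 tensor
n64 = tf.size(t, out_type=tf.int64)   # same count, as tf.int64

with tf.Session() as sess:
    print(sess.run([n, n64]))  # [6, 6]
```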
@@ -1011,7 +1011,7 @@ def index_table_from_tensor(vocabulary_list,

Args:
vocabulary_list: A 1-D `Tensor` that specifies the mapping of keys to
indices. Thetype of this object must be castable to `dtype`.
indices. The type of this object must be castable to `dtype`.
num_oov_buckets: The number of out-of-vocabulary buckets.
default_value: The value to use for out-of-vocabulary feature values.
Defaults to -1.
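A hedged usage sketch for `index_table_from_tensor` with the `vocabulary_list` argument documented above; the vocabulary and lookup values are made up:

```python
import tensorflow as tf

table = tf.contrib.lookup.index_table_from_tensor(
    vocabulary_list=tf.constant(["emerson", "lake", "palmer"]),
    num_oov_buckets=1,
    default_value=-1)
ids = table.lookup(tf.constant(["lake", "unknown", "palmer"]))

with tf.Session() as sess:
    sess.run(tf.tables_initializer())
    print(sess.run(ids))  # e.g. [1, 3, 2]; "unknown" lands in the OOV bucket
```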
@@ -1183,7 +1183,7 @@ def decode_csv(records, record_defaults, field_delim=",",

Each string is a record/row in the csv and all records should have
the same format.
record_defaults: A list of `Tensor` objects with specific types.
Acceptable types are `float32`, `int32`, `int64`, `string`.
Acceptable types are `float32`, `float64`, `int32`, `int64`, `string`.
One tensor per column of the input record, with either a
scalar default value for that column or empty if the column is required.
field_delim: An optional `string`. Defaults to `","`.
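A sketch (assuming the new `float64` support this hunk documents) of decoding one CSV column as double precision; the records are invented:

```python
import tensorflow as tf

records = tf.constant(["1.0,10", "-1.79e+308,20"])
record_defaults = [
    tf.constant([], dtype=tf.float64),  # required float64 column (newly allowed)
    tf.constant([0], dtype=tf.int32),   # int32 column with default 0
]
col_a, col_b = tf.decode_csv(records, record_defaults=record_defaults)

with tf.Session() as sess:
    a, b = sess.run([col_a, col_b])
    print(a.dtype, a)  # float64 values, including the -1.79e+308 extreme
    print(b.dtype, b)  # int32 [10 20]
```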
@@ -590,7 +590,7 @@ class _VariableStore(object):

if reuse is True:
raise ValueError("PartitionedVariable %s does not exist, or was not "
"created with tf.get_variable(). Did you mean to set "
"reuse=None in VarScope?" % name)
"reuse=False or reuse=tf.AUTO_REUSE in VarScope?" % name)

slice_dim, slice_shape = _compute_slice_dim_and_shape(
shape.as_list(), partitions)
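The `tf.AUTO_REUSE` setting suggested by the new error message can be used like this (a sketch, not from the diff; the scope and shapes are illustrative):

```python
import tensorflow as tf

def linear(x):
    # AUTO_REUSE creates "layer/w" on the first call and reuses it afterwards,
    # avoiding the "does not exist / already exists" errors from reuse=True/None.
    with tf.variable_scope("layer", reuse=tf.AUTO_REUSE):
        w = tf.get_variable("w", shape=[3, 3])
    return tf.matmul(x, w)

x = tf.placeholder(tf.float32, [None, 3])
y1 = linear(x)
y2 = linear(x)  # second call reuses the same `w`
```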
@@ -197,6 +197,14 @@ class Variable(object):

ValueError: If the initial value is not specified, or does not have a
shape and `validate_shape` is `True`.
RuntimeError: If eager execution is enabled.

@compatibility(eager)
`tf.Variable` is not compatible with eager execution. Use
`tfe.Variable` instead which is compatable with both eager execution
and graph construction. See [the TensorFlow Eager Execution
guide](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/g3doc/guide.md#variables-and-optimizers)
for details on how variables work in eager execution.
@end_compatibility
"""
if not context.in_graph_mode():
raise RuntimeError("tf.Variable not supported in Eager mode. "
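A heavily hedged sketch of the `tfe.Variable` alternative the new docstring points to; it assumes the `tf.contrib.eager` preview API referenced in the linked guide is available in the installed build:

```python
import tensorflow as tf
import tensorflow.contrib.eager as tfe  # assumption: eager preview is installed

tfe.enable_eager_execution()  # must be called before building any graph ops

v = tfe.Variable(1.0)         # works under eager execution and graph building
v.assign_add(2.0)
print(v.numpy())              # 3.0
```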
@@ -17,6 +17,8 @@

@@get_include
@@get_lib
@@get_compile_flags
@@get_link_flags
"""
from __future__ import absolute_import
from __future__ import division

@@ -24,6 +26,7 @@ from __future__ import print_function

import os.path as _os_path

from tensorflow.python.framework.versions import CXX11_ABI_FLAG as _CXX11_ABI_FLAG
from tensorflow.python.util.all_util import remove_undocumented

@@ -51,5 +54,30 @@ def get_lib():

import tensorflow as tf
return _os_path.join(_os_path.dirname(tf.__file__))


def get_compile_flags():
"""Get the compilation flags for custom operators.

Returns:
The compilation flags.
"""
flags = []
flags.append('-I%s' % get_include())
flags.append('-I%s/external/nsync/public' % get_include())
flags.append('-D_GLIBCXX_USE_CXX11_ABI=%d' % _CXX11_ABI_FLAG)
return flags


def get_link_flags():
"""Get the link flags for custom operators.

Returns:
The link flags.
"""
flags = []
flags.append('-L%s' % get_lib())
flags.append('-ltensorflow_framework')
return flags

_allowed_symbols = []
remove_undocumented(__name__, _allowed_symbols)
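Once installed, the two new helpers can be queried from Python and the resulting flags passed to a compiler when building a custom op (the exact paths differ per installation):

```python
import tensorflow as tf

# e.g. -I<...>/tensorflow/include -I<...>/external/nsync/public -D_GLIBCXX_USE_CXX11_ABI=<0|1>
print(" ".join(tf.sysconfig.get_compile_flags()))

# e.g. -L<...>/tensorflow -ltensorflow_framework
print(" ".join(tf.sysconfig.get_link_flags()))
```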
@@ -59,6 +59,7 @@ try:

from tensorflow.python.pywrap_tensorflow_internal import __version__
from tensorflow.python.pywrap_tensorflow_internal import __git_version__
from tensorflow.python.pywrap_tensorflow_internal import __compiler_version__
from tensorflow.python.pywrap_tensorflow_internal import __cxx11_abi_flag__

if _use_dlopen_global_flags:
pywrap_dlopen_global_flags.reset_dlopen_flags()
@@ -19,6 +19,7 @@ from __future__ import division

from __future__ import print_function

import time
import math

from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops

@@ -91,6 +92,9 @@ class _StopAfterNEvalsHook(session_run_hook.SessionRunHook):

self._num_evals = num_evals
self._evals_completed = None
self._log_progress = log_progress
# Reduce logging frequency if there are 20 or more evaluations.
self._log_frequency = (1 if (num_evals is None or num_evals < 20)
else math.floor(num_evals / 10.))

def _set_evals_completed_tensor(self, updated_eval_step):
self._evals_completed = updated_eval_step

@@ -106,7 +110,9 @@ class _StopAfterNEvalsHook(session_run_hook.SessionRunHook):

if self._num_evals is None:
logging.info('Evaluation [%d]', evals_completed)
else:
logging.info('Evaluation [%d/%d]', evals_completed, self._num_evals)
if ((evals_completed % self._log_frequency) == 0 or
(self._num_evals == evals_completed)):
logging.info('Evaluation [%d/%d]', evals_completed, self._num_evals)
if self._num_evals is not None and evals_completed >= self._num_evals:
run_context.request_stop()
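A standalone sketch (not from the diff) of the throttling rule the hook now applies: every step is logged for short or unbounded runs, otherwise roughly every tenth evaluation plus the final one:

```python
import math

def should_log(evals_completed, num_evals):
    # Mirrors the hook: frequency 1 for short/unknown runs, else ~num_evals / 10.
    log_frequency = (1 if (num_evals is None or num_evals < 20)
                     else math.floor(num_evals / 10.))
    return (evals_completed % log_frequency) == 0 or evals_completed == num_evals

print([step for step in range(1, 101) if should_log(step, 100)])
# [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
```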
@@ -76,10 +76,10 @@ string DriverVersionStatusToString(port::StatusOr<DriverVersion> version) {

port::StatusOr<DriverVersion> StringToDriverVersion(const string &value) {
std::vector<string> pieces = port::Split(value, '.');
if (pieces.size() != 2 && pieces.size() != 3) {
if (pieces.size() < 2 || pieces.size() > 4) {
return port::Status{
port::error::INVALID_ARGUMENT,
port::Printf("expected %%d.%%d or %%d.%%d.%%d form for driver version; got \"%s\"",
port::Printf("expected %%d.%%d, %%d.%%d.%%d, or %%d.%%d.%%d.%%d form for driver version; got \"%s\"",
value.c_str())};
}
@@ -16,6 +16,10 @@ tf_module {

name: "COMPILER_VERSION"
mtype: "<type \'str\'>"
}
member {
name: "CXX11_ABI_FLAG"
mtype: "<type \'int\'>"
}
member {
name: "ConditionalAccumulator"
mtype: "<type \'type\'>"
@@ -1,5 +1,9 @@

path: "tensorflow.sysconfig"
tf_module {
member_method {
name: "get_compile_flags"
argspec: "args=[], varargs=None, keywords=None, defaults=None"
}
member_method {
name: "get_include"
argspec: "args=[], varargs=None, keywords=None, defaults=None"

@@ -8,4 +12,8 @@ tf_module {

name: "get_lib"
argspec: "args=[], varargs=None, keywords=None, defaults=None"
}
member_method {
name: "get_link_flags"
argspec: "args=[], varargs=None, keywords=None, defaults=None"
}
}
@@ -1,6 +1,6 @@

FROM ubuntu:14.04

MAINTAINER Jan Prach <jendap@google.com>
LABEL maintainer="Jan Prach <jendap@google.com>"

# Copy and run the install scripts.
COPY install/*.sh /install/

@@ -14,7 +14,7 @@

# ==============================================================================
FROM ubuntu:16.04

MAINTAINER Shanqing Cai <cais@google.com>
LABEL maintainer="Shanqing Cai <cais@google.com>"

# Copy and run the install scripts.
COPY install/*.sh /install/
@@ -1,6 +1,6 @@

FROM ubuntu:14.04

MAINTAINER Jan Prach <jendap@google.com>
LABEL maintainer="Jan Prach <jendap@google.com>"

# Copy and run the install scripts.
COPY install/*.sh /install/

@@ -1,6 +1,6 @@

FROM debian:jessie

MAINTAINER Jan Prach <jendap@google.com>
LABEL maintainer="Jan Prach <jendap@google.com>"

# Copy and run the install scripts.
COPY install/*.sh /install/
@@ -1,6 +1,6 @@

FROM nvidia/cuda:8.0-cudnn6-devel-ubuntu14.04

MAINTAINER Jan Prach <jendap@google.com>
LABEL maintainer="Jan Prach <jendap@google.com>"

# In the Ubuntu 14.04 images, cudnn is placed in system paths. Move them to
# /usr/local/cuda

@@ -1,6 +1,6 @@

FROM nvidia/cuda:8.0-cudnn6-devel-ubuntu14.04

MAINTAINER Ilya Biryukov <ibiryukov@google.com>
LABEL maintainer="Ilya Biryukov <ibiryukov@google.com>"

# In the Ubuntu 14.04 images, cudnn is placed in system paths. Move them to
# /usr/local/cuda
@@ -1,6 +1,6 @@

FROM ubuntu:14.04

MAINTAINER Jonathan Hseu <jhseu@google.com>
LABEL maintainer="Jonathan Hseu <jhseu@google.com>"

# Copy and run the install scripts.
COPY install/*.sh /install/

@@ -1,6 +1,6 @@

FROM ubuntu:14.04

MAINTAINER Jan Prach <jendap@google.com>
LABEL maintainer="Jan Prach <jendap@google.com>"

# Copy and run the install scripts.
COPY install/*.sh /install/
@@ -1,6 +1,6 @@

FROM ubuntu:14.04

MAINTAINER Jan Prach <jendap@google.com>
LABEL maintainer="Jan Prach <jendap@google.com>"

# Copy and run the install scripts.
COPY install/*.sh /install/
@@ -50,7 +50,7 @@ and tests. Click on **Details** to see the results from Jenkins or the internal

CI system.

Results from Jenkins are displayed in the Jenkins UI. For more information,
see the [Jenkns documentation](https://jenkins.io/doc/).
see the [Jenkins documentation](https://jenkins.io/doc/).

Results from the internal CI system are displayed in the Build Status UI. In
this UI, to see the logs for a failed build:
@@ -426,6 +426,72 @@ do_code_link_check() {

tensorflow/tools/ci_build/code_link_check.sh
}

# List .h|.cc files changed in the last non-merge git commit that still exist,
# i.e., not removed.
# Usage: get_clang_files_to_check [--incremental]
get_clang_files_to_check() {
if [[ "$1" == "--incremental" ]]; then
CHANGED_CLANG_FILES=$(get_changed_files_in_last_non_merge_git_commit | \
grep '.*\.h$\|.*\.cc$')

# Do not include files removed in the last non-merge commit.
CLANG_FILES=""
for CLANG_FILE in ${CHANGED_CLANG_FILES}; do
if [[ -f "${CLANG_FILE}" ]]; then
CLANG_FILES="${CLANG_FILES} ${CLANG_FILE}"
fi
done

echo "${CLANG_FILES}"
else
find tensorflow -name '*.h' -o -name '*.cc'
fi
}

do_clang_format_check() {
if [[ $# != "0" ]] && [[ $# != "1" ]]; then
echo "Invalid syntax when invoking do_clang_format_check"
echo "Usage: do_clang_format_check [--incremental]"
return 1
fi

if [[ "$1" == "--incremental" ]]; then
CLANG_SRC_FILES=$(get_clang_files_to_check --incremental)

if [[ -z "${CLANG_SRC_FILES}" ]]; then
echo "do_clang_format_check will NOT run due to --incremental flag and "\
"due to the absence of .h or .cc code changes in the last commit."
return 0
fi
elif [[ -z "$1" ]]; then
# TODO (yongtang): Always pass --incremental until all files have
# been sanitized gradually. Then this --incremental could be removed.
CLANG_SRC_FILES=$(get_clang_files_to_check --incremental)
else
echo "Invalid syntax for invoking do_clang_format_check"
echo "Usage: do_clang_format_check [--incremental]"
return 1
fi

CLANG_FORMAT=${CLANG_FORMAT:-clang-format-3.8}

success=1
for filename in $CLANG_SRC_FILES; do
$CLANG_FORMAT --style=google $filename | diff $filename - > /dev/null
if [ ! $? -eq 0 ]; then
success=0
echo File $filename is not properly formatted with "clang-format "\
"--style=google"
fi
done

if [ $success == 0 ]; then
echo Clang format check fails.
exit 1
fi
echo Clang format check success.
}

do_check_load_py_test() {
BUILD_CMD="bazel build ${BAZEL_FLAGS} //tensorflow/tools/pip_package:check_load_py_test"
${BUILD_CMD}
@@ -28,6 +28,7 @@ if [[ "$1" != "" ]] && [[ "$1" != "--without_cmake" ]]; then

fi

# Install dependencies from ubuntu deb repository.
apt-key adv --keyserver keyserver.ubuntu.com --recv 084ECFC5828AB726
apt-get update

if [[ "$ubuntu_version" == "14" ]]; then

@@ -41,6 +42,7 @@ apt-get install -y --no-install-recommends \

autoconf \
automake \
build-essential \
clang-format-3.8 \
curl \
ffmpeg \
git \
@@ -17,7 +17,7 @@

# Automatically update TensorFlow version in source files
#
# Usage:
# ./tensorflow/tools/ci_build/update_version.py --version 1.4.0-rc0
# ./tensorflow/tools/ci_build/update_version.py --version 1.4.0-rc1
# ./tensorflow/tools/ci_build/update_version.py --nightly
#
"""Update version of TensorFlow script."""
@@ -20,7 +20,7 @@

FROM ubuntu:16.04

MAINTAINER Shanqing Cai <cais@google.com>
LABEL maintainer="Shanqing Cai <cais@google.com>"

RUN apt-get update
RUN apt-get install -y \

@@ -19,7 +19,7 @@

FROM ubuntu:16.04

MAINTAINER Shanqing Cai <cais@google.com>
LABEL maintainer="Shanqing Cai <cais@google.com>"

# Pick up some TF dependencies.
RUN apt-get update && apt-get install -y \
@@ -1,6 +1,6 @@

FROM jpetazzo/dind

MAINTAINER Shanqing Cai <cais@google.com>
LABEL maintainer="Shanqing Cai <cais@google.com>"

RUN apt-get update

@@ -19,7 +19,7 @@

FROM ubuntu:16.04

MAINTAINER Shanqing Cai <cais@google.com>
LABEL maintainer="Shanqing Cai <cais@google.com>"

# Pick up some TF dependencies
RUN apt-get update && apt-get install -y \

@@ -19,7 +19,7 @@

FROM ubuntu:16.04

MAINTAINER Shanqing Cai <cais@google.com>
LABEL maintainer="Shanqing Cai <cais@google.com>"

# Pick up some TF dependencies
RUN apt-get update && apt-get install -y \
@@ -1,6 +1,6 @@

FROM nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04

MAINTAINER Gunhan Gulsoy <gunan@google.com>
LABEL maintainer="Gunhan Gulsoy <gunan@google.com>"

# It is possible to override these for releases.
ARG TF_BRANCH=master

@@ -1,6 +1,6 @@

FROM ubuntu:16.04

MAINTAINER Shanqing Cai <cais@google.com>
LABEL maintainer="Shanqing Cai <cais@google.com>"

RUN apt-get update
RUN apt-get install -y \
@@ -170,8 +170,16 @@ def write_version_info(filename, git_version):

if b"\"" in git_version or b"\\" in git_version:
git_version = "git_version_is_invalid"  # do not cause build to fail!
contents = """/* Generated by gen_git_source.py */
#include <string>
const char* tf_git_version() {return "%s";}
const char* tf_compiler_version() {return __VERSION__;}
const int tf_cxx11_abi_flag() {
#ifdef _GLIBCXX_USE_CXX11_ABI
return _GLIBCXX_USE_CXX11_ABI;
#else
return 0;
#endif
}
""" % git_version
open(filename, "w").write(contents)
@@ -26,7 +26,15 @@ if [[ $? != 0 ]]; then

fi

cat <<EOF > ${OUTPUT_FILENAME}
#include <string>
const char* tf_git_version() {return "${GIT_VERSION}";}
const char* tf_compiler_version() {return __VERSION__;}
const int tf_cxx11_abi_flag() {
#ifdef _GLIBCXX_USE_CXX11_ABI
return _GLIBCXX_USE_CXX11_ABI;
#else
return 0;
#endif
}
EOF
@@ -29,7 +29,7 @@ from setuptools.dist import Distribution

# This version string is semver compatible, but incompatible with pip.
# For pip, we will remove all '-' characters from this string, and use the
# result for pip.
_VERSION = '1.4.0-rc0'
_VERSION = '1.4.0-rc1'

REQUIRED_PACKAGES = [
'enum34 >= 1.1.6',
third_party/aws.BUILD (vendored, 8 changes)
@@ -18,6 +18,9 @@ cc_library(

"@%ws%//tensorflow:darwin": glob([
"aws-cpp-sdk-core/source/platform/linux-shared/*.cpp",
]),
"@%ws%//tensorflow:linux_ppc64le": glob([
"aws-cpp-sdk-core/source/platform/linux-shared/*.cpp",
]),
"//conditions:default": [],
}) + glob([
"aws-cpp-sdk-core/include/**/*.h",

@@ -57,6 +60,11 @@ cc_library(

"ENABLE_CURL_CLIENT",
"ENABLE_NO_ENCRYPTION",
],
"@%ws%//tensorflow:linux_ppc64le": [
"PLATFORM_LINUX",
"ENABLE_CURL_CLIENT",
"ENABLE_NO_ENCRYPTION",
],
"//conditions:default": [],
}),
includes = [