pytorch/docs/source
Pruthvi Madugundu fbd08fb358 Introduce TORCH_DISABLE_GPU_ASSERTS (#84190)
- Asserts for CUDA are enabled by default
- Disabled for ROCm by default by setting `TORCH_DISABLE_GPU_ASSERTS` to `ON`
- Can be enabled for ROCm by setting the above variable to `OFF` during the build, or forcefully enabled by setting `ROCM_FORCE_ENABLE_GPU_ASSERTS:BOOL=ON`
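The flags above can be exercised at build time. A minimal sketch, assuming PyTorch's usual source build where environment variables are passed through to CMake (the exact invocation for your setup may differ):

```shell
# Sketch: building PyTorch for ROCm with device-side GPU asserts enabled.
# Variable names come from this commit; the build command is illustrative.

# Option 1: flip the ROCm default (ON) back to OFF to re-enable asserts:
TORCH_DISABLE_GPU_ASSERTS=OFF python setup.py develop

# Option 2: force-enable asserts regardless of the default, via the
# CMake cache variable, when invoking CMake directly:
cmake -DROCM_FORCE_ENABLE_GPU_ASSERTS:BOOL=ON ..
```

For CUDA builds no action is needed, since asserts remain enabled by default.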

This is a follow-up change, as requested in a [comment](https://github.com/pytorch/pytorch/pull/81790#issuecomment-1215929021) on PR #81790.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84190
Approved by: https://github.com/jeffdaily, https://github.com/malfet
2022-11-04 04:43:05 +00:00
_static [maskedtensor] add docs (#84887) 2022-10-19 20:44:34 +00:00
_templates Fix left nav (#78552) 2022-06-01 00:49:53 +00:00
community Add General Project Policies (#87385) 2022-10-20 21:02:09 +00:00
elastic Add watchdog to TorchElastic agent and trainers (#84081) 2022-09-07 00:17:20 +00:00
notes Introduce TORCH_DISABLE_GPU_ASSERTS (#84190) 2022-11-04 04:43:05 +00:00
rpc Support Union in TorchScript (#64234) 2021-09-03 06:12:24 -07:00
scripts [ONNX] Update ONNX documentation to include unsupported operators (#84496) 2022-09-16 23:48:37 +00:00
amp.rst Remove deprecated torch.matrix_rank (#70981) 2022-09-22 17:40:46 +00:00
autograd.rst Change torch.autograd.graph.disable_saved_tensors_hooks to be public API (#85994) 2022-10-03 16:25:01 +00:00
backends.rst Add mem efficient backend flag (#87946) 2022-10-28 15:51:10 +00:00
benchmark_utils.rst Cleanup all module references in doc (#73983) 2022-03-10 22:26:29 +00:00
bottleneck.rst add itt unit test and docstrings (#84848) 2022-09-28 01:39:58 +00:00
checkpoint.rst
complex_numbers.rst Add a note on CUDA 11.6 (#80363) 2022-06-27 21:34:24 +00:00
conf.py Unify SymIntNode and SymFloatNode into SymNode (#87817) 2022-10-27 20:56:02 +00:00
config_mod.rst rename config module file to work with gh pages better 2022-03-10 20:41:44 +00:00
cpp_extension.rst Check clang++/g++ version when compiling CUDA extensions (#63230) 2022-02-24 08:32:32 +00:00
cpp_index.rst
cuda._sanitizer.rst Fix typos under docs directory (#88033) 2022-10-31 19:31:56 +00:00
cuda.rst (Re-open) Adds cudaMallocAsync as an alternative backend for the CUDA allocator (#82682) 2022-10-12 03:44:21 +00:00
cudnn_persistent_rnn.rst Remove orphan from cuDNN persistent note (#65160) 2021-09-21 11:09:47 -07:00
cudnn_rnn_determinism.rst Forbid trailing whitespace (#53406) 2021-03-05 17:22:55 -08:00
data.rst Fix typos under docs directory (#88033) 2022-10-31 19:31:56 +00:00
ddp_comm_hooks.rst Fix two small typos in ddp_comm_hooks.rst (#82047) 2022-07-23 19:10:57 +00:00
deploy.rst Delete torch::deploy from pytorch core (#85953) 2022-10-06 07:20:16 +00:00
distributed.algorithms.join.rst Add tutorial link (#62785) 2021-08-05 17:28:02 -07:00
distributed.elastic.rst [1/n][torch/elastic] Move torchelastic docs *.rst (#148) 2021-05-04 00:57:56 -07:00
distributed.optim.rst [distributed][docs] Delete distributed optimizer section from RPC and add reference to namespace docs page (#68068) 2021-11-09 15:01:54 -08:00
distributed.rst [docs] batch_isend_irecv and P2POp of torch.distributed (#86438) 2022-10-25 00:11:50 +00:00
distributions.rst [Reinstate] Wishart distribution (#70377) 2021-12-30 11:41:46 -08:00
dlpack.rst Lint trailing newlines (#54737) 2021-03-30 13:09:52 -07:00
docutils.conf
fft.rst Cleanup all module references in doc (#73983) 2022-03-10 22:26:29 +00:00
fsdp.rst [FSDP()][3/N] Refactor public APIs (#87917) 2022-10-31 16:45:21 +00:00
futures.rst Update docs to mention CUDA support for Future (#50048) 2021-05-11 08:26:33 -07:00
fx.rst prepare removal of deprecated functionality in torch.testing (#87969) 2022-11-02 14:04:48 +00:00
hub.rst Add more details to the known limitations section of torchhub docs (#69970) 2021-12-16 02:43:48 -08:00
index.rst Set up new module torch.signal.windows (#85599) 2022-10-14 11:33:32 +00:00
jit_builtin_functions.rst Lint trailing newlines (#54737) 2021-03-30 13:09:52 -07:00
jit_language_reference_v2.rst Fix typos in docs (#80602) 2022-08-29 23:32:44 +00:00
jit_language_reference.rst (Re-open) Adds cudaMallocAsync as an alternative backend for the CUDA allocator (#82682) 2022-10-12 03:44:21 +00:00
jit_python_reference.rst [JIT] improve documentation (#57991) 2021-05-19 11:47:32 -07:00
jit_unsupported.rst (Re-open) Adds cudaMallocAsync as an alternative backend for the CUDA allocator (#82682) 2022-10-12 03:44:21 +00:00
jit_utils.rst Create __init__.py (#78629) 2022-06-03 18:14:21 +00:00
jit.rst torch.jit doc link for nvfuser readme.md (#77780) 2022-07-07 23:25:35 +00:00
library.rst Add docs for Python Registration 2022-06-13 23:21:23 +00:00
linalg.rst [Array API] Add linalg.vecdot (#70542) 2022-07-12 14:28:54 +00:00
masked.rst Fix links to tutorial in torch masked docs (#88129) 2022-10-31 21:31:54 +00:00
math-quantizer-equation.png
mobile_optimizer.rst [Vulkan] Add Vulkan Rewrite to Transfer Inputs and Outputs to Vulkan and CPU Backends Respectively (#87432) 2022-10-31 14:18:45 +00:00
model_zoo.rst
monitor.rst torch/monitor: merge Interval and FixedCount stats (#72009) 2022-01-30 23:21:59 +00:00
multiprocessing.rst Forbid trailing whitespace (#53406) 2021-03-05 17:22:55 -08:00
name_inference.rst Abladawood patch 1 (#58496) 2021-05-20 10:32:18 -07:00
named_tensor.rst Add torch.unflatten and improve its docs (#81399) 2022-07-29 15:02:42 +00:00
nested.rst Add support for neg to NestedTensor (#88131) 2022-11-03 15:15:57 +00:00
nn.functional.rst Add Dropout1d module 2022-06-15 14:39:07 +00:00
nn.init.rst update nn.init doc to reflect the no_grad (#80882) 2022-07-07 17:19:29 +00:00
nn.rst Add Dropout1d module 2022-06-15 14:39:07 +00:00
onnx_supported_aten_ops.rst [ONNX] Update ONNX documentation to include unsupported operators (#84496) 2022-09-16 23:48:37 +00:00
onnx.rst [ONNX] Update user documentation (#85819) 2022-09-30 19:35:34 +00:00
optim.rst [doc] LR scheduler example fix (#86629) 2022-10-11 21:41:50 +00:00
package.rst Fix typos in torch.package documentation (#82994) 2022-08-08 20:19:17 +00:00
pipeline.rst Minor changes in documentation (#68557) 2021-11-18 17:57:16 -08:00
profiler.rst Fix ITT unit-tests if PyTorch is compiled with USE_ITT=OFF (#86199) 2022-10-04 21:57:05 +00:00
quantization-accuracy-debugging.rst Fix typo under docs directory (#87583) 2022-10-24 23:52:44 +00:00
quantization-backend-configuration.rst quantization: autogenerate quantization backend configs for documentation (#75126) 2022-04-04 22:22:30 +00:00
quantization-support.rst Fix typos under docs directory (#88033) 2022-10-31 19:31:56 +00:00
quantization.rst Fix typos under docs directory (#88033) 2022-10-31 19:31:56 +00:00
random.rst
rpc.rst Fix typo under docs directory and RELEASE.md (#85896) 2022-09-29 21:41:59 +00:00
signal.rst Reimplement Kaiser window (#87330) 2022-10-27 21:01:01 +00:00
sparse.rst Fix typo under docs directory (#87583) 2022-10-24 23:52:44 +00:00
special.rst [primTorch] special: j0, j1, spherical_j0 (#86049) 2022-10-04 18:21:46 +00:00
storage.rst Fix typos in docs (#80602) 2022-08-29 23:32:44 +00:00
tensor_attributes.rst Chore: Add 'mps' to the docs of tensor_attributes (#86585) 2022-10-14 19:59:33 +00:00
tensor_view.rst Correcting a minor typo: "Users should pay" instead of "Users should be pay" (#72500) 2022-02-08 23:08:25 +00:00
tensorboard.rst Cleanup all module references in doc (#73983) 2022-03-10 22:26:29 +00:00
tensors.rst Remove deprecated torch.lstsq (#70980) 2022-09-23 00:16:55 +00:00
testing.rst Fix links in torch.testing docs (#80353) 2022-07-11 19:15:53 +00:00
torch.ao.ns._numeric_suite_fx.rst Quantization docs: add pages for Numeric Suite (Eager and FX) (#66380) 2021-10-11 18:47:58 -07:00
torch.ao.ns._numeric_suite.rst Quantization docs: add pages for Numeric Suite (Eager and FX) (#66380) 2021-10-11 18:47:58 -07:00
torch.overrides.rst Revert "Revert "Implement sym_sizes to create proper IR for sym ints representing tensor sizes (#76836)"" 2022-05-18 18:40:57 +00:00
torch.rst [primTorch] Add a ref for narrow_copy (#86748) 2022-10-17 10:16:05 +00:00
type_info.rst ENH: Convert finfo.tiny to finfo.smallest_normal (#76292) 2022-05-20 00:59:48 +00:00