Commit Graph

75 Commits

Dmytro Dzhulgakov
c25e33789e Lightweight at-most-once logging for API usage (#20745)
Summary:
Resubmit #20698 which got messed up.

Idea is that when PyTorch is used in a custom build environment (e.g. Facebook), it's useful to track usage of various APIs centrally. This PR introduces a simple, very lightweight mechanism to do so - only the first invocation of a trigger point is logged. This is significantly more lightweight than #18235, and thus we can afford to put logging in e.g. TensorImpl.

Also adds an initial list of trigger points. Trigger points are added in such a way that no static initialization triggers them, i.e. just linking with libtorch.so will not cause any logging. Further suggestions of what to log are welcome.
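For intuition, here is a minimal Python sketch of the at-most-once pattern described above; the actual mechanism lives in the C++ core, and all names below are illustrative:

```python
import threading

_seen = set()
_lock = threading.Lock()

def log_api_usage_once(event: str) -> None:
    # Fast path: after the first call for `event`, this is just a set lookup.
    if event in _seen:
        return
    with _lock:
        if event not in _seen:
            _seen.add(event)
            print(f"API usage: {event}")  # stand-in for the real log sink

log_api_usage_once("tensor.create")  # logged
log_api_usage_once("tensor.create")  # no-op
```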
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20745

Differential Revision: D15429196

Pulled By: dzhulgakov

fbshipit-source-id: a5e41a709a65b7ebccc6b95f93854e583cf20aca
2019-05-23 23:17:59 -07:00
Edward Z. Yang
9b1dbffba5
Re-sync with internal repository (#20702) 2019-05-20 09:22:57 -04:00
Dmytro Dzhulgakov
d3059b9c49 Lightweight logging for once-only API usage 2019-05-19 23:04:40 -07:00
Michael Antonov
698103cdd6 DataLoader docs update to describe how workers are managed, including Windows. (#18091)
Summary:
It's been hard to understand how workers are launched and what code runs in the worker vs. the main process, especially on Windows, which leads to many of our samples failing. This explains when workers run and how to make code work on Windows as well.
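Concretely, the key point of the Windows guidance is that workers are spawned (not forked), so the main module is re-imported in every worker and DataLoader code must sit behind a main guard. A minimal example of the pattern (dataset and names are illustrative):

```python
import torch
from torch.utils.data import DataLoader, Dataset

class SquaresDataset(Dataset):
    def __len__(self):
        return 100

    def __getitem__(self, i):
        return torch.tensor(i * i)

def main():
    loader = DataLoader(SquaresDataset(), batch_size=10, num_workers=2)
    for batch in loader:
        pass  # training step goes here

if __name__ == '__main__':
    # Required on Windows with num_workers > 0: workers re-import this
    # module, and the guard keeps them from recursively spawning loaders.
    main()
```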
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18091

Differential Revision: D15083766

Pulled By: soumith

fbshipit-source-id: 8a7e60defc8a72ec63874f657d7d5267d951dccf
2019-04-26 16:01:30 -07:00
SsnL
5e62ee2b97 Fix no SIGCHLD checking in DataLoaderIter._shutdown_workers (#19421)
Summary:
Also

1. Bump multiprocessing test timeout following Python core tests
2. Fix one type of flakiness in `test_proper_exit`.
3. Add trace reporting when the loader process hangs in `test_proper_exit`, using `faulthandler`.
4. Give `test_proper_exit` another try.

I'll heavily retest this.
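For reference, the `faulthandler` watchdog pattern looks roughly like this (a sketch, not the exact test code):

```python
import faulthandler
import sys
import time

# If we are still running after 60s, dump every thread's stack to
# stderr without killing the process, so hangs leave a trace in CI logs.
faulthandler.dump_traceback_later(60, exit=False, file=sys.stderr)

time.sleep(1)  # stand-in for the loader work that might hang

faulthandler.cancel_dump_traceback_later()  # disarm once the work finishes
```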
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19421

Differential Revision: D15063728

Pulled By: ezyang

fbshipit-source-id: 4e0d992622e11053c44a9ec237b88b9a28a4472c
2019-04-24 08:06:58 -07:00
Stas Bekman
c0a2452ffe multiline KeyError msg python bug workaround (#18557)
Summary:
Make the multiline KeyError msg readable by working around a Python bug: https://bugs.python.org/issue2651

discussion: https://github.com/pytorch/pytorch/issues/16647
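The workaround boils down to a `str` subclass whose `repr` is itself, so `KeyError` (which formats its message with `repr`) prints newlines literally. A minimal sketch of the idea:

```python
class KeyErrorMessage(str):
    """str whose repr is itself, so KeyError displays it verbatim."""
    def __repr__(self):
        return self

msg = "line one\nline two"
print(str(KeyError(msg)))                   # 'line one\nline two' (escaped)
print(str(KeyError(KeyErrorMessage(msg))))  # prints across two lines
```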
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18557

Differential Revision: D14681086

Pulled By: soumith

fbshipit-source-id: acbd13a823302c854c3d364028ed414fd8ce6bc8
2019-03-29 07:04:20 -07:00
Daniel
e5742494f6 Minor typo
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16980

Differential Revision: D14033686

Pulled By: gchanan

fbshipit-source-id: 9f7967defc6795640e14157d0b701b185061741f
2019-02-12 08:02:04 -08:00
Michael Carilli
0742874643 Allow dataloader to accept a custom memory pinning function (#16743)
Summary:
Renewed attempt at https://github.com/pytorch/pytorch/pull/14171

From the original PR:
> Currently, the pin_memory_batch function in the dataloader will return a batch comprised of any unrecognized type without pinning the data, because it doesn't know how.
>
>This behavior was preventing us from overlapping data prefetching in Mask-RCNN, whose custom collate_fn returns a custom batch type.

The old PR allowed the user to implement batch pinning for custom batch and data types by passing a custom pin function to the dataloader.  slayton58 suggested a cleaner approach:  allow the user to define a `pin_memory` method on their custom types, and have `pin_memory_batch` [check for the presence of that method](https://github.com/pytorch/pytorch/pull/16743/files#diff-9f154cbd884fe654066b1621fad654f3R56) in the incoming batch as a fallback.  I've updated the test and docstrings accordingly.
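In practice the pattern looks like this (a sketch along the lines of the PR's test; names are illustrative):

```python
import torch
from torch.utils.data import DataLoader

class SimpleCustomBatch:
    def __init__(self, samples):
        inputs, targets = zip(*samples)
        self.inp = torch.stack(inputs, 0)
        self.tgt = torch.stack(targets, 0)

    def pin_memory(self):
        # The loader's pin-memory step calls this as a fallback for
        # batch types it does not recognize.
        self.inp = self.inp.pin_memory()
        self.tgt = self.tgt.pin_memory()
        return self

dataset = [(torch.randn(3), torch.tensor(i)) for i in range(8)]
loader = DataLoader(dataset, batch_size=4,
                    collate_fn=SimpleCustomBatch, pin_memory=True)
```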

The old PR was merged but then reverted due to weird CUDA OOM errors on Windows that may or may not have been related. I have no idea why my changes would cause such errors (then or now), but it's something to keep an eye out for.

cc fmassa and yf225, who were my POCs on the old PR.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16743

Differential Revision: D13991745

Pulled By: ezyang

fbshipit-source-id: 74e71f62a03be453b4caa9f5524e9bc53467fa17
2019-02-10 19:37:53 -08:00
SsnL
4aae89fa7b Make test_proper_exit more robust (#16249)
Summary:
1. Improve error message for better debugging info
2. Increase timeout
3. Also apply the Windows worker failure detection mechanism on non-Windows platforms, for better robustness

Attempt to fix #14501

cc ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16249

Differential Revision: D13784702

Pulled By: ezyang

fbshipit-source-id: 09a7cff83ab9edce561ed69f9fb555ab35d1275f
2019-01-25 08:25:05 -08:00
SsnL
9b5ec2a076 Fix TestDataLoader.test_proper_exit (#15665)
Summary:
Currently, in `test_proper_exit`,
1. we do not kill the correct input `pid` in the `kill_pid` function
fe15d6a2c2/test/test_dataloader.py (L325-L329)
2. the Windows command that detects process status doesn't actually work
fe15d6a2c2/test/test_dataloader.py (L641-L646)
3. `worker_error` and `worker_kill` cases (sometimes?) are not tested because the workers may exit naturally due to the pre-fetching mechanism and a too-small `dataset size / batch size` ratio.

In this PR, I, in separate commits:
1. Install `psutil` (a Python package specifically built for process monitoring) on some CI builds. (Linux build installations are done in https://github.com/pietern/pytorch-dockerfiles/pull/29 https://github.com/pietern/pytorch-dockerfiles/pull/30  https://github.com/pytorch/ossci-job-dsl/pull/36 and https://github.com/pytorch/pytorch/pull/15795).
2. Rewrite `test_proper_exit` with `psutil` so we

    1. do not rely on the hacky `is_process_alive` fe15d6a2c2/test/test_dataloader.py (L640-L653) (see the liveness-check sketch below)
    2. increase the number of tasks per worker so `worker_error` and `worker_kill` properly trigger
    3. test error message content to ensure that the loader exits with the correct message corresponding to each exiting scenario.

3. Fix Windows data loader not having any mechanism to detect worker failures.
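A `psutil`-based liveness check of the kind described in 2.1 might look like this (an illustrative sketch, not the test's exact code):

```python
import psutil

def is_alive(pid):
    # A pid can "exist" while the process is a zombie, so check the
    # process status instead of mere pid existence.
    try:
        return psutil.Process(pid).status() != psutil.STATUS_ZOMBIE
    except psutil.NoSuchProcess:
        return False
```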
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15665

Differential Revision: D13615527

Pulled By: soumith

fbshipit-source-id: cfb2f67837d2d87928a53f00b4d20f09754b7949
2019-01-10 08:47:27 -08:00
SsnL
9217bde807 Refactor dataloader.py (#15331)
Summary:
Same as #14668, and was approved there.

ailzhang, please apply this patch to Horizon's `data_streamer.py`: https://gist.github.com/SsnL/020fdb3d6b7016d81b6ba1d04cc41459 Thank you!

Below is the original description at #14668:

As I am working on tasks in https://github.com/pytorch/pytorch/issues/13023, I realized how unreadable the code is because all functions to be run in multiprocessing must be at top global level. Adding more functionalities to `dataloader.py` will only make things worse.

So in this PR, I refactor `dataloader.py` and move much of it into `data._utils`. E.g., the `_worker_loop` and related methods are now in `data._utils.worker`, signal handling code in `data._utils.signal_handling`, collating code in `data._utils.collate`, etc. This split, IMHO, makes code much clearer. I will base my future changes to DataLoader on top of this.

No functionality is changed, except that I added `torch._six.queue`.
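The resulting layout is roughly as follows (only modules named above are shown; the exact file set may differ):

```
torch/utils/data/
    dataloader.py            # public DataLoader and iterator
    _utils/
        worker.py            # _worker_loop and related helpers
        signal_handling.py   # worker-failure signal handling
        collate.py           # batch collation (default_collate)
```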
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15331

Reviewed By: yf225

Differential Revision: D13503120

Pulled By: ailzhang

fbshipit-source-id: 94df16b4d80ad1102c437cde0d5a2e62cffe1f8e
2018-12-19 12:36:03 -08:00
Derek Kim
656b565a0f Trivial comment correction in dataloader (#15276)
Summary:
Trivial comment correction in dataloader
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15276

Differential Revision: D13477324

Pulled By: soumith

fbshipit-source-id: 2a74a014999655d129311d611f2a09411339cb13
2018-12-15 10:59:00 -08:00
Ailing Zhang
38eb1beff5 Revert D13289919: [pytorch][PR] [DataLoader] Refactor dataloader.py
Differential Revision:
D13289919

Original commit changeset: d701bc7bb48f

fbshipit-source-id: c350c491fefa98a0a7c0cf22cb832e78aeb15c3d
2018-12-04 20:25:16 -08:00
SsnL
16558a1e9d Refactor dataloader.py (#14668)
Summary:
As I am working on tasks in https://github.com/pytorch/pytorch/issues/13023, I realized how unreadable the code is because all functions to be run in multiprocessing must be at top global level. Adding more functionalities to `dataloader.py` will only make things worse.

So in this PR, I refactor `dataloader.py` and move much of it into `data._utils`. E.g., the `_worker_loop` and related methods are now in `data._utils.worker`, signal handling code in `data._utils.signal_handling`, collating code in `data._utils.collate`, etc. This split, IMHO, makes code much clearer. I will base my future changes to DataLoader on top of this.

No functionality is changed, except that I added `torch._six.queue`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14668

Reviewed By: soumith

Differential Revision: D13289919

Pulled By: ailzhang

fbshipit-source-id: d701bc7bb48f5dd7b163b5be941a9d27eb277a4c
2018-12-04 09:53:41 -08:00
Will Feng
5918de8e84 Revert D13166669: [pytorch][PR] Allow dataloader to accept a custom memory pinning function
Differential Revision:
D13166669

Original commit changeset: ca965f9841d4

fbshipit-source-id: 0836b4f50f73ba01c97491a719660f02e36f20ad
2018-11-26 14:55:04 -08:00
Michael Carilli
7557a993ab Allow dataloader to accept a custom memory pinning function (#14171)
Summary:
Currently, the `pin_memory_batch` function in the dataloader will return a batch comprised of any unrecognized type without pinning the data, because it doesn't know how.

This behavior was preventing us from overlapping data prefetching in Mask-RCNN, whose custom `collate_fn` returns a custom batch type.

The present PR adds the ability for the user to pass a `pin_fn` alongside any custom `collate_fn` to handle such custom types.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14171

Differential Revision: D13166669

Pulled By: soumith

fbshipit-source-id: ca965f9841d4a259b3ca4413c8bd0d8743d433ab
2018-11-23 08:12:43 -08:00
Tongzhou Wang
034c969f3c Simply exit DataLoader when Python is dying (#12700)
Summary:
I struggled with yet another DataLoader hang for the entire evening. After numerous experiments, I realized that it is unsafe to do anything when Python is shutting down. We also, unfortunately, implement our DataLoader cleanup logic in `__del__`, a function that may or may not be called during shutdown, and if called, may or may not be called before core library resources are freed.

Fortunately, we already set all our workers and the pin_memory_thread as daemonic. So when Python is shutting down, we can just make `__del__` a no-op and rely on the automatic termination of daemonic children.

An `atexit` hook is used to detect Python exit.
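The gist of the mechanism, as a hedged sketch (names are illustrative, not the exact implementation):

```python
import atexit

_python_exit_status = False

def _set_python_exit_flag():
    global _python_exit_status
    _python_exit_status = True

atexit.register(_set_python_exit_flag)

class _DataLoaderIterSketch:
    def __del__(self):
        # During interpreter shutdown, touching queues/processes here is
        # unsafe; daemonic workers die with the process, so just bail.
        if _python_exit_status:
            return
        self._shutdown_workers()

    def _shutdown_workers(self):
        pass  # real shutdown logic elided
```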
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12700

Differential Revision: D10419027

Pulled By: SsnL

fbshipit-source-id: 5753e70d03e69eb1c9ec4ae2154252d51e2f79b0
2018-10-16 22:05:33 -07:00
Tongzhou Wang
11c31aef04 Prevent hanging in data loader altogether
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/11985

Differential Revision: D10202374

Pulled By: SsnL

fbshipit-source-id: 1ab1a07185f78a104f9b05930a87ef5a32f431e4
2018-10-09 09:54:19 -07:00
Tongzhou Wang
c30790797f Minor data loader doc improvements
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/11821

Differential Revision: D9948292

Pulled By: SsnL

fbshipit-source-id: 01c21c129423c0f7844b403e665a8fe021a9c820
2018-09-19 15:33:25 -07:00
Tongzhou Wang
8e76dcf173 Prevent raising KeyboardInterrupt in worker (#11718)
Summary:
Current behavior is that each process (main and workers) will print a trace from `KeyboardInterrupt`, and the main process will also print
```
RuntimeError: DataLoader worker (pid 46045) exited unexpectedly with exit code 1. Details are lost due to multiprocessing. Rerunning with num_workers=0 may give better error trace.
```
due to our SIGCHLD handler.
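The fix amounts to swallowing `KeyboardInterrupt` inside the worker loop so only the main process reports it; a rough sketch:

```python
def _worker_loop(index_queue, data_queue):
    try:
        while True:
            idx = index_queue.get()
            if idx is None:       # sentinel: clean shutdown
                break
            data_queue.put(idx)   # stand-in for fetching a batch
    except KeyboardInterrupt:
        # Exit quietly; the main process prints the interrupt once.
        pass
```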
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11718

Differential Revision: D9840844

Pulled By: SsnL

fbshipit-source-id: 1a05060bb02907fef5aac3f274d2c84f9f42d187
2018-09-14 16:09:35 -07:00
Jeff Smith
05e06f7de2 migrating deprecated calls without abc module for containers (#11515)
Summary:
Implementing #10540.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11515

Reviewed By: apaszke

Differential Revision: D9771045

Pulled By: jeffreyksmithjr

fbshipit-source-id: 85ea39abaa9b465805a969f122b626b11fc85ef6
2018-09-13 15:09:22 -07:00
Tongzhou Wang
57f149a861 Only join pin_memory_thread after it started (#11599)
Summary:
Same reason as in #11432.

Example error:
```
Exception ignored in: <function _DataLoaderIter.__del__ at 0x7fa06963cf28>
Traceback (most recent call last):
  File "/private/home/ssnl/miniconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 405, in __del__
    self._shutdown_workers()
  File "/private/home/ssnl/miniconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 401, in _shutdown_workers
    self.pin_memory_thread.join()
AttributeError: '_DataLoaderIter' object has no attribute 'pin_memory_thread'
```
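The shape of the fix, roughly (a sketch, not the verbatim patch):

```python
import threading

class _IterSketch:
    def _start_pin_memory(self):
        self.pin_memory_thread = threading.Thread(target=lambda: None,
                                                  daemon=True)
        self.pin_memory_thread.start()

    def _shutdown_workers(self):
        # pin_memory_thread is only assigned once it starts, so guard the
        # join: __del__ may run on a partially constructed iterator.
        if hasattr(self, 'pin_memory_thread'):
            self.pin_memory_thread.join()

it = _IterSketch()
it._shutdown_workers()  # safe even though the thread never started
```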
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11599

Differential Revision: D9801143

Pulled By: SsnL

fbshipit-source-id: 520590a21f56fa381fcac621457a7544d3fba47e
2018-09-13 09:40:49 -07:00
Tongzhou Wang
560d6efd3a Only join started dataloader workers (#11432)
Summary:
`Process.start()` actually takes some time, as it needs to start a
process and pass the arguments over via a pipe. Therefore, we only
add a worker to the self.workers list after it has started, so that
we do not call `.join()` on it if the program dies before the worker
starts and `__del__` tries to join it, which would raise:
    AssertionError: can only join a started process.

Example trace when such error happens:
```py
[unrelated]
  File "/private/home/ssnl/miniconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 500, in __iter__
    return _DataLoaderIter(self)
  File "/private/home/ssnl/miniconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 292, in __init__
    w.start()
  File "/private/home/ssnl/miniconda3/lib/python3.7/multiprocessing/process.py", line 112, in start
    self._popen = self._Popen(self)
  File "/private/home/ssnl/miniconda3/lib/python3.7/multiprocessing/context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/private/home/ssnl/miniconda3/lib/python3.7/multiprocessing/context.py", line 277, in _Popen
    return Popen(process_obj)
  File "/private/home/ssnl/miniconda3/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
    self._launch(process_obj)
  File "/private/home/ssnl/miniconda3/lib/python3.7/multiprocessing/popen_fork.py", line 70, in _launch
    self.pid = os.fork()
KeyboardInterrupt
Exception ignored in: <function _DataLoaderIter.__del__ at 0x7fa704d5aa60>
Traceback (most recent call last):
  File "/private/home/ssnl/miniconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 398, in __del__
    self._shutdown_workers()
  File "/private/home/ssnl/miniconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 392, in _shutdown_workers
    w.join()
  File "/private/home/ssnl/miniconda3/lib/python3.7/multiprocessing/process.py", line 139, in join
    assert self._popen is not None, 'can only join a started process'
AssertionError: can only join a started process
```

No test, because this is hard to trigger reliably.
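The fix boils down to appending each worker to the list only after `start()` returns, so the shutdown path never joins an unstarted process. Sketched:

```python
import multiprocessing as mp

def _noop():
    pass

if __name__ == '__main__':
    workers = []
    for _ in range(4):
        w = mp.Process(target=_noop)
        w.daemon = True
        w.start()
        workers.append(w)  # only after start(): join() below is always legal

    for w in workers:      # shutdown path (e.g. from __del__)
        w.join()
```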
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11432

Reviewed By: ezyang

Differential Revision: D9735430

Pulled By: SsnL

fbshipit-source-id: a8912d9bb4063f210d6236267b178173810e2351
2018-09-09 12:55:51 -07:00
Tongzhou Wang
04f381650e Resubmit: Fix dataloader hang when it is not completely iterated (#10366)
Summary:
https://github.com/pytorch/pytorch/pull/9655
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10366

Differential Revision: D9237393

Pulled By: SsnL

fbshipit-source-id: fabfad7f371ba33300098f6b885c0e3f26c3e14a
2018-08-09 00:10:24 -07:00
Tongzhou Wang
a7f183f971 Revert "Fix dataloader hang when it is not completely iterated (#9655)" (#9804)
Summary:
This reverts commit 9ee5133651.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9804

Reviewed By: ezyang

Differential Revision: D8987780

Pulled By: SsnL

fbshipit-source-id: 75ad70b0b8d672d0b35235fa248b187be64b68e5
2018-07-25 10:10:30 -07:00
Tongzhou Wang
9ee5133651 Fix dataloader hang when it is not completely iterated (#9655)
Summary:
second trial of https://github.com/pytorch/pytorch/pull/7140

cc csarofeen. Let's see if this works. It passes everything locally.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9655

Differential Revision: D8940177

Pulled By: SsnL

fbshipit-source-id: 8d6340fc9f7355c71e1e26b262da166402faa158
2018-07-22 20:38:27 -07:00
tvn
146b951ec5 Fix seeding random module in DataLoader (#7886)
* fix seeding random module

* make base seed int

* follow 0.4 idiom

* add a test for random seeding
2018-05-29 15:55:04 -04:00
Maxim Berman
03767b66db Add FileNotFoundError to torch._six (#7524)
Add FileNotFoundError for compatibility with Python 2 and use it in the
dataloader. Fixes pytorch/pytorch#6932
2018-05-12 20:54:26 -04:00
vfdev
6363faf184 Fix issue #7209 in DataLoader (#7265) 2018-05-04 10:51:46 +02:00
Thomas Viehmann
1b0ad8678b import *Sampler to utils.data (Better fix than #6982) (#7007) 2018-04-27 10:18:29 +02:00
Will Feng
e8bdbdaa27 Terminate dataloader workers properly when parent process is SIGKILL'ed (#6779)
Reopening #6606 with a fix for the TEST_CUDA import issue on Windows and an improvement to how we wait for manager exit in test_manager_unclean_exit. Loop-tested on the Windows CI multiple times to make sure this actually fixes the CUDA OOM issue. (A sketch of the core watchdog mechanism follows the change list below.)

* Terminate dataloader workers properly when parent process is SIGKILL'ed

* Wait for worker processes to finish before shutting down manager process

* Add test for checking proper worker exit

* cosmetic change

* Test only if CUDA exists

* Don't call multiprocessing.set_start_method() in Python 2

* import TEST_CUDA only when we are in __main__

* Tune JOIN_TIMEOUT

* handle os.getppid() == 0 case

* Reset to original JOIN_TIMEOUT

* Use WaitForSingleObject() to check parent process status on Windows

* Fix TEST_CUDA import

* clean up

* Check main process only when index_queue.get() times out

* Change index_queues to multiprocessing.Queue

* Move manager checking logic to watchdog class

* Fix bugs in dataloader

* Fix TEST_CUDA import issue

* Don't import TEST_CUDA from common_nn

* Use event to signal manager exit in test

* fix lint

* Add comments
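The core mechanism referenced above, as a POSIX-flavored sketch (on Windows the real code uses `WaitForSingleObject` on the parent handle; names and the timeout value are illustrative):

```python
import os
import queue

MANAGER_CHECK_INTERVAL = 5.0  # seconds

def _worker_loop(index_queue, data_queue, parent_pid):
    while True:
        try:
            # Block with a timeout rather than forever: a worker whose
            # parent was SIGKILL'ed never receives a shutdown sentinel,
            # so it must notice the orphaning on its own.
            idx = index_queue.get(timeout=MANAGER_CHECK_INTERVAL)
        except queue.Empty:
            if os.getppid() != parent_pid:  # reparented: manager is gone
                return
            continue
        if idx is None:  # normal shutdown sentinel
            return
        data_queue.put(idx)  # stand-in for loading a batch
```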
2018-04-22 23:03:54 -04:00
gchanan
4c5b95a433
Revert "Terminate dataloader workers properly when parent process is SIGKILL'ed (#6606)" (#6772)
This reverts commit 8d6a50aaeb.
2018-04-19 14:28:48 -04:00
Tongzhou Wang
072d49f787 Fix import error sometimes happening in dataloader when exiting Python (#6671)
* Fix import error sometimes happening in dataloader when exiting Python

* address comments
2018-04-19 06:56:39 -04:00
Will Feng
8d6a50aaeb
Terminate dataloader workers properly when parent process is SIGKILL'ed (#6606)
* Terminate dataloader workers properly when parent process is SIGKILL'ed

* Wait for worker processes to finish before shutting down manager process

* Add test for checking proper worker exit

* cosmetic change

* Test only if CUDA exists

* Don't call multiprocessing.set_start_method() in Python 2

* import TEST_CUDA only when we are in __main__

* Tune JOIN_TIMEOUT

* handle os.getppid() == 0 case

* Reset to original JOIN_TIMEOUT

* Use WaitForSingleObject() to check parent process status on Windows

* Fix TEST_CUDA import

* clean up

* Check main process only when index_queue.get() times out

* Change index_queues to multiprocessing.Queue

* Move manager checking logic to watchdog class

* Fix bugs in dataloader

* Fix TEST_CUDA import issue
2018-04-18 20:41:33 -04:00
Tongzhou Wang
1c01eabd3c
Codemod to update our codebase to 0.4 standard (#6641)
* Codemod to update our codebase to 0.4 standard

* Update some of the test scripts

* remove Variable in test_clip_grad_value

* fix _symbolic_override_wrapper_maker
2018-04-17 22:06:54 -04:00
Sanjeev Satheesh
f15f3ca1af Scope variables inside the dataloader (#6673)
* Scope variables inside the dataloader

This clears up the memory consumed by batches inside the dataloader. It's pretty useful for long-lived data loaders.

* Update dataloader.py
2018-04-17 17:48:12 -04:00
Tongzhou Wang
6b7ec95abb Link relevant FAQ section in DataLoader docs (#6476)
* Link FAQ section on workers returning same random numbers in DataLoader docs

* explicitly mention section names
2018-04-11 13:41:46 -04:00
Tongzhou Wang
60a16e5663 Set dataloader.batch_size = None when batch_sampler is given (#6108) 2018-03-30 10:01:09 +02:00
peterjc123
ae4362bc6a Fix memory leak when using multiple workers on Windows (#5585) 2018-03-28 10:35:28 +02:00
AlexanderRadionov
831780390c Fixed non-determinate preprocessing on DataLoader (#4640)
Added an ind_worker_queue parameter to data.DataLoader. It makes preprocessing deterministic.

DataLoader in multiprocessing mode may be non-deterministic. Even if random_seed is frozen, each subprocess may receive its tasks in an unstable order, because I/O times differ while data loads. If you use augmentation while loading data, this makes results unreproducible. See https://discuss.pytorch.org/t/deterministic-non-deterministic-results-with-pytorch/9087

To fix this issue I have added an individual queue for each worker (sketched below). In this case each worker gets its tasks in a stable order, so the subprocesses produce stable results.

To reproduce the issue you may change ind_worker_queue to False and run the script several times.
Code to reproduce the issue is in the corresponding PR.
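The core idea, with illustrative names: deal indices round-robin into one queue per worker, so each worker consumes its tasks in a fixed order regardless of I/O timing.

```python
import multiprocessing as mp

num_workers = 4
# One index queue per worker instead of a single shared queue.
index_queues = [mp.Queue() for _ in range(num_workers)]

for send_idx, task in enumerate(range(100)):
    # Worker (send_idx % num_workers) always receives the same
    # subsequence of tasks, making the preprocessing order stable.
    index_queues[send_idx % num_workers].put(task)
```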

* TestIndividualWorkerQueue added to DataLoader tests

* Review fixes

* "Simplify" code by removing itertools

* Rebase conflicts fix

* Review fixes

* Fixed shutdown behavior

* Removed ind_worker_queue flag.

* Rebase on master

* Disable tests that use DataLoader with multiple workers (#5322)
2018-03-23 17:43:59 -04:00
Tongzhou Wang
04461fa289 Prefix DataLoaderIter with underscore to discourage subclassing (#5619) 2018-03-08 11:09:51 +01:00
Will Feng
a90b695590 Disallow num_workers > 0 for DataLoader on Windows (#5591)
Using DataLoader with num_workers > 0 is known to cause CUDA out-of-memory issues on Windows.

This issue has already been noted in #4092.
2018-03-07 10:21:03 -05:00
Tongzhou Wang
392fc8885c add faq on cuda memory management and dataloader (#5378) 2018-02-27 18:35:30 -05:00
Achal Dave
8327982904 Set python random seed in workers (#5415)
* Set python random seed in workers

* Import random
2018-02-27 03:16:10 -05:00
Tongzhou Wang
1ff537ca71 Ignore FileNotFoundError when shutting down in data_queue.get (#5380)
* Ignore FileNotFoundError when shutting down in data_queue.get

* Address @apaszke comments
2018-02-24 13:32:13 -05:00
Tongzhou Wang
64a9ecae02 Dataloader issues (#4643)
* EINTR and kill by loader fix

* addressed @apaszke's comments

* remove EINTR handling and add test if we are in main thread before setting SIGCHLD
2018-01-29 01:18:17 +01:00
Tongzhou Wang
0ac58d53b8 ATen conv param expansion; InstanceNorm use_running_stats fix (#4544)
* fix instancenorm and aten conv param expansion

* addressed colesbury's comments

* improve conv input shape check
2018-01-10 17:36:26 -05:00
Christian Sarofeen
bc6bd62bd6 Fix distributed dataloader so it pins memory to current GPU not GPU 0. 2017-12-19 13:39:06 +01:00
Tongzhou Wang
5cc26c0c90 Add default PyTorch seeding and worker_init_fn to DataLoader (#4018)
* Add default PyTorch seeding and worker_init_fn to DataLoader

* generate seed using current RNG each time

* worker_seed <- main_proc_RNG_generated_seed + worker_id
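Building on that scheme, a typical user-side `worker_init_fn` mirrors the per-worker seed into libraries PyTorch cannot seed itself (a sketch; the numpy usage is the user's choice):

```python
import random

import numpy as np
import torch
from torch.utils.data import DataLoader

def worker_init_fn(worker_id):
    # Inside a worker, torch.initial_seed() is base_seed + worker_id.
    seed = torch.initial_seed() % 2**32  # numpy needs a 32-bit seed
    np.random.seed(seed)
    random.seed(seed)

loader = DataLoader(range(10), num_workers=2,
                    worker_init_fn=worker_init_fn)
```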
2017-12-18 02:19:08 -05:00
Jon Crall
5c13c6962c Raise errors when num_workers == 0 in DataLoader (#4019) 2017-12-05 11:07:43 -08:00